March 26, 2011

Internet Explorer 9, Chrome 10, and Firefox 4 - (A Comparison)

Finding the Best Browser: IE9, Chrome 10, or Firefox 4

Installation and updates

Though Microsoft makes the operating system itself, IE9 is still the most difficult of the three browsers to install, requiring a lengthier download and a full system restart to get it running. This is likely due to its reported use of hardware acceleration and other features. Chrome and Firefox 4 both install relatively quickly and without a full machine reboot.
Finally, when it comes to updates, Chrome currently leads the pack. Google has been cranking out updates every few weeks (sometimes days) for its Chrome browser, changing and tweaking things on a consistent basis. Unlike most update processes, though, the changes are nearly invisible: users install updates by simply restarting Chrome, with no loading bars or lengthy downloads and re-installations. Mozilla is usually pretty good about updating Firefox, and we look forward to seeing how the final version of Firefox 4 handles patches and updates. Microsoft hasn't yet commented on how it plans to handle updates to IE9, but we're hoping it will adopt a model closer to Google's. Hopefully we won't have to wait two years before getting significant updates to Microsoft's slick new browser.

Design & ease of use

If I didn’t know better, I’d say that the current trend in browser design is for the browser to disappear entirely. IE9, Firefox, and Chrome all attempt to be as minimal as possible, offering next to no actual text and small, monochromatic buttons that blend right into the look of operating systems like Windows 7 and OS X. Overall, all three browsers appear to achieve their goals fairly well, with different strengths and weaknesses.
Internet Explorer offers the leanest tab and address bar configuration, managing to cram every feature of IE into a single thin row of icons and even combining IE8's separate search and address bars into a single search-address bar. Unfortunately, the thin design can look a bit cramped if users open a lot of tabs, and the search-address bar sometimes holds as many as four small buttons, meaning those who don't keep the browser window maximized may have a hard time seeing much of the URL of the current Web page. Some users won't have a problem with this. Notifications are an equally mixed bag. IE9 now places them at the bottom of the browser, where at times they are almost unnoticeable. Some people will like this; others will wish notifications were more prominent in the design.
 

Chrome designs its tabs with a shape that makes them look like manila folder tabs, opting for a thin second row holding the address/search bar, back and forward navigation, home, refresh, bookmark, and options buttons. This design works well, but the two rows make it a bit thicker than IE9's design; those with limited headroom may prefer IE9. Google's design, like IE9's, gets cluttered quickly when users start opening dozens of tabs. Still, Chrome's single-click bookmarking method, where you simply highlight a star (Firefox 4 also has this feature), is easier and more natural than Internet Explorer's two-click method of bookmarking.


Web standards

A number of major sites have performed Web standards tests on all three major browsers. The good news: they're all highly compliant. Unlike with every version of Internet Explorer in recent memory, Microsoft has finally decided to go the extra mile and support general Web standards.
When it comes to HTML5 support, Microsoft has some work to do. In an HTML5 test performed by PCMag, Chrome and Firefox handily outperformed Internet Explorer 9, more than doubling its score. However, even Chrome only scored 288 out of 400 on the test, meaning all three browsers have a way to go before they are truly compatible with all of the goodness HTML5 has to offer.

Speed

So most of the browsers are compatible with Web standards, but how do they rank in speed? Pretty close, actually. A casual user probably won't notice a difference in the Web page rendering speed of Chrome, IE9, and Firefox 4. We performed our own tests and found that Chrome edged out IE9, which edged out Firefox 4, most of the time, but not by much. All three browsers are much faster and leaner than the browsers of even a few years ago.

Extras

Each browser does have its own slate of differentiating features.
With IE9, we really like its heavy integration with Windows 7. Many functions, like turning tabs into new windows, are much easier with Microsoft's new browser. It has some unique features as well, like individual tab previewing from the taskbar and a new feature called tab pinning, which lets you 'pin' a Web site to the Windows 7 taskbar. However, unlike an ordinary taskbar shortcut, pinned Web sites can offer customized right-click menus. For example, pinning Facebook will let you right-click and browse directly to different sections of the Facebook site like News, Messages, Events, and Friends. In addition, when you open a pinned site, the IE9 browser customizes itself to resemble the site you're viewing. Currently, this means an icon in the upper left and new colors for the back and forward buttons, but we like the idea.
  
Chrome differentiates itself through its constant updates, but also through its extensive Web Apps Store, which offers apps that blur the line between Web and local apps in some unique ways. Finally, Firefox has a strong slate of extensions that back it up. Developers will have to retool many of these to support Firefox 4, but one colleague of mine refuses to leave Firefox solely because it offers unique extensions that have become essential to his browsing experience. Most other browsers support add-ons, but Firefox may have a lead in mindshare here. We look forward to seeing more Firefox 4 extensions.

Which browser is best?

Good question. It may come down to preference. Each browser has strengths and weaknesses. Most of us here at Digital Trends are Google Chrome users, mostly because, until Firefox 4 and IE9, it was undoubtedly the fastest browser of the bunch. Now, we don’t know what we’ll do. Chrome still probably offers the fastest and leanest overall browsing experience, but IE9 and Firefox have narrowed its lead significantly, each offering new features that many users will find helpful and time-saving. Still, for those who like the bleeding edge, Google’s fast and frequent browser updates are hard to pass up.
For the first time in a long time we can’t claim a strong victor here. All three major browsers offer a solid browsing experience with few downsides. Things are heating up in the browser world.

March 16, 2011

Duplication and Deduplication of Data in Databases

Data Duplication And Deduplication


1. Duplication


The definition of what constitutes a duplicate has somewhat different interpretations. For instance, some define a duplicate as having exactly the same syntactic terms and sequence, with or without formatting differences. In effect, there are either no differences or only formatting differences, and the contents of the data are exactly the same.
In any case, data duplication happens all the time. In large data warehouses, data duplication is an inevitable phenomenon, as millions of records are gathered at very short intervals.


Several approaches have been implemented to counter the problem of data duplication. One approach is manually coding rules so that data can be filtered to avoid duplication. Other approaches apply the latest machine learning techniques or more advanced business intelligence applications. The accuracy of these methods varies, and for very large data collections some of them may be too complex and expensive to deploy at full capacity.
Data warehousing involves a process called ETL, which stands for extract, transform, and load. During the extraction phase, multitudes of data come into the data warehouse from several sources, and the system behind the warehouse consolidates the data so that each separate system's format can be read consistently by the data consumers of the warehouse.

A data warehouse is basically a database, and unintentional duplication of records assembled from millions of records from other sources can hardly be avoided. In the data warehousing community, the task of finding duplicated records within large databases has long been a persistent problem and has become an area of active research. There have been many research undertakings to address the problems caused by duplicate contamination of data.


Despite all these countermeasures against data duplication, and despite the best efforts at cleaning data, the reality remains that data duplication will never be totally eliminated. So it is extremely important to understand its impact on the quality of a data warehouse implementation. In particular, the presence of data duplication may potentially skew content distribution.

There are some application systems that have duplicate detection functions. These functions work by calculating a unique hash value for a certain piece of data or group of data, such as a document. Each document, for instance, is examined for duplication by comparing its hash value against those in either an in-memory hash table or a persistent lookup system. Some of the most commonly used hash functions include MD2, MD5, and SHA. These three are preferred due to their desirable properties: they are easily calculated on data or documents of arbitrary length, and they have a low collision probability.
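
As a minimal sketch of how such a detection function might work, here is a TypeScript example for Node.js using its built-in crypto module (which provides the MD5 and SHA functions mentioned above); the function and variable names are illustrative only:

```typescript
import { createHash } from "crypto";

// Detect duplicate documents by hashing their full contents and checking
// each new digest against an in-memory set of digests seen so far.
const seenHashes = new Set<string>();

function isDuplicate(doc: string): boolean {
  // MD5 is one of the hash functions named above; SHA-1 or SHA-256
  // could be substituted by changing the algorithm name.
  const digest = createHash("md5").update(doc, "utf8").digest("hex");
  if (seenHashes.has(digest)) {
    return true; // same digest seen before: treat as a duplicate
  }
  seenHashes.add(digest);
  return false;
}

// Example: the second, identical document is flagged as a duplicate.
console.log(isDuplicate("quarterly sales report")); // false
console.log(isDuplicate("quarterly sales report")); // true
```

A persistent lookup system would replace the in-memory set with a database table keyed on the digest, but the comparison logic stays the same.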

Data duplication is also related to problems like plagiarism and clustering. Plagiarism can involve either exact data duplication or mere similarity to certain documents; a document considered plagiarized may copy the abstract idea rather than the word-for-word content. Clustering, on the other hand, is a method used to group data with similar characteristics into clusters, and it is used for fast retrieval of relevant information from a database.

2. Deduplication

                     

Data deduplication (often called "intelligent compression" or "single-instance storage") is a method of reducing storage needs by eliminating redundant data. Only one unique instance of the data is actually retained on storage media, such as disk or tape. Redundant data is replaced with a pointer to the unique data copy. For example, a typical email system might contain 100 instances of the same one megabyte (MB) file attachment. If the email platform is backed up or archived, all 100 instances are saved, requiring 100 MB storage space. With data deduplication, only one instance of the attachment is actually stored; each subsequent instance is just referenced back to the one saved copy. In this example, a 100 MB storage demand could be reduced to only one MB.
Data deduplication offers other benefits. Lower storage space requirements will save money on disk expenditures. The more efficient use of disk space also allows for longer disk retention periods, which provides better recovery time objectives (RTO) for a longer time and reduces the need for tape backups. Data deduplication also reduces the data that must be sent across a WAN for remote backups, replication, and disaster recovery.
Data deduplication can generally operate at the file or block level. File deduplication eliminates duplicate files (as in the example above), but this is not a very efficient means of deduplication. Block deduplication looks within a file and saves unique iterations of each block. Each chunk of data is processed using a hash algorithm such as MD5 or SHA-1. This process generates a unique number for each piece which is then stored in an index. If a file is updated, only the changed data is saved. That is, if only a few bytes of a document or presentation are changed, only the changed blocks are saved; the changes don't constitute an entirely new file. This behavior makes block deduplication far more efficient. However, block deduplication takes more processing power and uses a much larger index to track the individual pieces.
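
Here is a rough sketch of block-level deduplication under simplifying assumptions: fixed-size blocks, SHA-1 digests, and an in-memory index. Real products often use variable-size chunking and persistent indexes, and all names below are illustrative:

```typescript
import { createHash } from "crypto";

// Block-level deduplication sketch: split data into fixed-size chunks,
// hash each chunk, and store only chunks whose hash is not yet indexed.
const BLOCK_SIZE = 4096; // fixed block size for simplicity

const blockIndex = new Map<string, Buffer>(); // digest -> unique block

function storeDeduplicated(data: Buffer): string[] {
  const blockHashes: string[] = [];
  for (let offset = 0; offset < data.length; offset += BLOCK_SIZE) {
    const block = data.subarray(offset, offset + BLOCK_SIZE);
    const digest = createHash("sha1").update(block).digest("hex");
    if (!blockIndex.has(digest)) {
      blockIndex.set(digest, Buffer.from(block)); // store only unique blocks
    }
    blockHashes.push(digest); // the file becomes a list of block references
  }
  return blockHashes;
}
```

A file is then stored as its list of block digests, so editing a few bytes changes (and stores) only the affected blocks, matching the behavior described above.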
Hash collisions are a potential problem with deduplication. When a piece of data receives a hash number, that number is compared with the index of existing hash numbers. If that hash number is already in the index, the piece of data is considered a duplicate and does not need to be stored again. Otherwise the new hash number is added to the index and the new data is stored. In rare cases, the hash algorithm may produce the same hash number for two different chunks of data. When a hash collision occurs, the system won't store the new data because it sees that its hash number already exists in the index. This is called a false positive, and it can result in data loss. Some vendors combine hash algorithms to reduce the possibility of a hash collision. Some vendors are also examining metadata to identify data and prevent collisions.
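
A quick sketch of that combined-hash idea (illustrative names only): a block counts as a duplicate only when both digests match, which makes a simultaneous collision on two independent algorithms far less likely:

```typescript
import { createHash } from "crypto";

// Combine two hash algorithms, as some vendors do: a block is treated as
// a duplicate only if both its MD5 and SHA-1 digests match an index entry.
function combinedDigest(block: Buffer): string {
  const md5 = createHash("md5").update(block).digest("hex");
  const sha1 = createHash("sha1").update(block).digest("hex");
  return `${md5}:${sha1}`; // both digests must match for a duplicate
}
```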
In actual practice, data deduplication is often used in conjunction with other forms of data reduction such as conventional compression and delta differencing. Taken together, these three techniques can be very effective at optimizing the use of storage space.




March 15, 2011

Epic Browser: First Indian Browser

Epic Browser: India's First Web Browser & World's First Browser With Antivirus!


      
We all know how difficult it is for any new web browser to make an impact in the global market, which is why most remain unknown in major parts of the world. On the other hand, such a browser can still make it big at the regional level, in the region it comes from. Consider Maxthon, which has gained huge popularity in China but to this day remains unknown to most of the world.
 
Hidden Reflex, a Bangalore-based startup, has now announced the launch of its "Epic" browser, which draws heavily on Indian culture and tradition and aims to make Indians proud all over the world. The browser is both feature-rich and secure and is expected to meet all the requirements a user would expect from a web browser.

The first web browser to have been made completely in India, Epic is based on the Mozilla platform and incorporates most of its features with Indian users in mind. Before we get into the discussion of the browser itself, let us look at Hidden Reflex, the company that has brought forth this web browser.




Founded in 2007 by Alok Bhardwaj, an engineer then based in the U.S., the company now operates out of Bangalore. It started with a group of only three members but has since grown into a sizable team working on two separate products. Epic was the first product to be worked on and has now launched, while the other is still under development.

The Epic browser is highly customizable, and after the first installation you will notice its vibrant colors. It initially comes with a "Peacock" background that can be a bit uncomfortable for the eyes, but with the huge number of backgrounds the browser provides to choose from, this is hardly a difficulty. With the launch of the Epic browser, you get over 1,500 themes that are compatible with and available for Epic.
The Epic browser comes with a sidebar containing a host of applications that can be launched from there, including Twitter, Facebook, Gmail, Orkut, Jobs, Games, Travel, and many more. Clicking any of these icons opens a widget-like window at the side of the browser. There is one drawback, though: you can open only one such widget per window.

Epic also claims to be the first browser with built-in antivirus protection. It supports a long list of Indian languages (currently 12) and offers an Indian content sidebar that aggregates news headlines, TV, live cricket commentary, and other things that matter to Indians. It also includes a bunch of productivity applications, like a free word processor, a video sidebar, and a My Computer browser.


    To experience a whole new genre of browsing and fun, you can try this appealing browser which is available for free and can be downloaded from here. Go ahead and try the hues of India now!







Ajax Technology

What is Ajax?

Web applications are fun to build. They are like the fancy sports cars of Web sites. Web applications allow the designer and developer to get together and solve a problem for their customers that the customers might not have even known they had. That's how blogging tools like MovableType and Blogger came about, after all. I mean, before Blogger, did you know you needed an online tool to build your Web site blog?
But most Web applications are slow and tedious. Even the fastest of them has lots of free time for your customers to go get a coffee, work on their dog training, or (worst of all) head off to a faster Web site. It's that dreaded hourglass! You click a link and the hourglass appears as the Web application consults the server and the server thinks about what it's going to send back to you.

Ajax is Here to Change That

Ajax (sometimes called Asynchronous JavaScript and XML) is a way of programming for the Web that gets rid of the hourglass. Data, content, and design are merged together into a seamless whole. When your customer clicks on something in an Ajax-driven application, there is very little lag time. The page simply displays what they're asking for. If you don't believe me, try out Google Maps for a few seconds. Scroll around and watch as the map updates almost before your eyes. There is very little lag and you don't have to wait for pages to refresh or reload.

What is Ajax?

Ajax is a way of developing Web applications that combines:
  • XHTML and CSS for presenting information
  • the Document Object Model (DOM) for dynamically displaying and interacting with that information
  • XML (and XSLT) for interchanging and manipulating data
  • the XMLHttpRequest object for retrieving data asynchronously
  • JavaScript for binding everything together

In the traditional Web application, the interaction between the customer and the server goes like this:
  1. Customer accesses Web application
  2. Server processes request and sends data to the browser while the customer waits
  3. Customer clicks on a link or interacts with the application
  4. Server processes request and sends data back to the browser while the customer waits
  5. etc....
There is a lot of customer waiting.

Ajax Acts as an Intermediary

The Ajax engine works within the Web browser (through JavaScript and the DOM) to render the Web application and handle any requests that the customer might have of the Web server. The beauty of it is that because the Ajax engine is handling the requests, it can hold most information in the engine itself, while allowing the interaction with the application and the customer to happen asynchronously and independently of any interaction with the server.

Asynchronous

This is the key. In standard Web applications, the interaction between the customer and the server is synchronous. This means that one has to happen after the other. If a customer clicks a link, the request is sent to the server, which then sends the results back.
With Ajax, the JavaScript that is loaded when the page loads handles most of the basic tasks, such as data validation, manipulation, and display rendering, which the Ajax engine handles without a trip to the server. At the same time that it is making display changes for the customer, it is sending data back and forth to the server in the background. But the data transfer is not dependent upon actions of the customer.
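
To see the asynchronous flow in code, here is a minimal sketch using the XMLHttpRequest object in the browser (TypeScript); the /api/data URL and the output element id are hypothetical stand-ins:

```typescript
// Minimal asynchronous request: the page stays responsive while the
// browser fetches data in the background.
function loadData(): void {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", "/api/data", true); // true = asynchronous
  xhr.onload = () => {
    if (xhr.status === 200) {
      // Update part of the page without a full reload.
      const output = document.getElementById("output");
      if (output) {
        output.textContent = xhr.responseText;
      }
    }
  };
  xhr.send(); // returns immediately; the customer keeps interacting
}

loadData();
```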

Ajax is Not New Technology

Ajax is instead a new way of looking at technology that is already mature and stable. If you're designing Web applications right now, why aren't you using Ajax? Your customers will thank you, and frankly, it's just fun!

March 14, 2011

Grid Computing vs. Cloud Computing




 GRID COMPUTING
Grid computing is the act of sharing tasks over multiple computers. Tasks can range from data storage to complex calculations and can be spread over large geographical distances. In some cases, computers within a grid are used normally and only act as part of the grid when they are not in use. These grids scavenge unused cycles on any computer that they can access to complete given projects. SETI@home is perhaps the best-known grid computing project, and a number of other organizations rely on volunteers offering to add their computers to a grid.



These computers join together to create a virtual supercomputer. Networked computers can work on the same problems traditionally reserved for supercomputers, and yet such a network of computers is more powerful than the supercomputers built in the seventies and eighties. Modern supercomputers are built on the principles of grid computing, incorporating many smaller computers into a larger whole.

The idea of grid computing originated with Ian Foster, Carl Kesselman and Steve Tuecke. They got together to develop a toolkit to handle computation management, data movement, storage management and other infrastructure that could handle large grids without restricting themselves to specific hardware and requirements. The technique is also exceptionally flexible.
Grid computing techniques can be used to create very different types of grids, adding flexibility as well as power by using the resources of multiple machines. An equipment grid will use a grid to control a piece of equipment, such as a telescope, as well as analyze the data that equipment collects. A data grid, however, will primarily manage large amounts of information, allowing users to share access.
Grid computing is similar to cluster computing, but there are a number of distinct differences. In a grid, there is no centralized management; computers in the grid are independently controlled and can perform tasks unrelated to the grid at the operator's discretion. The computers in a grid are not required to have the same operating system or hardware. Grids are also usually loosely connected, often in a decentralized network, rather than contained in a single location, as computers in a cluster often are.


CLOUD COMPUTING
 
Cloud computing is a general term for anything that involves delivering hosted services over the Internet. These services are broadly divided into three categories: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). The name cloud computing was inspired by the cloud symbol that's often used to represent the Internet in flowcharts and diagrams.
A cloud service has three distinct characteristics that differentiate it from traditional hosting. It is sold on demand, typically by the minute or the hour; it is elastic -- a user can have as much or as little of a service as they want at any given time; and the service is fully managed by the provider (the consumer needs nothing but a personal computer and Internet access). Significant innovations in virtualization and distributed computing, as well as improved access to high-speed Internet and a weak economy, have accelerated interest in cloud computing.


A cloud can be private or public. A public cloud sells services to anyone on the Internet. (Currently, Amazon Web Services is the largest public cloud provider.) A private cloud is a proprietary network or a data center that supplies hosted services to a limited number of people. When a service provider uses public cloud resources to create their private cloud, the result is called a virtual private cloud. Private or public, the goal of cloud computing is to provide easy, scalable access to computing resources and IT services.


Infrastructure-as-a-Service providers like Amazon Web Services supply virtual server instances and storage on demand, and customers use the provider's application program interface (API) to start, stop, access, and configure their virtual servers and storage. In the enterprise, cloud computing allows a company to pay for only as much capacity as is needed, and bring more online as soon as required. Because this pay-for-what-you-use model resembles the way electricity, fuel, and water are consumed, it's sometimes referred to as utility computing.
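
As a sketch of what this looks like in practice, the snippet below drives a hypothetical IaaS-style REST API (https://iaas.example.com is a made-up endpoint, not any real provider's interface) to start and stop a billed server instance on demand:

```typescript
// Sketch of pay-as-you-go provisioning against a hypothetical IaaS REST
// API: start a server instance, then stop it when capacity is no longer
// needed, so billing only runs while the instance exists.
const API = "https://iaas.example.com/v1";

async function startInstance(size: string): Promise<string> {
  const res = await fetch(`${API}/instances`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ size }), // e.g. "small", "large"
  });
  const instance = await res.json();
  return instance.id; // handle used later to stop or reconfigure it
}

async function stopInstance(id: string): Promise<void> {
  await fetch(`${API}/instances/${id}`, { method: "DELETE" });
}
```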

Platform-as-a-Service in the cloud is defined as a set of software and product development tools hosted on the provider's infrastructure. Developers create applications on the provider's platform over the Internet. PaaS providers may use APIs, website portals, or gateway software installed on the customer's computer. Force.com (an outgrowth of Salesforce.com) and Google Apps are examples of PaaS. Developers should know that currently there are no standards for interoperability or data portability in the cloud, and some providers will not allow software created by their customers to be moved off the provider's platform.

In the software-as-a-service cloud model, the vendor supplies the hardware infrastructure, the software product and interacts with the user through a front-end portal. SaaS is a very broad market. Services can be anything from Web-based email to inventory control and database processing. Because the service provider hosts both the application and the data, the end user is free to use the service from anywhere.

Blade Server Technology

BLADE SERVER TECHNOLOGY OVERVIEW

   Blade servers were developed in response to a critical and growing need in the datacenter: the requirement to increase server performance and availability without dramatically increasing the size, cost and management complexity of an ever growing data center. To keep up with user demand and because of the space and power demands of traditional tower and rackmount servers, data centers are being forced to expand their physical plant at an alarming rate.
Enter blade servers. They consolidate power and system level functions into a single, integrated chassis and enable the addition of servers and other components such as communications and peripheral connections via easy to install blades. Blade server technology greatly increases server density, lowers power and cooling costs, eases server expansion and simplifies datacenter management.
Blade servers are not just a new way to package traditional computing components. Rather, they are integrated systems designed to deliver server performance in efficient, high density, easy to expand, and easy to manage units. 


Blade Server Benefits

  • Reduced Space Requirements - greater density provides a 35 to 45 percent improvement compared to tower or rackmounted servers.
  • Reduced Power Consumption and Improved Power Management - consolidating power supplies into the blade chassis reduces the number of separate power supplies needed and reduces the power requirements per server.
  • Lower Management Cost - server consolidation and resource centralization simplifies server deployment, management and administration and improves management and control.
  • Simplified Cabling - rack mount servers, while helping consolidate servers into a centralized location, create wiring proliferation. Blade servers simplify cabling requirements and reduce wiring by up to 70 percent. Power cabling, operator wiring (keyboard, mouse, etc.) and communications cabling (Ethernet, SAN connections, cluster connection) are greatly reduced.
  • Future Proofing Through Modularity - as new processor, communications, storage and interconnect technology becomes available, it can be implemented in blades that install into existing equipment, upgrading server operation at a minimum cost and with no disruption of basic server functionality.
  • Easier Physical Deployment - once a blade server chassis has been installed, adding servers is merely a matter of sliding additional blades into the chassis. Software management tools simplify the management and reporting functions for blade servers. Redundant power modules and consolidated communication bays simplify integration into datacenters and increase reliability.

Key Blade Server Technologies

 

Hardware:

  • Server Blades - high-density computing engines with 1 to 4 processors and memory
  • Blade Chassis - enclosures with integrated power and racks for housing server blades, communication blades and connections to external peripherals and inter-chassis links
  • Communication Blades - integrated blades with Ethernet, InfiniBand and proprietary communication adapters and switches
  • Power and Cooling Systems - centralized power distribution components that power the blade chassis and components
  • Storage Subsystems - hard disk and tape storage subsystems can be inside the blade chassis or external to the chassis. Blade servers can be disk-less since they can boot from external storage in a Storage Area Network or SAN. This configuration can increase reliability and reduce space requirements by partitioning storage resources in one centralized location and computing resources in another. This also eliminates storage redundancies and simplifies storage management.

Software:

  • Software Management Tools - management software that enables server administrators to deploy, control and monitor server resources.
  • Virtualization Software - software that enables maximum usage of server resources by creating virtual server resources that tap physical resources as needed by application usage.

Summary

  Blade servers are efficient solutions for data centers requiring flexible, high-density deployment and management of high performance servers. Blade servers can pack more server performance into less space while reducing cost and complexity, simplifying deployment and management, and improving overall data center performance.

March 13, 2011

How to rank perfectly in search engines

Learning how the Google search algorithm actually works is very tough. None of this is done manually. Nobody at Google sits there and decides which links should come up on the first page and in what order. But there are many important factors that need to be taken care of when posting about something.

Most of the time, the topic you are writing about is already indexed by Google, but this does not mean your posts won't appear on the first pages. There are many keyword tools, and by using basic SEO tricks you can try to fetch better results for your posts.

A few factors I would like to cover here are:
  • Title must be keyword rich.
Make sure when you are writing a post that the title is, first of all, SEO friendly (its length should not be more than about 60 characters) and that it contains the most important keyword you are targeting. For example, if I am targeting the keyword "SERP", it should appear in my title.
  • Post Excerpt or first few lines are very important.
Though this is not the most important factor, the first few lines of your article should definitely be keyword rich. But make sure you do not overdo it and keep adding keyword after keyword; Google's algorithm will trace you and throw you out of the searches.


  • Use Keyword Density Checker Plugin.
In WordPress we have thousands of plugins available that reduce this work to almost nothing; among them you can pick a keyword density checker plugin that will guide you about the density of your keywords (a sketch of what such a checker computes appears at the end of this post).
  • Google AdWords Keyword Tool.
This is very interesting and was conveyed to me by many high-ranking blogging websites. You can check, for any keyword, how many times it is searched monthly in a region. With the help of the results you can then decide whether to use the same keyword or tweak it in some way.
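
As promised above, here is a rough TypeScript sketch of the calculation a keyword density checker performs; the function name and sample text are purely illustrative:

```typescript
// Rough sketch of what a keyword density checker computes: occurrences
// of a keyword as a percentage of total words in a post.
function keywordDensity(text: string, keyword: string): number {
  const words = text.toLowerCase().split(/\s+/).filter(Boolean);
  const target = keyword.toLowerCase();
  // Strip punctuation from each word before comparing to the keyword.
  const hits = words.filter((w) => w.replace(/[^\w]/g, "") === target).length;
  return words.length === 0 ? 0 : (hits / words.length) * 100;
}

// Example: density of "browser" in a short snippet, as a percentage.
console.log(keywordDensity("The Epic browser is a new browser from India", "browser"));
// -> 22.2... (2 of 9 words)
```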