June 11, 2013

Send an SMS and charge your phone's battery


Good news for people on the go: if your mobile battery gets low, you can recharge it by sending just one SMS. It sounds too good to be true, but it is true: a single text message is all it takes to get your phone recharged.

A London-based company, Buffalo Grid, has introduced a solar-powered cellphone charging station that is activated by text message.

A patchy or absent power grid poses a host of problems for rural areas in the developing world, particularly in Africa and Asia, where the use of cellphones is rapidly rising.

The company’s basic technology, which was recently trialled in Uganda, should help tackle this issue, New Scientist reported.

A 60-watt solar panel charges a battery, which extracts power from the panel using a technique called maximum power point tracking (MPPT).

A solar panel’s power output is dictated by environmental conditions, such as temperature and the amount of sunlight, as well as the resistance of the circuits connected to it.

MPPT monitors conditions and changes the resistance to ensure the maximum possible power output at any given time.
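The article doesn't say which MPPT algorithm Buffalo Grid uses; "perturb and observe" is the most common one, and the short Python sketch below illustrates the idea with a made-up 60-watt panel model (none of this comes from Buffalo Grid). The loop nudges the operating voltage, checks whether the extracted power went up or down, and keeps nudging in whichever direction increases it.

    # Toy panel model for illustration: power peaks around 17 V on a 60 W panel.
    def toy_panel_power(volts):
        return max(0.0, 60.0 - 0.5 * (volts - 17.0) ** 2)

    def perturb_and_observe(panel_power, volts=12.0, step=0.1, iterations=200):
        """Minimal perturb-and-observe MPPT loop (illustrative only)."""
        last_power = panel_power(volts)
        direction = +1  # start by nudging the voltage upward
        for _ in range(iterations):
            volts += direction * step
            power = panel_power(volts)
            if power < last_power:
                direction = -direction  # power fell, so reverse the nudge
            last_power = power
        return volts

    print(perturb_and_observe(toy_panel_power))  # settles near 17 V, the maximum power point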

The innovation lies in how the stored power is released to charge a phone. A customer sends a text message, which in Uganda costs 110 shillings, to the device. 

Once the device receives the message, an LED above a socket on the battery lights up, indicating that it is ready to charge a phone.

At the Konokoyi coffee cooperative in Uganda, each text message allows a phone to be charged for 1.5 hours. A fully charged Buffalo Grid unit can last for three days, has up to 10 charging points and charges 30 to 50 phones a day.
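The charging workflow described above (a text arrives, the LED lights, the socket delivers power for 1.5 hours) is simple enough to sketch in code. Everything below is hypothetical: the function names and socket layout are mine, not Buffalo Grid's.

    import time

    CHARGE_WINDOW_SECONDS = int(1.5 * 3600)  # 1.5 hours of charging per text, per the article
    active_sockets = {}  # socket number -> time at which its charging window expires

    def set_led(socket, on):           # stand-in for driving the LED above the socket
        print(f"LED {socket}: {'on' if on else 'off'}")

    def set_socket_power(socket, on):  # stand-in for switching the charging socket
        print(f"Socket {socket} power: {'on' if on else 'off'}")

    def on_sms_received(socket):
        """An incoming text unlocks one socket for the paid charging window."""
        active_sockets[socket] = time.time() + CHARGE_WINDOW_SECONDS
        set_led(socket, True)
        set_socket_power(socket, True)

    def tick():
        """Called periodically; switches off any socket whose window has expired."""
        now = time.time()
        for socket, expiry in list(active_sockets.items()):
            if now >= expiry:
                set_led(socket, False)
                set_socket_power(socket, False)
                del active_sockets[socket]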

To bring the cost down further, Buffalo Grid hopes to co-opt the cellphone network operators into subsidising power for charging the phones, or even making it free.

“When you bring power to phones that don’t have any, people will use them more,” said Buffalo Grid’s Daniel Becerra.

December 27, 2012

Dual-Core Optical Fiber


Optical fibers carry movies, messages, and music at the speed of light, but switching, routing, and buffering of data mostly rely on the use of relatively slow electronic components. Hoping to do away with these information speed bumps, a team of researchers has developed an elegant dual-core optical fiber that can perform the same functions just by applying a minuscule amount of mechanical pressure.



A group of researchers recently unveiled this novel dual-core optical fiber, which carries movies, messages, and music at light speed like a conventional fiber while adding mechanical control over how that light travels.

   These new nanomechanical fibers, which have their light-carrying cores suspended less than 1 micrometer apart from each other, could greatly enhance data processing and also serve as sensors in electronic devices. The researchers describe their new fiber and its applications today in the Optical Society’s (OSA) open-access journal Optics Express.

      “Nanomechanical optical fibers do not just transmit light like previous optical fibers,” said Wei H. Loh, deputy director of the EPSRC Centre for Innovative Manufacturing in Photonics and researcher at the Optoelectronics Research Centre, both at the University of Southampton. “Their internal core structure is designed to be dynamic and capable of precise mechanical motion. This mechanical motion, created by applying a tiny bit of pressure, can harness some of the fundamental properties of light to give the fiber new functions and capabilities.”

The cores in the optical fiber are close enough to each other (less than 1 micron) to be optically coupled – a photon traveling down one core is physically affected by the presence of the nearby second core. By shifting the position of one of the cores by just a few nanometers, the researchers changed how strongly the light responded to this coupling effect. If the coupling effect is strong enough, the light immediately jumps from one core to the other.
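The article doesn't spell out the underlying math, but the textbook coupled-mode picture of two parallel waveguides captures the behavior: launched power oscillates between the cores roughly as sin²(κz), and the coupling coefficient κ falls off roughly exponentially as the cores move apart. The sketch below uses that standard model with made-up constants; it illustrates the principle, not the researchers' actual device parameters.

    import math

    def coupling_coefficient(separation_nm, kappa0=5e3, decay_nm=150.0):
        """Coupling strength (1/m) between two parallel cores; constants are illustrative."""
        return kappa0 * math.exp(-separation_nm / decay_nm)

    def power_in_second_core(separation_nm, length_m):
        """Fraction of the launched light that has hopped to the neighboring core."""
        kappa = coupling_coefficient(separation_nm)
        return math.sin(kappa * length_m) ** 2

    # Reducing the core separation increases how much light hops across.
    for separation in (950, 900, 850):  # core separation in nanometers
        print(separation, round(power_in_second_core(separation, length_m=0.01), 4))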



“Think of having a train traveling down a two-track tunnel and jumping the tracks and continuing along its way at the same speed,” explained Loh. The flexible suspension system of the fiber easily responds to the slightest bit of pressure, the researchers assert, bringing the two cores closer together or moving them apart, thereby controlling when and how the signals hop from one core to the other. The result reproduces, for the first time, the function of an optical switch inside the fiber itself.

    This same capability may also enable optical buffering, which has been very hard to achieve, according to the researchers. “With our nanomechanical fiber structure, we can control the propagation time of light through the fiber by moving the two cores closer together, thereby delaying, or buffering, the data as light,” said Loh. Buffers are essential when multiple data streams arrive at a router at the same time; they delay one stream so another can travel freely. 

      To create the new fibers, the researchers heated and stretched a specially shaped tube of optical glass with a hollow center containing two cores suspended from the inside wall (see image, courtesy of the University of Southampton). The fibers maintain this delicate structure as they are drawn and stretched to the desired thickness.


According to the researchers, this is the first time that nanomechanical dual-core fibers have been fabricated directly. Other types of multicore fibers have been fabricated previously, but their cores are encased in glass and therefore are mechanically locked.

September 27, 2012

Big Data: An Introduction

Big data is an IT buzzword nowadays, but what does it really mean? When does data become big? Big data really isn't new.
Data has been getting bigger for years, and companies have been finding ways to glean insights from their growing stockpiles of data. But it seems that calling it by a new buzzword is what it took to move it front and center for high-level executives. It worked for cloud computing, didn't it?
"Big Data" is a catch phrase that has been bubbling up from the high performance computing niche of the IT market. Increasingly suppliers of processing virtualization and storage virtualization software have begun to flog "Big Data" in their presentations. What, exactly, does this phrase mean?
If one sits through the presentations from ten suppliers of technology, fifteen or so different definitions are likely to come forward. Each definition, of course, tends to support the need for that supplier's products and services. Imagine that.
In simplest terms, the phrase refers to the tools, processes and procedures allowing an organization to create, manipulate, and manage very large data sets and storage facilities. Does this mean terabytes, petabytes or even larger collections of data? The answer offered by these suppliers is "yes." They would go on to say, "you need our product to manage and make best use of that mass of data." Just thinking about the problems created by the maintenance of huge, dynamic sets of data gives me a headache.
An example often cited is how much weather data is collected on a daily basis by the U.S. National Oceanic and Atmospheric Administration (NOAA) to aid in climate, ecosystem, weather and commercial research. Add that to the masses of data collected by the U.S. National Aeronautics and Space Administration (NASA) for its research and the numbers get pretty big. The commercial sector has its poster children as well. Energy companies have amassed huge amounts of geophysical data. Pharmaceutical companies routinely munch their way through enormous amounts of drug testing data. What about the data your organization maintains in all of its datacenters, regional offices and on all of its user-facing systems (desktops, laptops and handheld devices)?
Large organizations increasingly face the need to maintain large amounts of structured and unstructured data to comply with government regulations. Recent court cases have also led them to keep large masses of documents, email messages and other forms of electronic communication that may be required if they face litigation.
Like the term virtualization, big data is likely to become an increasingly common part of the IT world. It would be a good idea for your organization to consider the implications of the emergence of this catch phrase.

September 23, 2012

Which Windows browser is really the best?

                 Your Web browser is probably the most-used application on your PC. You check your email in it, you write in it, you collaborate with coworkers in it, you use it to watch cat videos. With so much at stake, you need a browser that works well for you.


Browser performance
When we looked at the browser contenders previously, we concluded that all the major browsers loaded webpages at similar speeds.
But many new Web apps and services rely heavily on HTML5 and JavaScript, so the browser makers have been spending a lot of development time making sure that their programs render such apps and services quickly and efficiently.
To gauge how well browsers handle HTML5 and JavaScript code, we subjected Chrome, IE, and Firefox to the Sunspider JavaScript benchmark and to the WebVizBench benchmark for HTML5. In addition, we tested on a PC with switchable Nvidia graphics hardware to see how each browser exploited the extra processing horsepower in the graphics card.
Our test PC was an Acer Aspire Timeline Ultra M5 laptop with a 1.7GHz Intel Core i5 processor and 6GB of memory. The switchable graphics system consisted of an integrated Intel HD Graphics 4000 chipset and a dedicated Nvidia GeForce GT 640M graphics card with 1GB of video memory.
In our WebVizBench HTML5 benchmark test, Chrome and IE 9 saw large increases in performance when we switched to the dedicated graphics card instead of the integrated graphics chip.
Chrome achieved an average score of 5502 when we used the integrated graphics system, and hit an average of 5825 when we used the Nvidia graphics card. IE 9 came in second with average scores of 4797 and 5642, respectively; Firefox finished third after posting average scores of 4492 and 5600. Notably, Chrome did almost as well on this test using the integrated graphics hardware as the other browsers did using the more powerful Nvidia graphics card. So if your PC has a weak graphics card, you'll probably get better performance from Chrome than from Firefox or IE.
Our tests for JavaScript performance were less conclusive, with all three browsers rendering the benchmark’s JavaScript code within 15 milliseconds of one another. Internet Explorer 9 eked out a narrow victory, completing the Sunspider benchmark in 200 milliseconds. Chrome 21 finished in second place at 206 milliseconds, and Firefox 15 rounded out the three at 214 milliseconds.
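For quick reference, here are the scores reported above (WebVizBench: higher is better; Sunspider: lower is better):

Browser               WebVizBench (integrated)   WebVizBench (Nvidia)   Sunspider
Chrome 21             5502                       5825                   206 ms
Internet Explorer 9   4797                       5642                   200 ms
Firefox 15            4492                       5600                   214 ms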


Winner: Google Chrome. Browser performance will vary some depending on your PC, but Chrome was a solid all-around performer in our testing.

Ease of use

Current browsers continue the less-is-more trend that began with Google Chrome's introduction in 2008, sporting thin toolbars and minimalist designs so that the page content takes center stage.
Browser toolbars compared: Firefox, Internet Explorer, Chrome.


Internet Explorer 9: In IE 9, Microsoft chose a hyperminimalist approach with an extremely narrow toolbar and few on-screen controls. By default, IE 9 shows the address bar and tabs in the same row, which can make things a little too tight, especially if you frequently have a lot of tabs open at once (you can choose to show the tab bar in a separate row, though). On the far-right edge of the toolbar lie three buttons that take you to your browser homepage, show your favorites, or toggle various settings.
One nicety in IE 9 is its unobtrusive method of providing notifications: Instead of popping up an alert box that interrupts your browsing, it displays the message in a bar at the bottom of the browser window, where you can address it when you're good and ready. In addition, IE 9 shows you a download's progress via its taskbar icon, which fills in with green as you download a file.
Chrome 21: Google has stuck with the same basic look and feel for Chrome since releasing it in 2008. It has no title bar, and by default it shows only the back, forward, and reload buttons, as well as the combined search/address bar and a button on the far right that opens a tools menu. The start screen helps you reach your most visited sites, as well as any Web apps you've added via the Chrome Web Store. When you download a file, it appears in a gray bar that lives at the bottom of the window.
Click the orange button in the upper-left corner of any Firefox browser window to access frequently used commands.


Firefox 15: While most other browsers now feature a combined search and address bar, Mozilla keeps the two separate in Firefox 15. Whether separate fields are better than combined ones is a matter of personal preference. (Note that Firefox does let you search from the address bar and remove the separate search box if you prefer.)
One convenient feature of Firefox allows you to switch between search engines readily: If you want to use Bing instead of Google, for instance, you can do that with two clicks. Chrome permits you to switch between search providers, too, but requires a quick tweak in the Settings screen. With IE you need to install an add-on for each search provider (other than Bing) you want to add.
Like other current Windows browsers, Firefox doesn't show a menu bar by default; the various menu options live in a single menu that pops up when you click the orange 'Firefox' button in the upper-left corner of the window.


Winner: Tie. In truth, you won't find much differentiation between browser interfaces these days. All the prominent ones work the same, save for a few fairly minor differences.

February 26, 2012

What is DDR4?


Do almost anything often enough, and the individual steps become routine. It certainly feels that way when I’m building computers: There are days I feel like I could do it in my sleep (and, truth be told, there are days I probably have). But when you’re selecting parts, most of the components require at least a little analysis. The CPU and the motherboard are major, and have to be considered together. Of course storage is a big deal — you always want enough and you always want it to be fast, and optimizing those qualities with affordability can be a headache (especially at the moment). For a gaming system, the right video card is essential. And in addition to being your system’s face to the world, the case will affect both the way you build the computer and how much you can expand it later. And… that’s it. Right? Am I forgetting anything?
Oh yeah, RAM. Sorry about that. But I admit it: When I’m shopping for memory, I pretty much turn off my brain. It’s not intentional, and it’s not that I think RAM isn’t important. It’s just that it’s the easiest kind of hardware to shrug off. Most of us, even when we’re in our most rabid enthusiast mode, roll our eyes at the arcane industry of RAM timings. Provided the DIMMs’ voltages and speeds are compatible with the motherboard, everything else usually seems academic. Oh, if I’m being super picky about components, I might pay a bit more attention, but in reality that’s rarely necessary. These days, you can take it for granted that basic DDR3 is going to be what you want.
It occurred to me last week, however, that we’re rapidly approaching another time when that might not be the case. As processors and other hardware get more powerful, it becomes more difficult — and important — for RAM to keep up. Over the last 20 years or so we’ve seen single data rate (SDR) memory give way to double data rate (DDR) memory, which transfers data on both the rising and falling edges of the clock signal. And over the last decade, we’ve slowly progressed from DDR to DDR2 (with an internal clock running at half DDR’s rate) to DDR3 (operating at half DDR2’s rate), constantly upping performance while reducing power usage along the way. But what about the next logical step in this particular cycle: DDR4?
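The naming convention makes the progression concrete: a module's number is its transfer rate in megatransfers per second, and on a standard 64-bit DIMM each transfer moves 8 bytes. The quick Python calculation below uses common nominal speed grades (DDR3-1600, plus the DDR4 figures mentioned later in this post) purely as a back-of-the-envelope illustration.

    def peak_bandwidth_gb_s(megatransfers_per_second, bus_width_bits=64):
        """Peak theoretical bandwidth of one module in GB/s.

        DDR moves data on both clock edges, so the MT/s figure already counts
        two transfers per I/O clock; each transfer carries bus_width_bits bits.
        """
        bytes_per_transfer = bus_width_bits // 8
        return megatransfers_per_second * 1e6 * bytes_per_transfer / 1e9

    # Nominal speed grades: where DDR3 commonly tops out, where DDR4 is expected
    # to start, and where the standard is expected to eventually reach.
    for name, mts in [("DDR3-1600", 1600), ("DDR4-2133", 2133), ("DDR4-4266", 4266)]:
        print(f"{name}: {peak_bandwidth_gb_s(mts):.1f} GB/s peak per module")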
It’s coming, though when — and whether — you’ll see it for use in your home computer is still somewhat up in the air.
Samsung announced a year ago that it had developed the first 2GB DDR4 modules using a 30nm process technology, achieving with them transfer rates of 2.133Gbps at 1.2 volts. Three months later, Hynix came up with its own DDR4-2400 modules operating at the same voltage.
The website DDR4.org, which is (not surprisingly) devoted to information about the upcoming memory standard, is a bit light on substantive information but fleshes out a few of the details. Among them: Data transfer rates would start at 2,133MT/s (about where DDR3 is leaving off) and could eventually double to 4,266MT/s, and the initial operating voltage is expected to be about 1.2 volts (though later chips might be able to use as little as 1.05 volts). The site goes on to say that “other sources have noted that the [power] consumption is 40 percent less for DDR4 than for an equivalent DDR3 chip.” Not bad. Another nugget of useful information on the site is the mention that DDR4 might make use of “pseudo open drain” technology, which was adapted from GDDR memory.
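That roughly 40 percent figure is broadly consistent with the voltage drop alone, since dynamic power scales roughly with the square of the supply voltage and standard DDR3 runs at 1.5 volts. A quick sanity check (a rough approximation that ignores everything except voltage):

    DDR3_VOLTS = 1.5  # standard DDR3 supply voltage
    for ddr4_volts in (1.2, 1.05):
        # Dynamic power goes roughly as V^2 (P ~ C * V^2 * f), all else being equal.
        reduction = 1 - (ddr4_volts / DDR3_VOLTS) ** 2
        print(f"{ddr4_volts} V: roughly {reduction:.0%} lower dynamic power than DDR3")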

February 09, 2012

Laser-switched magnetic storage



1,000 times faster than current hard drives

Magnetic nanoisland storage

Hold onto your hats: An international team of scientists working in England, Russia, Switzerland, and the Netherlands has completely rewritten the rules of magnetic storage. Instead of switching a magnetic region using a magnetic field (like a hard drive head), the researchers have managed to switch a ferrimagnetic nanoisland using lasers. Storing magnetic data using lasers is up to 1,000 times faster than writing to a conventional hard drive.
   To achieve this, the researchers created ferrimagnetic nanoislands (pictured above) out of an alloy of iron and gadolinium (a rare earth metal). When these nanoislands are struck by a 60-femtosecond laser (0.06 picoseconds, or 0.00006 nanoseconds) their magnetism switches (pictured below). If you know a little about the science of magnetism, you’ll know that this behavior is rather funky as we usually associate heat with destroying magnetism, rather than switching it.
How magnetic nanoislands switch with a femtosecond laser

In total, it takes a nanoisland five picoseconds to switch state (from binary 0 to binary 1). By comparison, it takes about one nanosecond to switch the value of a magnetic region on a hard drive platter. In other words, we are looking at a storage technique that’s about 1,000 times faster than current hard drives. Instead of storing hundreds of megabytes per second, the use of femtosecond lasers would enable the transfer of gigabytes or terabytes per second.
That’s not all! Current hard drive platters have a density of around three terabits per square inch, but nanoislands are so small that you could cram in 53 terabits per square inch. Instead of a terabyte per platter, we would be looking at 15 terabytes per platter and 45TB hard drives. Apparently femtosecond lasers are also more efficient than spinning a hard drive platter at thousands of RPM, though the researchers don’t give any hard figures.
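As a rough sanity check on those capacity figures (the three-platter drive is my assumption, implied by the jump from 15 TB per platter to 45 TB per drive):

    current_density = 3       # terabits per square inch, today's platters (per the article)
    nanoisland_density = 53   # terabits per square inch for nanoisland storage
    current_platter_tb = 1    # roughly a terabyte per platter today

    ratio = nanoisland_density / current_density
    print(f"areal density ratio: ~{ratio:.0f}x")                     # ~18x
    print(f"scaled platter: ~{current_platter_tb * ratio:.0f} TB")   # ~18 TB, same ballpark as the 15 TB quoted
    print(f"assumed 3-platter drive: {3 * 15} TB")                   # matches the 45 TB figure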
Finally, though, it’s worth noting that the scientists only discuss how to write data, not read it. Ferrimagnets, like antiferromagnets, don’t generally have a magnetic field. In other words, York’s nanoisland storage medium can’t simply replace a hard drive platter. At the moment, data is probably read using a scanning tunneling electron microscope — and for the time being, those are still very much room-sized devices.