Thursday, September 6, 2007

E-paper with Photonic Ink

Photonic crystals are being used by a Toronto startup to create commercial devices that offer better color and resolution than other flexible displays.

By Duncan Graham-Rowe



Crystal light: Photonic crystals made out of silica beads (shown as gray balls) measuring 200 nanometers across are embedded in a spongy electroactive polymer and sandwiched between transparent electrodes. When a voltage is applied, an electrolyte fluid is drawn into the polymer composite, causing it to swell (shown as yellow in the middle image). This alters the spacing of the crystals, affecting which wavelengths of light they reflect. When the spacing is carefully controlled, the pixel can be made to reflect any color in the visible spectrum.
Credit: Nature Photonics



Scientists in Canada have used photonic crystals to create a novel type of flexible electronic-paper display. Unlike other such devices, the photonic-crystal display is the first with pixels that can be individually tuned to any color.

"You get much brighter and more-intense colors," says André Arsenault, a chemist at the University of Toronto and cofounder of Opalux, a Toronto-based company commercializing the photonic-crystal technology, called P-Ink.

Several companies, including MIT startup E Ink and French firm Nemoptic, have begun producing products with e-paper displays. E Ink's technology uses a process in which images are created by electrically controlling the movement of black or white particles within tiny microcapsules. Nemoptic's displays are based on twisting nematic liquid crystals. The benefit of such screens is that compared with traditional displays, they are much easier to view in bright sunlight and yet only use a fraction of the power.

While the quality and contrast of black-and-white e-paper displays are almost on par with real paper, color images have lagged because each pixel is limited to a single primary color. To display a range of colors, pixels must be grouped in trios: in each trio, one pixel is filtered red, another green, and the third blue. Varying the intensity of each pixel within the trio generates different colors. But Arsenault says that such filtered systems lack intensity. For example, if one wants to make the whole screen red, then only one-third of the pixels will actually be red.

With P-Ink, it's a different story. "We can get 100 percent of the area to be red," Arsenault says. This is because each pixel can be tuned to create any color in the visible spectrum. "That's a three-times increase in the brightness of colors," he says. "It makes a huge difference."

P-Ink works by controlling the spacing between photonic crystals, which affects the wavelengths of light they reflect. Photonic crystals are the optical equivalent of semiconductor crystals. While semiconductor crystals influence the motion of electrons, photonic crystals affect the motion of photons.

Although there has recently been a lot of research into using photonic crystals for everything from optical fibers to quantum computers, the underlying phenomenon is ancient. For example, photonic crystals are responsible for giving opals their iridescent appearance. "There are many organisms that have coloration that doesn't come from a dye," says Arsenault. "This is the basis of our technology."

With P-Ink, each pixel in a display consists of hundreds of silica spheres. Each of these photonic crystals is about 200 nanometers in diameter and embedded in a spongelike electroactive polymer. These materials are sandwiched between a pair of electrodes along with an electrolyte fluid. When a voltage is applied to the electrodes, the electrolyte is drawn into the polymer, causing it to expand.

The swelling pushes the silica beads apart, changing the spacing of the crystal lattice and thus the color it reflects. "As the distance between them becomes greater, the wavelength reflected increases," says Arsenault. P-Ink is also bistable, meaning that once a pixel has been tuned to a color, it will hold that color for days without a power source. "And the material itself is intrinsically flexible," Arsenault says.
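
To get a feel for the numbers, the relationship between bead spacing and color can be estimated with the standard Bragg-Snell formula for colloidal crystals. The sketch below is only illustrative -- it is not Opalux's model, and the effective refractive index and swelling ratios are assumptions.

# Illustrative sketch (not Opalux's model): the peak reflected wavelength of a
# colloidal photonic crystal is commonly estimated as
#   lambda = 2 * d * sqrt(n_eff**2 - sin(theta)**2)
# where d is the lattice-plane spacing. Swelling increases d, shifting the
# reflected peak toward red. All numbers below are assumptions.
import math

def reflected_wavelength_nm(plane_spacing_nm, n_eff=1.35, angle_deg=0.0):
    """Peak reflected wavelength (nm) for a given plane spacing, first-order Bragg."""
    return 2 * plane_spacing_nm * math.sqrt(n_eff**2 - math.sin(math.radians(angle_deg))**2)

unswollen_d = 0.816 * 200          # (111) plane spacing of close-packed 200 nm spheres, ~163 nm
for swell in (1.0, 1.2, 1.4):      # hypothetical swelling ratios
    d = unswollen_d * swell
    print(f"swelling x{swell:.1f}: spacing {d:.0f} nm -> peak ~{reflected_wavelength_nm(d):.0f} nm")

At these assumed values, a 40 percent swell walks the reflected peak from about 440 nanometers (blue) to about 620 nanometers (red), spanning most of the visible spectrum.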

The technology was developed with Geoffrey Ozin and Daniel Puzzo, among others, at the University of Toronto and Ian Manners at the University of Bristol, in the UK. The group demonstrated how 0.3-millimeter pixels--about the same size as the pixels in many LCD displays--can independently generate a range of colors. Their results are published in the August issue of the journal Nature Photonics. "One single material can give all the necessary colors for a display without filters," says Arsenault.

In fact, by making the crystals slightly larger, it's also possible to take them beyond the visible-light range and into infrared, says Arsenault. The effects in this range would be invisible to the human eye but could be used to make smart windows that control the amount of heat that passes through them, he says.

This is a step forward, says Jacques Angele, a cofounder of Nemoptic. "The aim of these color-display technologies is to be comparable with paper. Unfortunately, the brightness of the [other technologies] today is limited to about 30 percent of paper."

"It's a spectacular innovation," says Edzer Huitema, chief technology officer of the Dutch firm Polymer Vision, based in Eindhoven. Even traditional screens, such as cathode-ray tubes, LCDs, and plasma displays, use three or even four differently colored pixels to generate color. "It's a major limitation for all color-display technologies," Huitema says. When the color of each pixel is controlled, not only does the color quality increase, but the resolution should also improve by a factor of three.

There is one display technology, however, that can tune individual pixel color, says Angele. Both Kent Displays, in Ohio, and Japanese electronics firm Fujitsu have been taking this approach, which, in essence, involves placing the three colored pixels on top of each other. But besides being technically difficult and expensive, this approach reduces the brightness of the colors, Angele says. "It's difficult to have an optical stack without optical losses."

Arsenault predicts that Opalux will have the first products on the market within two years, probably in the form of advertising displays. But, he says, it will be a long while before P-Ink will be in a position to completely replace traditional displays. "The caveat is that we are not at video speeds," Arsenault says.

Currently, the P-Ink system can switch pixels in less than a second, which is on par with other e-paper displays. "We're still early in our development, and there's a lot of room for [improving] the material and optimizing its performance," says Arsenault.
Source: http://www.technologyreview.com

A Better Way to Make Hydrogen?

A Purdue researcher claims aluminum alloys could make fuel-cell vehicles practical.

By Kevin Bullis

Gassing up: This aluminum alloy quickly pulls oxygen from water, in the process forming aluminum oxide and releasing hydrogen gas. The hydrogen could be used in place of gasoline in cars.
Credit: Jerry Woodall, Purdue University

A new process for using aluminum alloys to generate hydrogen from water could make fuel-cell vehicles more practical, says Jerry Woodall, a professor of electrical and computer engineering at Purdue.

Hydrogen fuel cells are attractive because they produce no harmful emissions, but hydrogen gas is hard to transport, and hydrogen vehicles have a limited range because it's difficult to store large amounts of hydrogen onboard. Many researchers are developing methods for storing more hydrogen, including packing it into carbon nanotubes or temporarily storing it in chemical compounds. Woodall's solution is to store hydrogen as water, splitting hydrogen from oxygen only when it's needed to power the vehicle.

Earlier this year, Woodall reported successfully generating significant amounts of hydrogen using a combination of aluminum and gallium. In those experiments, however, the alloy contained mostly gallium, which both limited the hydrogen-generating capacity of the material and kept costs high. At a nanotechnology conference on Friday, Woodall will present new work that shows that the process succeeds with an alloy containing 80 percent aluminum. This could make the system far more practical by reducing the amount of expensive gallium while increasing the amount of active material.

Woodall's process works because of aluminum's strong affinity for oxygen, which causes the metal to break water apart, forming aluminum oxide and releasing hydrogen. This basic chemical process is, of course, well known, but the problem has been that as soon as aluminum is exposed to air, it quickly forms a thin layer of aluminum oxide that seals off the bulk of the aluminum and prevents it from reacting with water. Woodall's insight, says Sunita Satyapal, who heads the Department of Energy's (DOE) hydrogen-storage program, is to use gallium to prevent this layer from completely sealing off the aluminum. Although the molecular mechanisms are still not understood, it's known that the gallium causes gaps in the oxide layer that allow the aluminum to react quickly with the oxygen in water, but not with the oxygen in air.

Woodall envisions a system in which aluminum pellets would be delivered to fueling stations where drivers would load about 50 kilograms of pellets and 20 kilograms of water into separate containers, with the two mixed as needed to generate hydrogen and aluminum oxide. (This would provide the equivalent of about 60 kilograms of gasoline, Woodall says.) The aluminum oxide can be recycled employing the same process used for aluminum cans, and the gallium can be easily separated from the aluminum oxide and used again.
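
The chemistry behind those figures is straightforward to estimate. The sketch below is a back-of-envelope calculation only -- it assumes the pellets are 80 percent aluminum by weight, that the gallium is fully recovered, and that the reaction 2 Al + 3 H2O -> Al2O3 + 3 H2 goes to completion.

# Back-of-envelope sketch (not Woodall's figures): hydrogen yield of the
# aluminum-water reaction  2 Al + 3 H2O -> Al2O3 + 3 H2,
# assuming the gallium only enables the reaction and is fully recovered.
M_AL, M_H2O, M_H2 = 26.98, 18.02, 2.016   # molar masses, g/mol

def hydrogen_yield_kg(pellet_kg, aluminum_fraction=0.80):
    """H2 produced (kg) and water consumed (kg) for a load of Al-Ga pellets."""
    al_kg = pellet_kg * aluminum_fraction
    h2_kg = al_kg * (3 * M_H2) / (2 * M_AL)      # ~0.11 kg H2 per kg Al
    water_kg = al_kg * (3 * M_H2O) / (2 * M_AL)  # ~1.0 kg H2O per kg Al
    return h2_kg, water_kg

h2, water = hydrogen_yield_kg(50)   # the 50-kilogram pellet load described above
print(f"~{h2:.1f} kg H2, consuming ~{water:.0f} kg water")

Under those assumptions, a 50-kilogram load yields roughly 4.5 kilograms of hydrogen while consuming about 40 kilograms of water, which is why recycling the water produced by the fuel cell, as Woodall proposes, matters for keeping the onboard water tank small.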

But the electricity needed to recycle the aluminum could be a problem, since it would be a major source of pollution unless it comes from clean sources such as solar or wind. Also, Satyapal says that the energy efficiency of the process falls short of DOE goals.

The DOE, together with oil and car companies, has set goals for the amount of hydrogen that should be stored onboard a vehicle, aiming to provide the same range as gasoline-powered cars without changing vehicle designs or reducing cargo and passenger space. Woodall says that he can meet the goals for cars and other light vehicles, in part by recycling water produced by the fuel cells. The DOE, however, estimates that Woodall's process would take up too much room because, among other reasons, recycling water will likely not be practical, Satyapal says.

Woodall is working with AlGalCo, a startup based in West Lafayette, IN, to commercialize the process. The company's initial products will be fuel-cell generators that run on hydrogen produced with a version of his aluminum alloy.

Source: http://www.technologyreview.com

A New Type of Molecular Switch

IBM researchers report a potential breakthrough in molecular electronics.

By Duncan Graham-Rowe

Molecular switch: The tip of a scanning tunneling microscope (shown in silver) probes a cross-shaped molecular switch to turn on and off a neighboring molecule. By inducing voltages, the probe causes two hydrogen atoms within the naphthalocyanine molecule to flip from one orientation to another.
Credit: IBM Zurich Research Labs





IBM scientists have created a novel molecular switch that is able to turn on and off without altering its shape. While such a switch is still years from being used in working devices, the scientists suggest that it does show a potential way to link together such molecular switches to form molecular logic gates for future computers.

Researchers during the past decade have been working to use individual molecules as electronic switches in the hope that they will eventually help make electronic devices even smaller and more powerful. (See "Molecular Computing.") But so far, such efforts have involved molecular processes that in some way deform the geometric shape of the molecule, says Peter Liljeroth, a researcher at IBM Zurich Research Laboratory, in Switzerland.

The problem is that changing a molecule's shape makes it difficult to link such molecules together as switches. If a researcher wants to make something more complicated than just a molecular switch, such as a logic gate, then he or she has to be able to couple the switches together, says Liljeroth. "Having a single molecular switch is not really going to be useful for anything."

Liljeroth and his colleagues exploit atomic changes that take place at the center of a molecular cage, which does not alter the molecule's overall structure. In the latest issue of the journal Science, the group shows how its molecule can be electrically switched on and off. The researchers also demonstrate how three of these molecules can be made to work together when placed next to one another. "Injecting a current in one molecule will switch the state of another," says Liljeroth.

"The report constitutes an outstanding and remarkable piece of fundamental science," says Fraser Stoddart, director of the California Nanosystems Institute at the University of California, Los Angeles, who also works on molecular switching.

The IBM molecule is a naphthalocyanine, a member of a class of compounds used in paints and organic optoelectronics because of their intense bluish-purple color. The molecule forms a cross shape with two opposing hydrogen atoms on either side of a central square void.

When the researchers placed the molecule on an ultrathin substrate and applied a sufficient voltage, the opposing hydrogen atoms flipped from the sides of this central square to its top and bottom, or vice versa. Yet regardless of which of these two states the molecule is in, its geometry remains constant.

When a lower voltage is applied, it's possible to read the state of the switch by measuring the current flowing through it. "A low voltage does not switch it, so we can read the state of the molecule," says Liljeroth.
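
In other words, a large voltage pulse writes the switch and a small bias reads it. The toy model below illustrates only that read/write logic; the threshold and the two current values are invented, not IBM's measurements.

# Toy model (illustrative only, not IBM's data): the switch has two tautomer
# states. A voltage pulse above some threshold flips the hydrogen pair; a
# small read bias leaves the state alone but yields a state-dependent current.
WRITE_THRESHOLD_V = 1.5               # hypothetical switching threshold
READ_CURRENT_PA = {0: 2.0, 1: 6.0}    # hypothetical currents for the two states

class NaphthalocyanineSwitch:
    def __init__(self):
        self.state = 0

    def apply_voltage(self, volts):
        """Return the measured current; flip the state if the pulse is large enough."""
        if abs(volts) >= WRITE_THRESHOLD_V:
            self.state ^= 1           # tautomerization toggles the state
        return READ_CURRENT_PA[self.state]

s = NaphthalocyanineSwitch()
print(s.apply_voltage(0.1))   # read: low bias, state unchanged
print(s.apply_voltage(2.0))   # write: pulse toggles the state
print(s.apply_voltage(0.1))   # read again: current now reflects the new state

Coupling comes in when the write pulse applied to one molecule toggles the state of a neighbor, which is what the three-molecule demonstration described above shows.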

"It's beautiful science," says Mark Reed, a physicist at Yale University, in New Haven, CT, who studies molecular devices. "The fact that they have this reversible change of the structure is very nice."

IBM's discovery was made by accident. "What we were actually investigating was the molecular vibration caused by adding electrons to the molecule," says Liljeroth. But in doing so, the researchers noticed this flipping of hydrogen atoms, a molecular reaction known as tautomerization.

To switch the molecule, the group used a scanning tunneling microscope (STM) operating at extremely low temperatures and in a vacuum. However, the reaction is driven electrically, albeit at picoamps, so the STM is not necessary for this reaction to take place, says Liljeroth. But the low temperature could be a major obstacle to making the process practical.

For this particular molecule, the temperature had to be maintained at just five kelvin (about -268 °C) in order for the reaction to occur in a controlled way. "The reaction still occurs at room temperature," says Liljeroth. "But at room temperature, it would happen spontaneously." Nevertheless, he says, the potential is there to find new molecules that exhibit this behavior at higher temperatures in the hope of eventually building logic devices.

Demonstrating that one molecular switch can be turned on and off by applying a current to a neighboring molecule is a first step toward such logic. "The ability to apply a voltage to one molecule and cause tautomerization of a neighboring one has interesting implications for logic devices," says Stoddart. But, he says, the temperature constraint remains a huge challenge.

Stoddart also rejects the IBM group's dismissal of molecular switches that change shape; he argues that such molecules are at a much more advanced stage and can operate at room temperature. "I find it galling that scientists in the field of molecular electronics continue to be unfairly dismissive of research by others that is much more technologically advanced than their own, and yet also has a very sound theoretical and experimental basis to it."

Yale's Reed is also skeptical about the practical implications of the IBM finding. Any talk of turning this reaction into a device amounts to "excessive hyperbole" at this stage, he says. "It's like saying we have discovered silicon semiconductors, therefore we can make a Pentium."
Source: http://www.technologyreview.com

Craig Venter's Genome

The genomic pioneer bares his genetic code to the world.

By Emily Singer

Personal genomes: Genomics pioneer Craig Venter (above) has sequenced his entire genome and released it to the world.

Credit: J. Craig Venter Institute

Five years ago, Craig Venter let out a big secret. As president of Celera Genomics, Venter had led the race between his company and a government-funded project to decode the human genome. After leaving Celera in 2002, Venter announced that much of the genome that had been sequenced there was his own. Now Venter and colleagues at the J. Craig Venter Institute have finished the job, filling in the gaps from the initial sequence to publish the first personal genome.

His newly released genome, published today in the journal PLoS Biology, differs from both of the previous versions of the human genome (one from Celera, the other from the Human Genome Project) in that it details all of the DNA inherited from both mother and father. Known as a diploid genome, this allows scientists to better estimate the variability in the genetic code. (In a genome sequence generated from a conglomerate of different individuals, some variations are lost in the averaging.) Within the genome of 2.810 billion base pairs, scientists found 4.1 million variations among the chromosomes; 1.2 million of these were previously unknown. Of the variations, 3.2 million were single nucleotide polymorphisms, or SNPs, the most well-characterized type of variation, while nearly one million were other kinds of variants, including insertions, deletions, and duplications.
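
For a sense of scale, the reported counts work out to roughly one variant for every 700 base pairs. The short calculation below simply restates the figures above; the one point it adds is that non-SNP variants span multiple bases apiece, which is why the person-to-person difference Venter cites later in the interview is closer to 1 percent than SNP counts alone would suggest.

# Quick arithmetic from the figures above (no new data, just the reported counts).
genome_bp = 2.81e9
variants  = 4.1e6
snps      = 3.2e6

print(f"one variant per ~{genome_bp/variants:,.0f} bp")     # ~1 per 685 bp
print(f"one SNP per ~{genome_bp/snps:,.0f} bp "             # ~1 per 878 bp
      f"(~{snps/genome_bp:.2%} of bases)")
# Insertions, deletions, and duplications each span multiple bases, which is
# why the overall divergence between two people is severalfold larger than the
# ~0.1 percent suggested by SNP counts alone.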

Venter's genome will join that of another genomic pioneer, James Watson, codiscoverer of the structure of DNA. (See "The $2 Million Genome.") Announced in June, Watson's genome was sequenced by 454, a company based in Branford, CT, that's developing next-generation sequencing technologies. (For more on 454's technology, see "Sequencing in a Flash.")

Venter's and Watson's genomes are likely just the first in an upcoming wave of personal genomes, a crucial step in the advent of personalized medicine: the ability to tailor medical treatments to an individual's genetic profile. (See "The X Prize's New Frontier: Genomics.") Venter has already explored some of his genome, discovering that he carries genetic variations that put him at increased risk for Alzheimer's disease, heart disease, and macular degeneration. He says that he's been religiously taking statins, cholesterol-lowering drugs, ever since.

Venter talks with Technology Review about what lies ahead for his genome.

Technology Review: Why did you decide to embark on this project?

Craig Venter: The genome we published at Celera was a composite of five people. To put it together, it became clear that we had to make some informatics compromises--we had to leave out some of the genetic variation. We knew the only way to truly understand the genome would be to have the genome of one individual. Rather than starting from scratch, we decided to take what we had from the Celera genome and add more sequence. The goal was to get an accurate reference sequence from a single individual.

TR: How does your genome sequence add to what we know from the Human Genome Project?

CV: The government labs sequenced and assembled a composite haploid genome from several individuals [meaning it included a DNA sequence from only one of each chromosome pair]. There was the assumption back then that having half of the genome was all that was needed to understand human complexity. But it's become clear that we need to see the composite of the sets of chromosomes from both the mother and father to see the variation in the genome.

This genome has all the insertions and deletions and copy-number differences. That gives us a very different view.

TR: What's the most exciting finding so far?

CV: For me, the most exciting finding is that human-to-human variation is substantially higher than was anticipated from versions of the human genome done in 2001. In fact, it might be as much as tenfold higher: rather than being 99.9 percent identical, it's more like 99 percent identical. It's comforting to know we are not near-identical clones, as many people thought seven years ago.

TR: How will scientists use your genome sequence?

CV: It will serve as a reference genome. This is probably the first and last time anyone will spend the time, money, and energy to sequence a diploid genome using highly accurate Sanger sequencing. Future genomes, like those from 454 or George Church's Personal Genome Project, will be layered onto [existing] data, adding to the completeness of this genome. (See "The Personal Genome Project.") [The traditional Sanger sequencing method, used for the Human Genome Project and to generate Venter's sequence, generates longer pieces of DNA than do newer methods, such as that used by 454, making it easier to assemble the overlapping pieces.]

TR: James Watson released a version of his own genome earlier this summer. How is yours different?

CV: There has been nothing published yet on his genome, so we have no idea. But as I understand it, in contrast to really assembling a genome, they sequenced short fragments that are layered onto the sequence assembled at the NIH. So there are a lot of technical differences, but until it's published, we won't really know.

TR: You've had sections of your genome in the public domain for several years now. Any second thoughts about putting the entire high-quality sequence out there?

CV: No. And I applaud Watson for doing this as well. A key part of the message here is that people should not be afraid of their genetic codes or afraid to have other people see them. That's in contrast to the notion that this is dangerous information that should be kept under lock and key. We're not just our genetic code. There is very little from the code that will be 100 percent interpretable or applied.

TR: Have you searched your genome for disease-related mutations?

CV: Yes. I have a book coming out in October called A Life Decoded where I look at many variants and try to put them in context of my life. For example, I have a high statistical probability of having blue eyes, but you can't be 100 percent sure from my genome that I have them. The message is that everything in our genomes will be a statistical uncertainty. We're really just in the first stages of learning that.

Previous published genomes don't represent anyone, so we can't interpret human biology based on these. But now we can start to make human-genome inferences. We'll need tens of thousands to millions of genomes to put together a database that would allow interpretation of multiple rare variants and what they mean. That will take decades.

TR: How much did the project cost?

CV: The goal was not to see how cheaply we could sequence a genome; it was to see how accurately we could do it. It was clearly a multimillion-dollar project over the years.

Source: http://www.technologyreview.com

Saturday, September 1, 2007

Mapping Wildfires

NASA is using a new thermal-imaging sensor to track the fires in Santa Barbara.
By Brittany Sauser

Fire map: NASA engineers have developed a new thermal-imaging sensor that can accurately map a wildfire's behavior and pinpoint hot spots. The picture above is a composite of various images taken of a fire near Zaca Lake in Santa Barbara County, CA, on August 16, 2007. The researchers used Google Earth to visualize the data. The bright areas represent the fire’s hot spots.
Credit: NASA


At the onset of a wildfire, the United States Forest Service must deploy its resources as quickly and efficiently as possible to contain and stop the fire. Part of this process involves flying manned missions over the fire to map its location, hotspots, and the direction in which it's spreading. Now a new thermal-imaging sensor developed by NASA Ames Research Center (ARC) is making it easier for researchers to get an accurate picture of the ongoing fires in Santa Barbara. The system is still in development, but the researchers say that it could ultimately save resources, property, and lives.
The U.S. Forest Service and NASA are in the midst of testing the new technology on a remotely piloted unmanned aerial vehicle (UAV) flying over wildfires in California. The flight missions began on August 16, capturing images of a fire near Zaca Lake in Santa Barbara County, and they will continue once a week through September. The purpose of the missions is not only to test the sensor, but also to demonstrate the benefits of UAVs in wildfire tracking, their ability to handle and process data, and their ability to communicate this in real time, via satellite, to receiving stations on the ground.
The key to fighting wildfires is accurately knowing the positional information of a fire--not just taking an image of the fire, but understanding where the fire is and how it's behaving. "If you have one pixel [in an image] that shows there is a thermal heat source there, you need to know the latitude and longitude of that pixel," says Everett Hinkley, the National Remote Sensing Program manager at the U.S. Forest Service and coprincipal investigator on the project. To do so, the researchers use a scanner with a highly sensitive thermal mapping sensor designed by NASA.
The Forest Service's current system is similar but much less sophisticated: it measures only two portions of the light spectrum. The lack of data on other parts of the spectrum hinders the system's ability to precisely distinguish temperature gradients. The image files captured by the sensor must then be put on a "thumb drive" and dropped out of the aircraft through a tube as it flies near the command station, or the aircraft must land so that the data can be handed to a colleague who performs the analysis.
The new equipment includes a 12-channel spectral sensor that runs from the visible spectrum into the reflected infrared and mid-infrared spectrum. Two of these channels were built specifically for the thermal portion of the spectrum and were highly calibrated to be able to distinguish hot spots. This is what makes it an effective wildfire imaging sensor, says Vince Ambrosia, an engineer at NASA ARC and the principal investigator of the fire missions.
The collection of images taken by the scanner is then processed onboard the aircraft in real time, and the data is automatically sent via satellite to a ground station, where it is incorporated into a geographic information system or map package. For the current fire missions, researchers are using Google Earth as their visualization tool. The data is displayed as an array of colors based on intensity: temperature ranges might be shown as red, green, and blue, for example, with the hottest objects colored red. The system's ability to continuously send images of the fire allows researchers to better predict its next move, which helps firefighters determine where to deploy resources.
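
As a concrete illustration of that last step, the snippet below writes a handful of georeferenced thermal readings into a KML file that Google Earth can open, coloring the hottest points red. It is only a sketch of the idea -- the sample coordinates, temperatures, and color thresholds are invented, and NASA's actual processing chain is far more involved.

# Minimal sketch (not NASA's pipeline): turn georeferenced thermal readings
# into a Google Earth KML file, coloring the hottest pixels red.
hot_pixels = [                       # (latitude, longitude, temperature in kelvin)
    (34.78, -119.95, 560.0),
    (34.79, -119.93, 420.0),
    (34.77, -119.96, 330.0),
]

def color_for(temp_k):
    """KML colors are aabbggrr hex strings; hottest -> red, coolest -> blue."""
    if temp_k > 500: return "ff0000ff"   # red
    if temp_k > 400: return "ff00ff00"   # green
    return "ffff0000"                    # blue

placemarks = "\n".join(
    f"  <Placemark><Style><IconStyle><color>{color_for(t)}</color></IconStyle></Style>"
    f"<Point><coordinates>{lon},{lat},0</coordinates></Point></Placemark>"
    for lat, lon, t in hot_pixels
)

kml = ('<?xml version="1.0" encoding="UTF-8"?>\n'
       '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>\n'
       f"{placemarks}\n</Document></kml>")

with open("fire_hotspots.kml", "w") as f:   # open this file in Google Earth
    f.write(kml)

Each placemark here is a single point; the real system overlays full imagery, but the georeferencing and color-coding steps follow the same idea.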
The entire sensor package weighs less than 300 pounds and fits under the wing of an unmanned aircraft called Ikhana. Built by General Atomics Aeronautical Systems, Ikhana was acquired by NASA's Dryden Flight Research Center in November 2006. The Santa Barbara mission was the first for the fire-mapping system, but already the researchers are pushing its limits by demonstrating how the unmanned vehicle can collect data continuously for up to 24 hours. NASA hopes to continue using the system for earth-science and atmospheric-science data-collection missions.
"We are trying to augment current capabilities with unmanned aircraft and put them in situations where we wouldn't normally put a manned aircraft, such as dangerous circumstances or night flights at low altitude," says Hinkley. But he says that it will easily be 8 to 10 years before large UAVs, such as Ikhana, will be able to fly over fires on a regular basis, partly because of cost and man power. Currently the Federal Aviation Administration (FAA) requires that a pilot guide the plane from the ground, even though the plane could be programmed to fly on its own. In addition, the FAA hasn't established rules and regulations as to how such planes would fit in the national airspace.
Small unmanned aerial vehicles could very soon be used at local incidents, but the sensor technology will have to be scaled down for use on these planes, says Ambrosia.
For the foreseeable future, the U.S. Forest Service will continue to use manned aircraft. Once testing of the new thermal-imaging technology is complete, which is expected within a year, the U.S. Forest Service plans to put the system on its manned aircraft.

A Better Gauge on Battery Life

A new battery-gauge chip could make mobile phones more reliable and help them last longer on a single charge.
By Kevin Bullis
Credit: Istockphoto.com

Texas Instruments (TI), based in Dallas, has developed a battery-gauge chip that can tell mobile-phone users down to the minute how much talking or standby time they have left--a degree of accuracy much greater than that provided by existing battery gauges. Such a precise gauge could allow smart-phone developers to squeeze more energy out of the battery, potentially increasing by half or more the amount of time that it lasts between charges.
Today's gauges estimate remaining charge from a battery's voltage alone, which makes them erratic and unreliable: voltage doesn't fall steadily as the battery is discharged, and it also shifts as the battery ages, as its temperature changes, and as the power demands on it vary.
The TI gauge is accurate to within 1 percent of the actual energy left in the battery because it measures electrical properties besides voltage. Most important, it measures the feature of battery cells that is at the root of these misleading voltage changes: impedance, a measure of the opposition to current flow, which itself varies with temperature, battery age, and the power being drawn from the battery.
Knowing the changes in impedance allows the chip to reinterpret the voltage readings, so it is not fooled by swings caused by these factors. For example, when a person makes a call, the voltage drops as soon as the phone transmits the signal. A conventional gauge would interpret this as a sudden drop in the amount of energy left in the battery, which could engage battery-saving measures in power-management software. The new gauge would recognize that the cell still has plenty of energy. The approach also works with the low voltages caused by cell age.
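
The principle can be illustrated with a toy calculation: add the sag caused by the instantaneous current and impedance back onto the measured voltage before looking up the remaining charge. The sketch below is not TI's algorithm -- the voltage table, load current, and impedance value are all hypothetical.

# Toy illustration of the principle (not TI's algorithm): correct the measured
# terminal voltage for the I*R drop across the cell's internal impedance before
# mapping voltage to remaining charge. Numbers are hypothetical.

# Hypothetical open-circuit-voltage -> state-of-charge table for a Li-ion cell.
OCV_TO_SOC = [(3.5, 0.05), (3.6, 0.20), (3.7, 0.50), (3.8, 0.70), (4.0, 0.90), (4.2, 1.00)]

def soc_from_ocv(ocv):
    """Linearly interpolate state of charge from open-circuit voltage."""
    for (v0, s0), (v1, s1) in zip(OCV_TO_SOC, OCV_TO_SOC[1:]):
        if v0 <= ocv <= v1:
            return s0 + (s1 - s0) * (ocv - v0) / (v1 - v0)
    return 0.0 if ocv < OCV_TO_SOC[0][0] else 1.0

def remaining_charge(v_measured, load_current_a, impedance_ohm):
    """Add back the sag caused by the load before consulting the table."""
    ocv_estimate = v_measured + load_current_a * impedance_ohm
    return soc_from_ocv(ocv_estimate)

# During a call the phone draws ~1 A; an aged cell might have ~0.15 ohm impedance.
print(f"naive (voltage only): {soc_from_ocv(3.62):.0%}")
print(f"impedance-corrected:  {remaining_charge(3.62, 1.0, 0.15):.0%}")

In this made-up example, a voltage-only gauge would report roughly a quarter of a charge remaining during a call, while the impedance-corrected estimate puts it near two-thirds.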
The new gauge chip, which is incorporated either into the circuit boards of a phone or directly into a battery pack, could be particularly useful in smart phones. Some phone users have to assume that after the gauge has reached the halfway point, the battery could die at any moment. What's more, poor battery gauges make it difficult to employ power-management software on phones that could extend battery capacity. Power-management software slows down processors, turns off the camera's flash, and dims the screen to save the battery once it's low. It may also save data and shut down applications just before the battery dies. Such software, however, may engage power-saving measures too soon if it relies on an inaccurate battery gauge, resulting in sluggish device performance while there is still plenty of charge.
The problem gets worse as the battery ages, because an older battery's voltage drops more quickly as it is depleted. With conventional gauges, this could trigger the phone to shut down when there is still quite a bit of energy left. Indeed, much of the perceived loss in battery life in older phones is actually just a problem with the battery gauge. "You can lose 30 percent of the energy in a battery simply because the device shuts itself down too early," says Richard DelRossi, an engineer at TI. He says that the new, more accurate battery gauge could increase the usable battery capacity by as much as 50 to 100 percent, depending on the power-management strategy.
Other phone and chip makers are also developing better battery gauges. Approaches taken by Motorola, based in Schaumburg, IL, and PowerPrecise, a chip maker based in Herndon, VA, that's funded by Intel, combine voltage data with current measurements to determine how much energy has been used. Subtracting the amount of energy used can give a good idea of how much is left, as long as it's known how much energy was there to start with. But the capacity of the battery, as with the voltage, depends on certain conditions, such as temperature, battery age, and power demand. To adjust for these factors, these systems can refer to models of battery behavior based on earlier tests to guess how these conditions will affect the battery's overall capacity. Such a system gives a much more accurate gauge of battery charge than do voltage measurements alone, says Jerry Hallmark, who heads Motorola's research on energy consumption in mobile devices.

Advanced Hurricane Forecasting

With the 2007 hurricane season under way, scientists believe their new forecasting model will make more-accurate predictions, thereby saving lives.
By Brittany Sauser
Forecasting destruction: The top image, taken by NASA satellites on August 28, 2005, shows Hurricane Katrina nearing peak strength. The new hurricane-forecasting model, HWRF, reproduced the life cycle of Hurricane Katrina and more accurately predicted its intensity (bottom image).
Credit: NASA (top); NOAA/National Weather Service Environmental Modeling Center (bottom).


Forecasters are predicting yet another very active hurricane season for 2007, but this year meteorologists expect to be able to more accurately predict the path, structure, and intensity of storms. The tool that will make this happen is a new hurricane-forecasting model developed by scientists at the National Oceanic and Atmospheric Administration (NOAA) Environmental Modeling Center. It will use advanced physics and data collected from environmental-observation equipment to outperform current models and provide scientists with real-time, three-dimensional analysis of storm conditions.
The model is able to see the inner core of the hurricane, where the eye wall is located, better and in higher resolution than all other models, says T. N. Krishnamurti, a professor of meteorology at Florida State University. The eye wall is the region around the hurricane eye where the strongest winds and heaviest rains are located, thus the place of the highest storm intensity. "It is a very comprehensive model that is a significant development for hurricane forecasting," says Krishnamurti.
Currently, experts at the National Hurricane Center and the National Weather Service rely mostly on the Geophysical Fluid Dynamics Laboratory (GFDL) model. The model, which has been in use since 1995, forecasts the path and intensity of storms. Until now, it was the only global model that provided specific intensity forecasts of hurricanes. And while it is a very good model, it's limited by the amount of data it's based on. "It has a very crude representation of storms," says Naomi Surgi, the project leader for the new model and a scientist at the Environmental Modeling Center. "GFDL is unable to use observations from satellites and aircraft in its analysis of the storm."
Isaac Ginis, a professor of oceanography at the University of Rhode Island (URI) who helped develop the GFDL model, agrees that the old model "has too many limitations" and, while it's able to forecast the path of a storm well, it is not as skillful at forecasting the intensity or power of a storm. Ginis is now a principal investigator for the new model, called the Hurricane Weather Research and Forecast (HWRF) model, which is able to gather a more varied and better set of observations and assimilate that data to produce a more accurate forecast.
This new model will use data collected from satellites, marine data buoys, and hurricane hunter aircraft, which fly directly into a hurricane's inner core and the surrounding atmosphere. The aircraft will be equipped with Doppler radars, which provide three-dimensional descriptions of the storm, most importantly observing the direction of hurricane winds. The aircraft will also be dropping ocean probes to better determine the location of the loop current, a warm ocean current in the Gulf of Mexico made up of little hot spots, known as warm core eddies, that give hurricanes moving over them a "real punch," says Surgi.
The hurricane model will then assimilate the data--wind conditions, temperature, pressure, humidity, and other oceanic and atmospheric factors in and around the storm--and analyze it using mathematics and physics to create a model, explains Surgi. To understand hurricane problems in the tropics, it is imperative to understand the physics of the air-sea interface. "In the last several years, we have learned a lot about the transfer of energy between the upper part of the ocean and the lowest layers of the atmosphere," she says. "And the energy fluxes across that boundary are tremendously important in terms of being able to forecast a hurricane's structure."
Improving the intensity forecast of a storm and being able to precisely analyze a hurricane's structure were scientists' main goals in developing the new model. It can now forecast these aspects from 24 hours out up to five days out with much greater accuracy, says Ginis. The new model was put to the test by running three full hurricane seasons--2004, 2005, and 2006--for storms in both the Atlantic and east Pacific basins, totaling close to 1,800 test runs. For example, the model was able to reproduce the life cycle of Hurricane Katrina very well, accurately forecasting that it would become a category 5 hurricane over the Gulf of Mexico--something the old model was unable to predict.
Over the next several years, scientists at NOAA will continue to improve on these initial advances by making further use of ocean observations. They plan to couple the HWRF with a wave model, which will allow scientists to better forecast storm surge, inland flooding, and rainfall. In addition to its 2006 partnership with URI, NOAA has begun collaborating with researchers at the University of South Alabama on the wave-model coupling and on enhancing the model's forecasting features.
"This model is enormously important for emergency response and emergency managers, and also the public," says Ginis, "because we not only want to know where the storm is going to make landfall, but also how powerful it is going to be."

Making Colors with Magnets

A new nanomaterial could lead to novel types of displays.
By Kevin Bullis
Rainbow rust: A solution of nanoscopic iron-oxide particles changes color as a magnet gets closer, causing the particles to rearrange. The color changes from red to blue as the magnetic field’s strength increases.
Credit: Yin laboratory, University of California, Riverside

A material developed by researchers at the University of California, Riverside, can take on any color of the rainbow simply by changing the distance between the material and a magnet. It could be used in sensors or, encapsulated in microcapsules, in rewritable posters or other large color displays.
The researchers made the material using a high-temperature method to synthesize nanoscale, crystalline particles of magnetite, a form of iron oxide. Each particle was made about 10 nanometers in diameter because, as they get much larger than this, magnetite particles become permanent magnets, and therefore would cluster together and fall out of solution. The 10-nanometer particles group together to form uniformly sized spherical clusters, each about 120 nanometers across; in tests, these clusters have stayed suspended in solution for months.
By coating these clusters with an electrically charged surfactant, the researchers cause the clusters to repel one another. When a magnet is used to counteract the repulsive forces, the clusters rearrange and move closer together, changing the color of the light they reflect. The stronger the magnetic field, the closer the clusters: as the magnet approaches the material, the color shifts from the red end of the spectrum toward the opposite, blue end. Moving the magnet away allows the electrostatic charge to force the clusters apart again, returning the system to its original condition.
"The beauty of this system is that it is so simple," says Orlin Velev, a chemistry and biomolecular-engineering professor at North Carolina State University. "It can be used over large areas because it's very inexpensive and very easy to make." The work is published in the early online edition of the journal Angewandte Chemie.
A number of other researchers have developed color-changing materials, some of which are also controlled with magnetic forces; others use electrical or mechanical forces. The Riverside researchers, led by Yadong Yin, a professor of chemistry, however, are able to pack far more magnetic material into each spherical building block than was previously possible. Sanford Asher, a professor of chemistry and materials science at the University of Pittsburgh who has encapsulated magnetite particles in polymer spheres, says that the new approach increases the amount of magnetic material by fivefold.
As a result, the new materials can be tuned to a larger number of colors than previously made materials. Indeed, North Carolina State's Velev, who works on materials that change color in response to electronic signals, says he knows of no other material capable of taking on such a wide range of colors.
The Riverside researchers found that processing the materials at high temperatures ensured that the 10-nanometer particles formed with a crystalline atomic structure. It also caused the particles to group together to form similarly sized clusters. In contrast, more commonly used room-temperature synthesis results in particles that form irregular agglomerations. The uniformity of the clusters and the crystallinity of the particles seem to improve the magnetic response of the materials, Yin says, although he and his colleagues are still looking into the underlying mechanisms involved.
The materials can switch colors at a rate of twice a second, which is still too slow for use in TVs and computer monitors. Yin hopes to increase switching speeds further by using smaller amounts of material, perhaps in microscopic capsules. Such small amounts will make it easier to present a uniform magnetic field to the entire sample, potentially aiding the rearrangement of the clusters. Also, such microcapsules could be arranged to form pixels in a display, as is done now with E Ink, a type of electronic paper used in some electronic book readers and cell phones. (See "A Good Read.")
But even with faster speeds, Yin doesn't expect the materials to replace current computer-monitor technology. Rather, he has his sights set on larger-scale applications that would take advantage of the low cost of the materials. Examples could include posters that can be rewritten but don't have to change as fast as displays of video.
One significant drawback of the current materials is that they would need a constant power supply to preserve the magnetic field and hold the microcapsules at a set color. Yin's next step is to develop a version of the materials that remains stable after their color is changed--that is, until they're switched to a new color. If this is possible, then a poster could be printed with something like the read-write head on a hard drive, Yin says. It would preserve the image until it's rewritten with another pass of the print head, using no power in between.
"At this stage it's fun to play with," Velev says. "Maybe at later stages it could be used for some decorative purpose, such as paint that changes color, or some new types of labels or display boards. Right now it's a beautiful piece of research."


Source: http://www.technologyreview.com

Storing Light

A new optical device could make high-speed computing and communications possible.
By Kevin Bullis
A microscopic device for storing light developed by researchers at Cornell University could help free up bottlenecks in optical communications and computing. This could potentially improve computer and communications speeds by an order of magnitude.
The new device relies on an optically controlled "gate" that can be opened and closed to trap and release light. Temporarily storing light pulses could make it possible to control the order in which bits of information are sent, as well as the timing, both of which are essential for routing communications via fiber optics. Today, such routing is done, for the most part, electronically, a slow and inefficient process that requires converting light pulses into electrons and back again. In computers, optical memory could also make possible optical communication between devices on computer chips.
Switching to optical routing has been a challenge because pulses of light, unlike electrons, are difficult to control. One way to slow down the pulses and control their movement would be to temporarily confine them to a small continuous loop. (See "Tiny Device Stores Light.") But the problem with this approach is getting the light in and out of such a trap, since any entry point will also serve as an exit that would allow light to escape. What's needed is a way to close the entryway once the light has entered, and to do so very quickly--in less time than it takes for the light to circle around the loop and escape. Later, when the light pulse is needed, the entryway could be opened again.
The Cornell researchers, led by Michal Lipson, a professor of electrical and computer engineering at the university, use a very fast, 1.5-picosecond pulse of light to open and close the entryway. The Cornell device includes two parallel silicon tracks, each 560 nanometers wide. Between these two tracks, and nearly touching them, are two silicon rings spaced a fraction of the width of a hair apart. To trap the light in these rings, the researchers turned to some of their earlier work, in which they found that the rings can be tuned to detour different colors by shining a brief pulse of light on them.
Light of a certain color passes along the silicon track, takes a detour through one of the rings, and then rejoins the silicon track and continues on its way. However, if the rings are retuned to the same frequency the moment after a light pulse enters a ring, the light pulse will circulate between the rings in a continuous loop rather than rejoin the silicon track and escape. Tuning the rings to different frequencies again, such as by shining another pulse on one of the rings, allows the light to escape this circuit and continue on to its destination.
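
The reason the gating pulse has to be so short becomes clear from a rough estimate of how long a pulse takes to circle one microring. The numbers below are assumptions -- the article does not give the Cornell rings' dimensions -- but they show that a round trip takes on the order of a picosecond, so the rings must be detuned on that timescale before the trapped light can leak back out.

# Rough estimate (assumed dimensions, not Cornell's published numbers) of the
# time light takes to circle one silicon microring, which is why the gate must
# be opened and closed with a picosecond-scale pulse.
import math

C = 3.0e8              # speed of light in vacuum, m/s
GROUP_INDEX = 4.0      # typical group index of a silicon wire waveguide (assumption)
RING_RADIUS_M = 10e-6  # hypothetical ring radius of ~10 micrometers

circumference = 2 * math.pi * RING_RADIUS_M
round_trip_s = circumference * GROUP_INDEX / C
print(f"round trip ~ {round_trip_s * 1e12:.1f} ps")   # ~0.8 ps, comparable to the
                                                      # 1.5 ps gating pulse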
Work remains to be done before such a device will function in a commercial system. So far, the rings only capture part of a pulse of light. As a result, any information encoded in the shape of the overall pulse is lost. This can be solved by compressing the pulse and using a cascade of rings, says Mehmet Yanik, a professor of electrical engineering and computer science at MIT.
The other issue is that the length of time a light pulse can be stored is relatively short, Lipson says. If the light stays in the ring for too long, it will be too weak to use. Lipson says it might be possible to make up for light losses by amplifying the light signal after it leaves the rings to restore any lost power.
Other schemes for storing light have been demonstrated in the past, but these were impractical, requiring carefully controlled conditions, for example, or a large, complicated system. The new approach is an important step forward because it makes it possible to store light in ambient conditions and in a very small device, says Marin Soljacic, a professor of physics at MIT. Once you've done that, he says, "then it becomes interesting to industry."

Nano Memory

A nanowire device 100 times as dense as today's memory chips.
By Kevin Bullis

Two layers of 400 nanowires (blue and gray areas) encode data on molecules where they cross. Red lines are electrodes.
Credit: Jonathan E. Green and Habib Ahmad

Researchers at Caltech and the University of California, Los Angeles, have reached a new milestone in the effort to use individual molecules to store data, an approach that could dramatically shrink electronic circuitry. One hundred times as dense as today's memory chips, the Caltech device is the largest-ever array of memory bits made of molecular switches, with 160,000 bits in all. In the device, information is stored in molecules called rotaxanes, each of which has two components. One is barbell shaped; the other is a ring of atoms that moves between two stations on the bar when a voltage is applied. Two perpendicular layers of 400 nanowires deliver the voltage, reading or writing information. It's a big step forward from earlier prototype arrays of just a few thousand bits. "We thought that if we weren't able to make something at this scale, people would say that this is just an academic exercise," says James Heath, professor of chemistry at Caltech and one of the project's researchers. He cautions, however, that "there are problems still. We're not talking about technology that you would expect to come out tomorrow."
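
The arithmetic behind the bit count is simple crossbar addressing: every crossing of one of the 400 bottom wires with one of the 400 top wires holds one rotaxane junction, so the array stores 400 x 400 = 160,000 bits, each selected by picking one wire from each layer. The sketch below models only that addressing idea, not the Caltech team's actual read and write electronics.

# Sketch of the crossbar addressing idea (not the Caltech read/write electronics):
# each bit lives at the crossing of one of 400 bottom wires with one of 400 top
# wires, so 400 x 400 = 160,000 junctions can be addressed individually.
N_WIRES = 400

class CrossbarMemory:
    def __init__(self, n=N_WIRES):
        self.n = n
        self.bits = [[0] * n for _ in range(n)]   # rotaxane state at each crossing

    def write(self, row, col, value):
        """Model of applying a write voltage across wire `row` and wire `col`."""
        self.bits[row][col] = 1 if value else 0

    def read(self, row, col):
        """Model of sensing the junction's state at the selected crossing."""
        return self.bits[row][col]

mem = CrossbarMemory()
mem.write(12, 305, 1)
print(mem.read(12, 305), "of", N_WIRES * N_WIRES, "bits")   # 1 of 160000 bits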

Tiny Device Stores Light

IBM researchers have fabricated a silicon device that's a significant advance in making practical optical interconnects.
By Prachi Patel-Predd
An SEM image of an optical-delay line that has up to 100 microrings, all connected to a common silicon nanowire. The optical buffer can store up to 10 optical bits.
Credit: IBM

By forcing light to circle multiple times through ring-shaped structures carved into silicon, researchers at IBM have been able to delay the flow of light on a microchip. Being able to delay light is crucial for high-performance, ultrafast optical computers of the future that will process information using light and electrical signals.

It's easy to store electronic data in computer memory; light is harder to control. The new silicon device, described in this week's issue of Nature Photonics, is ten times smaller than those made in the past. It also works much better at high data speeds. "This work is approximately a factor of ten over the best achieved with [ring-shaped devices] so far," says Keren Bergman, an electrical-engineering professor at Columbia University.

Storing light on silicon is key for electronic-optical hybrid computers that researchers believe will be available a decade from now. In these computers, devices will compute using electrons but will move data to other devices and components using light--avoiding the use of copper wires or interconnects that tend to heat up at high computer frequencies.

But the optical interconnects would have to be laid out in an intelligent network, just as the copper wires on today's chips are. To transfer data packets efficiently between devices, the copper network on a chip has nodes where many interconnects converge. If a processor is sending data to a logic circuit, the data travels from node to node until it gets to the logic circuit. Each node in the network reads and processes the data packet to route it correctly to the next node. While the node makes a routing decision, it temporarily stores the data in electronic memory. To process and route data at the nodes of an optical-interconnect network, one would need to store, or delay, light so the node can make the routing decision.

Yurii Vlasov and his colleagues at IBM's T.J. Watson Research Center delay light on a silicon microchip by circulating it 60 to 70 times through ring-shaped structures, called resonators. The researchers make these resonators on a thin silicon layer mounted on an insulating silicon-oxide layer. They etch parallel trenches into the silicon that reach down to the oxide. The raised portion between the trenches acts like a silicon wire that shuttles light.

The researchers employ the same silicon wafers and techniques that are used to fabricate microprocessors at IBM. This makes it easy to "think of combining optical circuitry with electrical circuitry on the same chip," Vlasov says.


By connecting many rings, the researchers can build up the delay. With 56 rings connected to a common silicon wire, they get the longest delay: about half a nanosecond--which amounts to storing 10 optical bits--at a data speed of 20 gigabits per second.
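
Those two figures are consistent with each other: the number of bits a delay line holds is just the delay multiplied by the data rate, as the short calculation below shows (the per-ring figure is simply the total delay divided by the 56 rings).

# Bits held in a delay line = delay x data rate.
delay_s = 0.5e-9          # about half a nanosecond of total delay
bit_rate = 20e9           # 20 gigabits per second
n_rings = 56

print(f"~{delay_s * bit_rate:.0f} optical bits held in flight")   # ~10 bits
print(f"~{delay_s / n_rings * 1e12:.0f} ps of delay per ring")    # ~9 ps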

Other researchers have made resonators on silicon before. But the smallest resonators so far have been about 100 micrometers wide, and cascading tens of them yields a device that is a few millimeters long--too big to be integrated into an electronic circuit. The IBM researchers make rings that are 12 micrometers in diameter, and they can fit up to 100 ring resonators into an area that is less than one-tenth of a square millimeter.

The size of the device is a major advance, Bergman says: "It is very close to the kinds of densities you would like to have on chip for optical interconnects." Achieving a delay of 10 bits at gigabits-per-second speeds, which would be typical of the data speed that optical interconnects of the future would be handling, is a breakthrough, she says. "This is a major step towards making optical interconnects a reality."

The device loses more light than would be acceptable in practical circuits, and Vlasov says that he and his team are working to reduce these losses. Once they do that, he says, they could put thousands of resonators together to store even more optical bits. Practical optical interconnects would need to store hundreds or even thousands of bits.

It might take another 10 years before we see optical interconnects in computers, but the IBM research shows that the technology is viable, says Risto Puhakka, president of market-research firm VLSI Research, in Santa Clara, CA. "There are legs on this technology, and it could eventually be integrated with current circuits into chips."

Source: http://www.technologyreview.com


Microsoft's Plan to Map the World in Real Time

Researchers are working on a system that allows sensors to track information and create up-to-date, searchable online maps.
By Kate Greene

Researchers at Microsoft are working on technology that they hope will someday enable people to browse online maps for up-to-the-minute information about local gas prices, traffic flows, restaurant wait times, and more. Eventually, says Suman Nath, a Microsoft researcher who works on the project, which is called SenseWeb, they would like to incorporate the technology into Windows Live Local (formerly Microsoft Virtual Earth), the company's online mapping platform.

By tracking real-life conditions, which are supplied directly by people or automated sensor equipment, and correlating that data with a searchable map, people could have a better idea of the activities going on in their local areas, says Nath, and make more informed decisions about, for instance, what driving route to take.


"The value that you get out of [real-time data mapping] is huge," he says, and the applications can range from finding a parking spot in a cavernous parking garage to checking the traffic flow in different parts of a city.

Other research groups at the University of California at Berkeley, UCLA, Stanford, and MIT are working on similar projects for tracking environmental information. For instance, UCLA has a project in which sensors -- devices that measure physical quantities such as temperature, pressure, and sound -- are integrated with Google Earth, the company's downloadable mapping software. In addition, companies such as Agent Logic and Corda process real-time data and can correlate it with a location, mostly for businesses and governmental organizations.

Moreover, within the past year, Microsoft, Google, and Yahoo have been vying with each other to generate the most useful electronic maps (see "Killer Maps"). For the most part, though, the local information offered by Web-based mapping applications is updated only infrequently. And sites that offer real-time, local updates (about the status of public transportation, for instance), while useful, are designed for a single purpose.

What makes Microsoft's experimental project different from others that track information, Nath says, is that it would allow people to search for different types of real-time data within a user-specified area on a map, and progressively narrow that search. For instance, a person could highlight a region of a city and search for restaurants. SenseWeb would gather information provided by restaurants about their wait times and display it in various ways: the wait at specific establishments, the average wait for all restaurants in the region, or the minimum and maximum waits. If you needed to find a place to eat quickly, says Nath, but you learn that the minimum wait is 30 minutes in a certain part of town, you'd know to look in a different area. "You don't have to take the time to look at each individual restaurant," Nath says.

Additionally, a person could zoom into an area and see newly calculated information, such as maximum, minimum, and average wait times, according to the newly defined geography.

Searching for these types of real-time statistics within different areas on a map is a new take on displaying data on maps, says Phillip Levis, professor of computer science at Stanford University. "It's very different to give the average wait time in the city than it is to scan around the city and see each restaurant's wait time," he says.

SenseWeb is composed of three basic parts: sensors (or data-collecting units), Microsoft's database indexing scheme that sorts through the information, and the online map that lets users interact with the data. The sensors used in the project can vary in form and function, and can include thermometers, light sensors, cameras, and restaurant computers. SenseWeb puts baseline sensor information, such as location and function, into a database that's searchable by location and type of sensor information.
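
To make the restaurant example concrete, the sketch below shows the kind of bounding-box query and aggregation SenseWeb is described as performing. It is not Microsoft's code; the sensor registry, coordinates, and readings are invented for illustration.

# Minimal sketch of the kind of query described above (not Microsoft's code):
# sensors register a location and a type; a map query selects those inside a
# bounding box and aggregates their latest readings. Sample data is invented.
sensors = [
    # (name, latitude, longitude, type, latest reading)
    ("bistro_a",   47.61, -122.33, "restaurant_wait_min", 45),
    ("bistro_b",   47.62, -122.35, "restaurant_wait_min", 15),
    ("cafe_c",     47.66, -122.31, "restaurant_wait_min", 30),
    ("gas_corner", 47.61, -122.34, "gas_price_usd",       2.79),
]

def query(bbox, sensor_type):
    """Return readings of the given type inside (lat_min, lat_max, lon_min, lon_max)."""
    lat_min, lat_max, lon_min, lon_max = bbox
    return [r for _, lat, lon, kind, r in sensors
            if kind == sensor_type and lat_min <= lat <= lat_max and lon_min <= lon <= lon_max]

waits = query((47.60, 47.63, -122.36, -122.30), "restaurant_wait_min")
print(f"{len(waits)} restaurants: min {min(waits)}, max {max(waits)}, "
      f"avg {sum(waits)/len(waits):.0f} minutes")

Zooming the map amounts to re-running the same query with a smaller bounding box, which recomputes the minimum, maximum, and average for the newly defined area.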

Then, if someone wants to check traffic conditions along a stretch of highway, for instance, the database will direct queries to cameras ("Web cams") located along the route -- and an image of traffic shows up on the map.

In order for people with sensors -- from researchers at universities to a private citizen with a Web cam -- to participate in SenseWeb, Nath says, they would have to be able to upload data to the Internet and provide information to the Microsoft group about their sensor, such as latitude, longitude, and the type of data it provides (for example, gas prices, temperature, or video).

One challenge for the SenseWeb project will be making the different types of information pulled into its database consistent enough to analyze and sort, says Samuel Madden, professor of computer science at MIT. For instance, there would need to be standard units for temperatures. "As soon as you start integrating all this data, you can imagine that weird things will happen," he says. "It's really a challenge to build tools that work with generic data and to come up with a way that anyone can publish their information."

Another, more fundamental hurdle for the SenseWeb project, Nath says, is getting people to register their sensors and sign on to the free program. Gas stations or restaurants may not even know about the project, or may not have an efficient way to pass along their data.

Therefore, in coming months, the Microsoft group will extend SenseWeb to universities that have already deployed sensors for other projects. In addition, the team is talking to a company that has sensors on parking spots, which, if integrated into Live Local, could help people find available parking more easily, he says.

For now, though, SenseWeb and Live Local are separate projects, according to Nath. The Live Local team "really loves this technology," he says, but right now "what's missing is the actual data."