Thursday, August 30, 2007

Saving Power in Handhelds

Taking advantage of human error tolerance could make cell phones more energy efficient.
By Larry Hardesty
Credit: Sándor Kelemen, Istockphoto.com

With the advent of the Apple iPhone and its big, clear screen, the idea of using the morning commute to catch up on missed episodes of Lost became a lot more attractive. But video chews through a handheld's battery much faster than, say, playing MP3s does. In the most recent issue of the Association for Computing Machinery's Transactions on Embedded Computing Systems, researchers at the University of Maryland describe a simple way for multimedia devices to save power. In simulations, the researchers applied their technique to several common digital-signal-processing chores and found that, on average, it would cut power consumption by about two-thirds.

The premise of the technique, says Gang Qu, one of its developers, is that in multimedia applications, "the end user can tolerate some execution failure." Much digital video, for example, plays at a rate of 30 frames per second. But "in the old movie theaters, they played at 24 frames per second," Qu says. "That's about 80 percent. If you can get 80 percent of the frames consistently correct, human beings will not be able to tell you've made mistakes."

Unlike the movies in the old theaters, a digital video isn't stored on reels of wound plastic; it's stored as a sequence of 1s and 0s. That sequence is decoded as the video plays, and the decoding time can vary from one frame to the next. So digital media systems are designed to work rapidly enough that even the hardest-to-decode frames will be ready to be displayed on time.

Qu thinks that's a waste of processing power. If the viewer won't miss the extra six frames of video per second, there's no reason to decode them. Lower decoding standards would mean less work for the video player's processor, and thus lower power consumption.

The straightforward way to ensure a decoding rate of 80 percent would be to decode, say, eight frames in a row and ignore the next two. That approach--which Qu calls the "naive approach"--did introduce power savings in the Maryland researchers' simulations. The problem is that such a system doesn't distinguish frames that are hard to decode from those that are easy: if frame five is the hardest, the decoder will still plow through it; if frame nine is the easiest, the decoder will still skip it.

Qu and his colleagues wrote an algorithm that imposes a series of time limits on the decoding process; if any of the limits is exceeded, the decoding is aborted. "You set certain milestones," Qu says, "and you say, 'Okay, after this time I still haven't reached that first milestone, so it seems this is a hard task. Let me drop this one.'" Using statistics on the durations of particular tasks, the researchers can tune the algorithm to guarantee any desired completion rate.
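In outline, the scheme amounts to checking elapsed decoding time against a cumulative deadline after each stage of work and abandoning the frame as soon as a deadline is missed. The short Python sketch below illustrates that logic only; the stage names, timings, and milestone values are hypothetical and are not taken from the Maryland implementation, which derives its milestones from statistics on task durations.

    import time

    # Illustrative sketch of the milestone idea: abort a frame whose decoding
    # falls behind intermediate deadlines instead of skipping frames on a
    # fixed schedule. All stages, timings, and limits below are invented.

    def decode_with_milestones(stages, milestones):
        """Run the decode stages for one frame, checking cumulative elapsed
        time against a milestone after each stage; return False (drop the
        frame) as soon as a milestone is missed, so the rest of the work is
        never done."""
        start = time.monotonic()
        for stage, deadline in zip(stages, milestones):
            stage()                                  # e.g. entropy decode, inverse transform
            if time.monotonic() - start > deadline:
                return False                         # falling behind: give up on this frame
        return True                                  # frame decoded in time

    def make_stage(seconds):
        return lambda: time.sleep(seconds)           # stand-in for real decoding work

    easy_frame = [make_stage(0.001)] * 3
    hard_frame = [make_stage(0.010)] * 3
    milestones = [0.004, 0.008, 0.012]               # cumulative deadlines in seconds

    print(decode_with_milestones(easy_frame, milestones))   # True: frame kept
    print(decode_with_milestones(hard_frame, milestones))   # False: dropped after the first stage

Tuning those deadlines from measured task-duration statistics, as the researchers describe, is what lets such a scheme guarantee a target completion rate rather than merely hope for one.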

Raj Rajkumar, director of the Real-Time and Multimedia Systems Laboratory at Carnegie Mellon University, mentions that his colleague John Lehoczky and the University of Wisconsin's Parmesh Ramanathan have investigated approaches similar to Qu's. But he says that Qu's work is "the logical extension of earlier work. I think that what Gang did is very useful." Ramanathan adds that with Qu's approach, "my guess is that there will be considerable savings in power consumption. I think one can save quite a bit."

Indeed, the Maryland researchers' algorithm fared well in simulations, offering a 54 percent energy savings over the naive approach. "If you are using the current approach, which is going to keep on decoding everything," Qu says, "we are going to probably consume only slightly more than one-third of that energy. That means you can probably extend the battery life by three times."

Qu is quick to point out that the researchers' simulations involved signals similar, but not identical, to video signals; real video decoding might not produce such dramatic results. On the other hand, Qu says that more-recent video-coding standards call for frame rates higher than 30 frames per second. That means the decoding rate could drop below 80 percent, saving even more power.

And the tested algorithms do accurately model cell-phone voice decoding. In some handheld devices--notably the iPhone--voice communication is almost as big a battery drain as video playback. Without the handy reference of a near-century of analog movies, however, user tolerance for error in voice is harder to gauge.

Qu says his and his colleagues' power-saving scheme could be implemented in either hardware or software, although in the near term, software would certainly be the cheaper option. He adds that the work has drawn some corporate interest, but that there are no plans to commercialize it at the moment. Nonetheless, "if we got some partners," Qu says, "if they have a top engineer trying to work with us, this could be done in half a year."

Source: http://www.technologyreview.com

"Personalized" Embryonic Stem Cells for Sale

A company offers to generate and store stem cells from leftover IVF embryos.
By Emily Singer
Stem-cell insurance: A company called StemLifeLine offers to generate embryonic stem cells (shown above) from leftover embryos created for in vitro fertilization. The cells could potentially be used for future medical treatments, although no embryonic-stem-cell-based treatments exist yet.
Credit: David Scharf, Science Photo Library

It's a new, rather dicey form of life insurance. A company in California called StemLifeLine has announced that it will offer a service to generate stem cells from excess frozen embryos stored after in vitro fertilization (IVF). The company promises a huge potential payoff: the cells could one day be used to treat disease in the buyers or in their families. But the service is already garnering criticism from some scientists and ethicists who say that without current medical uses for those cells, there's no point in people paying for them.

"I think the company's website overly hypes what may be possible," says Lawrence Goldstein, director of the stem-cell research program at the University of California, San Diego. "They are almost guaranteeing that therapies are around the corner, and now is the time to start banking stem cells, but that strikes me as premature for the field."

The new service is meant to take advantage of a growing interest in the field of regenerative medicine. Stem cells from adult blood or umbilical-cord blood are already used to treat some diseases, including sickle-cell anemia and several forms of leukemia. But these cells are largely limited to treating blood-related disorders and can't be grown in large numbers. Embryonic stem cells, on the other hand, can be coaxed to form virtually any type of cell in the body and can theoretically be replicated indefinitely. Scientists are developing ways to use them to replenish cells lost or damaged in ailments such as diabetes, Parkinson's disease, and heart disease. But as of now, those treatments are limited to the lab: no embryonic stem-cell-based therapies are approved for human use.

Couples who have had children via IVF are often left with extra embryos--and the rather difficult decision of what to do with them. As of 2003, an estimated 400,000 embryos remained in cryopreservation in the United States. Embryos can be donated to research or to other couples, destroyed, or left languishing in frozen storage. According to Ana Krtolica, StemLifeLine's CEO, the inspiration to form the company came from requests from clients at IVF clinics who were donating their embryos to research but wanted to know if they would have access to those cells if they were ever needed. (The answer is no.)

"We had a patient whose husband is a paraplegic," says Russell Foulk, a member of StemLifeLine's advisory board andmedical director of the Centers for Reproductive Medicine, a private clinic with offices in Nevada and Idaho. "They wanted to have a child and were excited about the possibility of creating neural cells from the extra embryos."

The technology to derive these cells is not new. Scientists at StemLifeLine use a procedure similar to the one research scientists have employed for almost a decade, although the StemLifeLine scientists have refined it so that the resulting cells are fit for human use. For less than $10,000 (the actual price depends on the collaborating IVF clinic), clients can send in their excess embryos and, in return, receive a line of stem cells that have been "quality assured," meaning they have been checked for the molecular markers that signify that the cells can be differentiated into multiple cell types. The company received certification as a tissue bank from the state of California last month, and it's in the process of generating cell lines for its first group of clients.

However, critics say that the service is premature. Extra embryos can remain in frozen storage for years. And in the case of the paraplegic man, no treatments using neural stem cells are yet available. "There is no reason to take your embryos out of cryopreservation and make a line of stem cells and then freeze them again until the technology is available to actually use them," says Eric Chiao, a stem-cell biologist at Stanford's Institute for Stem Cell Biology and Regenerative Medicine, in Palo Alto.

Chiao and others argue that by the time scientists have figured out how to use embryonic stem cells as therapies, they will likely have developed better ways of generating the stem cells themselves, possibly using cloning, in which scientists would generate perfectly matched stem cells from an adult cell of the patient to be treated. "My offspring would be better off if they used cloning to generate stem cells for themselves," says Arthur Caplan, an ethicist at the University of Pennsylvania. "In America, the best thing you can do is take the money you would have used and invest it in an insurance policy to maximize the likelihood that your kid will have health insurance someday."

Krtolica counters that because it takes two to three months to generate the cells, it's better to have them ready before an approved use in case a client needs them immediately.

Stem-cell scientists also say that StemLifeLine's description of its product as "personalized" stem cells is misleading. As with organ transplants, cell transplants require that the immune profile of the transplanted cells match the host as closely as possible. Scientists generally use the term personalized stem cells to refer to a type of stem cell not yet possible to create: those generated through cloning, making them a perfect genetic match to the donor. Cells made from discarded embryos would not be a perfect match to family members, says Doug Melton, codirector of the Harvard Stem Cell Institute, in Cambridge, MA. "This would be like having stem cells from a sibling, so immunosuppression is still an issue."

The prospect of generating stem-cell lines from embryos is likely to ignite new ethical arguments over embryonic stem cells. Critics of embryonic-stem-cell research oppose generating stem cells from embryos for any reason. But this service could spark growth of a practice that some find even more problematic: the creation of embryos solely as a source of cells. For example, some people might want to undergo IVF expressly for the stem cells, not to have a child. Krtolica says that she hasn't yet fielded any such requests but that ultimately, it would be up to the fertility clinics. Foulk, for one, says he would perform IVF under these circumstances.

Source: http://www.technologyreview.com



Intel's New Strategy: Power Efficiency

Spurred by competitor AMD's rapid success, Intel is shifting its strategy toward more power-efficient microprocessors.
By Kate Greene

Amid increasing competition from Advanced Micro Devices (AMD), Intel is changing its chip-making philosophy: it's paying more attention to the power requirements of its microprocessors.

In July 2006, the chip-making giant will release a new microprocessor, called Core 2 Duo, designed for laptops and desktops. The new chip is based on Intel's current chip architecture, which replaced traditional single-core processing with two processing centers on a single chip. The company says that the Core 2 Duo will perform better than its current dual-core chip, and will be more energy-efficient, which could make laptop batteries last longer and desktop towers run cooler.

Paying attention to power consumption in microprocessors is a relatively new concept for the company, says Steve Pawlowski, a senior fellow at Intel, adding that the move may help Intel regain market share from its rival AMD. Historically, the most important metric in the industry has been processor performance -- the speed at which a processor can complete a task, such as calculating a spreadsheet. "We've always focused on performance at the expense of power [use]," Pawlowski says.

But basic changes have occurred in the PC market, which first led AMD, and now Intel, to rethink microprocessor designs. First, mobile devices have become the primary PC for many consumers -- who don't want a device that quickly drains a battery or gets too hot. Furthermore, as transistors shrink, they're more likely to waste electricity through a physical process called "leakage," says Kevin McGrath, an AMD fellow -- and the more transistors on a chip, the more electricity is wasted.

AMD has been working on more-efficient microprocessors for several years, and now Intel is trying to level the playing field. Both Intel and AMD have tackled part of the problem by converting their chip line-ups to dual-core processors (see "Multicore Mania," December 2005), which turns out to be one way to increase efficiency. "Interestingly, going to multiple cores can be a very power-efficient way of computation," says Milo Martin, a professor in the computer and information sciences department at the University of Pennsylvania.

Three aspects of multicore chips make them more efficient. First, when a chip has more than one core, the speed at which each core computes can be slowed down without impeding the speed of the entire chip. By slowing the clock, explains Martin, engineers can cut the computational rate of a single core by a factor of five, from one gigahertz to 200 megahertz, while the core consumes only one-thirtieth of the power. Then, he says, even if five of those cores are assembled onto a single chip, only one-sixth of the power is consumed, yet the total computational rate of one gigahertz is maintained (a short arithmetic check appears after the third point below).

Second, smaller processor sizes reduce power consumption. The number of transistors each core has and the amount of silicon real estate they take up determine the amount of power the core uses -- smaller processors have fewer transistors and thus use less power than larger processors. In a dual-core chip, the total number of transistors is greater than it is in a single-core chip, but each core has fewer transistors, making it more power efficient.

Third, some of the processor functions, such as controlling memory, can be shared between cores, so that each core consumes less energy by not performing a redundant task.
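As a back-of-the-envelope check on the first point, the numbers Martin cites are internally consistent: five cores each running at one-fifth speed and one-thirtieth power together match one gigahertz of aggregate throughput at one-sixth the power. The snippet below simply reproduces that arithmetic; the underlying assumption, that a slower clock also permits a lower supply voltage so power falls much faster than frequency, is a general rule of thumb, not a figure from Intel or AMD.

    # Reproduces the multicore arithmetic quoted above; the per-core power
    # fraction (1/30) is the figure cited in the article, not a measurement.

    cores = 5
    rate_fraction_per_core = 1 / 5      # 1 GHz slowed to 200 MHz
    power_fraction_per_core = 1 / 30    # quoted power draw at the slower clock

    total_rate = cores * rate_fraction_per_core      # 5 * 0.2 = 1.0 -> full 1 GHz restored
    total_power = cores * power_fraction_per_core    # 5 * 1/30 = 1/6 of the original power

    print(f"aggregate rate:  {total_rate:.1f}x the original single core")
    print(f"aggregate power: {total_power:.3f}x the original (about one-sixth)")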

So transitioning to a multicore architecture is an obvious way to save power, and both Intel and AMD have done so. But they're looking at other ways to create efficiency. As Pawlowski explains, managing processors at the circuit and individual transistor level can also save power. For instance, specific circuits on a chip are designated to handle tasks such as manipulating a photo or playing a DVD. When such a circuit is needed, the transistors that make it up are switched on at a certain voltage. In a perfectly efficient chip, those transistors would turn on and off only when they're needed. However, even when a circuit is idle, a small current slowly leaks through its transistors, says Pawlowski. This leakage produces heat and wastes electricity.

While there is much overlap in the ways that AMD and Intel are approaching this problem of waste and leakage at the circuit level, their solutions are different. Intel is working to solve the problem by designating "sleep transistors" on a chip to micromanage the circuits in each core. These transistors completely turn off the voltage to transistors in circuits that are dormant.

AMD also puts portions of the processor to sleep, explains McGrath, but it does so by having an algorithm instruct the processor to enter various levels of sleep, reducing its clock speed so that standby computations aren't carried out as quickly. The algorithm "can ask a part to go into its lowest power state," he says. "There are five or six of these power states that are used depending on the load of the processor."
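For a sense of what such a governor does, the toy sketch below maps recent processor load onto one of several power states, choosing the lowest-power state whose clock still covers the load. The state table, thresholds, and selection rule are invented for illustration; they are not AMD's actual algorithm or its real power-state definitions.

    # Toy power-state "governor" in the spirit of the behavior McGrath
    # describes. States, clock fractions, and relative-power figures are
    # hypothetical placeholders.

    POWER_STATES = [
        ("P0-full",    1.00, 1.00),   # (name, clock fraction, rough relative power)
        ("P1-reduced", 0.75, 0.55),
        ("P2-slow",    0.50, 0.30),
        ("P3-idle",    0.25, 0.12),
        ("C1-sleep",   0.00, 0.02),
    ]

    def pick_state(load):
        """Return the lowest-power state whose clock fraction still covers
        the given utilization (a number between 0 and 1)."""
        for name, clock, power in reversed(POWER_STATES):   # try lowest power first
            if clock >= load:
                return name, clock, power
        return POWER_STATES[0]                               # fall back to full speed

    for load in (0.9, 0.6, 0.3, 0.05, 0.0):
        print(f"load {load:.2f} -> {pick_state(load)[0]}")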

Intel has announced prices for its new energy-efficient chips -- they're less expensive than AMD's current offerings, which will put pressure on its rival. For Intel, though, the test of whether its power-saving chips can compete well against AMD's offerings won't come until its new processors hit the market.


Frozen Bacteria Repair Own DNA for Millennia

Mason Inman
from National Geographic News

Bacteria can survive in deep freeze for hundreds of thousands of years by staying just alive enough to keep their DNA in good repair, a new study says.

In earlier work, researchers had found ancient bacteria in permafrost and in deep ice cores from Antarctica.

These bacteria, despite being trapped for millennia, could be revived and grown in the lab.

Some researchers had thought that bacteria would have to turn into dormant spores to survive for so long.

But if bacteria merely went dormant, metabolism would stop and various environmental factors would begin damaging their DNA.

Like an ancient scroll that's crumbling apart, the DNA becomes so damaged that it's indecipherable after about a hundred thousand years. Then the cells can't ever reproduce and the bacteria are effectively dead.

"Our results show that the best way to survive for a long time is to keep up metabolic activity," said Eske Willerslev, lead study author and a researcher at the University of Copenhagen in Denmark.

Doing this "allows for continuous DNA repair," Willerslev added.

The work suggests that if bacterial life existed on Mars or on Jupiter's moon Europa, it might still survive locked in icy soils.

The new study appears this week in the online advance edition of the Proceedings of the National Academy of Sciences.

Living, Just Barely

The new study examined DNA from bacteria found in permafrost from Siberia in Russia and Canada. The permafrost dated back to about a half-million years ago.

What the scientists found is that the bacteria appear to have kept up their metabolism.

These barely living bacteria did not seem to be reproducing, but they were still taking in nutrients and giving off carbon dioxide, like humans do when they breathe.

The bacteria were using some of these resources to keep their DNA in good shape, the study authors said.

But the researchers found that bacteria couldn't keep chugging along like this forever.

"You see a large diversity [of bacteria] in the modern samples, and as you get older and older, the diversity declines," Willerslev said.

The amount of carbon dioxide the bacteria gave off also dropped with age.

The limit for life in the permafrost is somewhere around 600,000 years old, the researchers say.

In older permafrost, the team couldn't detect any carbon dioxide emissions or any large pieces of DNA indicative of living bacteria.

By about 750,000 years old, the bacteria trapped in the permafrost seemed to be completely dead.

Soil vs. Ice

Some scientists have claimed to be able to revive far older bacteria preserved in amber or salts, but Willerslev has doubts about these results.

"I've been extremely skeptical about these previous results," Willerslev said.

But in the much colder environments of Mars or Europa, life might be able to survive while frozen for much longer, Willerslev said.

At those lower temperatures, DNA damage would accumulate more slowly.

So the new results "could suggest that if you had similar life on Mars, it could exist for much longer," he said.

Brent Christner of Louisiana State University welcomes the new results, which he finds convincing.

Christner and others have been studying ancient ice from deep in the Antarctic ice sheet and have found live bacteria there that have been frozen in place for perhaps one to two million years.

These ancient bacteria seemed to be repairing themselves, but the team didn't have direct evidence showing how the microbes were surviving so long.

"This study confirms and corroborates everything we've been finding with ancient glacial ice," Christner said.

Still, Willerslev is cautious about making this connection.

Glacial ice, he said, "is a completely different environment from permafrost, which is basically frozen soil" and contains lots of nutrients.

Supersonic "Hail" Seeds Star Systems With Water

John Roach
from National Geographic News

Evidence of water vapor "raining down" on a newly forming star system is offering the first direct look at how water likely gets incorporated into planets, NASA researchers announced.

(Related: "First Proof of Wet 'Hot Jupiter' Outside Solar System" [July 11, 2007].)

The water—enough to fill Earth's oceans five times over—falls at supersonic speeds in the form of a hail-like substance from the envelope of dust and gas that gave birth to the star.

The hail vaporizes when it smacks into the dusty disk around the embryonic star where planets are thought to take shape, according to models that best explain the observed data.

"This is the first time we've ever seen the process by which the surrounding envelope's material arrives at the disk," said Dan Watson, an astrophysicist at the University of Rochester in New York.

Watson is lead author of a paper describing the discovery in tomorrow's issue of the journal Nature.

"Since the disk is what's eventually going to give rise to the planetary system around the star, what we are seeing is the process by which that disk formed and therefore the initial conditions of planetary formation."

Star Development

The new work is based on observations of an embryonic star system taken with NASA's Spitzer Space Telescope (see images of stellar nurseries captured by Spitzer).

Astronomers observe such protostar systems in the infrared spectrum, because visible light is easily absorbed by the systems' dusty environments, making them invisible to the naked eye.

Water vapor emits a distinctive spectrum in infrared light.

The protostar lies about a thousand light-years from Earth in a cloud of gas and dust. The whole system is called NGC 1333-IRAS 4B, or IRAS 4B for short.

The star is a warm, dense blob of material at the core of the cloud. A disk of planet-forming material is believed to circle the blob.

The radius of the disk is just larger than the distance between Pluto and the sun: about 3.6 billion miles (5.8 billion kilometers).

Based on their data, Watson and his colleagues say that the surface of the disk is -153 degrees Fahrenheit (-103 degrees Celsius).

While this seems frigid by Earth standards, Watson explained, the properties of water are different at the atmospheric pressure of the protostar, which is about a billionth of the pressure at sea level on Earth.

In addition, material equal to 23 times the mass of Earth arrives at the disk each year, Watson said.

"That's the material that's heating on arrival and then gradually cooling as it joins the lower parts of the disk," he said.

"This is very wet stuff. The original state is very wet," he added. "There's plenty of water to make a solar system out of."

Right Angle

Of the 30 embryonic star systems observed with Spitzer, only IRAS 4B showed signs of water vapor.

According to Watson, this is most likely because the protostar's axis points almost directly at Earth.

"The other 29 could very well have just as much water emission as IRAS 4B, but they are turned the wrong way and you can't see them," he said.

The team has already identified hundreds more protostar systems like IRAS 4B and plans to observe them with the Spitzer telescope, including more stars that exhibit this rare orientation.

Wednesday, August 29, 2007

Higher Games

It's been 10 years since IBM's Deep Blue beat Garry Kasparov in chess. A prominent philosopher asks what the match meant.

By Daniel C. Dennett

World chess champion Garry Kasparov during his sixth and final game against IBM’s Deep Blue in 1997. He lost in 19 moves.
Credit: Stan Honda/AFP/Getty Images

In the popular imagination, chess isn't like a spelling bee or Trivial Pursuit, a competition to see who can hold the most facts in memory and consult them quickly. In chess, as in the arts and sciences, there is plenty of room for beauty, subtlety, and deep originality. Chess requires brilliant thinking, supposedly the one feat that would be--forever--beyond the reach of any computer. But for a decade, human beings have had to live with the fact that one of our species' most celebrated intellectual summits--the title of world chess champion--has to be shared with a machine, Deep Blue, which beat Garry Kasparov in a highly publicized match in 1997. How could this be? What lessons could be gleaned from this shocking upset? Did we learn that machines could actually think as well as the smartest of us, or had chess been exposed as not such a deep game after all?

The following years saw two other human-machine chess matches that stand out: a hard-fought draw between Vladimir Kramnik and Deep Fritz in Bahrain in 2002 and a draw between Kasparov and Deep Junior in New York in 2003, in a series of games that the New York City Sports Commission called "the first World Chess Championship sanctioned by both the Fédération Internationale des Échecs (FIDE), the international governing body of chess, and the International Computer Game Association (ICGA)."

The verdict that computers are the equal of human beings in chess could hardly be more official, which makes the caviling all the more pathetic. The excuses sometimes take this form: "Yes, but machines don't play chess the way human beings play chess!" Or sometimes this: "What the machines do isn't really playing chess at all." Well, then, what would be really playing chess?

This is not a trivial question. The best computer chess is well nigh indistinguishable from the best human chess, except for one thing: computers don't know when to accept a draw. Computers--at least currently existing computers--can't be bored or embarrassed, or anxious about losing the respect of the other players, and these are aspects of life that human competitors always have to contend with, and sometimes even exploit, in their games. Offering or accepting a draw, or resigning, is the one decision that opens the hermetically sealed world of chess to the real world, in which life is short and there are things more important than chess to think about. This boundary crossing can be simulated with an arbitrary rule, or by allowing the computer's handlers to step in. Human players often try to intimidate or embarrass their human opponents, but this is like the covert pushing and shoving that goes on in soccer matches. The imperviousness of computers to this sort of gamesmanship means that if you beat them at all, you have to beat them fair and square--and isn't that just what Kasparov and Kramnik were unable to do?

Yes, but so what? Silicon machines can now play chess better than any protein machines can. Big deal. This calm and reasonable reaction, however, is hard for most people to sustain. They don't like the idea that their brains are protein machines. When Deep Blue beat Kasparov in 1997, many commentators were tempted to insist that its brute-force search methods were entirely unlike the exploratory processes that Kasparov used when he conjured up his chess moves. But that is simply not so. Kasparov's brain is made of organic materials and has an architecture notably unlike that of Deep Blue, but it is still, so far as we know, a massively parallel search engine that has an outstanding array of heuristic pruning techniques that keep it from wasting time on unlikely branches.

True, there's no doubt that investment in research and development has a different profile in the two cases; Kasparov has methods of extracting good design principles from past games, so that he can recognize, and decide to ignore, huge portions of the branching tree of possible game continuations that Deep Blue had to canvass seriatim. Kasparov's reliance on this "insight" meant that the shape of his search trees--all the nodes explicitly evaluated--no doubt differed dramatically from the shape of Deep Blue's, but this did not constitute an entirely different means of choosing a move. Whenever Deep Blue's exhaustive searches closed off a type of avenue that it had some means of recognizing, it could reuse that research whenever appropriate, just like Kasparov. Much of this analytical work had been done for Deep Blue by its designers, but Kasparov had likewise benefited from hundreds of thousands of person-years of chess exploration transmitted to him by players, coaches, and books.

It is interesting in this regard to contemplate Bobby Fischer's proposal to restore the game of chess to its intended rational purity by requiring that the major pieces be randomly placed in the back row at the start of each game (randomly, but in mirror image for black and white, with a white-square bishop and a black-square bishop, and the king between the rooks). Fischer Random Chess would render the mountain of memorized openings almost entirely obsolete, for humans and machines alike, since they would come into play much less than 1 percent of the time. The chess player would be thrown back onto fundamental principles; one would have to do more of the hard design work in real time. It is far from clear whether this change in rules would benefit human beings or computers more. It depends on which type of chess player is relying most heavily on what is, in effect, rote memory.

The fact is that the search space for chess is too big for even Deep Blue to explore exhaustively in real time, so like Kasparov, it prunes its search trees by taking calculated risks, and like Kasparov, it often gets these risks precalculated. Both the man and the computer presumably do massive amounts of "brute force" computation on their very different architectures. After all, what do neurons know about chess? Any work they do must use brute force of one sort or another.

It may seem that I am begging the question by describing the work done by Kasparov's brain in this way, but the work has to be done somehow, and no way of getting it done other than this computational approach has ever been articulated. It won't do to say that Kasparov uses "insight" or "intuition," since that just means that Kasparov himself has no understanding of how the good results come to him. So since nobody knows how Kasparov's brain does it--least of all Kasparov himself--there is not yet any evidence at all that Kasparov's means are so very unlike the means exploited by Deep Blue.

People should remember this when they are tempted to insist that "of course" Kasparov plays chess in a way entirely different from how a computer plays the game. What on earth could provoke someone to go out on a limb like that? Wishful thinking? Fear?

In an editorial written at the time of the Deep Blue match, "Mind over Matter" (May 10, 1997), the New York Times opined:

The real significance of this over-hyped chess match is that it is forcing us to ponder just what, if anything, is uniquely human. We prefer to believe that something sets us apart from the machines we devise. Perhaps it is found in such concepts as creativity, intuition, consciousness, esthetic or moral judgment, courage or even the ability to be intimidated by Deep Blue.

The ability to be intimidated? Is that really one of our prized qualities? Yes, according to the Times:

Nobody knows enough about such characteristics to know if they are truly beyond machines in the very long run, but it is nice to think that they are.

Why is it nice to think this? Why isn't it just as nice--or nicer--to think that we human beings might succeed in designing and building brainchildren that are even more wonderful than our biologically begotten children? The match between Kasparov and Deep Blue didn't settle any great metaphysical issue, but it certainly exposed the weakness in some widespread opinions. Many people still cling, white-knuckled, to a brittle vision of our minds as mysterious immaterial souls, or--just as romantic--as the products of brains composed of wonder tissue engaged in irreducible noncomputational (perhaps alchemical?) processes. They often seem to think that if our brains were in fact just protein machines, we couldn't be responsible, lovable, valuable persons.

Finding that conclusion attractive doesn't show a deep understanding of responsibility, love, and value; it shows a shallow appreciation of the powers of machines with trillions of moving parts.

Daniel Dennett is the codirector of the Center for Cognitive Studies at Tufts University, where he is also a professor of philosophy.

Uninspiring Vista


How Microsoft's long-awaited operating system disappointed a stubborn fan.

By Erika Jonietz

Vista's Aero visual environment includes translucent window borders and the flip 3-D feature, shown above, which allows a user to cycle through a stack of open windows to find the desired application. Vista also offers "Gadgets," small programs that recall Mac "Widgets" (far right of screen above).

For most of the last two decades, I have been a Microsoft apologist. I mean, not merely a contented user of the company's operating systems and software, not just a fan, but a champion. I have insisted that MS-DOS wasn't hard to use (once you got used to it), that Windows 3.1 was the greatest innovation in desktop operating systems, that Word was in fact superior to WordPerfect, and that Windows XP was, quite simply, "it."

When I was forced to use Apple's Mac OS (versions 7.6 through 9.2) for a series of jobs, I grumbled, griped, and insisted that Windows was better. Even as I slowly acclimated at work, I bought only Windows PCs for myself and avoided my roommate's recherché new iBook as if it were fugu. I admitted it was pretty, but I just knew that you got more computing power for your buck from an Intel-based Windows machine, and of course there was far more software available for PCs. Yet my adoration wasn't entirely logical; I knew from experience, for example, that Mac crashes were easier to recover from than the infamous Blue Screen of Death. At the heart of it all, I was simply more used to Windows. Even when I finally bought a Mac three years ago, it was solely to meet the computing requirements of some of the publications I worked with. I turned it on only when I had to, sticking to my Windows computer for everyday tasks.

So you might think I would be predisposed to love Vista, Microsoft's newest version of Windows, which was scheduled to be released to consumers at the end of January. And indeed, I leaped at the opportunity to review it. I couldn't wait to finally see and use the long-delayed operating system that I had been reading and writing about for more than three years. Regardless of widespread skepticism, I was confident that Vista would dazzle me, and I looked forward to saying so in print.

Ironically, playing around with Vista for more than a month has done what years of experience and exhortations from Mac-loving friends could not: it has converted me into a Mac fan.

A little context and a caveat: in order to meet print deadlines, I had to review the "RC1" version of Vista Ultimate, which Microsoft released in order to gather feedback from over-eager early adopters. Such post-beta, prerelease testing reveals bugs and deficits that in-house testing misses; debuggers cannot mimic all the various configurations of hardware, software, and peripherals that users will assemble. And Vista RC1 was maddeningly buggy. Although I reminded myself repeatedly that most of the problems I encountered would be fixed in the final version, my opinions about Vista are probably colored by my frustrations.

Still, my very first impression of Vista was positive. Quite simply, it's beautiful. The Aero visual interface provides some cool effects, such as translucent window borders and a way to scroll through a 3-D "stack" of your open windows to find the one you want. Networking computers is virtually automatic, as it was supposed to be but never quite has been with Windows XP. The Photo Gallery is the best built-in organizer I've used to manage digital pictures; it even includes basic photo correction tools.

But many of Vista's "new" features seemed terribly familiar to me--as they will to any user of Apple's OS X Tiger operating system. Live thumbnails that display petite versions of minimized windows, search boxes integrated into every Explorer window, and especially the Sidebar--which contains "Gadgets" such as a weather updater and a headline reader--all mimic OS X features introduced in 2005. The Windows versions are outstanding--they're just not really innovative.

Unfortunately, Vista RC1 contained bugs that rendered some promising features, such as the new version of Windows Media Center, unusable for me (an acquaintance who acquired a final copy of Vista ahead of release assures me that all that has been fixed).

My efforts to get Media Center working highlighted two big problems with Vista. First, it's a memory hog. The hundreds of new features jammed into it have made it a prime example of software bloat, perhaps the quintessence of programmer Niklaus Wirth's law that software gets slower faster than hardware gets faster (for more on the problems with software design that lead to bloat, see "Anything You Can Do, I Can Do Meta"). Although my computer meets the minimum requirements of a "Vista Premium Ready PC," with one gigabyte of RAM, I could run only a few simple programs, such as a Web browser and word processor, without running out of memory. I couldn't even watch a movie: Windows Media Player could read the contents of the DVD, but there wasn't enough memory to actually play it. In short, you need a hell of a computer just to run this OS.

Second, users choosing to install the 64-bit version of Vista on computers they already own will have a hard time finding drivers, the software needed to control hardware subsystems and peripherals such as video cards, modems, or printers. Microsoft's Windows Vista Upgrade Advisor program, which I ran before installing Vista, assured me that my laptop was fully compatible with the 64-bit version. But once I installed it, my speakers would not work. It seems that none of the companies concerned had written a driver for my sound card; it took more than 10 hours of effort to find a workaround. Nor do drivers exist for my modem, printer, or several other things I rely on. For some of the newer components, like the modem, manufacturers will probably have released 64-bit drivers by the time this review appears. But companies have no incentive to write complicated new drivers for older peripherals like my printer. And because rules written into the 64-bit version of Vista limit the installation of some independently written drivers, users will be virtually forced to buy new peripherals if they want to run it.

Struggling to get my computer to do the most basic things reminded me forcefully of similar battles with previous versions of Windows--for instance, the time an MIT electrical engineer had to help me figure out how to get my computer to display anything on my monitor after I upgraded to Windows 98. Playing with OS X Tiger in order to make accurate comparisons for this review, I had a personal epiphany: Windows is complicated. Macs are simple.

This may seem extraordinarily obvious; after all, Apple has built an entire advertising campaign around the concept. But I am obstinate, and I have loved Windows for a long time. Now, however, simplicity is increasingly important to me. I just want things to work, and with my Mac, they do. Though my Mac barely exceeds the processor and memory requirements for OS X Tiger, every bundled program runs perfectly. The five-year-old printer that doesn't work at all with Vista performs beautifully with OS X, not because the manufacturer bothered to write a new Mac driver for my aging standby, but because Apple included a third-party, open-source driver designed to support older printers in Tiger. Instead of facing the planned obsolescence of my printer, I can stick with it as long as I like.

And my deepest-seated reasons for preferring Windows PCs--more computing power for the money and greater software availability--have evaporated in the last year. Apple's decision to use the same Intel chips found in Windows machines has changed everything. Users can now run OS X and Windows on the same computer; with third-party software such as Parallels Desktop, you don't even need to reboot to switch back and forth. The chip swap also makes it possible to compare prices directly. I recently used the Apple and Dell websites to price comparable desktops and laptops; they were $100 apart or less in each case. The difference is that Apple doesn't offer any lower-end processors, so its cheapest computers cost quite a bit more than the least-expensive PCs. As Vista penetrates the market, however, the slower processors are likely to become obsolete--minimizing any cost differences between PCs and Macs.

I may need Windows for a long time to come; many electronic gadgets such as PDAs and MP3 players can only be synched with a computer running Windows, and some software is still not available for Macs. But the long-predicted migration of software from the desktop to the Internet is finally happening. Organizations now routinely access crucial programs from commercial Web servers, and consumers use Google's services to compose, edit, and store their e-mail, calendars, and even documents and spreadsheets (see "Homo Conexus," July/August 2006). As this shift accelerates, finding software that works with a particular operating system will be less of a concern. People will be able to base decisions about which OS to use strictly on merit, and on personal preference. For me, if the choice is between struggling to configure every feature and being able to boot up and get to work, at long last I choose the Mac.

Erika Jonietz is a Technology Review senior editor.

WINDOWS VISTA operating system
$99.95-$399.00
www.microsoft.com/windowsvista

Electric Fields Kill Tumors

A promising device uses electric fields to destroy cancer cells in the brain.

By Katherine Bourzac

Zapping tumors: Brain-cancer patients in a trial for a portable device that sends a weak electric field into the brain must wear electrodes almost constantly. One patient in a pilot clinical trial for the device, who still had cancer after radiation, chemotherapy, and surgery, experienced a complete recovery. The MRI at top shows a tumor on the left side of this patient’s brain before treatment. The MRI at bottom, taken after eight months of treatment, shows no tumor.
Credit: Yoram Palti, NovoCure (top image); Proceedings of the National Academy of Sciences (bottom MRIs)



An Israeli company is conducting human tests for a device that uses weak electric fields to kill cancer cells but has no effect on normal cells. The device is in late-stage clinical trials in the United States and Europe for glioblastoma, a deadly brain cancer. It is also being tested in Europe for its effectiveness against breast cancer. In the lab and in animal testing, treatment with electric fields has killed cancer cells of every type tested.

The electric-field therapy was developed by Yoram Palti, a physiologist at the Technion-Israel Institute of Technology, in Haifa, who founded the company NovoCure to commercialize the treatment. Palti's electric fields cause dividing cancer cells to explode while having no significant impact on normal tissues. The range of electric fields generated by the device harms only dividing cells. And since normal cells divide at a much slower rate than cancer cells, the electric fields target cancer cells. "An Achilles' heel of cancer cells is that they have to divide," says Herbert Engelhard, chief of neuro-oncology in the department of neurosurgery at the University of Illinois, Chicago.

Even after chemotherapy, radiation therapy, and surgery, about 85 to 90 percent of glioblastoma patients' cancer still progresses, and their survival rates are low, says Engelhard. He has about 10 glioblastoma patients enrolled in the trial, which is testing the unusual treatment in patients for whom all other approaches have failed. Engelhard says that the results are encouraging but that it's too early to comment on the treatment's efficacy.

The electric fields' different effects on normal and dividing cells mostly have to do with geometry. A dividing cell has what Palti calls "an hourglass shape rather than a round shape." The electric field generated by the NovoCure device passes around and through round cells in a uniform fashion. But the narrow neck that pinches in at the center of a dividing cell acts like a lens, concentrating the electric field at this point. This non-uniform electric field wreaks havoc on dividing cells. The electric field tears apart important biological molecules, such as DNA and the structural proteins that pull the chromosomes into place during cell division. Dividing cells simply "disintegrate," says Palti.

Palti, who for years has been studying the effect of electric fields on cancer and normal cells, says that he has verified this mechanism in computer models and experiments in the lab. "The physics are solid," says David Cohen, associate professor of radiology at Harvard Medical School.

Patients in the glioblastoma clinical trial wear the device almost constantly, carrying necessary components in a briefcase. A wire emerging from the briefcase connects to adhesive electrodes covering the skull. Alternating electric fields pass through the scalp, into the skull, and on to the brain. The Food and Drug Administration approved the device for late-stage clinical trials for glioblastoma following promising results from a pilot study in 10 patients, one of whom had a complete recovery.

One exciting result from his studies, says Palti, is that there is "excellent synergy between electric-field treatment and chemotherapy." In an unpublished lab study of several types of cancer, he says, adding electric-field treatment makes several chemotherapeutics more effective at lower doses. NovoCure is now conducting a pilot trial in Europe in which patients begin electric-field treatment in conjunction with chemotherapy when they are first diagnosed with glioblastoma. The results are preliminary, but, Palti says, "I strongly believe that the combination treatment will ... enable one to reduce the chemo doses to levels where their side effects will be significantly reduced."

Palti says that after more than 200 cumulative months of electric-field treatment in several patients, there have been no side effects beyond irritation of the scalp. "So far, toxicity seems to be low," says Engelhard. This stands in stark contrast to chemotherapy and radiation, which cause many side effects, including nausea, hair loss, and fatigue.

One worry is that the electric-field treatment could affect healthy cells that are dividing. The electric fields emerging from the electrodes can't be focused, says Cohen, and although they are primarily concentrated in the brain in the glioblastoma trial, they may also reach other parts of the body where cells are dividing. Cells in the bone marrow, for example, multiply at a great rate to create red blood cells and immune cells. But Palti says that the electric fields have no effect on blood-cell counts. The bone and muscle surrounding the marrow appear to protect the cells.

It's unclear how long patients will need to wear the device. "We're hesitant to stop treatment, because the consequences could be severe," says Palti, although one patient whose cancer has disappeared has stopped wearing the device. Patients must go to the clinic twice a week to have their heads shaved so that their hair doesn't interrupt contact between the scalp and the electrodes. The device itself costs only about $1,000 to manufacture, but replacing the electrodes twice a week is expensive.

Engelhard says that he got involved with the NovoCure clinical trial because the electric-field treatment is "radically different" from all existing cancer treatments. For patients with recurrent glioblastoma and other deadly forms of cancer, there are few options. "Researching and testing new therapies for this type of patient is very important," says Engelhard.

Nanowire LEDs

Infrared light-emitting nanowires could lead to optical communications on microchips.

By Kevin Bullis

Microscopic LED: A thin indium-nitride nanowire spans two electrodes. When a current is applied, it emits infrared light.
Credit: IBM Research


Researchers at IBM Research in Yorktown Heights, NY, have demonstrated a new way to convert electricity into light in nanowire-based light-emitting diodes (LEDs). The nanowire LEDs could eventually be used for telecommunications and for faster communications between devices on microchips. The approach could also pave the way for a new type of bright, efficient display.

The researchers built an LED resembling a transistor that consists of an indium-nitride nanowire stretched between two electrodes on top of a silicon substrate. The nanowire is about 100 nanometers wide and spans a distance of less than 10 micrometers. When the researchers apply a current to the nanowire, it emits light. While nanowires that emit light have been made before, the new devices rely on different physical mechanisms that are simpler; as a result, the nanowire LED could be more efficient and have improved performance. What's more, the device succeeds in emitting infrared light, which has been particularly difficult for nanowires to do, says Phaedon Avouris, one of the IBM researchers.

Typically, light in LEDs is produced by injecting both electrons and their positive counterparts, holes, into an active material, where they combine and emit light. With the new devices, the researchers only have to inject electrons; these cause electrons and holes to form locally, inside the nanowires. The mechanism could be more efficient because a single electron can be used to generate more than one electron-hole pair. What's more, the researchers have demonstrated that the nanowires can produce more intense light emission than other LEDs.

The nanowires' small size and compatibility with silicon make them attractive for integration on chips, says Eugene Fitzgerald, a professor of materials science and engineering at MIT. The nanowires also emit infrared light, which makes them ideal for fiber-optic telecommunications and for optical communications between devices on microchips that could help dramatically speed up computers.

The nanowire LEDs extend the range of colors that can be emitted from nitride-based materials, Fitzgerald says. Nitride materials are the basis of the blue lasers in high-definition DVD players, he says, and they have also been useful for emitting green light. If the nanowires can be tuned to emit red light, as seems likely, then red, green, and blue LEDs could all be created with variations of the same material, making it practical to manufacture them all on the same substrate. Eventually, it may be possible to arrange such LEDs into the pixels of full-color displays that are brighter, more efficient, and better looking than today's flat-panel LCD displays, Fitzgerald says.

Not only did the wires emit infrared light, but they also showed a peculiar ability to emit more intense light as temperatures rose; ordinarily, at high temperatures light emission dims or stops. This could lead to LEDs that can withstand high temperatures, a property that could be useful for certain military applications, Avouris says.

The novel physical mechanisms underlying the indium-nitride nanowires' ability to emit light might have wider implications for nanowire research. If the mechanism used here works in other materials, it could expand the number of materials that might be used to create LEDs, Fitzgerald says. That could make LEDs cheaper and give researchers far greater versatility in creating devices with improved performance.

Ultrastrong Paper from Graphene

A new paperlike material could lead to novel types of light and flexible materials.

By Prachi Patel-Predd

The right stuff: Researchers at Northwestern University have reassembled one-atom-thick graphene sheets that make up soft and flaky graphite crystals in order to create a tough, flexible, paperlike material.
Credit: Dmitriy Dikin



Using graphite--the black flaky stuff employed in pencils--researchers at Northwestern University have created a strong, flexible, and lightweight paperlike material. It could be used as an electrolyte or hydrogen-storage material in fuel cells, as electrodes in supercapacitors and batteries, and as a super-thin chemical filter. It could also be mixed with polymers or metals to make materials for use in aircraft fuselages, cars, and buildings.

The new material is made of overlapping layers of graphene, one-atom-thick sheets of carbon atoms arranged in honeycomb-like hexagons. In contrast, graphite, which becomes powdery under pressure, is made of graphene sheets stacked one on top of the other.

Rodney Ruoff, a Northwestern nanoengineering professor who led the work, published in Nature this week, says that the methods behind making the novel graphene paper could lead to even stronger versions. Right now, water molecules hold together the individual 10-nanometer-thick graphene flakes to create the micrometers-thick graphene paper. By using other chemicals as glues, the researchers could make ultrastrong paperlike materials with various properties. "The future is particularly bright because the system is very flexible ... The chemistry is almost infinite," Ruoff says.

Individual sheets of graphene were not known to exist until three years ago, when Andre Geim, a professor of physics at the University of Manchester, in the UK, used adhesive tape to get a few flakes of graphene from a graphite crystal. Researchers still don't understand all of graphene's properties, but they know that it can conduct electrons extremely well and is known to be exceptionally strong. "Graphene is the toughest material in the world--tougher than diamond," Geim says. But in graphite, the graphene sheets are assembled in such a way that they do not bind strongly to each other. So they simply flake off under friction, creating a pencil's black marks.

Ruoff's idea was to "disassemble graphite into individual layers and reassemble them in a different way than they are in graphite." The goal was to find a way to glue the graphene platelets together while reassembling them, which would create a tough and flexible material.

Since it's hard to separate the graphene sheets in graphite, the researchers first used an acid to oxidize graphite and make graphite oxide. Then they put the graphite oxide in water. Individual graphene-oxide sheets easily separated in water.

When the researchers filtered the suspension, the graphene-oxide flakes settled down on the filter, randomly overlapping with each other. Water glued the flakes together; its hydrogen atoms bonded with the carbon atoms in adjacent flakes. The result was a dark-brown, thin, flexible graphene-oxide paper. By adjusting the concentration of graphite oxide in the water, the researchers changed the thickness of the paper, ranging from 1 to 100 micrometers.

In an effort to develop superstrong lightweight materials, others have used carbon nanotubes. And the new graphene-oxide paper is not as strong as carbon-nanotube films, Geim says. "The advantage of materials made from carbon nanotubes is they're much tougher, because they entangle like spaghetti," he says. "When you're dealing with flat sheets, they entangle very little and are breakable."

But the graphene-oxide paper has other key advantages. Graphite is a cheap raw material, and the filtration method is simple and leads to lots of graphene. Most important, the Northwestern researchers' work opens up a way to manipulate graphene sheets and make paperlike materials with different properties.

When Ruoff and his colleagues oxidize graphene into graphene oxide, for instance, the carbon-based material goes from being an electrical conductor to being an insulator. Ruoff says that he can alter graphene's chemistry in other ways to change its electrical properties and make it an insulator, a conductor, or even a semiconductor.

That electrical versatility, combined with an ultrastrong material, has some observers excited. "They haven't used any tough glue between the [graphene platelets]," Geim says. "I expect very, very tough materials if a proper glue between graphene is used."

Self-Assembling Nanostructures

Researchers find an easy route to complex nanomaterials.

By Kevin Bullis

No assembly required: Nanorods of cadmium sulfide with silver-sulfide quantum dots (dark spots) form automatically when researchers mix together the right starting chemicals.
Credit: Paul Alivisatos/University of California, Berkeley

Researchers at the University of California, Berkeley, have found an easy way to make a complex nanostructure that consists of tiny rods studded with nanocrystals. The new self-assembly synthesis method could lead to intricate nanomaterials for more-efficient solar cells and less expensive devices for directly converting heat into electricity.

In the structures, the quantum dots are all about the same size and are spaced evenly along the rods--a feat that in the past required special conditions such as a vacuum, with researchers carefully controlling the size and spacing of different materials, says Paul Alivisatos, the professor of chemistry and materials science at Berkeley who led the work. In contrast, Alivisatos simply mixes together the appropriate starting materials in a solution; these materials then arrange themselves into the orderly structure.

Such solution-processing techniques can lead to manufacturing methods in which materials, such as those used in solar cells, are printed on continuous sheets, driving down costs compared with other methods. "Anytime you make something in solution, rather than in a vacuum, it becomes a lot easier and cheaper," says Moungi Bawendi, a chemistry professor at MIT who was not involved in this work.

To make the rods, Alivisatos mixes a combination of methanol and a silver salt into a solution that already contains cadmium-sulfide nanorods. Cadmium ions have a strong affinity for methanol. As a result, when the materials are mixed, the methanol draws cadmium out of the nanorods. Silver ions then fill in the vacant spots left by the cadmium, forming areas of silver sulfide within the rod. At the same time, differences in the crystalline structures of the cadmium-sulfide rods and the silver-sulfide quantum dots regulate the dots' size and spacing. This is the first time such differences have been used to control the self-assembly of materials in solution.

The nanocrystal-studded rods could prove useful for solar cells and thermoelectric devices that convert heat directly into electricity. For example, in conventional solar cells, each photon only generates a single electron. But certain kinds of quantum dots convert single photons into multiple electrons, which could more than double the efficiency of solar cells. (See "Silicon and Sun.") The problem has been capturing those electrons to create an electrical current. Embedding quantum dots inside rods of another material could help with this problem, says Alivisatos. The quantum dots would absorb the light, while the other material would capture the electrons that the dots generate.
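
To see why capturing the electrons matters as much as generating them, here is a toy Python sketch relating photocurrent to the number of electrons generated per photon and the fraction actually collected. The quantum yields and collection efficiencies are invented for illustration, not measurements from the Berkeley work.

# Toy illustration (invented numbers): photocurrent relative to an ideal cell
# that produces and collects one electron per absorbed photon.

def relative_photocurrent(electrons_per_photon: float,
                          collection_efficiency: float) -> float:
    """Relative photocurrent = electrons generated per photon * fraction collected."""
    return electrons_per_photon * collection_efficiency

baseline = relative_photocurrent(1.0, 0.9)   # conventional absorber, good collection
mult_ex  = relative_photocurrent(2.5, 0.9)   # hypothetical multi-exciton dots, good collection
poor_col = relative_photocurrent(2.5, 0.3)   # same dots, poor collection

print(baseline, mult_ex, poor_col)           # 0.9, 2.25, 0.75

In this toy picture, multi-exciton dots only pay off if collection stays efficient, which is the role the surrounding rod would be meant to play.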

A similar configuration is promising for thermoelectrics, devices that directly convert heat into electricity. The alternating crystal structures in the nanorods could block the transfer of heat while allowing electrons to pass--two key features of such devices.

Having demonstrated the new method for making the structures, Alivisatos and his colleagues are beginning to study the potential photoelectric and thermoelectric properties of the materials. They will likely need to turn to different compounds, such as copper sulfide and cadmium sulfide--a combination that has been used for solar cells in the past, Alivisatos says. There's no guarantee, however, that these materials will form the same orderly structures, or indeed that the structures will perform as the researchers hope they will.

Even if these particular structures do not prove to be the key to low-cost, high-efficiency solar cells, the new self-assembly method for making nanostructures could inspire new materials that are. And Bawendi highlights the need to continue basic research like this to solve today's energy problems. "We don't know what the solution is going to be," he says. But if we create high-quality, carefully described materials as Alivisatos has done, "some of them may be the answer," Bawendi says.

Global 'sunscreen' has likely thinned

Global 'sunscreen' has likely thinned
A new NASA study has found that an important counterbalance to the warming of our planet by greenhouse gases -- sunlight blocked by dust, pollution, and other aerosol particles -- appears to have lost ground.

The thinning of Earth's "sunscreen" of aerosols since the early 1990s could have given an extra push to the rise in global surface temperatures. The finding, published recently in the journal Science, may lead to an improved understanding of recent climate change. In a related study published last week, scientists found that the opposing forces of global warming and the cooling from aerosol-induced "global dimming" can occur at the same time.

"When more sunlight can get through the atmosphere and warm Earth's surface, you're going to have an effect on climate and temperature," said lead author Michael Mishchenko of NASA's Goddard Institute for Space Studies (GISS), New York. "Knowing what aerosols are doing globally gives us an important missing piece of the big picture of the forces at work on climate".

The study uses the longest uninterrupted satellite record of aerosols in the lower atmosphere, a unique set of global estimates funded by NASA. Scientists at GISS created the Global Aerosol Climatology Project by extracting a clear aerosol signal from satellite measurements, dating back to 1978, that were originally designed to observe clouds and weather systems. The resulting data show large, short-lived spikes in global aerosols caused by major volcanic eruptions in 1982 and 1991, but a gradual decline since about 1990. By 2005, global aerosols had dropped as much as 20 percent from the relatively stable level between 1986 and 1991.
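
The 20 percent figure is a change measured against a stable baseline window. Below is a minimal Python sketch of that kind of baseline comparison; the yearly values are invented for illustration and are not the GISS data.

# Minimal sketch (hypothetical values, not the GISS record): change in a global
# aerosol index relative to a stable 1986-1991 baseline.

aerosol_index = {   # year -> hypothetical global mean aerosol optical depth
    1986: 0.150, 1987: 0.149, 1988: 0.151, 1989: 0.150, 1990: 0.148, 1991: 0.152,
    2000: 0.135, 2005: 0.120,
}

baseline_years = range(1986, 1992)
baseline = sum(aerosol_index[y] for y in baseline_years) / len(baseline_years)

for year in (2000, 2005):
    change_pct = 100.0 * (aerosol_index[year] - baseline) / baseline
    print(f"{year}: {change_pct:+.0f}% relative to 1986-1991 baseline")
    # with these invented values: 2000 -> -10%, 2005 -> -20%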

The NASA study also sheds light on the puzzling observations by other scientists that the amount of sunlight reaching Earth's surface, which had been steadily declining in recent decades, suddenly started to rebound around 1990. This switch from a "global dimming" trend to a "brightening" trend happened just as global aerosol levels started to decline, Mishchenko said.

While the Science paper does not prove that aerosols are behind the recent dimming and brightening trends (changes in cloud cover have not been ruled out), another new research result supports that conclusion. In a paper published March 8 in the American Geophysical Union's Geophysical Research Letters, a research team led by Anastasia Romanou of Columbia University's Department of Applied Physics and Mathematics, New York, also showed that the apparently opposing forces of global warming and global dimming can occur at the same time.

The GISS research team conducted the most comprehensive experiment to date using computer simulations of Earth's 20th-century climate to investigate the dimming trend. The combined results from nine state-of-the-art climate models, including three from GISS, showed that due to increasing greenhouse gases and aerosols, the planet warmed at the same time that direct solar radiation reaching the surface decreased. The dimming in the simulations closely matched actual measurements of sunlight declines recorded from the 1960s to 1990.
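
As a rough illustration of the ensemble averaging described here, the following Python sketch combines per-model trends into ensemble means; the model names and trend values are hypothetical placeholders, not results from the nine models used in the study.

# Sketch of multi-model ensemble averaging (all numbers invented).
model_trends = {
    # model name: (temperature trend, K/decade; surface solar trend, W/m^2/decade)
    "model_a": (0.10, -2.1),
    "model_b": (0.14, -1.6),
    "model_c": (0.08, -2.4),
}

n = len(model_trends)
mean_temp  = sum(t for t, _ in model_trends.values()) / n
mean_solar = sum(s for _, s in model_trends.values()) / n

print(f"ensemble mean warming: {mean_temp:+.2f} K/decade")
print(f"ensemble mean dimming: {mean_solar:+.2f} W/m^2/decade")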

Further simulations using one of the Goddard climate models revealed that aerosols blocking sunlight or trapping some of the sun's heat high in the atmosphere were the major driver in 20th-century global dimming. "Much of the dimming trend over the Northern Hemisphere stems from these direct aerosol effects," Romanou said. "Aerosols have other effects that contribute to dimming, such as making clouds more reflective and longer-lasting. These effects were found to be almost as important as the direct effects".

The combined effect of global dimming and warming may account for why one of the major impacts of a warmer climate -- a spun-up water cycle, with more evaporation, more cloud formation, and more rainfall -- has not yet been observed. "Less sunlight reaching the surface counteracts the effect of warmer air temperatures, so evaporation does not change very much," said Gavin Schmidt of GISS, a co-author of the paper. "Increased aerosols probably slowed the expected change in the hydrological cycle".

Whether the recent decline in global aerosols will continue is an open question. A major complicating factor is that aerosols are not uniformly distributed across the world and come from many different sources, some natural and some produced by humans. While global estimates of total aerosols are improving and being extended with new observations by NASA's latest generation of Earth-observing satellites, finding out whether the recent rise and fall of aerosols is due to human activity or natural changes will have to await the planned launch of NASA's Glory Mission in 2008.

"One of Glory's two instruments, the Aerosol Polarimetry Sensor, will have the unique ability to measure globally the properties of natural and human-made aerosols to unprecedented levels of accuracy," said Mishchenko, who is project scientist on the mission.


Posted by: Brooke Source

Sound Waves to Ignite Sun's Ring of Fire

Sound Waves to Ignite Sun's Ring of Fire
Researchers have found that the sun's magnetic field allows the release of wave energy from its interior, permitting sound waves to travel through thin fountains, or "spicules", upward and into the chromosphere. The chromosphere is the region of the sun that looks like a red ring of fire during an eclipse.
Credit: Zina Deretsky, National Science Foundation
Sound waves escaping the sun's interior create fountains of hot gas that shape and power a thin region of the sun's atmosphere which appears as a ruby red "ring of fire" around the moon during a total solar eclipse, as per research funded by the National Science Foundation (NSF) and NASA.

The results are presented today at the American Astronomical Society's Solar Physics Division meeting in Hawaii.

This region, called the chromosphere because of its color, is largely responsible for the deep ultraviolet radiation that bathes the Earth, producing the atmosphere's ozone layer.

It also has the strongest solar connection to climate variability.

"The sun's interior vibrates with the peal of millions of bells, but the bells are all on the inside of the building," said Scott McIntosh of the Southwest Research Institute in Boulder, Colo., lead member of the research team. "We've been able to show how the sound can escape the building and travel a long way using the magnetic field as a guide".

The new result also helps explain a mystery that's existed since the middle of the last century -- why the sun's chromosphere (and the corona above) is much hotter than the visible surface of the star. "It's getting warmer as you move away from the fire instead of cooler, certainly not what you would expect," said McIntosh.

"Researchers have long realized that observations of solar magnetic fields are the keys that will unlock the secrets of the sun's interior," said Paul Bellaire, program director in NSF's division of atmospheric sciences, which funded the research. "These scientists have found an ingenious way of using magnetic keys to pick those locks".

Using spacecraft, ground-based telescopes, and computer simulations, the results show that the sun's magnetic field allows the release of wave energy from its interior, permitting the sound waves to travel through thin fountains upward and into the solar chromosphere. The magnetic fountains form the mold for the chromosphere.

Scientists say that it's like standing in Yellowstone National Park and being surrounded by musical geysers that pop up at random, sending out shrill sound waves and hot water shooting high into the air.

"This work finds the missing piece of the puzzle that has fascinated a number of generations of solar astronomers," said Alexei Pevtsov, program scientist at NASA. "If you fit this piece into place, the whole picture of chromosphere heating becomes more clear".

Over the past twenty years, researchers have studied energetic sound waves as probes of the sun's interior because the waves are largely trapped by the sun's visible surface -- the photosphere. The new research shows that some of these waves can escape the photosphere into the chromosphere and corona.

To make the discovery, the team used observations from the SOHO and TRACE spacecraft combined with those from the Magneto-Optical filters at Two Heights, or MOTH, instrument in Antarctica, and the Swedish 1-meter Solar Telescope on the Canary Islands.

The observations gave detailed insights into how some of the trapped waves and their pent-up energy manage to leak out through magnetic "cracks" in the photosphere, sending mass and energy shooting upwards into the atmosphere above.

By analyzing motions of the solar atmosphere in detail, the researchers found that where there are strong knots in the Sun's magnetic field, sound waves from the interior can leak out and propagate upward into its atmosphere.

"The constantly evolving magnetic field above the solar surface acts like a doorman opening and closing the door for the waves that are constantly passing by," said Bart De Pontieu, a scientist at the Lockheed Martin Solar and Astrophysics Laboratory in Palo Alto, Calif.

These results were confirmed by state-of-the-art computer simulations showing how the leaking waves propel fountains of hot gas upward into the sun's atmosphere; the gas falls back to the surface a few minutes later.

Other research team members are Stuart Jeffries of the University of Hawaii and Viggo Hansteen of the University of Oslo and the Lockheed Martin Solar and Astrophysics Laboratory.


Posted by: Brooke Source

New View of Doomed Star

New View of Doomed Star
Credit: X-ray: NASA/CXC/GSFC/M.Corcoran et al.; Optical: NASA/STScI
Eta Carinae is a mysterious, extremely bright and unstable star located a mere stone's throw - astronomically speaking - from Earth at a distance of only about 7,500 light years. The star is believed to be consuming its nuclear fuel at an incredible rate, while quickly drawing closer to its ultimate explosive demise. When Eta Carinae does explode, it will be a spectacular fireworks display seen from Earth, perhaps rivaling the moon in brilliance. Its fate has been foreshadowed by the recent discovery of SN2006gy, a supernova in a nearby galaxy that was the brightest stellar explosion ever seen. The erratic behavior of the star that later exploded as SN2006gy suggests that Eta Carinae may explode at any time.

Eta Carinae, a star between 100 and 150 times more massive than the Sun, is near a point of unstable equilibrium where the star's gravity is almost balanced by the outward pressure of the intense radiation generated in its nuclear furnace. This means that slight perturbations of the star might cause enormous ejections of matter from its surface. In the 1840s, Eta Carinae underwent a massive eruption, ejecting more than 10 times the mass of the sun and briefly becoming the second brightest star in the sky. This explosion would have torn most other stars to pieces, but somehow Eta Carinae survived.

The latest composite image shows the remnants of that titanic event with new data from NASA's Chandra X-ray Observatory and the Hubble Space Telescope. The blue regions show the cool optical emission, detected by Hubble, from the dust and gas thrown off the star. This debris forms a bipolar shell around the star, which lies near the brightest point of the optical emission. This bipolar shell is itself surrounded by a ragged cloud of fainter material. An unusual jet points from the star to the upper left.

Chandra's data, depicted in orange and yellow, show the X-ray emission produced as material thrown off Eta Carinae rams into nearby gas and dust, heating the gas to temperatures in excess of a million degrees. This hot shroud extends far beyond the cooler, optical nebula and represents the outer edge of the interaction region. The X-ray observations show that the ejected outer material is enriched in complex atoms, particularly nitrogen, cooked inside the star's nuclear furnace and dredged up onto the stellar surface. The Chandra observations also show that the inner optical nebula glows faintly due to X-ray reflection. The X-rays reflected by the optical nebula come from very close to the star itself; they are generated by the high-speed collision of wind flowing from Eta Carinae's surface (moving at about 1 million miles per hour) with the wind of the companion star (which is about five times faster).

The companion is not directly visible in these images, but variability in X-rays in the regions close to the star signals the star's presence. Astronomers don't know exactly what role the companion has played in the evolution of Eta Carinae, or what role it will play in its future.


Posted by: Brooke Source

Ready for NASA climate change, ozone mission in tropics

Ready for NASA climate change, ozone mission in tropics
The NASA WB-57 plane will fly into clouds at 60,000 feet during the TC4 mission in Costa Rica, sampling cloud particles and chemistry.
A high-flying NASA mission over Costa Rica and Panama in July and August should help researchers better understand how tropical storms influence global warming and stratospheric ozone depletion, says a University of Colorado at Boulder professor who is one of two mission researchers for the massive field campaign.

Brian Toon, chair of CU-Boulder's atmospheric and oceanic sciences department, said the $12 million effort will mobilize in San Jose, Costa Rica, and involve about 400 scientists, students and support staff operating three NASA aircraft, seven satellites and a suite of other instruments. The team is targeting the gases and particles that flow out of the top of the vigorous storm systems that form over the warm tropical ocean, said Toon.

The warm summer waters of the Pacific Ocean in Central and South America are a breeding ground for the heat-driven convective storms targeted by the mission, said NASA officials. Such tropical systems are the major mechanism by which Earth lofts air into the upper troposphere and stratosphere, and they are characterized primarily by cumulus clouds with large, dense anvils and wispy cirrus clouds.

Known as the Tropical Composition, Cloud and Climate Coupling mission, or TC4, the expedition runs from July 16 through Aug. 8 and is NASA's largest field campaign in several years. The tropical storm systems under study pump air more than 40,000 feet above the surface, where they can influence the make-up of the stratosphere, home of Earth's protective ozone layer.

"This is a very little-studied region of the atmosphere, but it is crucial to understanding global climate change and changes in stratospheric ozone," Toon said.

One mission goal is to understand how transport of chemical compounds - both natural and man-made - occurs from the surface to the lower stratosphere, which is roughly 10 miles in altitude. Another goal is to understand the properties of high-altitude clouds and how they impact Earth's radiation budget, Toon said.

As a TC4 mission scientist, Toon will be coordinating daily flights of three NASA aircraft filled with scientific instruments that will collect data in concert with NASA satellites. The aircraft include the ER-2 -- NASA's modern version of the Air Force U2-S reconnaissance aircraft -- which can reach an altitude of 70,000 feet and which will fly above the clouds and act as a "surrogate satellite," he said.

The mission also includes a broad-winged WB-57 research plane that will fly into the cirrus clouds at 60,000 feet and sample cloud particles and the make-up of chemicals flowing from massive tropical storm systems. The third plane, a converted DC-8, will fly at about 35,000 feet to probe the region between Earth's troposphere and stratosphere and sample cloud particles and air chemistry.

"The critical lever in greenhouse warming is water in the upper troposphere," said Toon. "Added water, or more extensive clouds as a result of global warming, would significantly amplify the greenhouse effect from human made pollutants such as carbon dioxide." On the other hand, more extensive convection due to rising sea-surface temperatures could lead to more precipitation and less cloud cover, acting to "retard" greenhouse warming, he said.

Toon and his graduate students will be studying the size and role of ice particles in clouds to better understand how Earth might respond to warming temperatures. "We'd really like to understand the processes that control water as it is going into the stratosphere, which should help improve climate models," he said.

Toon, who spent several years helping to design the NASA mission and chaired the committee that organized the effort, also will be working with CU-Boulder graduate student Charles Bardeen in San Jose on daily weather forecasts, which will help dictate when planes can safely sample in targeted atmospheric regions.

Other participants from CU's oceanic and atmospheric sciences department, or ATOC, include Associate Professor Linnea Avallone, who will work with graduate students to sample water condensed in clouds. Associate Professor Peter Pilewskie and his students will study reflected sunlight from bright clouds to better understand Earth's energy budget in relation to climate change, while Research Associate Frank Evans will study ice cloud properties using radiometry.

Researchers from CU-Boulder's Cooperative Institute for Research in Environmental Sciences -- a joint venture of CU-Boulder and the National Oceanic and Atmospheric Administration -- also will participate in the mission. CIRES and NOAA have 14 scientists involved in the TC4 mission from Boulder.

Observations from a suite of NASA satellites flying in formation, known as the "A-Train," will complement the aircraft measurements. The satellites will measure ozone, water vapor, and carbon monoxide, and will map clouds, charting the aerosol particles inside them that affect cloud formation.

"The potential economic repercussions of global warming are almost unimaginable," said Toon. "We could lose large fractions of entire states over the next century or so if there are significant increases in sea level. "This mission will help us understand Earth's systems and what happens when we modify the planet".

Toon said NASA has made a huge investment in its satellite fleet over the years and in finally implementing the TC4 mission. "NASA has a commitment to better understand these complex issues," he said. "And our graduate students will probably be writing theses on data from the TC4 mission for the next decade".


Posted by: Tyler Source

Search for 'weird' life

Search for 'weird' life
A new report from the National Research Council examines the search for life elsewhere in the universe and asks whether the fundamental requirements of life as we generally know it are the only way phenomena recognized as "life" could be supported beyond our planet.

Whether "weird" life, as researchers sometimes refer to life with a different biochemical structure than life here, should be considered in the search for extraterrestrial life is looked at in the report.


Posted by: Brooke Source

Life elsewhere in Solar System

Life elsewhere in Solar System
The search for life elsewhere in the solar system and beyond should include efforts to detect what researchers sometimes refer to as "weird" life -- that is, life with an alternative biochemistry to that of life on Earth -- says a new report from the National Research Council. The committee that wrote the report observed that the fundamental requirements for life as we generally know it -- a liquid water biosolvent, carbon-based metabolism, a molecular system capable of evolution, and the ability to exchange energy with the environment -- are not the only ways to support phenomena recognized as life. "Our investigation made clear that life is possible in forms different than those on Earth," said committee chair John Baross, professor of oceanography at the University of Washington, Seattle.

The report emphasizes that "no discovery that we can make in our exploration of the solar system would have greater impact on our view of our position in the cosmos, or be more inspiring, than the discovery of an alien life form, even a primitive one. At the same time, it is clear that nothing would be more tragic in the American exploration of space than to encounter alien life without recognizing it".

The tacit assumption that alien life would utilize the same biochemical architecture as life on Earth means that researchers have artificially limited the scope of their thinking as to where extraterrestrial life might be found, the report says. The assumption that life requires water, for example, has limited thinking about likely habitats on Mars to those places where liquid water is believed to be present or to have once flowed, such as the deep subsurface. However, as per the committee, liquids such as ammonia or formamide could also work as biosolvents -- liquids that dissolve substances within an organism -- albeit through a different biochemistry. The recent evidence that liquid water-ammonia mixtures may exist in the interior of Saturn's moon Titan suggests that increased priority be given to a follow-on mission to probe Titan, a locale the committee considers the solar system's most likely home for weird life.

"It is critical to know what to look for in the search for life in the solar system," said Baross. "The search so far has focused on Earth-like life because that's all we know, but life that may have originated elsewhere could be unrecognizable compared with life here. Advances throughout the last decade in biology and biochemistry show that the basic requirements for life might not be as concrete as we thought".

Besides the possibility of alternative biosolvents, studies show that variations on some of the other basic tenets for life also might be able to support weird life. DNA on Earth works through the pairing of four chemical compounds called nucleotides, but experiments in synthetic biology have created structures with six or more nucleotides that can also encode genetic information and, potentially, support Darwinian evolution. Additionally, studies in chemistry show that an organism could utilize energy from alternative sources, such as through a reaction of sodium hydroxide and hydrochloric acid, meaning that such an organism could have an entirely non-carbon-based metabolism.

Scientists need to further explore variations of the requirements for life with particular emphasis on origin-of-life studies, which will help determine if life can exist without water or in environments where water is only present under extreme conditions, the report says. Most planets and moons in this solar system fall into one of these categories. Research should also focus on how organisms break down key elements, as even non-carbon-based life would need elements for energy, structure, and chemical reactions.

The report also stresses that the future search for alien life should not exclude additional research into terrestrial life. Through examination of extreme environments, such as deserts and deep under the oceans, studies have determined that life exists essentially anywhere water and a source of energy are found together on Earth. Field scientists should therefore seek out organisms with novel biochemistries and those that exist in areas where vital resources are scarce to better understand how life on Earth truly operates, the committee said. This improved understanding will contribute greatly toward seeking Earth-like life where the conditions necessary for its existence might be met, as in the case of subsurface Mars.

Space missions will need adjustment to increase the breadth of their search for life. Planned Mars missions, for example, should include instruments that detect components of light elements -- particularly carbon, hydrogen, oxygen, phosphorous, and sulfur -- as well as simple organic functional groups and organic carbon. Recent evidence indicates that another moon of Saturn, Enceladus, has active water geysers, raising the prospect that habitable environments may exist there and greatly increasing the priority of additional studies of this body.


Posted by: Jaison Source