
Thursday, September 6, 2007

E-paper with Photonic Ink

Photonic crystals are being used by a Toronto startup to create commercial devices that offer better color and resolution than other flexible displays.

By Duncan Graham-Rowe



Crystal light: Photonic crystals made out of silica beads (shown as gray balls) measuring 200 nanometers across are embedded in a spongy electroactive polymer and sandwiched between transparent electrodes. When a voltage is applied, an electrolyte fluid is drawn into the polymer composite, causing it to swell (shown as yellow in the middle image). This alters the spacing of the crystals, affecting which wavelengths of light they reflect. When the spacing is carefully controlled, the pixel can be made to reflect any color in the visible spectrum.
Credit: Nature Photonics



Scientists in Canada have used photonic crystals to create a novel type of flexible electronic-paper display. Unlike other such devices, the photonic-crystal display is the first with pixels that can be individually tuned to any color.

"You get much brighter and more-intense colors," says André Arsenault, a chemist at the University of Toronto and cofounder of Opalux, a Toronto-based company commercializing the photonic-crystal technology, called P-Ink.

Several companies, including MIT startup E Ink and French firm Nemoptic, have begun producing products with e-paper displays. E Ink's technology uses a process in which images are created by electrically controlling the movement of black or white particles within tiny microcapsules. Nemoptic's displays are based on twisting nematic liquid crystals. The benefit of such screens is that compared with traditional displays, they are much easier to view in bright sunlight and yet only use a fraction of the power.

While the quality and contrast of black-and-white e-paper displays are almost on par with real paper, color images have been lacking because each pixel is limited to a single primary color. To display a range of colors, pixels must be grouped in trios: one pixel is filtered red, another green, and the third blue. Varying the intensity of each pixel within the trio generates different colors. But Arsenault says that such systems lack intensity. For example, if one wants to make the whole screen red, then only one-third of the pixels will actually be red.

With P-Ink, it's a different story. "We can get 100 percent of the area to be red," Arsenault says. This is because each pixel can be tuned to create any color in the visible spectrum. "That's a three-times increase in the brightness of colors," he says. "It makes a huge difference."

P-Ink works by controlling the spacing between photonic crystals, which affects the wavelengths of light they reflect. Photonic crystals are the optical equivalent of semiconductor crystals. While semiconductor crystals influence the motion of electrons, photonic crystals affect the motion of photons.

Although there has recently been a great deal of research into using photonic crystals for everything from optical fibers to quantum computers, the phenomenon itself is ancient: photonic crystals are what give opals their iridescent appearance, for example. "There are many organisms that have coloration that doesn't come from a dye," says Arsenault. "This is the basis of our technology."

With P-Ink, each pixel in a display consists of hundreds of silica spheres. Each of these photonic crystals is about 200 nanometers in diameter and embedded in a spongelike electroactive polymer. These materials are sandwiched between a pair of electrodes along with an electrolyte fluid. When a voltage is applied to the electrodes, the electrolyte is drawn into the polymer, causing it to expand.

The swelling pushes the silica beads apart, changing the spacing of the crystal lattice and therefore the color it reflects. "As the distance between them becomes greater, the wavelengths reflected increase," says Arsenault. P-Ink is also bistable, meaning that once a pixel has been tuned to a color, it will hold that color for days without a power source. "And the material itself is intrinsically flexible," Arsenault says.
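The color shift Arsenault describes follows the standard Bragg-Snell estimate often used for opal-type photonic-crystal films. The sketch below is purely illustrative: the effective refractive index and the spacings are assumed round numbers, not Opalux's actual device parameters.

```python
def reflected_wavelength_nm(plane_spacing_nm, n_eff=1.35):
    """First-order Bragg reflection at normal incidence:
    lambda = 2 * d * n_eff, where d is the lattice plane spacing and
    n_eff is the effective refractive index of the silica/polymer
    composite (1.35 here is an assumed, illustrative value)."""
    return 2 * plane_spacing_nm * n_eff

# As the electrolyte swells the polymer and pushes the ~200 nm beads
# apart, the plane spacing grows and the reflected color shifts red-ward.
for d in (160, 200, 240):  # nm, illustrative spacings
    print(f"d = {d} nm -> reflects ~{reflected_wavelength_nm(d):.0f} nm")
```

With these assumed numbers the reflected peak sweeps from blue (~432 nm) through green (~540 nm) to red (~648 nm), which is why a single pixel can cover the whole visible spectrum.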

The technology was developed with Geoffrey Ozin and Daniel Puzzo, among others, at the University of Toronto and Ian Manners at the University of Bristol, in the UK. The group demonstrated how 0.3-millimeter pixels--about the same size as the pixels in many LCD displays--can independently generate a range of colors. Their results are published in the August issue of the journal Nature Photonics. "One single material can give all the necessary colors for a display without filters," says Arsenault.

In fact, by making the crystals slightly larger, it's also possible to take them beyond the visible-light range and into infrared, says Arsenault. The effects in this range would be invisible to the human eye but could be used to make smart windows that control the amount of heat that passes through them, he says.

This is a step forward, says Jacques Angele, a cofounder of Nemoptic. "The aim of these color-display technologies is to be comparable with paper. Unfortunately, the brightness of the [other technologies] today is limited to about 30 percent of paper."

"It's a spectacular innovation," says Edzer Huitema, chief technology officer of the Dutch firm Polymer Vision, based in Eindhoven. Even traditional screens, such as cathode-ray tubes, LCDs, and plasma displays, use three or even four differently colored pixels to generate color. "It's a major limitation for all color-display technologies," Huitema says. When the color of each pixel is controlled, not only does the color quality increase, but the resolution should also improve by a factor of three.

There is one display technology, however, that can tune individual pixel color, says Angele. Both Kent Displays, in Ohio, and Japanese electronics firm Fujitsu have been taking this approach, which, in essence, involves placing the three colored pixels on top of each other. But besides being technically difficult and expensive, this approach reduces the brightness of the colors, Angele says. "It's difficult to have an optical stack without optical losses."

Arsenault predicts that Opalux will have the first products on the market within two years, probably in the form of advertising displays. But, he says, it will be a long while before P-Ink will be in a position to completely replace traditional displays. "The caveat is that we are not at video speeds," Arsenault says.

Currently, the P-Ink system can switch pixels in less than a second, which is on par with other e-paper displays. "We're still early in our development, and there's a lot of room for [improving] the material and optimizing its performance," says Arsenault.
Source: http://www.technologyreview.com

Saturday, September 1, 2007

Advanced Hurricane Forecasting

With the 2007 hurricane season under way, scientists believe their new forecasting model will make more-accurate predictions, thereby saving lives.

By Brittany Sauser

Forecasting destruction: An image of Hurricane Katrina nearing peak strength was taken on August 28, 2005, by NASA satellites (top). The new hurricane-forecasting model, HWRF, reproduced the life cycle of Hurricane Katrina and was able to more accurately predict its intensity (bottom).
Credit: NASA (top); NOAA/National Weather Service Environmental Modeling Center (bottom).


Forecasters are predicting yet another very active hurricane season for 2007, but this year meteorologists expect to be able to more accurately predict the path, structure, and intensity of storms. The tool that will make this happen is a new hurricane-forecasting model developed by scientists at the National Oceanic and Atmospheric Administration (NOAA) Environmental Modeling Center. It will utilize advanced physics and data collected from environmental-observation equipment to outperform current models and provide scientists with real-time three-dimensional analysis of storm conditions.

The model can see the inner core of the hurricane, where the eye wall is located, in higher resolution than any other model, says T. N. Krishnamurti, a professor of meteorology at Florida State University. The eye wall is the region around the hurricane eye where the strongest winds and heaviest rains are located, and thus the place of highest storm intensity. "It is a very comprehensive model that is a significant development for hurricane forecasting," says Krishnamurti.

Currently, experts at the National Hurricane Center and the National Weather Service rely mostly on the Geophysical Fluid Dynamics Laboratory (GFDL) model. The model, which has been in use since 1995, forecasts the path and intensity of storms. Until now, it was the only global model that provided specific intensity forecasts of hurricanes. And while it is a very good model, it's limited by the amount of data it's based on. "It has a very crude representation of storms," says Naomi Surgi, the project leader for the new model and a scientist in the Environmental Modeling Center. "GFDL is unable to use observations from satellites and aircraft in its analysis of the storm."

Isaac Ginis, a professor of oceanography at the University of Rhode Island (URI) who helped develop the GFDL model, agrees that the old model "has too many limitations" and, while it's able to forecast the path of a storm well, it is not as skillful at forecasting the intensity or power of a storm. Ginis is now a principal investigator for the new model, called the Hurricane Weather Research and Forecast (HWRF) model, which is able to gather a more varied and better set of observations and assimilate that data to produce a more accurate forecast.

This new model will use data collected from satellites, marine data buoys, and hurricane hunter aircraft, which fly directly into a hurricane's inner core and the surrounding atmosphere. The aircraft will be equipped with Doppler radars, which provide three-dimensional descriptions of the storm, most importantly observing the direction of hurricane winds. The aircraft will also be dropping ocean probes to better determine the location of the loop current, a warm ocean current in the Gulf of Mexico made up of little hot spots, known as warm core eddies, that give hurricanes moving over them a "real punch," says Surgi.

The hurricane model will then assimilate the data--wind conditions, temperature, pressure, humidity, and other oceanic and atmospheric factors in and around the storm--and analyze it using mathematics and physics to create a model, explains Surgi. To understand hurricane problems in the tropics, it is imperative to understand the physics of the air-sea interface. "In the last several years, we have learned a lot about the transfer of energy between the upper part of the ocean and the lowest layers of the atmosphere," she says. "And the energy fluxes across that boundary are tremendously important in terms of being able to forecast a hurricane's structure."

Improving the intensity forecast of a storm and being able to precisely analyze a hurricane's structure were scientists' main goals in developing the new model. It can now forecast these aspects from 24 hours out up to five days out with extreme accuracy, says Ginis. The new model was put to the test by running three full hurricane seasons--2004, 2005, and 2006--for storms in both the Atlantic and east Pacific basins, totaling close to 1,800 test runs. For example, the model was able to reproduce the life cycle of Hurricane Katrina very well, accurately forecasting that it would become a category 5 hurricane over the Gulf of Mexico--something the old model was unable to predict.

Over the next several years, scientists at NOAA will continue to improve upon these initial advancements with further use of ocean observations. They plan to couple the HWRF with a wave model, which will allow scientists to better forecast storm surge, inland flooding, and rainfall. NOAA has, in addition to partnering with URI in 2006, started collaborating with researchers at the University of South Alabama to work on coupling the HWRF with a wave model and enhancing its forecasting features.

"This model is enormously important for emergency response and emergency managers, and also the public," says Ginis, "because we not only want to know where the storm is going to make landfall, but also how powerful it is going to be."

Microsoft's Plan to Map the World in Real Time

Researchers are working on a system that allows sensors to track information and create up-to-date, searchable online maps.

By Kate Greene

Researchers at Microsoft are working on technology that they hope will someday enable people to browse online maps for up-to-the-minute information about local gas prices, traffic flows, restaurant wait times, and more. Eventually, says Suman Nath, a Microsoft researcher who works on the project, which is called SenseWeb, they would like to incorporate the technology into Windows Live Local (formerly Microsoft Virtual Earth), the company's online mapping platform.

By tracking real-life conditions, which are supplied directly by people or automated sensor equipment, and correlating that data with a searchable map, people could have a better idea of the activities going on in their local areas, says Nath, and make more informed decisions about, for instance, what driving route to take.


"The value that you get out of [real-time data mapping] is huge," he says, and the applications can range from finding a parking spot in a cavernous parking garage to checking the traffic flow in different parts of a city.

Other research groups at the University of California at Berkeley, UCLA, Stanford, and MIT are working on similar projects for tracking environmental information. For instance, UCLA has a project in which sensors -- devices that measure physical quantities such as temperature, pressure, and sound -- are integrated with Google Earth, the company's downloadable mapping software. In addition, companies such as Agent Logic and Corda process real-time data and can correlate it with a location, mostly for businesses and governmental organizations.

Moreover, within the past year, Microsoft, Google, and Yahoo have been vying with each other to generate the most useful electronic maps (see "Killer Maps"). For the most part, though, the local information offered by Web-based mapping applications is updated only infrequently. And sites that offer real-time, local updates (about the status of public transportation, for instance), while useful, are designed for a single purpose.

What makes Microsoft's experimental project different from others that track information, Nath says, is that it would allow people to search for different types of real-time data within a user-specified area on a map, and progressively narrow that search. For instance, a person could highlight a region of a city and search for restaurants. SenseWeb would gather information provided by restaurants about their wait times and display it in various ways: the wait at specific establishments, the average wait for all restaurants in the region, or the minimum and maximum waits. If you needed to find a place to eat quickly, says Nath, but you learn that the minimum wait is 30 minutes in a certain part of town, you'd know to look in a different area. "You don't have to take the time to look at each individual restaurant," Nath says.

Additionally, a person could zoom into an area and see newly calculated information, such as maximum, minimum, and average wait times, according to the newly defined geography.

Searching for these types of real-time statistics within different areas on a map is a new take on displaying data on maps, says Phillip Levis, professor of computer science at Stanford University. "It's very different to give the average wait time in the city than it is to scan around the city and see each restaurant's wait time," he says.

SenseWeb is composed of three basic parts: sensors (or data-collecting units), Microsoft's database indexing scheme that sorts through the information, and the online map that lets users interact with the data. The sensors used in the project can vary in form and function, and can include thermometers, light sensors, cameras, and restaurant computers. SenseWeb puts baseline sensor information, such as location and function, into a database that's searchable by location and type of sensor information.

Then, if someone wants to check traffic conditions along a stretch of highway, for instance, the database will direct queries to cameras ("Web cams") located along the route -- and an image of traffic shows up on the map.

In order for people with sensors -- from researchers at universities to a private citizen with a Web cam -- to participate in SenseWeb, Nath says, they would have to be able to upload data to the Internet and provide information to the Microsoft group about their sensor, such as latitude, longitude, and the type of data it provides (for example, gas prices, temperature, or video).
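The workflow described above--sensors registering with a location and data type, then being selected by map region and summarized--can be sketched in a few lines. This is a toy stand-in for SenseWeb's indexing scheme, not Microsoft's actual code; all names and numbers are illustrative.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Sensor:
    lat: float
    lon: float
    kind: str       # e.g. "wait_time", "temperature", "webcam"
    reading: float  # latest value uploaded by the sensor's owner

class SensorRegistry:
    """Index sensors by location and type; answer region queries."""
    def __init__(self):
        self.sensors = []

    def register(self, sensor):
        self.sensors.append(sensor)

    def query(self, lat_min, lat_max, lon_min, lon_max, kind):
        """All sensors of the given kind inside the bounding box."""
        return [s for s in self.sensors
                if s.kind == kind
                and lat_min <= s.lat <= lat_max
                and lon_min <= s.lon <= lon_max]

    def summary(self, *region_and_kind):
        """(min, max, average) of readings in the region, or None."""
        readings = [s.reading for s in self.query(*region_and_kind)]
        if not readings:
            return None
        return min(readings), max(readings), mean(readings)

registry = SensorRegistry()
registry.register(Sensor(47.61, -122.33, "wait_time", 30))
registry.register(Sensor(47.62, -122.35, "wait_time", 45))
registry.register(Sensor(47.61, -122.34, "temperature", 18))

# Restaurant wait times within a highlighted map region:
print(registry.summary(47.60, 47.63, -122.36, -122.32, "wait_time"))
# -> (30, 45, 37.5)
```

Zooming the map simply reruns `summary` with a smaller bounding box, which is how the newly defined geography yields newly calculated statistics.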

One challenge for the SenseWeb project will be making the different types of information pulled into its database consistent enough to analyze and sort, says Samuel Madden, professor of computer science at MIT. For instance, there would need to be standard units for temperatures. "As soon as you start integrating all this data, you can imagine that weird things will happen," he says. "It's really a challenge to build tools that work with generic data and to come up with a way that anyone can publish their information."

Another, more fundamental hurdle for the SenseWeb project, Nath says, is getting people to register their sensors and sign on to the free program. Gas stations or restaurants may not even know about the project, or may not have an efficient way to pass along their data.

Therefore, in coming months, the Microsoft group will extend SenseWeb to universities that have already deployed sensors for other projects. In addition, the team is talking to a company that has sensors on parking spots, which, if integrated into Live Local, could help people find available parking more easily, he says.

For now, though, SenseWeb and Live Local are separate projects, according to Nath. The Live Local team "really loves this technology," he says, but right now "what's missing is the actual data."


Wednesday, August 29, 2007

Higher Games

It's been 10 years since IBM's Deep Blue beat Garry Kasparov in chess. A prominent philosopher asks what the match meant.

By Daniel C. Dennett

World chess champion Garry Kasparov during his sixth and final game against IBM’s Deep Blue in 1997. He lost in 19 moves.
Credit: Stan Honda/AFP/Getty Images

In the popular imagination, chess isn't like a spelling bee or Trivial Pursuit, a competition to see who can hold the most facts in memory and consult them quickly. In chess, as in the arts and sciences, there is plenty of room for beauty, subtlety, and deep originality. Chess requires brilliant thinking, supposedly the one feat that would be--forever--beyond the reach of any computer. But for a decade, human beings have had to live with the fact that one of our species' most celebrated intellectual summits--the title of world chess champion--has to be shared with a machine, Deep Blue, which beat Garry Kasparov in a highly publicized match in 1997. How could this be? What lessons could be gleaned from this shocking upset? Did we learn that machines could actually think as well as the smartest of us, or had chess been exposed as not such a deep game after all?

The following years saw two other human-machine chess matches that stand out: a hard-fought draw between Vladimir Kramnik and Deep Fritz in Bahrain in 2002 and a draw between Kasparov and Deep Junior in New York in 2003, in a series of games that the New York City Sports Commission called "the first World Chess Championship sanctioned by both the Fédération Internationale des Échecs (FIDE), the international governing body of chess, and the International Computer Game Association (ICGA)."

The verdict that computers are the equal of human beings in chess could hardly be more official, which makes the caviling all the more pathetic. The excuses sometimes take this form: "Yes, but machines don't play chess the way human beings play chess!" Or sometimes this: "What the machines do isn't really playing chess at all." Well, then, what would be really playing chess?

This is not a trivial question. The best computer chess is well nigh indistinguishable from the best human chess, except for one thing: computers don't know when to accept a draw. Computers--at least currently existing computers--can't be bored or embarrassed, or anxious about losing the respect of the other players, and these are aspects of life that human competitors always have to contend with, and sometimes even exploit, in their games. Offering or accepting a draw, or resigning, is the one decision that opens the hermetically sealed world of chess to the real world, in which life is short and there are things more important than chess to think about. This boundary crossing can be simulated with an arbitrary rule, or by allowing the computer's handlers to step in. Human players often try to intimidate or embarrass their human opponents, but this is like the covert pushing and shoving that goes on in soccer matches. The imperviousness of computers to this sort of gamesmanship means that if you beat them at all, you have to beat them fair and square--and isn't that just what Kasparov and Kramnik were unable to do?

Yes, but so what? Silicon machines can now play chess better than any protein machines can. Big deal. This calm and reasonable reaction, however, is hard for most people to sustain. They don't like the idea that their brains are protein machines. When Deep Blue beat Kasparov in 1997, many commentators were tempted to insist that its brute-force search methods were entirely unlike the exploratory processes that Kasparov used when he conjured up his chess moves. But that is simply not so. Kasparov's brain is made of organic materials and has an architecture notably unlike that of Deep Blue, but it is still, so far as we know, a massively parallel search engine that has an outstanding array of heuristic pruning techniques that keep it from wasting time on unlikely branches.

True, there's no doubt that investment in research and development has a different profile in the two cases; Kasparov has methods of extracting good design principles from past games, so that he can recognize, and decide to ignore, huge portions of the branching tree of possible game continuations that Deep Blue had to canvass seriatim. Kasparov's reliance on this "insight" meant that the shape of his search trees--all the nodes explicitly evaluated--no doubt differed dramatically from the shape of Deep Blue's, but this did not constitute an entirely different means of choosing a move. Whenever Deep Blue's exhaustive searches closed off a type of avenue that it had some means of recognizing, it could reuse that research whenever appropriate, just like Kasparov. Much of this analytical work had been done for Deep Blue by its designers, but Kasparov had likewise benefited from hundreds of thousands of person-years of chess exploration transmitted to him by players, coaches, and books.
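The "heuristic pruning" Dennett invokes has a concrete classical form in game programs: alpha-beta pruning, which skips branches that provably cannot affect the final choice. The sketch below runs minimax with alpha-beta cutoffs on an abstract game tree; it illustrates the idea only, and bears no resemblance to Deep Blue's actual hardware search.

```python
def alphabeta(node, depth, alpha, beta, maximizing, children, evaluate):
    """Minimax with alpha-beta pruning.

    children(node) returns the legal successor positions;
    evaluate(node) scores a leaf from the maximizing player's view.
    """
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    if maximizing:
        value = float("-inf")
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # prune: the opponent will never allow this line
        return value
    else:
        value = float("inf")
        for child in kids:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, children, evaluate))
            beta = min(beta, value)
            if beta <= alpha:
                break  # prune
        return value

# A tiny two-ply tree (made-up positions and scores):
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
scores = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}
best = alphabeta("root", 2, float("-inf"), float("inf"), True,
                 lambda n: tree.get(n, []), lambda n: scores.get(n, 0))
print(best)  # -> 3
```

Here the search never evaluates "b2": once "b1" scores 2 against an alpha of 3, the whole "b" branch is abandoned, which is exactly the kind of work both Deep Blue's hardware and, on Dennett's view, a grandmaster's brain must somehow avoid.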

It is interesting in this regard to contemplate the suggestion made by Bobby Fischer, who has proposed to restore the game of chess to its intended rational purity by requiring that the major pieces be randomly placed in the back row at the start of each game (randomly, but in mirror image for black and white, with a white-square bishop and a black-square bishop, and the king between the rooks). Fischer Random Chess would render the mountain of memorized openings almost entirely obsolete, for humans and machines alike, since they would come into play much less than 1 percent of the time. The chess player would be thrown back onto fundamental principles; one would have to do more of the hard design work in real time. It is far from clear whether this change in rules would benefit human beings or computers more. It depends on which type of chess player is relying most heavily on what is, in effect, rote memory.
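Fischer's constraints are precise enough to generate mechanically. The sketch below (an illustration, not any official implementation) produces one of the 960 legal starting rows: bishops on opposite-colored squares, king somewhere between the rooks, and black mirroring white.

```python
import random

def fischer_random_back_rank(rng=None):
    """Return a legal Fischer Random (Chess960) back rank as a list of
    eight piece letters, index 0 = the a-file."""
    rng = rng or random.Random()
    rank = [None] * 8
    # One bishop on an even index, one on an odd index: since square
    # color alternates along the rank, this puts them on opposite colors.
    rank[rng.choice(range(0, 8, 2))] = "B"
    rank[rng.choice(range(1, 8, 2))] = "B"
    # Queen and the two knights go on any three of the six free squares.
    empty = [i for i, p in enumerate(rank) if p is None]
    for piece in ("Q", "N", "N"):
        rank[empty.pop(rng.randrange(len(empty)))] = piece
    # The last three squares get rook, king, rook in left-to-right
    # order, so the king always ends up between the rooks.
    for i, piece in zip(sorted(empty), ("R", "K", "R")):
        rank[i] = piece
    return rank

print("".join(fischer_random_back_rank()))  # e.g. "RNBQKBNR" or any of 960 rows
```

Counting the choices (4 light-square bishop spots x 4 dark x 6 queen spots x 10 knight pairs) gives exactly 960 distinct positions, which is where the variant's other name, Chess960, comes from.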

The fact is that the search space for chess is too big for even Deep Blue to explore exhaustively in real time, so like Kasparov, it prunes its search trees by taking calculated risks, and like Kasparov, it often gets these risks precalculated. Both the man and the computer presumably do massive amounts of "brute force" computation on their very different architectures. After all, what do neurons know about chess? Any work they do must use brute force of one sort or another.

It may seem that I am begging the question by describing the work done by Kasparov's brain in this way, but the work has to be done somehow, and no way of getting it done other than this computational approach has ever been articulated. It won't do to say that Kasparov uses "insight" or "intuition," since that just means that Kasparov himself has no understanding of how the good results come to him. So since nobody knows how Kasparov's brain does it--least of all Kasparov himself--there is not yet any evidence at all that Kasparov's means are so very unlike the means exploited by Deep Blue.

People should remember this when they are tempted to insist that "of course" Kasparov plays chess in a way entirely different from how a computer plays the game. What on earth could provoke someone to go out on a limb like that? Wishful thinking? Fear?

In an editorial written at the time of the Deep Blue match, "Mind over Matter" (May 10, 1997), the New York Times opined:

The real significance of this over-hyped chess match is that it is forcing us to ponder just what, if anything, is uniquely human. We prefer to believe that something sets us apart from the machines we devise. Perhaps it is found in such concepts as creativity, intuition, consciousness, esthetic or moral judgment, courage or even the ability to be intimidated by Deep Blue.

The ability to be intimidated? Is that really one of our prized qualities? Yes, according to the Times:

Nobody knows enough about such characteristics to know if they are truly beyond machines in the very long run, but it is nice to think that they are.

Why is it nice to think this? Why isn't it just as nice--or nicer--to think that we human beings might succeed in designing and building brainchildren that are even more wonderful than our biologically begotten children? The match between Kasparov and Deep Blue didn't settle any great metaphysical issue, but it certainly exposed the weakness in some widespread opinions. Many people still cling, white-knuckled, to a brittle vision of our minds as mysterious immaterial souls, or--just as romantic--as the products of brains composed of wonder tissue engaged in irreducible noncomputational (perhaps alchemical?) processes. They often seem to think that if our brains were in fact just protein machines, we couldn't be responsible, lovable, valuable persons.

Finding that conclusion attractive doesn't show a deep understanding of responsibility, love, and value; it shows a shallow appreciation of the powers of machines with trillions of moving parts.

Daniel Dennett is the codirector of the Center for Cognitive Studies at Tufts University, where he is also a professor of philosophy.

Uninspiring Vista


How Microsoft's long-awaited operating system disappointed a stubborn fan.

By Erika Jonietz

Vista's Aero visual environment includes translucent window borders and the Flip 3D feature, shown above, which lets a user cycle through a stack of open windows to find the desired application. Vista also offers "Gadgets," small programs that recall Mac "Widgets" (far right of screen above).

For most of the last two decades, I have been a Microsoft apologist. I mean, not merely a contented user of the company's operating systems and software, not just a fan, but a champion. I have insisted that MS-DOS wasn't hard to use (once you got used to it), that Windows 3.1 was the greatest innovation in desktop operating systems, that Word was in fact superior to WordPerfect, and that Windows XP was, quite simply, "it."

When I was forced to use Apple's Mac OS (versions 7.6 through 9.2) for a series of jobs, I grumbled, griped, and insisted that Windows was better. Even as I slowly acclimated at work, I bought only Windows PCs for myself and avoided my roommate's recherché new iBook as if it were fugu. I admitted it was pretty, but I just knew that you got more computing power for your buck from an Intel-based Windows machine, and of course there was far more software available for PCs. Yet my adoration wasn't entirely logical; I knew from experience, for example, that Mac crashes were easier to recover from than the infamous Blue Screen of Death. At the heart of it all, I was simply more used to Windows. Even when I finally bought a Mac three years ago, it was solely to meet the computing requirements of some of the publications I worked with. I turned it on only when I had to, sticking to my Windows computer for everyday tasks.

So you might think I would be predisposed to love Vista, Microsoft's newest version of Windows, which was scheduled to be released to consumers at the end of January. And indeed, I leaped at the opportunity to review it. I couldn't wait to finally see and use the long-delayed operating system that I had been reading and writing about for more than three years. Regardless of widespread skepticism, I was confident that Vista would dazzle me, and I looked forward to saying so in print.

Ironically, playing around with Vista for more than a month has done what years of experience and exhortations from Mac-loving friends could not: it has converted me into a Mac fan.

A little context and a caveat: in order to meet print deadlines, I had to review the "RC1" version of Vista Ultimate, which Microsoft released in order to gather feedback from over-eager early adopters. Such post-beta, prerelease testing reveals bugs and deficits that in-house testing misses; debuggers cannot mimic all the various configurations of hardware, software, and peripherals that users will assemble. And Vista RC1 was maddeningly buggy. Although I reminded myself repeatedly that most of the problems I encountered would be fixed in the final version, my opinions about Vista are probably colored by my frustrations.

Still, my very first impression of Vista was positive. Quite simply, it's beautiful. The Aero visual interface provides some cool effects, such as translucent window borders and a way to scroll through a 3-D "stack" of your open windows to find the one you want. Networking computers is virtually automatic, as it was supposed to be but never quite has been with Windows XP. The Photo Gallery is the best built-in organizer I've used to manage digital pictures; it even includes basic photo correction tools.

But many of Vista's "new" features seemed terribly familiar to me--as they will to any user of Apple's OS X Tiger operating system. Live thumbnails that display petite versions of minimized windows, search boxes integrated into every Explorer window, and especially the Sidebar--which contains "Gadgets" such as a weather updater and a headline reader--all mimic OS X features introduced in 2005. The Windows versions are outstanding--they're just not really innovative.

Unfortunately, Vista RC1 contained bugs that rendered some promising features, such as the new version of Windows Media Center, unusable for me (an acquaintance who acquired a final copy of Vista ahead of release assures me that all that has been fixed).

My efforts to get Media Center working highlighted two big problems with Vista. First, it's a memory hog. The hundreds of new features jammed into it have made it a prime example of software bloat, perhaps the quintessence of programmer Niklaus Wirth's law that software gets slower faster than hardware gets faster (for more on the problems with software design that lead to bloat, see "Anything You Can Do, I Can Do Meta"). Although my computer meets the minimum requirements of a "Vista Premium Ready PC," with one gigabyte of RAM, I could run only a few simple programs, such as a Web browser and word processor, without running out of memory. I couldn't even watch a movie: Windows Media Player could read the contents of the DVD, but there wasn't enough memory to actually play it. In short, you need a hell of a computer just to run this OS.

Second, users choosing to install the 64-bit version of Vista on computers they already own will have a hard time finding drivers, the software needed to control hardware subsystems and peripherals such as video cards, modems, or printers. Microsoft's Windows Vista Upgrade Advisor program, which I ran before installing Vista, assured me that my laptop was fully compatible with the 64-bit version. But once I installed it, my speakers would not work. It seems that none of the companies concerned had written a driver for my sound card; it took more than 10 hours of effort to find a workaround. Nor do drivers exist for my modem, printer, or several other things I rely on. For some of the newer components, like the modem, manufacturers will probably have released 64-bit drivers by the time this review appears. But companies have no incentive to write complicated new drivers for older peripherals like my printer. And because rules written into the 64-bit version of Vista limit the installation of some independently written drivers, users will be virtually forced to buy new peripherals if they want to run it.

Struggling to get my computer to do the most basic things reminded me forcefully of similar battles with previous versions of Windows--for instance, the time an MIT electrical engineer had to help me figure out how to get my computer to display anything on my monitor after I upgraded to Windows 98. Playing with OS X Tiger in order to make accurate comparisons for this review, I had a personal epiphany: Windows is complicated. Macs are simple.

This may seem extraordinarily obvious; after all, Apple has built an entire advertising campaign around the concept. But I am obstinate, and I have loved Windows for a long time. Now, however, simplicity is increasingly important to me. I just want things to work, and with my Mac, they do. Though my Mac barely exceeds the processor and memory requirements for OS X Tiger, every bundled program runs perfectly. The five-year-old printer that doesn't work at all with Vista performs beautifully with OS X, not because the manufacturer bothered to write a new Mac driver for my aging standby, but because Apple included in Tiger a third-party, open-source driver designed to support older printers. Instead of facing the planned obsolescence of my printer, I can stick with it as long as I like.

And my deepest-seated reasons for preferring Windows PCs--more computing power for the money and greater software availability--have evaporated in the last year. Apple's decision to use the same Intel chips found in Windows machines has changed everything. Users can now run OS X and Windows on the same computer; with third-party software such as Parallels Desktop, you don't even need to reboot to switch back and forth. The chip swap also makes it possible to compare prices directly. I recently used the Apple and Dell websites to price comparable desktops and laptops; they were $100 apart or less in each case. The difference is that Apple doesn't offer any lower-end processors, so its cheapest computers cost quite a bit more than the least-expensive PCs. As Vista penetrates the market, however, the slower processors are likely to become obsolete--minimizing any cost differences between PCs and Macs.

I may need Windows for a long time to come; many electronic gadgets such as PDAs and MP3 players can only be synched with a computer running Windows, and some software is still not available for Macs. But the long-predicted migration of software from the desktop to the Internet is finally happening. Organizations now routinely access crucial programs from commercial Web servers, and consumers use Google's services to compose, edit, and store their e-mail, calendars, and even documents and spreadsheets (see "Homo Conexus," July/August 2006). As this shift accelerates, finding software that works with a particular operating system will be less of a concern. People will be able to base decisions about which OS to use strictly on merit, and on personal preference. For me, if the choice is between struggling to configure every feature and being able to boot up and get to work, at long last I choose the Mac.

Erika Jonietz is a Technology Review senior editor.

WINDOWS VISTA operating system
$99.95-$399.00
www.microsoft.com/windowsvista

Sunday, August 26, 2007

Apple's iPhone

An inside look at a sensation.
By Daniel Turner

Credit: Photography by Christopher Harting


Apple's latest offering proves that revolutionary tech products don't have to be that revolutionary. Upon the iPhone's release, enthusiasts around the world rushed to tear it apart, eager to see something new. Instead, they found that Apple had relied mostly on tried-and-true components--with one big exception: a truly stunning multitouch screen that allows users to manipulate data and images in entirely unprecedented ways.
Two Boards
One of the iPhone's two circuit boards includes the CPU, the flash memory, and other system memory chips that allow the phone to run its stripped-down version of Apple's OS X operating system and serve as a media device. The other board hosts the elements that enable communications: chips from Infineon that provide connectivity over GSM (global system for mobile communications) and EDGE (enhanced data rates for GSM evolution) mobile-phone networks, as well as an 802.11b/g chip from Marvell. Howard Curtis, the VP of global services at Portelligent, which analyzes electronic products, says this design leaves Apple with options. "You could isolate changes to one board and swap it out," he says--say, to provide support for CDMA, another popular mobile-phone standard.
Communications Center
The chips that make the iPhone a phone "seem to be pretty standard," says Kyle Wiens of iFixit, an online Apple parts retailer. Portelligent's Howard Curtis agrees: "They're plain vanilla." A standard Infineon Technologies processor supplies the EDGE wireless-data capabilities and supports the camera and the movie playback system. There's also a transceiver for quad-band GSM connectivity. Marvell's chip is accompanied by a Cambridge Silicon Radio chip that offers Bluetooth 2.0. Critics scorn the iPhone for not working with AT&T's 3G network, but Apple has said that incorporating 3G hardware would add heat and reduce battery life. Wiens says the real issue is that 3G "is practically nonexistent outside large cities." Still, he adds, Apple will need to address this issue if it wants to sell the iPhone in Europe.
NAND Flash Memory
The iPhone comes in two models, the only difference being storage capacity: one has four gigabytes, the other eight. Both use flash memory chips from Samsung that are "very, very similar to, if not the same as, the ones in iPods," says Kyle Wiens.
[Interactive graphic: move the slider to take apart the iPhone and see its parts. Credit: Alastair Halliday]
CPU
The phone's brain is a custom-for-Apple CPU built by Samsung and based on a 32-bit, 620-megahertz core from ARM, whose processor designs are used in cars, handheld games, smart cards, and other applications where power is at a premium. Howard Curtis says that working with ARM, a company prominent in the "embedded" market, could be significant for Apple. "OS X is now in the embedded space," he says, even as Microsoft keeps trying to build a desirable version of Windows for the same market.
Battery
Though the iPhone's lithium-ion battery is nothing new technically--"it's just like the battery in an iPod, but big, very big," says Wiens--it has gotten a lot of attention. That's because unlike the batteries in other cell phones, the iPhone's is soldered on and not (easily) replaceable by the user. (Apple will change a dead battery for $79 plus shipping.) At least one consumer has filed suit against Apple over its battery policy. Apple executives say that even after 400 complete depletion-and-recharge cycles, the battery will retain 80 percent of its charge capacity, which should be good for well over six hours of talk time.
Multitouch Display
Apple has had problems with the plastic screens on its iPods, which tend to show scratches, but the iPhone's screen is made of optical-quality glass. That's all the more critical because the screen is the interface. Instead of buttons or a keyboard, the iPhone uses a combination of new software and a unique multitouch screen manufactured by the German company Balda. Users tap "soft" buttons directly on the screen and zoom in or out of images or Web pages with two-fingered gestures (zoom out is a pinch, zoom in is a spread). This new control scheme abandons the WIMP (window, icon, menu, pointer) system that has dominated graphical interfaces on computers for decades.
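The pinch and spread gestures reduce to simple geometry: the zoom factor is the ratio of the distance between the two fingertips at the end of the gesture to the distance at the start. Here is a minimal sketch of that idea (hypothetical function names and coordinates; not Apple's actual implementation):

```python
import math

def zoom_factor(touch_start, touch_end):
    """Compute a zoom factor from a two-finger gesture.

    Each argument is a pair of (x, y) fingertip coordinates.
    Spreading the fingers apart gives a factor > 1 (zoom in);
    pinching them together gives a factor < 1 (zoom out).
    """
    def distance(pair):
        (x1, y1), (x2, y2) = pair
        return math.hypot(x2 - x1, y2 - y1)
    return distance(touch_end) / distance(touch_start)

# Fingers spread from 100 pixels apart to 200 pixels apart: 2x zoom in
print(zoom_factor([(0, 0), (100, 0)], [(0, 0), (200, 0)]))  # 2.0
```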
Accelerometers
Like Nintendo's Wii game console (see Hack, July/August 2007), the iPhone uses miniaturized accelerometers that measure its movement. These sensors detect whether the user is holding the iPhone in its "portrait" or "landscape" orientation; the operating system rotates the display accordingly.
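The orientation logic can be illustrated with a toy example: at rest, gravity registers as roughly one g of acceleration, so comparing its pull along the screen's short and long axes reveals which way the device is held. This is a hypothetical sketch of the principle, not Apple's code:

```python
def detect_orientation(ax, ay):
    """Classify device orientation from accelerometer readings.

    ax, ay: acceleration in g along the device's x (short) and
    y (long) screen axes. Held upright, gravity pulls mostly
    along y; held on its side, mostly along x.
    """
    if abs(ay) >= abs(ax):
        return "portrait"
    return "landscape"

# Upright: gravity acts mostly along the long (y) axis
print(detect_orientation(0.1, -0.98))   # portrait
# On its side: gravity acts mostly along the short (x) axis
print(detect_orientation(-0.97, 0.05))  # landscape
```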