
Monday, March 13, 2006

"CHIPS: Defects dodged at nanoscale"

Defects in fabrication and errors during operation will become a fact of life for electronic circuits at the nanoscale. To compensate, researchers are crafting schemes that correct fabrication defects and processing errors on the fly. Georgia Institute of Technology, with funding from Intel Corp., is pioneering probabilistic CMOS, which trades off processing errors for cooler-running chips, and Hewlett-Packard Co. recently demonstrated a chip that uses massive redundancy and automatic recovery to compensate for fabrication errors in a 100-Gbit/cm2 nanowire storage array.

Hewlett-Packard's demonstration used nanoimprint lithography to fabricate 15-nanometer-wide wires with just 19 nm between their edges. Using a superlattice pattern-transfer technique with a 300-layer GaAs/AlGaAs superlattice, the researchers fabricated an array of 150 silicon nanowires at a 34-nm pitch. By using the same imprinting mold to pattern a second, identical array of nanowires above the first at a 90° angle, they produced a platinum-nanowire crossbar switch with a cell density of 100 Gbits/cm2. HP worked around the fabrication errors inherent in such high-density circuitry with 50 percent redundancy and a demultiplexer algorithm drawn from coding theory: using a code similar to those in digital cell phone systems, the demultiplexer automatically routes signals around defective connections in the dense crossbar array (a simplified sketch of the remapping idea appears below). HP predicts that circuit densities will make such schemes necessary for chips produced in about six to seven years.

Instead of correcting errors with redundancy, Georgia Institute of Technology researchers, with funding from Intel and the Defense Advanced Research Projects Agency, propose harnessing errors to lower chip temperatures. Their approach, called probabilistic CMOS (PCMOS), can achieve valid results with far less energy than traditional logic, and it is particularly suited to a growing body of algorithms that use probability as a computational component (see the second sketch below).
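The article does not spell out HP's demultiplexer code, but the underlying redundancy idea can be sketched in a few lines of Python. The parameters below (150 physical nanowires for 100 logical rows, a 10 percent defect rate) are illustrative assumptions, and the lookup-table remapping merely stands in for the coding-theory-based demultiplexer the article describes.

import random

# Illustrative parameters only -- not HP's actual design numbers.
LOGICAL_ROWS = 100      # rows the circuit must expose to the outside
PHYSICAL_ROWS = 150     # fabricated rows: 50 percent redundancy
DEFECT_RATE = 0.10      # assumed fraction of nanowires that are unusable

def fabricate(physical_rows, defect_rate, seed=0):
    """Simulate fabrication: mark each physical nanowire good or defective."""
    rng = random.Random(seed)
    return [rng.random() > defect_rate for _ in range(physical_rows)]

def build_remap_table(defect_map, logical_rows):
    """Map each logical row onto the next available good physical row.

    This lookup table stands in for the demultiplexer's job of steering
    each logical address around defective nanowires; the real scheme
    encodes addresses with an error-tolerant code instead of a table.
    """
    good_rows = [i for i, ok in enumerate(defect_map) if ok]
    if len(good_rows) < logical_rows:
        raise RuntimeError("too many defects: array cannot be repaired")
    return good_rows[:logical_rows]

defect_map = fabricate(PHYSICAL_ROWS, DEFECT_RATE)
remap = build_remap_table(defect_map, LOGICAL_ROWS)
print(f"defective nanowires: {defect_map.count(False)} of {PHYSICAL_ROWS}")
print(f"logical row 0 is served by physical row {remap[0]}")

With 50 percent spare rows and a modest defect rate, the repair almost always succeeds; a coding-theory demultiplexer is meant to deliver the same guarantee without an explicit per-chip table.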
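The PCMOS energy-versus-correctness tradeoff can be sketched in the same spirit. The toy model below rests on loudly stated assumptions: the ln(1/(1 - p)) energy curve is only a stylized stand-in for the relationship between switching energy and the probability p of a correct output, and the 8-bit adder with "sloppy" low-order bits is an illustration, not Georgia Tech's circuit.

import math
import random

def switching_energy(p):
    """Stylized energy cost (arbitrary units) for a gate that produces a
    correct bit with probability p. The ln(1/(1 - p)) shape only captures
    the PCMOS intuition that the last few points of reliability are the
    most expensive to buy; it is not the researchers' actual model."""
    return math.log(1.0 / (1.0 - p))

def noisy_bit(bit, p, rng):
    """Return the bit unchanged with probability p, flipped otherwise."""
    return bit if rng.random() < p else 1 - bit

def noisy_add(a, b, bit_probs, rng):
    """8-bit adder whose per-bit reliability is given by bit_probs
    (index 0 = least significant bit). Errors in the low-order bits
    perturb the result only slightly, so those bits can run at low
    energy while the high-order bits stay reliable."""
    exact = (a + b) & 0xFF
    out = 0
    for i, p in enumerate(bit_probs):
        out |= noisy_bit((exact >> i) & 1, p, rng) << i
    return out

rng = random.Random(1)
# Reliable high-order bits, low-energy low-order bits -- illustrative only.
bit_probs = [0.9] * 4 + [0.999] * 4
energy = sum(switching_energy(p) for p in bit_probs)
baseline = 8 * switching_energy(0.999)
errs = [abs(noisy_add(100, 27, bit_probs, rng) - 127) for _ in range(1000)]
print(f"energy: {energy:.1f} vs. {baseline:.1f} with every bit at p=0.999")
print(f"mean absolute error on 100 + 27: {sum(errs) / len(errs):.2f}")

Randomized algorithms such as Monte Carlo methods can tolerate this kind of noise outright, which is why PCMOS targets algorithms that already treat probability as a computational resource.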
Text: http://www.eetimes.com/showArticle.jhtml?articleID=181501772