
Tech Beat - June 3, 2014

Editor: Michael Baum
Date created: June 3, 2014
Date modified: June 3, 2014
Contact: inquiries@nist.gov

Countdown to Net Zero: NIST Test House Pursues Energy Surplus in Final Month

Heading into the final stretch of a year-long trial run, the experimental net-zero energy house at the National Institute of Standards and Technology (NIST) in Gaithersburg, Md., must overcome an energy deficit of 154 kilowatt hours—equivalent to about $20—during the month of June.
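
The dollar figure follows directly from the deficit and the cited electricity rate; here is a quick check of the arithmetic (a Python sketch using only the numbers quoted in this story):

```python
# Back-of-the-envelope check of the figures cited above.
deficit_kwh = 154          # energy deficit entering June, in kilowatt hours
rate_per_kwh = 0.1284      # 2012 Maryland average, in dollars per kWh (DOE)

deficit_dollars = deficit_kwh * rate_per_kwh
print(f"154 kWh at 12.84 cents/kWh is about ${deficit_dollars:.2f}")  # ~$19.77, i.e., about $20
```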

NIST Net Zero House heading towards the black. All cost calculations are based on the Maryland average of 12.84 cents per kilowatt hour in 2012, as reported by DOE.
Credit: NIST

The facility was designed to produce at least as much energy as it consumes over the course of a year. At the end of May, the research residence still owed on its total energy bill, which averaged less than $2.00 a month over the first 11 months. In contrast, the monthly expenditure for electric power alone averaged $129 for Maryland households in 2012, according to the U.S. Department of Energy.

"After a harsh winter and a cool spring, I'm cautiously optimistic that, come July 1, our annual energy statement will be in the black," said Hunter Fanney, the mechanical engineer who leads research at NIST's Net-Zero Energy Residential Test Facility (NZERTF). "A few months back, it seemed as though the local weather would beat us this year."

And it still could. A spate of cloudy days and hot, muggy conditions during the final month of the test run could require the house to draw energy from the electric grid for cooling and other tasks to supplement output from its array of solar-energy technologies.

So, the "countdown to net zero" is on. For those interested in keeping score, NIST is posting a running daily tally of net energy use through June 30. Each day's results will be reported on NIST's NZERTF web page, under Recent Research Results, and highlighted on NIST's Twitter account (use the hashtag #Countdown2NetZero).

Both a laboratory and a home, the 2,700-square-foot (252-square-meter) NZERTF is a two-story, four-bedroom, three-bath house that incorporates energy-efficient construction and appliances, as well as energy-generating technologies such as solar water heating and solar photovoltaic systems. The suburban-style home is inhabited by a virtual family of four—two working parents and two children, ages 8 and 14, who "moved in" on July 1, 2013.

From July through October 2013, the house registered monthly energy surpluses. In November and December, when space-heating demands increased and the declining angle of the sun reduced the energy output of its photovoltaic system, NZERTF began running monthly deficits, a pattern that continued through March 2014. The five-month span included the fourth-coldest winter since 2000 and well-above-average snowfall. Snow covered the house's solar panels for more than 38 days, according to Fanney.

By March 31, the facility had imported a total of about 1,700 kilowatt hours of power from the local grid. But in April, gray skies began to clear up for NZERTF, and monthly energy surpluses, which are exported to the grid, returned.

To learn more about NZERTF, go to: www.nist.gov/el/nzertf/.

Media Contact: Mark Bello, mark.bello@nist.gov, 301-975-3776


Lithium Sulfur: A Battery Revolution on the Cheap?

Whip together an industrial waste product and a bit of plastic and you might have the recipe for the next revolution in battery technology. Scientists from the National Institute of Standards and Technology (NIST), the University of Arizona in Tucson and Seoul National University in Korea have combined common ingredients to make an inexpensive, high-capacity lithium-sulfur battery that can be cycled hundreds of times without losing function.*

Sulfur, often an industrial waste product, could be key to future high-performance batteries.
Credit: ©S_E/Fotolia.com

The new battery’s performance would be competitive in today’s marketplace, says NIST materials scientist Christopher Soles. “Five hundred cycles with the capacity we’ve shown is definitely better than what’s in your laptop today.”

Batteries deliver power by shuttling positive ions between two electrodes—an anode and a cathode—while electrons travel around a circuit and do useful work. In the past decade, compact batteries using tiny lithium ions have achieved ever larger energy densities, packing more power in smaller volumes and helping to make smart phones and other mobile technologies ubiquitous. But lithium-ion batteries require bulky cathodes, typically made from ceramic oxides like cobalt oxide, to house the ions, which limits the battery’s energy density. This means that for more power-intensive applications like long-range electric vehicles, even lithium-ion technology does not cut it.

Enter lithium-ion’s slimmer cousin, lithium-sulfur. These batteries’ cathodes are made mainly of sulfur, a cheap waste product of petroleum processing. Sulfur weighs barely half as much as cobalt, atom for atom, and can pack more than twice as many lithium ions into a given volume as can cobalt oxide; thus, lithium-sulfur batteries have several times the energy density of lithium-ion batteries. But sulfur cathodes have two major weaknesses. Sulfur easily combines with lithium to form compounds that crystallize and gum up the battery’s insides, and it tends to crack under the stress of repeated cycling. As a result, a typical lithium-sulfur battery becomes useless within a few dozen cycles—far too few for a laptop or car battery that may get cycled once a day for years.

To create a more stable cathode, the research team heated sulfur to 185 degrees Celsius, melting the element’s eight-atom rings into long chains. They then mixed the sulfur chains with DIB,** a carbon-based plastic precursor that links the sulfur chains together, creating what is known as a co-polymer. The team dubbed their manufacturing process “inverse vulcanization” because it resembles the process used to make rubber tires, with one crucial difference: In tires, carbon-containing material makes up the bulk, and sulfur is just sprinkled in.

Adding DIB to the cathodes prevents them from cracking as easily and keeps lithium-sulfur compounds from crystallizing. The scientists tested different mixtures of sulfur and DIB and found that the optimum mix contained between 10 and 20 percent DIB by mass: Less DIB did not provide the cathode-protecting properties while more of the electrochemically inactive DIB began to drag down the battery’s energy density.

The researchers ran their optimized battery through 500 cycles and found that it retained more than half its initial capacity. Other experimental lithium-sulfur batteries have performed similarly, but their cathodes require more complex manufacturing processes that would be expensive to scale up, says Jeffrey Pyun, a chemist at the University of Arizona and Seoul National University. By contrast, the team’s polymer cathode requires only easily available materials and moderate heat. “We take it, we melt it in one step and pow, we get this plastic,” Pyun says. “If you were to come to our lab, we could do this in five minutes.”
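
For a rough sense of what that durability means per cycle, the short sketch below assumes a simple, uniform (exponential) capacity fade—an illustrative assumption, not something the paper claims—and asks what average per-cycle retention is consistent with keeping half the capacity after 500 cycles:

```python
# Rough estimate: if a cell keeps more than half its capacity after 500 cycles
# and fades uniformly (an assumption made here for illustration only), what is
# the implied average capacity retention per cycle?
cycles = 500
final_fraction = 0.5

per_cycle_retention = final_fraction ** (1 / cycles)
print(f"Implied average retention per cycle: {per_cycle_retention:.4%}")
# ~99.86% of the previous cycle's capacity, i.e. roughly 0.14% loss per cycle
```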

Even so, we aren’t likely to see lithium-sulfur batteries in stores right away. Soles notes that a commercial battery technology has to do more than just meet performance specs. For example, lithium can combust if exposed to air, so any commercial lithium-sulfur battery will need to undergo rigorous safety testing before it hits the market.

*A.G. Simmonds, J.J. Griebel, J. Park, K.R. Kim, W.J. Chung, V.P. Oleshko, J. Kim, E.T. Kim, R.S. Glass, C.L. Soles, Y-E. Sung, K. Char and J. Pyun. Inverse vulcanization of elemental sulfur to prepare polymeric electrode materials for Li−S batteries. ACS Macro Lett. 2014, 3, 229−232 DOI: 10.1021/mz400649w.
**diisopropenylbenzene

Media Contact: Michael Baum, michael.baum@nist.gov, 301-975-2763


All-Natural Mixture Yields Promising Fire Retardant

What sounds like fixings for a wizard’s potion—a dash of clay, a dab of fiber from crab shells, and a dollop of DNA—actually are the ingredients of promising green fire retardants invented by researchers at the National Institute of Standards and Technology (NIST).

Applied to polyurethane foam, the bio-based coatings greatly reduced the flammability of the common furniture padding after it was exposed to an open flame. Peak and average rates of heat release—two key indicators of the magnitude of a fire hazard—were reduced by 48 percent and 77 percent, respectively, the NIST team reports in the journal Green Materials.*

“This is the biggest reduction in flammability that we have achieved to date,” says team leader Rick Davis. The all-natural coatings outperform other promising experimental fire-retardants that the NIST researchers have devised with their layer-by-layer assembly method.** But Davis says the bio-based coatings must be applied more generously, in stacks of about 20 layers as compared with six or seven layers.

Although still under study, the all-natural formulations might offer an alternative to existing fire retardants, including some that have been linked to human health risks and environmental problems.

The new coatings use negatively charged DNA molecules to link two positively charged materials known to enhance fire resistance: montmorillonite, a type of soft clay that forms tiny crystals, and chitosan, a fiber derived from the shells of shrimp, lobsters and other crustaceans. For its part, DNA, which was obtained from herring sperm, may also confer added protection because it bubbles and swells when heated, protecting the material beneath.

The team tested four different combinations of the three ingredients. In each combination, clay, chitosan and DNA were ordered in a specific arrangement and then stacked 20 to 30 layers high. Of the four, the best candidate for a bio-based fire retardant, according to the researchers, appears to be 10 repeating bilayers of chitosan overlain by a mixture of DNA and montmorillonite.

Besides providing the highest level of fire protection, the bilayer arrangement “is likely to be easier, faster, and less expensive to fabricate” than the other combinations, the team reports. However, this coating increased the weight of the foam by 16 percent. A lighter alternative, which provides only slightly less fire protection, is a coating that features five repeating four-layer stacks, each consisting of chitosan, DNA, chitosan, and clay. This arrangement increases the foam’s weight by 5 percent.
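
To make the two arrangements easier to compare, the sketch below simply writes out the layer sequences and weight penalties described above; it is bookkeeping for illustration, not a fabrication recipe from the paper:

```python
# The two coating architectures described above, written out layer by layer.
# Layer names and weight-gain figures come from this article; the dictionary
# structure is just for comparison.
bilayer_recipe = ["chitosan", "DNA + montmorillonite"] * 10        # 10 repeating bilayers
quadlayer_recipe = ["chitosan", "DNA", "chitosan", "clay"] * 5     # 5 repeating 4-layer stacks

coatings = {
    "10 bilayers (most protective)": {"layers": bilayer_recipe, "added_weight_pct": 16},
    "5 quad-layer stacks (lighter)": {"layers": quadlayer_recipe, "added_weight_pct": 5},
}

for name, coating in coatings.items():
    print(f"{name}: {len(coating['layers'])} layers, +{coating['added_weight_pct']}% foam weight")
```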

“Both recipes are great candidates” for environmentally benign fire-retardant coatings, the team says.

Ongoing research aims to simplify processing, enhance effectiveness, and test strategies to ensure durability.

*Y-C Li, Y-H Yang, Y.S. Kim, J. Shields and R.D. Davis. DNA-based nanocomposite biocoatings for fire-retarding polyurethane foam. Green Materials. Available online at: http://www.icevirtuallibrary.com/content/issue/gmat/2/2
**See, for example, Layered Security: Carbon Nanotubes Promise Improved Flame-Resistant Coating or Novel Clay-based Coating May Point the Way to New Generation of Green Flame Retardants

Media Contact: Mark Bello, mark.bello@nist.gov, 301-975-3776


Better Materials for Safer Sports: Time to Use Our Heads

On May 29, 2014, at the White House Healthy Kids and Safe Sports Concussion Summit, President Obama highlighted both the need for greater national awareness of the risks our young athletes face from traumatic brain injuries and the need for increased research on how to combat these potentially life-altering injuries.

A simple example of making a material fail "better": By fine-tuning the thickness of the connecting spokes in a sheet of acrylic, we can change how it transmits force when fractured. With thick spokes (left), fractures propagate in a straight line and concentrate the impact. Thin spokes (right) divert the fracture across the sheet, diffusing the impact.

Courtesy Center for Hierarchical Materials Design

In a post on The Commerce Blog, Laurie E. Locascio, director of the National Institute of Standards and Technology (NIST) Material Measurement Laboratory, writes that NIST recognizes that the use of advanced materials in protective equipment, such as helmets, can play a critical role in this effort. For that reason, she says, NIST is investing $1 million per year for five years in tools to accelerate the development of advanced materials that can better protect athletes against concussions.

The funding is part of NIST's work on behalf of the Materials Genome Initiative (MGI), a multi-agency effort focused on replacing trial-and-error experimentation with physical theory, advanced computer models, vast materials properties databases and complex computations to design new materials with specific properties. NIST plans to work closely with the recently created Center for Hierarchical Materials Design, a NIST Center of Excellence that was established specifically to pursue tools for creating custom materials.

Read more …

Media Contact: Michael Baum, michael.baum@nist.gov, 301-975-2763


NIST: Performance of Facial Recognition Software Continues to Improve

Who is that stranger in your social media photo? A click on the face reveals the name in seconds, almost as soon as you can identify your best friend. While that handy app is not quite ready for your smart phone, researchers are racing to develop reliable methods to match one person’s photo against millions of images for a variety of applications. The National Institute of Standards and Technology (NIST) reports that results from its 2013 test of facial recognition algorithms show that accuracy has improved by up to 30 percent since 2010.

In NIST's one-to-many tests of facial recognition software, algorithms attempt to match an "unknown" image such as the one on the left to a different image of the same individual (right) in a large collection of 1.6 million "known" images.
Credit: NIST

The report by NIST biometric researchers Patrick Grother and Mei Ngan, Performance of Face Identification Algorithms,* includes results from algorithms submitted by 16 organizations. Researchers defined performance by recognition accuracy—how many times the software correctly identified the photo—and by the time the algorithms took to match one photo against massive photo data sets.

“We studied the one-to-many identification because it is the largest market for face recognition technology,” Grother said. “These algorithms are used around the world to detect duplicates in databases, fraudulent applications for passports and driving licenses, in token-less access control, surveillance, social media tagging, lookalike discovery and criminal investigations.”

Four research groups participated in both the 2013 test and the previous 2010 test,** allowing NIST researchers to compare performance improvements over time. They found that those groups had improved their performance on the tests by between 10 and almost 30 percent. One organization decreased its error rate from 8.9 percent in 2010 to 6.4 percent in 2013.
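
The "up to 30 percent" improvement refers to the relative drop in error rate, which the 8.9-to-6.4 percent example illustrates; a quick check using those numbers:

```python
# Relative improvement implied by the error rates quoted above.
err_2010, err_2013 = 0.089, 0.064
relative_reduction = (err_2010 - err_2013) / err_2010
print(f"Relative error reduction: {relative_reduction:.0%}")  # ~28%, i.e. "almost 30 percent"
```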

In both years the study used a database of 1.6 million faces. In 2010, the images were frontal “mugshot” images from law enforcement agencies that closely comply with the ANSI/NIST ITL 1-2011 Type 10 standard. In 2013, researchers added a small database of images taken for visa applications that meet an ISO/IEC (International Organization for Standardization/International Electrotechnical Commission) standard and 140,000 webcam images taken in poorly controlled environments that do not comply with any standard.

The tested algorithms performed the best on the relatively high-quality ISO standardized images collected for passport, visa and driving license applications. Detecting duplicates in those applications is the biggest segment of the face recognition marketplace. No algorithms worked well with the webcam images. Search failure rates for those images were around three times greater than for the higher quality images.

The study also shows that, as expected, rates of missed facial matches increase as the database size grows, but only slowly. When the number of facial images increased by a factor of 10—from 160,000 to 1.6 million—the error rate increased by only about 1.2 times. This slower-than-expected growth in error rates occurs in many natural phenomena, and “is largely responsible for the operational utility of face identification algorithms,” explains Grother.
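
One simple way to read those figures—an illustrative power-law model, not a claim from the report—is that if the miss rate grows as the gallery size N raised to some exponent, the numbers above imply a very small exponent:

```python
import math

# Illustrative power-law reading of the figures above: miss_rate ~ N ** a.
n_small, n_large = 160_000, 1_600_000
error_growth = 1.2                      # error grew by about 1.2x over that range

a = math.log(error_growth) / math.log(n_large / n_small)
print(f"Implied exponent a = {a:.3f}")  # ~0.079: error grows far more slowly than the gallery
```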

Images of older individuals were identified more accurately than those of younger persons, suggesting that we become steadily easier to recognize using facial recognition software, and more distinguishable from our contemporaries, as we age.

*P. Grother and M. Ngan. Performance of Face Identification Algorithms (NIST Interagency Report 8009). May 2014. Available at www.nist.gov/manuscript-publication-search.cfm?pub_id=915761.
**P. Grother, G.W. Quinn and P.J. Phillips. Report on the Evaluation of 2D Still-Image Face Recognition Algorithms (NIST Interagency Report 7709). August 2011. Available at www.nist.gov/manuscript-publication-search.cfm?pub_id=905968.

Media Contact: Evelyn Brown, evelyn.brown@nist.gov, 301-975-5661


Rush a Light Wave and You'll Break Its Data, Say NIST Scientists

Quantum information can't break the cosmic speed limit, according to researchers* from the National Institute of Standards and Technology (NIST) and the University of Maryland's Joint Quantum Institute. The scientists have shown how attempts to "push" part of a light beam past the speed of light result in the loss of the quantum data the light carries. The results could clarify how noise might limit the transfer of information in quantum computers.

Quantum information cannot be pushed faster than the speed of light--and if you try to rush even part of the waves carrying it, the information breaks down, like a wave against the shore.
Credit: ©Jonyu and ©elvistudio at Fotolia.com

The speed of light in vacuum is often thought to be the ultimate speed limit, something Einstein showed to be an unbreakable law. But two years ago,** members of the research team found a sort of "loophole" in the law when they devised a new way to push part of the leading edge of a pulse of light a few nanoseconds faster than it would travel normally. While the 'pulse front' (the initial part of the pulse) still traveled at the usual constant speed, the rising edge and the pulse peak could be nudged forward a bit. Since waves carry information, the team decided to explore what their previous results might mean for quantum information.

"How does the beam's quantum information behave if you try to speed up the leading edge?" says NIST's Ryan Glasser. "We knew if you could speed the information up successfully, it would give rise to all kinds of causality problems, as you see in science fiction movies about people traveling back in time. So while no one expects it to be possible, just what prevents it from happening? That's what we wanted to know."

The team set up a new experiment that "entangled" the photons in two different light beams, which means that quantum information in one beam—such as amplitude—is strongly correlated to information in the other. Ordinarily, measuring these parameters in one beam can reveal those in the second. But when the team nudged the waves in one beam forward and took their measurements, they found the correspondence with the second beam started to taper off, and the more they pushed, the more degraded with noise the signal became.
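
A loose classical analogy—and only an analogy, involving none of the quantum optics of the actual experiment—is that adding noise to one of two correlated signals washes out the measured correlation between them, which is the kind of degradation the team observed as they pushed harder:

```python
import numpy as np

# Classical analogy only: two signals share a common fluctuation, so they are
# strongly correlated. Adding extra noise to one of them degrades the measured
# correlation, loosely mimicking the degradation described in this story.
rng = np.random.default_rng(0)
shared = rng.normal(size=100_000)                 # common fluctuations
beam_a = shared + 0.1 * rng.normal(size=100_000)
beam_b = shared + 0.1 * rng.normal(size=100_000)

for extra_noise in (0.0, 0.5, 1.0, 2.0):
    noisy_a = beam_a + extra_noise * rng.normal(size=100_000)
    r = np.corrcoef(noisy_a, beam_b)[0, 1]
    print(f"extra noise {extra_noise:.1f} -> correlation {r:.2f}")
```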

"We sped up the peak of the correlation between the two beams," Glasser says, "but we couldn't push the quantum information any faster than the speed of light in vacuum."

While further work is needed to determine what is fundamentally enforcing this information speed limit, the current findings could be useful for understanding information transfer within quantum systems such as those that will be needed within quantum computers. "We speculate that quantum noise and distortion set that limit," Glasser says.

A more detailed explanation of the study is available at http://jqi.umd.edu/news/advanced-light

* J.B. Clark, R.T. Glasser, Q. Glorieux, U. Vogl, T. Li, K.M. Jones and P.D. Lett. Quantum mutual information of an entangled state propagating through a fast-light medium. Nature Photonics. Published online May 25, 2014. DOI: 10.1038/nphoton.2014.112.

** See the May 2012 Tech Beat story, "First Light: NIST Researchers Develop New Way to Generate Superluminal Pulses" at www.nist.gov/public_affairs/tech-beat/tb20120502.cfm#light.

 

Media Contact: Chad Boutin, chad.boutin@nist.gov, 301-975-4261


Proposed Risk Management Guidelines Aim to Bolster Security of Federal ICT Supply Chains

The National Institute of Standards and Technology (NIST) has published a second public draft of Supply Chain Risk Management Practices for Federal Information Systems and Organizations* for public comment. The new version incorporates changes made in response to comments on the original draft issued Aug. 16, 2013.

Products from across the world add risk to information communications supply chains.
Credit: ©freshidea/Fotolia.com

Given the growing sophistication and complexity of modern information and communication technology (ICT) and the length and geographic diversity of ICT supply chains, important federal information systems are at risk of being compromised by counterfeits, tampering, theft, malicious software and poor manufacturing practices. A counterfeit chip could cause a computer system to break down; malware could lead to loss of critical information.

The NIST guide to securing ICT supply chains details a set of processes for evaluating and managing that risk. “It builds on NIST’s Managing Information Security Risk** publication,” explains lead author Jon Boyens.

NIST recommends that evaluating ICT supply chains should be part of an organization’s overall risk management activities and should involve identifying and assessing applicable risks, determining appropriate mitigating actions, and developing a plan to document those mitigating actions and monitor performance. The plan should be adapted to fit each organization’s mission, threats and operating environment, as well as its existing ICT supply chains.
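
As a small illustration of that identify-assess-mitigate cycle, the sketch below scores a few hypothetical supply chain risks by likelihood and impact and attaches a mitigation to each; the entries and scoring are invented for illustration and are not taken from the draft publication or its plan template:

```python
# Hypothetical mini risk register illustrating the identify/assess/mitigate
# steps described above. Entries and scores are invented for illustration;
# they are not taken from draft SP 800-161.
risks = [
    {"risk": "Counterfeit component",  "likelihood": 2, "impact": 5,
     "mitigation": "Buy from authorized distributors; inspect incoming parts"},
    {"risk": "Malware in firmware",    "likelihood": 1, "impact": 5,
     "mitigation": "Require signed firmware; add vendor security requirements"},
    {"risk": "Single-source supplier", "likelihood": 3, "impact": 3,
     "mitigation": "Qualify an alternate supplier; hold safety stock"},
]

# Assess and rank by a simple likelihood-times-impact score, highest first.
for r in sorted(risks, key=lambda r: r["likelihood"] * r["impact"], reverse=True):
    score = r["likelihood"] * r["impact"]
    print(f"[{score:>2}] {r['risk']}: {r['mitigation']}")
```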

The draft publication also calls for building ICT supply chain risk management activities on existing supply chain and cybersecurity practices, employing an organization-wide approach, and focusing on the systems and components that are most vulnerable and can cause the largest impact if compromised.

The guidance is designed for use with high-impact systems as categorized in NIST’s Standards for Security Categorization of Federal Information and Information Systems*** and can be used on moderate systems, if deemed appropriate, Boyens says.

This second public draft is based on an extensive review and comments contributed by the ICT community. NIST is asking for feedback on some of the key changes that appear in this draft, including:

  • Increased emphasis on balancing the risks and costs of ICT supply chain risk management processes and controls throughout the publication,
  • An ICT supply chain risk management controls summary table that provides a baseline and maps to NIST Special Publication 800-53 Revision 4 High baseline controls in Appendix D, and
  • An annotated ICT Supply Chain Risk Management Plan Template in Appendix H.

Supply Chain Risk Management Practices for Federal Information Systems and Organizations, Second Public Draft (NIST SP 800-161) can be downloaded from http://csrc.nist.gov/scrm/publications.html. The public comment period ends July 18, 2014. Comments may be submitted by email to scrm-nist@nist.gov using the template on the web page.

*J. Boyens, C. Paulsen, R. Moorthy and N. Bartol. Supply Chain Risk Management Practices for Federal Information Systems and Organizations. NIST Special Publication 800-161. Second Public Draft. June 2014. Available at http://csrc.nist.gov/publications/drafts/800-161/sp800_161_2nd_draft.pdf.
**Joint Task Force Transformation Initiative. Managing Information Security Risk: Organization, Mission, and Information System View. NIST Special Publication 800-39. March 2011. Available at www.nist.gov/manuscript-publication-search.cfm?pub_id=908030.
***NIST’s Standards for Security Categorization of Federal Information and Information Systems. Federal Information Processing Standard Publication 199. February 2004.

Media Contact: Evelyn Brown, evelyn.brown@nist.gov, 301-975-5661


NIST Requests Public Comment on Proposed SHA-3 Cryptographic Standard

The National Institute of Standards and Technology (NIST) has requested public comments on its newly proposed "Secure Hash Algorithm-3" (SHA-3) Standard, which is designed to protect the integrity of electronic messages.

The draft Federal Information Processing Standard Publication 202, SHA-3 Standard: Permutation-Based Hash and Extendable-Output Functions, specifies six permutation-based "sponge" functions based on Keccak, the winning algorithm selected from NIST's SHA-3 Cryptographic Hash Algorithm Competition. The functions include four fixed-length cryptographic hash functions, and two closely related "extendable-output" functions (XOFs). The four fixed-length hash functions provide alternatives to the SHA-2 family of hash functions specified in FIPS 180, Secure Hash Standard, which FIPS 202 will supplement. The XOFs can be specialized to hash functions, subject to additional security considerations, or used in a variety of other applications.

Cryptographic hash algorithms are a cornerstone of modern information security. They transform a digital message into a short "message digest" for use in digital signatures. Even a small change in the original message text creates a change in the digest, making it easier to detect accidental or intentional changes to the original message. Hash algorithms are used by many security applications, including random bit generation.
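
For readers who want to see that behavior directly, Python's standard hashlib module exposes Keccak-based functions matching those in the draft; the sketch below hashes a message with SHA3-256 (one of the four fixed-length functions) and with SHAKE128 (one of the two XOFs), and shows how a one-character change produces a completely different digest:

```python
import hashlib

msg = b"Transfer $100 to Alice"
altered = b"Transfer $900 to Alice"       # a one-character change

# SHA3-256: a fixed-length hash. The altered message gives a completely
# different digest, which is what makes tampering easy to detect.
print(hashlib.sha3_256(msg).hexdigest())
print(hashlib.sha3_256(altered).hexdigest())

# SHAKE128: an extendable-output function; the caller picks the output length.
print(hashlib.shake_128(msg).hexdigest(32))   # 32 bytes of output
```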

Comments from the public on the draft of FIPS 202 are welcome for the next 90 days until August 26, 2014, after which NIST will incorporate them into the final version of the specification. The draft is available at http://csrc.nist.gov/publications/drafts/fips-202/fips_202_draft.pdf. Comments may be sent to NIST either electronically or by mail. Full details appear in the Federal Register at https://federalregister.gov/a/2014-12336.

NIST strongly encourages the public to continue analyzing the security of the Keccak family of permutation-based sponge functions in general, and the six algorithms specified in this draft of FIPS 202 in particular, and to submit those analyses as official comments in response to this request.

Media Contact: Chad Boutin, chad.boutin@nist.gov, 301-975-4261


Novel NIST Laser System Mimics Sunlight to Test Solar Cell Efficiency

Researchers at the National Institute of Standards and Technology (NIST) have developed a laser-based instrument that generates artificial sunlight to help test solar cell properties and find ways to boost their efficiency.

NIST engineer Tasshi Dennis with NIST's solar simulator based on a white light laser. The instrument simulates sunlight to help measure the properties of solar cell materials. The instrument's beam is illuminating a gallium arsenide solar cell (yellow diamond) in the lower left corner of the photo.
Credit: J. Burrus/NIST

The novel NIST system simulates sunlight well across a broad spectrum of visible to infrared light. More flexible than conventional solar simulators such as xenon arc lamps or light-emitting diodes, the laser instrument can be focused down to a small beam spot—with resolution approaching the theoretical limit—and shaped to match any desired spectral profile.

The new simulator is based on a white light laser that uses optical-fiber amplifier technology to boost the power and a photonic crystal fiber to broaden the spectrum. NIST researchers used the simulator to measure the efficiency of thin-film solar cells made of gallium-arsenide, crystalline silicon, amorphous silicon and copper-indium-gallium-selenide, and the results agreed with independent measurements.* 

“We can focus the light down to a spot less than 2 micrometers in diameter, despite the wide spectral content. You can't do this with sunlight,” NIST researcher Tasshi Dennis says. “We then used this focused spot to scan across solar cell materials while monitoring the current the light generated. This allowed us to create spatial maps (images) of the response of a solar cell at the micrometer level.”

The new instrument may help researchers understand solar cells’ optical and electrical characteristics, including defects and the impact of unusual designs. In particular, the new simulator’s capability to make rapid, accurate spectrum adjustments will help characterize the most efficient solar cells, which use multi-junction materials in which each junction is tuned to a different part of the spectrum. The instrument is designed to probe small research samples, individual concentrator solar cells and microstructures, not to determine the efficiencies of large solar cell panels and modules. NIST researchers have been working to make the new simulator programmable and portable for use outside NIST.
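
For context on what such an efficiency measurement involves: conversion efficiency is the cell's maximum electrical output divided by the optical power delivered to it. The sketch below runs that arithmetic on an invented current-voltage curve; the numbers are illustrative only and are not NIST data:

```python
import numpy as np

# Illustrative efficiency calculation from a current-voltage (I-V) sweep.
# The curve and numbers below are invented for the example; not NIST data.
voltage = np.linspace(0.0, 1.0, 201)                       # volts
current = 0.030 * (1.0 - np.exp(12.0 * (voltage - 1.05)))  # amps, toy diode-like curve

power = voltage * current                                   # electrical output, watts
p_max = power.max()

optical_power_in = 0.100   # watts of light delivered to the illuminated spot (assumed)
efficiency = p_max / optical_power_in
print(f"Maximum power {p_max * 1000:.1f} mW -> efficiency {efficiency:.1%}")
```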

For more details see www.nist.gov/pml/div686/solar-simulator.cfm.

* T. Dennis, J.B. Schlager and K.A. Bertness. A novel solar simulator based on a super-continuum laser for solar cell device and materials characterization. IEEE Journal of Photovoltaics. Posted online May 26, 2014. DOI: 10.1109/JPHOTOV.2014.2321659.

Media Contact: Laura Ost, laura.ost@nist.gov, (303) 497-4880


Robotics Pioneer Joins NIST Advisory Committee

Rodney Brooks, founder, chairman and chief technology officer of Rethink Robotics, Inc., has joined the Visiting Committee on Advanced Technology (VCAT), the primary advisory committee of the National Institute of Standards and Technology (NIST). Brooks was appointed to a three-year term by Under Secretary of Commerce for Standards and Technology and NIST Director Patrick Gallagher.

Rodney Brooks
Credit: © Stephen F. Bevacqua/courtesy Rethink Robotics

Brooks founded Rethink Robotics in 2008. The company’s goals include making robots for manufacturing and research that are affordable, safe around people and easy to use.

From 1984 to 2010, Brooks was on the faculty at the Massachusetts Institute of Technology (MIT), becoming the Panasonic Professor of Robotics. He was also the founding director of MIT’s Computer Science and Artificial Intelligence Laboratory, and served in that role until 2007. In 1990, he co-founded iRobot, where he served variously as CTO, chairman and board member until 2011.

Brooks received his undergraduate degree in mathematics from Flinders University of South Australia and a Ph.D. in computer science from Stanford University.

He was elected to the National Academy of Engineering and is a fellow of the American Academy of Arts and Sciences, the Association for Computing Machinery, the Association for the Advancement of Artificial Intelligence, the American Association for the Advancement of Science and the Institute of Electrical and Electronics Engineers.

The VCAT was established by Congress in 1988 to review and make recommendations on NIST's policies, organization, budget and programs. The next VCAT meeting will be June 11, 2014, in Gaithersburg, Md. For more information on the VCAT and the meeting, visit www.nist.gov/director/vcat/.

Media Contact: Jennifer Huergo, jennifer.huergo@nist.gov, 301-975-6343
