
Tech Beat - December 16, 2014



Editor: Michael Baum
Date created: June 23, 2010
Date Modified: December 16, 2014 

NIST Sensor Could Improve One of Nano Research’s Most Useful Microscopes

Spotting molecule-sized features—common in computer circuits and nanoscale devices—may become both easier and more accurate with a sensor developed at the National Institute of Standards and Technology (NIST). With their new design, NIST scientists may have found a way to sidestep some of the problems in calibrating atomic force microscopes (AFMs).

Self-calibrating AFM probe: Light travels down the optical fiber on the left, striking the top edge of the gold-plated segment lodged in the sensor’s tip. The light pressure sets the sensor vibrating. The long fiber on the right connects to an interferometer (not shown), which measures the tip’s movement, giving a value for the probe's stiffness. The sensor is attached to the wall at left in two places; darker sections of the image are empty space.
Credit: Melcher/NIST

The AFM is one of the main scientific workhorses of the nano age. It can resolve features as small as individual atoms. Instead of magnifying with a lens, AFMs “feel” a surface, using a flexible cantilever with a tiny, sharp tip. As the tip passes near a nanoscale feature on a surface, interactions between the atoms on the tip and on the object’s surface cause the cantilever to bend, revealing the finest of details. Because the forces that cause the tip to bend are fairly weak, scientists have increased AFM sensitivity by making the tip vibrate at a particular frequency as it passes over the surface and measuring how much the frequency changes. Frequency can be measured more precisely than almost anything else in the physical sciences.
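The frequency-shift approach described above can be sketched numerically. In the small-perturbation approximation standard in frequency-modulation AFM, a shift in the cantilever's resonance frequency maps to a tip-sample force gradient via k_ts ≈ -2k·Δf/f0. This relation and the numbers below are a textbook illustration, not values from the NIST work:

```python
# Illustrative sketch: converting a measured AFM frequency shift into a
# tip-sample force gradient. Uses the standard small-perturbation FM-AFM
# relation delta_f ~= -f0 * k_ts / (2 * k); all values are hypothetical.

def force_gradient_from_shift(delta_f_hz, f0_hz, k_n_per_m):
    """Estimate the tip-sample force gradient k_ts (N/m) from a frequency shift."""
    return -2.0 * k_n_per_m * delta_f_hz / f0_hz

# Example: a 300 kHz cantilever of stiffness 40 N/m whose resonance drops by 5 Hz
k_ts = force_gradient_from_shift(-5.0, 300e3, 40.0)
print(f"tip-sample force gradient: {k_ts:.2e} N/m")  # ~1.33e-3 N/m
```

Because the stiffness k appears directly in this conversion, an error in the calibrated stiffness propagates straight into the inferred force, which is why in-place calibration matters.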

The trouble comes in calibrating the tip’s sensitivity. AFMs operate well in a near vacuum at temperatures around minus 268 degrees Celsius. This means the tip and specimen interact in a tight space behind several walls—hardly an easy spot to cram calibration equipment. As a result, calibration entails removing the tip and checking it at room temperature, a process that not only can skew AFM results but requires calibration equipment that few people outside national metrology institutes possess.

“With our sensor, that problem could disappear,” says NIST’s Gordon Shaw. “The tools you need to calibrate the tip are built right into the sensor, so it would not need to be removed from the AFM.”

The NIST team’s sensor is a redesign of the device that makes the tip vibrate. Made of a silicate material akin to the quartz used in some wristwatches, the cantilever is a roughly three-millimeter-long rectangle that looks a bit like a hollow diving board. At the end where the diver would bounce is a mirror that reflects light shining from an LED. The LED can be adjusted to deliver a specific amount of energy. When the photons strike the mirror, they exert enough pressure to set the cantilever vibrating. The distance the tip travels upwards and downwards—measured by an interferometer—reveals how stiff the diving board is at that temperature. This is the critical figure needed to equate a change in the tip’s frequency to a change in atomic force.
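The self-calibration principle, a known light power exerting a known radiation-pressure force, can be sketched in a few lines. The formula F = (1 + R)·P/c for light of power P striking a mirror of reflectivity R is standard physics; the power and deflection values below are illustrative assumptions, not figures from the paper:

```python
# Sketch of the self-calibration idea: a known optical power exerts a known
# radiation-pressure force on the mirrored cantilever, and the measured
# deflection then yields the stiffness. Values are hypothetical.

C = 299_792_458.0  # speed of light, m/s

def radiation_pressure_force(power_w, reflectivity=1.0):
    """Force (N) exerted by light of a given power on a mirror."""
    return (1.0 + reflectivity) * power_w / C

def stiffness(power_w, deflection_m):
    """Cantilever stiffness k = F / x (N/m) from force and measured deflection."""
    return radiation_pressure_force(power_w) / deflection_m

# 1 mW on a perfect mirror exerts roughly 6.7 piconewtons
force = radiation_pressure_force(1e-3)
print(f"force: {force:.2e} N")
print(f"stiffness: {stiffness(1e-3, 1e-9):.2e} N/m")  # for a 1 nm deflection
```

The appeal of the scheme is that both inputs, the optical power and the interferometrically measured deflection, can be determined inside the cryostat, so no component has to leave the microscope.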

“The sensor is capable of resolving forces as small as femtonewtons, about 1,000 times less than the force necessary to stretch out a single DNA molecule,” Shaw says. “It gives us a useful reference, which is hard to come by when you’re working with such tiny forces.”

The sensor is the first of a class of self-calibrating NIST-on-a-chip embedded standards that merge laser power, force and mass calibrations into a portable package usable in tight spaces, such as inside an AFM.

*J. Melcher, J. Stirling, F.G. Cervantes, J.R. Pratt and G.A. Shaw. A self-calibrating optomechanical force sensor with femtonewton resolution. Applied Physics Letters, doi:10.1063/1.4903801, Dec. 10, 2014.

Media Contact: Chad Boutin, 301-975-4261


NIST Issues New Revision of Guide to Assessing Information Security Safeguards

The National Institute of Standards and Technology (NIST) has released the final version of the 2014 update to its core guide to assessing the security and privacy safeguards for federal information systems and organizations. The revised guide was issued in draft for public comment last August.

Assessing Security and Privacy Controls in Federal Information Systems and Organizations (NIST Special Publication 800-53A, Revision 4) is one of two basic NIST publications used by government IT security professionals to assess a wide range of software configurations, physical security measures and operating procedures meant to safeguard information systems from both chance failures and hostile attacks. The document is a guide to the tests and procedures needed to check that security controls are both in place and functioning as intended.

The assessment guide complements NIST’s Security and Privacy Controls for Federal Information Systems and Organizations (SP 800-53), a catalog of available methods or “controls” that can be used to safeguard information systems ranging from desktop computers to major data networks. The fourth revision of SP 800-53 was issued in April 2013.

The latest revision of SP 800-53A, the assessment guide, brings it into alignment with the most recent version of SP 800-53, and includes several significant changes from the previous edition released in 2010. In addition to adding new assessment methods for some controls and clarifying some of the terminology, the new edition has improvements meant to provide better support for continuous monitoring and ongoing authorization programs, and for use with automated assessment and monitoring tools. All of these modifications are aimed at making IT security procedures more flexible and responsive to changing threats.

The new edition of SP 800-53A also continues an ongoing process to better integrate privacy safeguards into the information security framework, in parallel with the privacy controls defined in SP 800-53, Appendix J. The privacy assessment procedures that will be added to this guide are currently under development by a joint interagency working group established by the Best Practices Subcommittee of the CIO Council Privacy Committee. They will be separately vetted through the traditional NIST public review process and then integrated into SP 800-53A.

SP 800-53A Revision 4, Assessing Security and Privacy Controls in Federal Information Systems and Organizations, is available at

Media Contact: Evelyn Brown, 301-975-5661


NIST Tests: Firefighters' Portable Radios May Fail at Elevated Temperatures

New test results* from the National Institute of Standards and Technology (NIST) confirm that portable radios used by firefighters can fail to operate properly within 15 minutes when exposed to temperatures that may be encountered during firefighting activities.

A portable firefighter radio is instrumented for testing in a NIST-designed apparatus that consistently creates thermal conditions representative of typical fire environments. Data and performance measurements recorded with the equipment are furnished to the National Fire Protection Association, which is developing a performance standard for portable radios used by emergency personnel.
Credit: NIST

Firefighters rely on the radios to report their location and to communicate with other first responders as well as the incident command post or communications center. Performance problems with portable radios have been identified by the National Institute for Occupational Safety and Health as contributing factors in some firefighter fatalities.

All seven of the firefighter portable radios tested by NIST failed to perform properly within 15 minutes when exposed to temperature levels encountered in “fully involved” fires, as when all the contents in a room or structure are burning. Four of the handheld radios stopped transmitting, and three experienced significant “signal drift,” rendering the radios unreliable for communication.

The failures occurred while the radios were subjected to a temperature of 160 degrees Celsius (320 degrees Fahrenheit), termed Thermal Class II conditions.** The temperature is representative of a fully involved fire or conditions outside a room when its contents burst into flames simultaneously, a phenomenon known as flashover.

During the post-test cool-down period, three of the radios did not recover normal function.

Funded by the U.S. Department of Homeland Security, the NIST tests further ongoing work to develop performance standards for firefighter portable radio equipment, which includes radios, wearable combinations of speakers and microphones, and related items. The existing standard provides only general guidance—that portable radios “be manufactured for the environment in which they are to be used.”

NIST researchers are furnishing their test data and performance measurements to the National Fire Protection Association, which is developing a performance standard for portable radios used by emergency personnel.

Just as important, the NIST team designed a prototype apparatus that electronically controls the testing equipment to consistently create thermal conditions representative of typical fire environments.

All radios tested by NIST performed reliably when exposed to a temperature of 100 degrees Celsius (212 degrees Fahrenheit) for 25 minutes, or Thermal Class I conditions, akin to a small fire in a room or fighting a fire from a distance.

No tests were conducted under more extreme fire conditions (Thermal Classes III and IV).

“Realistic and reliable performance tests provide clear design targets for portable radio manufacturers,” explains NIST fire protection engineer Michelle Donnelly. “Standards incorporating these tests provide firefighters with the assurance that their equipment will perform as expected under specified thermal conditions.”

*M.K. Donnelly, W.F. Young, and D. Camell, Performance of Portable Radios Exposed to Elevated Temperatures (NIST Technical Note 1850); September 2014.
**See NIST Technical Note 1474 at

Media Contact: Mark Bello, 301-975-3776


Cloud Metrics Could Provide the Goldilocks Solution to Which Cloud Vendor Is 'Just Right'

As government agencies and other organizations invest in cloud computing services, they are challenged to determine which cloud provider and service will best meet their needs. The National Institute of Standards and Technology (NIST), home to the nation’s official measurement experts, has developed a guide to creating cloud metrics that could aid decision makers in finding the cloud service that is “just right.”

Credit: Irvine/NIST and ©nikhg, vvoe and magann/Fotolia

The new NIST guide, which is being offered as a draft for public comment, proposes a model for developing metrics—objective measures of capabilities and performance—that cloud-shopping organizations can use to navigate a rapidly expanding marketplace.

New cloud computing providers and services are entering the market at a dizzying pace. Different organizations and groups often use the same cloud computing terms with slightly different, or even contradictory, meanings, leading to confusion among cloud service providers, customers and carriers. Without clear definitions, the service properties these terms describe cannot be measured consistently.

The key to choosing a good cloud service, according to NIST experts, is having clear, measurable requirements and data on capabilities such as quality of service, availability and reliability. The NIST definition of cloud computing* includes “measured service” as one of the five essential characteristics of cloud computing.

The new NIST publication Cloud Computing Service Metrics Description** discusses the basic nature of the problem of measuring cloud services and offers a model and method for developing appropriate cloud metrics. A metric provides knowledge about characteristics of a cloud property through both its definition (expression, measurement unit, rules) and the values resulting from the observation of the property. Metrics must be well-defined and understood by cloud stakeholders—particularly customers and service providers—so they can rely on them with confidence.

For example, many people use an email service based in a cloud. One potential cloud metric, a customer response metric, could be defined as the time it takes from someone hitting “send” on an email until it’s delivered to a recipient on the same cloud service. The metric should provide the necessary information needed to reproduce and verify observations and measurement results.
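The guide's pairing of a metric's definition (expression, unit, rules) with its observed values can be sketched as follows. The EmailDeliveryTime metric here is a hypothetical illustration of the email example above, not a metric defined in the NIST draft:

```python
# Sketch: a metric couples a definition (name, unit, observation rule) with
# the values observed for the property. EmailDeliveryTime is hypothetical.

from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Metric:
    name: str
    unit: str
    rule: str                       # how each observation is to be taken
    observations: list = field(default_factory=list)

    def record(self, value: float):
        self.observations.append(value)

    def summary(self) -> float:
        """Average of the observed values."""
        return mean(self.observations)

delivery = Metric(
    name="EmailDeliveryTime",
    unit="seconds",
    rule="time from sender pressing 'send' to delivery on the same cloud service",
)
for sample in (0.8, 1.2, 0.9):      # illustrative timings
    delivery.record(sample)
print(f"{delivery.name}: {delivery.summary():.2f} {delivery.unit}")
```

Because the rule is stated alongside the values, another party can reproduce the observations and check the result, which is the reproducibility property the guide calls for.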

Metrics can play a critical role in selecting cloud services, but they also help in other ways, such as in defining and enforcing the service agreements that organizations negotiate with providers. They could also provide a rigorous foundation for monitoring cloud services and for accounting and auditing.

NIST is responsible for accelerating the United States Government’s secure adoption of cloud computing by leading efforts to develop standards and guidelines. This document grew out of a working group on cloud metrics that included an international mix of members from the public, private and academic sectors.

The deadline for comments on Cloud Computing Service Metrics Description is midnight, January 24, 2015. Please send comments to Frederic de Vaulx, National Institute of Standards and Technology, 100 Bureau Dr., Stop 8970, Gaithersburg, MD 20899, or to For more on the NIST cloud computing program, see

* P. Mell and T. Grance. The NIST Definition of Cloud Computing (NIST Special Publication 800-145). September 2011. Online at:
**NIST Cloud Computing Reference Architecture and Taxonomy Working Group. Cloud Computing Service Metrics Description (NIST Draft Special Publication 500-307). December 2014. Online at:

Media Contact: Evelyn Brown, 301-975-5661


NIST Study 'Makes the Case' for RFID Forensic Evidence Management

Radio frequency identification (RFID) tags—devices that can transmit data over short distances to identify objects, animals or people—have become increasingly popular for tracking everything from automobiles being manufactured on an assembly line to zoo animals in transit to their new homes. Now, thanks to a new NIST report, the next beneficiaries of RFID technology may soon be law enforcement agencies responsible for the management of forensic evidence.

RFID System Overview

A typical radio frequency identification (RFID) system like those recommended by a new NIST report for improved management of forensic evidence.
Credit: Used with permission, Daniel M. Dobkin, author of "The RF in RFID," 2nd edition, © 2013 Elsevier Inc.

A typical RFID system consists of a microchip programmed with identifying data—the “tag”—and a two-way radio transmitter-receiver, called an interrogator or a reader depending on its use. The tag can be attached to or embedded in the item to be tracked, with the radio either sending a signal to the tag or reading its response.

Common examples of RFID systems include the FasTrak and E-ZPass in-car tags for automatically collecting tolls, tagged prescription drugs that help pharmacies meet federal and state safety regulations, and credit cards with embedded RFID chips that provide a more secure way of transmitting card numbers than magnetic stripes. RFID systems can read hundreds of tags in a few seconds and track an item as it moves through a process. More advanced RFID tags can sense and report on environmental conditions, or encrypt the data they send.

While some law enforcement agencies have used barcodes to improve their forensic evidence tracking, storage and retrieval processes, very few have implemented RFID because of concerns about startup costs, the reliability of the technology and the current lack of relevant RFID standards for property and evidence handling. To help agencies better understand these issues and properly assess the pros and cons of RFID evidence management, NIST recently published RFID Technology in Forensic Evidence Management, An Assessment of Barriers, Benefits, and Costs. The report is the result of a NIST-funded study on automated identification technology (AIT). The Technical Working Group on Biological Evidence Preservation, cosponsored by NIST and the National Institute of Justice (NIJ), commissioned the study and report.

The NIST report includes a helpful overview of AITs—focusing primarily on RFID and barcode technologies—and how they work. It describes, in depth, the types of RFID systems available (passive, active and battery-assisted), their price ranges, and the components necessary for a complete system. The report also details the barriers that agencies may encounter, followed by a series of successful RFID management case studies, including examples from the pharmaceutical and retail industries, and one law enforcement agency that has made the switch, the Netherlands Forensics Institute.

The practical question that agencies must consider—and one that the NIST report can help them answer—is whether RFID technology can produce measurable benefits and a positive return on the funds invested in a new system. The NIST report estimates that RFID systems can pay back their initial set-up cost in about two years.

Various factors can affect the payback period. For example, systems that track and manage larger inventories of evidence (100,000 or more items) will recoup costs more quickly than those handling smaller inventories. Likewise, if multiple jurisdictions share the costs of a system, the payback period can be shorter.
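The payback reasoning above reduces to simple arithmetic: setup cost divided by the annual savings the system produces, with the cost optionally split among participating jurisdictions. The dollar figures below are hypothetical, chosen only to match the report's roughly two-year estimate:

```python
# Minimal payback-period sketch for an RFID evidence-management system.
# Cost and savings figures are hypothetical illustrations, not from NIST IR 8030.

def payback_years(setup_cost, annual_savings, sharing_jurisdictions=1):
    """Years needed to recoup setup cost, split evenly among jurisdictions."""
    return (setup_cost / sharing_jurisdictions) / annual_savings

print(payback_years(200_000, 100_000))     # 2.0 years for a single agency
print(payback_years(200_000, 100_000, 2))  # 1.0 year when two jurisdictions share
```

Larger evidence inventories raise the annual savings term, which is why high-volume agencies recoup costs sooner.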

To learn more about RFID technology for evidence management, go to The new report, RFID Technology in Forensic Evidence Management, An Assessment of Barriers, Benefits, and Costs (NIST IR 8030), is available at

Media Contact: Michael E. Newman, 301-975-3025
