
TRANSTAC Framework

This is a description of the figure:
  • MAIN-LAB EVALUATIONS at the SYSTEM LEVEL: These evaluations were designed to measure progressive development of the system’s technical capabilities and predict their impact on the Soldier’s or Marine’s performance in a range of scenarios.
    • What was tested? – Laptop systems in an idealized environment with no background noise and stationary participants

  • OFFLINE EVALUATIONS at the COMPONENT LEVEL: These evaluations were developed to measure technical performance at the component level. Each TRANSTAC system is tested with exactly the same set of data, so comparisons among systems are truly “apples to apples.”
    • What was tested?
      • ASR – Automatic speech recognition (from audio data)
      • MT – Machine translation (from text data)
      • TTS – Text to speech
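The three components above form a speech-to-speech translation chain: ASR turns audio into source-language text, MT translates that text, and TTS renders the translation as audio. The following is a minimal, purely illustrative sketch of how such a chain composes; the stub functions and the toy glossary are hypothetical stand-ins, not part of any actual TRANSTAC system.

```python
def asr(audio: bytes) -> str:
    """Automatic speech recognition: audio -> source-language text (stub)."""
    return audio.decode("utf-8")  # stand-in for a real recognizer


def mt(text: str) -> str:
    """Machine translation: source-language text -> target-language text (stub)."""
    glossary = {"hello": "marhaba"}  # toy lexicon, for illustration only
    return " ".join(glossary.get(word, word) for word in text.split())


def tts(text: str) -> bytes:
    """Text-to-speech: target-language text -> audio (stub)."""
    return text.encode("utf-8")  # stand-in for a real synthesizer


def translate_speech(audio: bytes) -> bytes:
    """Full pipeline; an offline evaluation can score each stage on fixed data."""
    return tts(mt(asr(audio)))


print(translate_speech(b"hello world"))
```

Because each stage has a well-defined input and output, every system can be fed the identical audio, text, and translation data at each stage, which is what makes the offline component comparison "apples to apples."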

  • NAMES EVALUATIONS at the COMPONENT AND CAPABILITY LEVELS: These evaluations were created to analyze the TRANSTAC systems’ ability to recognize and translate names within dialogues.
    • What was tested? – Laptop-based systems running specialized names TRANSTAC software

  • UTILITY-LAB EVALUATION at the SYSTEM LEVEL: Utility-lab evaluations are designed to collect technical performance data and assess the utility of the field versions of the TRANSTAC technologies in the controlled environment of the lab, so that direct comparisons may be drawn between the technical performance of the live-lab evaluations and the utility-lab evaluations.
    • What was tested? – Field systems in the lab environment

  • UTILITY-FIELD EVALUATION at the SYSTEM LEVEL: Utility-field evaluations are intended to assess the utility of the field-ready TRANSTAC technologies to the Soldier or Marine in more realistic, use-case environments.
    • What was tested? – Field systems in a more operationally relevant environment where the English speaker is carrying the technology