July 1, 2009
Author(s)
Antoine Fillinger, Imad Hamchi, Stephane Degre, Lukas L. Diduch, Richard T. Rose, Jonathan G. Fiscus, Vincent M. Stanford
Using data streams from acoustic, video, proximity, location, and even physiological sensors to recognize user intent and respond appropriately is one of the grand challenges for the multimodal research community. We describe our sensor-net middleware, the