Video on the Semantic Sensor Web

Document Type

Article

Abstract

Millions of sensors around the globe currently collect avalanches of data about our world. The rapid development and deployment of sensor technology is intensifying the existing problem of too much data and not enough knowledge. With a view to alleviating this glut, we propose that sensor data, especially video sensor data, can be annotated with semantic metadata to provide contextual information about videos on the Web. In particular, we present an approach to annotating video sensor data with spatial, temporal, and thematic semantic metadata. This technique builds on current standardization efforts within the W3C and Open Geospatial Consortium (OGC) and extends them with Semantic Web technologies to provide enhanced descriptions and access to video sensor data.
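
To make the idea of spatial, temporal, and thematic annotation concrete, here is a minimal sketch of how a single video sensor observation might be described as RDF using Python's rdflib. This is an illustration only, not the paper's implementation: the "ex:" vocabulary and the specific property names are hypothetical placeholders for whatever sensor/video ontology an application adopts, while the geo: terms come from the W3C Basic Geo (WGS84) vocabulary.

```python
# Sketch: attach spatial, temporal, and thematic semantic metadata to a
# video clip as RDF triples. The ex: namespace below is hypothetical.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/sensor#")                  # hypothetical vocabulary
GEO = Namespace("http://www.w3.org/2003/01/geo/wgs84_pos#")   # W3C Basic Geo (WGS84)

g = Graph()
g.bind("ex", EX)
g.bind("geo", GEO)

clip = URIRef("http://example.org/video/clip42")              # hypothetical video resource

# Thematic metadata: what the clip depicts.
g.add((clip, RDF.type, EX.VideoObservation))
g.add((clip, EX.depicts, EX.TrafficCongestion))

# Spatial metadata: where the camera captured the observation (WGS84 lat/long).
g.add((clip, GEO.lat, Literal("39.78", datatype=XSD.decimal)))
g.add((clip, GEO.long, Literal("-84.06", datatype=XSD.decimal)))

# Temporal metadata: when the observation was captured.
g.add((clip, EX.observedAt, Literal("2007-06-15T14:30:00Z", datatype=XSD.dateTime)))

print(g.serialize(format="turtle"))
```

Once video observations are exposed as triples like these, they can be queried alongside other sensor data by location, time window, or theme rather than by filename alone, which is the kind of enhanced description and access the abstract refers to.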

APA Citation

Henson, C. A., Sheth, A. P., Jain, P., Pschorr, J., & Rapoch, T. (2007). Video on the Semantic Sensor Web. https://corescholar.libraries.wright.edu/knoesis/212
