Semantic and Visual Cues for Humanitarian Computing of Natural Disaster Damage Images
Abstract
Identifying different types of damage is essential in times of natural disasters, when first responders flood the internet with often-annotated images and texts, and rescue teams are overwhelmed trying to prioritize scarce resources. While most efforts in such humanitarian situations rely heavily on human labor and input, in this paper we propose a novel hybrid approach to help automate humanitarian computing. Our framework merges low-level visual features capturing color, shape, and texture with a semantic attribute obtained by comparing the picture annotation against a bag of words. These visual and textual features were trained and tested on a dataset gathered from the SUN database and Google Images. The best accuracy obtained using low-level features alone is 91.3%, while appending the semantic attribute raised the accuracy to 95.5% using a linear SVM and 5-fold cross-validation, which motivates an updated folk statement: "an ANNOTATED image is worth a thousand words."
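The feature-fusion idea in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the damage vocabulary, tokenization, and visual-feature layout are all assumed placeholders; the paper's actual bag of words and descriptors are not given here.

```python
# Sketch of the hybrid approach: derive a binary semantic attribute from an
# image's annotation and append it to the low-level visual feature vector.
# The vocabulary below is illustrative, not the paper's actual bag of words.

DAMAGE_BAG_OF_WORDS = {
    "damage", "flood", "earthquake", "collapsed",
    "debris", "fire", "destroyed", "rubble",
}

def semantic_attribute(annotation: str) -> int:
    """Return 1 if any damage-related word appears in the annotation."""
    tokens = {t.strip(".,!?").lower() for t in annotation.split()}
    return int(bool(tokens & DAMAGE_BAG_OF_WORDS))

def fuse_features(visual, annotation):
    """Append the semantic attribute to the visual (color/shape/texture) descriptor."""
    return list(visual) + [float(semantic_attribute(annotation))]

if __name__ == "__main__":
    vis = [0.12, 0.55, 0.33]  # placeholder color/shape/texture features
    print(fuse_features(vis, "Collapsed building after the earthquake"))
```

The fused vector would then be fed to a classifier such as a linear SVM, trained and evaluated with 5-fold cross-validation as described in the abstract.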