Using Compressed Audio-visual Words for Multi-modal Scene Classification
Kurcius, J.J.; Breckon, T.P.
We present a novel approach to scene classification using combined audio signal and video image features, and compare this methodology to scene classification results using each modality in isolation. Each modality is represented using summary features, namely Mel-frequency Cepstral Coefficients (audio) and the Scale Invariant Feature Transform (SIFT) (video), within a multi-resolution bag-of-features model. Uniquely, we extend the classical bag-of-words approach over both the audio and video feature spaces, introducing compressive sensing as a novel methodology for multi-modal fusion via audio-visual feature dimensionality reduction. We perform evaluation over a range of environments, showing performance that is both comparable to the state of the art (86%, over ten scene classes) and invariant to a ten-fold dimensionality reduction within the audio-visual feature space using our compressive representation approach.
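As a rough illustration of the fusion idea described in the abstract (a sketch, not the authors' implementation), the compressive-sensing step amounts to projecting the concatenated audio-visual bag-of-words histogram through a random measurement matrix, yielding roughly a ten-fold dimensionality reduction. All vocabulary sizes and variable names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not taken from the paper):
# a 500-bin audio (MFCC) vocabulary and a 500-bin visual (SIFT) vocabulary.
audio_hist = rng.random(500)  # stand-in bag-of-words histogram (audio)
video_hist = rng.random(500)  # stand-in bag-of-words histogram (video)

# Early fusion: concatenate the per-modality histograms.
x = np.concatenate([audio_hist, video_hist])  # D = 1000

# Compressive-sensing-style reduction: a random Gaussian measurement
# matrix Phi maps R^D -> R^M, here with M = D / 10 for a ten-fold cut.
D = x.size
M = D // 10
phi = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, D))

y = phi @ x  # compressed audio-visual word representation
print(y.shape)  # -> (100,)
```

The reduced vector `y` would then feed a standard classifier; the paper's result is that classification accuracy is largely preserved under this projection.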
Kurcius, J., & Breckon, T. (2014). Using Compressed Audio-visual Words for Multi-modal Scene Classification. In Computational Intelligence for Multimedia Understanding (IWCIM), 2014 International Workshop on, 1-2 November 2014, Paris, France; proceedings (pp. 99-103). https://doi.org/10.1109/iwcim.2014.7008808
|Conference Name||Proc. International Workshop on Computational Intelligence for Multimedia Understanding|
|Publication Date||Nov 2, 2014|
|Deposit Date||Dec 9, 2014|
|Publicly Available Date||Feb 4, 2015|
|Publisher||Institute of Electrical and Electronics Engineers|
|Book Title||Computational Intelligence for Multimedia Understanding (IWCIM), 2014 International Workshop on, 1-2 November 2014, Paris, France; proceedings.|
|Keywords||Multi-resolution, Bag of words, MFCC, Compressed sensing, Audio-visual, Multi-modal.|
Accepted Conference Proceeding
© 2014 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.