Research Overview
The purpose of this research is to develop real-time sound analysis technology to support unmanned surveillance systems. Industrial demand for unmanned surveillance is growing as labor costs rise. Today's computer vision technology, combined with deep learning algorithms, is advancing rapidly toward unmanned visual surveillance. Auditory information, however, is also a critical part of surveillance: it makes it possible to detect abnormal events that occur outside the camera's field of view, as well as events that are hard to detect visually, such as people screaming. In this research, auditory features and deep learning are used to achieve these objectives.
The research has three key objectives; illustrative sketches of the first two follow the list.
1. Detection and classification of abnormal sound events
2. Sound event direction-of-arrival (DOA) estimation
3. Robot motor noise cancellation
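As a rough illustration of the first objective, the sketch below shows a small 1D convolutional classifier over frame-level audio features, loosely in the spirit of the 1D-CNN approach in the first publication listed under Publications. The feature dimensionality, layer sizes, and class count here are illustrative assumptions, not the configuration used in this research.

import torch
import torch.nn as nn

class AbnormalSoundCNN(nn.Module):
    """Tiny 1D CNN over per-frame audio features (e.g. log-mel bands)."""
    def __init__(self, n_features=40, n_classes=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # pool over time to a fixed-size embedding
        )
        self.fc = nn.Linear(128, n_classes)   # e.g. scream / glass break / alarm / normal

    def forward(self, x):                     # x: (batch, n_features, n_frames)
        return self.fc(self.conv(x).squeeze(-1))

model = AbnormalSoundCNN()
dummy = torch.randn(8, 40, 100)               # 8 clips, 40 features, 100 frames (assumed shapes)
print(model(dummy).shape)                      # torch.Size([8, 4]) -> per-class logits

For the second objective, a classical signal-processing baseline is GCC-PHAT time-delay estimation between a pair of microphones. The sketch below shows only that baseline, not the neural localization method of the publications, and the microphone spacing and sampling rate are assumed values.

import numpy as np

def gcc_phat(sig, ref, fs, max_tau, interp=16):
    """Estimate the time delay (seconds) of sig relative to ref with GCC-PHAT."""
    n = len(sig) + len(ref)
    R = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    cc = np.fft.irfft(R / (np.abs(R) + 1e-12), n=interp * n)   # phase-transform weighting
    max_shift = min(int(interp * fs * max_tau), interp * n // 2)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / float(interp * fs)

fs, mic_dist, c = 16000, 0.05, 343.0   # sampling rate, 5 cm mic spacing, speed of sound (assumed)

# Synthetic check: the same noise signal arrives two samples later at the second microphone.
src = np.random.default_rng(0).standard_normal(fs)
mic1 = src
mic2 = np.concatenate((np.zeros(2), src[:-2]))

tau = gcc_phat(mic2, mic1, fs, max_tau=mic_dist / c)
doa = np.degrees(np.arcsin(np.clip(tau * c / mic_dist, -1.0, 1.0)))
print(f"delay = {tau * 1e6:.1f} us, DOA = {doa:.1f} deg")      # about 125 us, 59 deg

With a 4-channel array as in the second publication, delays from multiple microphone pairs would be combined (or learned end to end) to resolve the full direction; the single-pair case above only recovers the angle relative to the pair axis.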
Publications
• Wansoo Kim and Kyogu Lee, "Abnormal Sound Event Detection Using Auditory Transition Feature and 1D CNN", Conference on Electronics, Information and Communication, The Institute of Electronics and Information Engineers, 2017.
• Wansoo Kim, Gwang Seok An and Kyogu Lee, "Detecting Abnormal Sound Events Using 4-channel Microphone Array and Deep Learning", Conference on Speech Communication and Signal Processing, The Acoustical Society of Korea, 2018.
• Wansoo Kim and Kyogu Lee, "Sound Event Localization and Detection Using Modular Neural Network Structure", Fall Conference, The Institute of Electronics and Information Engineers, 2018.