Automatic Music Transcription

Tags
Transcription

Research Overview

Automatic music transcription (AMT) aims to automatically convert a recorded music signal into symbolic musical notation. It encompasses a wide range of tasks in music signal processing, including note onset detection, pitch estimation, and multi-instrument separation. A full transcription, that is, the complete conversion of an audio signal into a “piano-roll” representation, requires two essential pieces of information, note onsets/offsets and pitch, which are obtained through both temporal and spectral analysis. In addition, our recent studies extend this work toward new features that capture musical expression, such as the singing voice and the varied playing styles of musical instruments.
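To illustrate the pipeline described above (audio signal, then temporal and spectral analysis for onsets and pitch, then a piano-roll), the following minimal Python sketch uses librosa's onset detector and pitch tracker to build a rough monophonic piano-roll. It is not the method used in the publications below; the function name and parameter choices are hypothetical and chosen only for illustration.

import numpy as np
import librosa

def transcribe_to_piano_roll(path, hop_length=512, n_pitches=88):
    """Sketch only: frame-wise pitch plus note onsets assembled into a
    binary piano-roll (88 piano keys x time frames)."""
    y, sr = librosa.load(path, sr=None, mono=True)

    # Temporal analysis: detect note onsets as frame indices.
    onset_frames = librosa.onset.onset_detect(
        y=y, sr=sr, hop_length=hop_length, units="frames")

    # Spectral analysis: per-frame pitch candidates and their magnitudes.
    pitches, mags = librosa.piptrack(
        y=y, sr=sr, hop_length=hop_length, fmin=27.5, fmax=4200.0)

    n_frames = pitches.shape[1]
    roll = np.zeros((n_pitches, n_frames), dtype=np.uint8)
    for t in range(n_frames):
        i = mags[:, t].argmax()          # strongest spectral peak in frame t
        f0 = pitches[i, t]
        if f0 > 0:                       # skip silent / unpitched frames
            midi = int(round(librosa.hz_to_midi(f0)))
            key = midi - 21              # MIDI note 21 = A0, lowest piano key
            if 0 <= key < n_pitches:
                roll[key, t] = 1
    return roll, onset_frames

This sketch handles only a single dominant pitch per frame; a practical AMT system would instead perform multi-pitch estimation and pair each detected onset with a corresponding offset, which is the focus of the onset/offset detection work listed below.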

Publications

S. Chang, K. Lee, “A pairwise approach to simultaneous onset/offset detection for singing voice using correntropy,” in Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP), 2014.
H. Heo, D. Sung, K. Lee, “Note Onset Detection based on Harmonic Cepstrum Regularity,” in Proc. IEEE Int. Conf. Multimedia and Expo (ICME), 2013.

Datasets

Project Members

Hoon Heo, Sungkyun Chang, Dooyong Sung, Yoonchang Han