Research Overview
This project is a collaboration among three laboratories from KAIST and Seoul National University, together with Samsung Corporation.
This project addresses the question of whether a machine can perform music in an emotionally expressive manner with a specific style, and it mainly aims to develop a novel, human-competitive system that generates music performances conveying a given emotion and style. The proposed system is composed of multiple neural network modules that acquire, analyze, and synthesize music performance data.
Figure 1. A diagram of the overall system
To build the overall system successfully, we divide the main goal into several subtasks. In MARG, we focus on defining features that are useful for analyzing emotion in the acquired performance data, as well as on building a system that quantitatively evaluates music performance using machine learning techniques.
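To illustrate the kind of features involved, the sketch below computes simple timing and dynamics statistics from a sequence of performed notes. The event format (onset time in seconds, MIDI velocity) and the specific features are illustrative assumptions, not the project's actual representation:

```python
def expressive_features(notes):
    """Summarize timing and dynamics of a performed note sequence.

    notes: list of (onset_seconds, velocity) tuples, sorted by onset.
    Returns coarse statistics often used as rough proxies for
    expressive variation in timing and loudness.
    """
    onsets = [t for t, _ in notes]
    velocities = [v for _, v in notes]
    # Inter-onset intervals capture local tempo fluctuation
    iois = [b - a for a, b in zip(onsets, onsets[1:])]
    mean_ioi = sum(iois) / len(iois)
    mean_vel = sum(velocities) / len(velocities)
    # Variance of timing and loudness as simple expressiveness proxies
    ioi_var = sum((x - mean_ioi) ** 2 for x in iois) / len(iois)
    vel_var = sum((v - mean_vel) ** 2 for v in velocities) / len(velocities)
    return {
        "mean_ioi": mean_ioi,
        "ioi_variance": ioi_var,
        "mean_velocity": mean_vel,
        "velocity_variance": vel_var,
    }

# Example: a mechanically even rendition vs. a rubato-like one
steady = [(0.0, 64), (0.5, 64), (1.0, 64), (1.5, 64)]
rubato = [(0.0, 50), (0.4, 70), (1.1, 90), (1.5, 60)]
print(expressive_features(steady)["ioi_variance"])  # → 0.0
```

Features of this kind could serve as inputs to the evaluation module, since deviations in timing and dynamics distinguish an expressive performance from a mechanical one.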
The outcomes of the project should provide useful insights both into how to represent the elements of music performance in a manner that a computer can understand, and into how to model the artistic process with various types of neural networks.