Music
Do Sharp and Flat Keys Evoke Opposing Sensory Perceptions?: Focusing on the Crossmodal Influence Inherent in the Pitch and Characteristics of Enharmonic Keys
• Ahyeon Choi (chah0623@snu.ac.kr), Woojae Cho
• Our study investigated the effects of sharp and flat keys on the perception of brightness, warmth, and sharpness, given that these keys are associated with tension and relaxation, respectively, in music.
• We found that key type significantly influenced all three sensory perceptions among 30 musicians, implying that musical keys can shape listeners' perception and offering composers and performers insights for creating music that triggers specific sensory responses.
Towards a New Interface for Music Listening: YouTube Case Studies
• Ahyeon Choi (chah0623@snu.ac.kr), Eunsik Shin, Haesun Joung
• Despite the rise of various music streaming services, an increasing number of users are choosing YouTube for music consumption due to its unique features.
• Our research, based on semi-structured interviews with 27 active YouTube users, reveals distinctive usability features of YouTube as a music streaming service, such as its rich array of music-related videos and the ability to explore diverse music genres, share unique playlists, and interact with other listeners.
• Furthermore, we developed a wireframe for a new YouTube UI for music listening, offering practical solutions to improve user satisfaction. This work has significant implications for YouTube and other music streaming platforms aiming to enhance their user experience.
Speech
Hearing rehabilitation based on neurofeedback using real-time speech-based attention decoding
• Joint work with the University of Iowa (Inyong Choi, Joosung Ham)
• Jinhee Kim (ginnykim9@snu.ac.kr), Woojae Jo, Mina Jo
• This project develops a neurofeedback-based auditory rehabilitation system that lets users visually verify in real time whether they have successfully attended to the target, thereby improving their selective attention and central auditory cognitive abilities.
• Our system integrates EEG measurement, presentation of attention targets, sound presentation, real-time auditory attention decoding, and feedback on attention success.
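The closed loop above can be sketched as follows. This is a toy simulation, not the actual system: the function names, the two-talker setup, and the correlation-based decoder are illustrative stand-ins for the real-time auditory attention decoding pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

def decode_attention(eeg_chunk, envelopes):
    """Toy decoder: pick the speech envelope most correlated with the EEG chunk."""
    scores = [np.corrcoef(eeg_chunk, env)[0, 1] for env in envelopes]
    return int(np.argmax(scores)), scores

def feedback_message(decoded, target):
    # Visual feedback step: tell the user whether attention landed on the target.
    return "attended target: success" if decoded == target else "off target: refocus"

# One simulated trial: two competing talkers, the user attends talker 0,
# and the "EEG" chunk tracks the attended envelope plus noise.
n = 256
envelopes = [np.abs(rng.standard_normal(n)) for _ in range(2)]
target = 0
eeg_chunk = envelopes[target] + 0.4 * rng.standard_normal(n)

decoded, scores = decode_attention(eeg_chunk, envelopes)
print(feedback_message(decoded, target))
```

In the real system, the decoder would run on streaming EEG buffers and the message would drive an on-screen indicator rather than `print`.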
Revealing relationship between EEG and speech using self-supervised deep learning model
• Jinhee Kim (ginnykim9@snu.ac.kr), Haesun Joung
• This research explores the complex relationship between EEG signals and speech through a match-mismatch classification approach, using deep learning to uncover non-linear relationships.
• While we are currently addressing challenges in model training, our goal is to analyze the activation values within the trained deep learning model and compare them with brain activation, thereby helping to bridge the gap between deep learning modeling and neuroscience.
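A minimal sketch of the match-mismatch task on synthetic data, assuming the standard setup: given an EEG segment and a speech-envelope segment, classify whether they co-occurred. The lagged-correlation "model" here is a hypothetical linear baseline, not the group's deep learning model, and all thresholds are chosen only for this toy data.

```python
import numpy as np

rng = np.random.default_rng(0)

def smooth(x, k=9):
    # crude moving-average stand-in for a speech envelope
    return np.convolve(x, np.ones(k) / k, mode="same")

def make_trial(n=640, delay=8, noise=0.3):
    env = smooth(np.abs(rng.standard_normal(n)))                 # speech envelope
    eeg = np.roll(env, delay) + noise * rng.standard_normal(n)   # lagged, noisy "EEG"
    return eeg, env

def match_score(eeg, env, max_lag=16):
    # best correlation over candidate neural lags; higher = more likely a match
    return max(
        np.corrcoef(eeg[lag:], env[: len(env) - lag])[0, 1]
        for lag in range(1, max_lag + 1)
    )

# Matched pairs come from the same trial; mismatched pairs shuffle envelopes
# across trials. Classification is a simple threshold on the score.
trials = [make_trial() for _ in range(40)]
matched = [match_score(eeg, env) for eeg, env in trials]
mismatched = [
    match_score(trials[i][0], trials[(i + 1) % len(trials)][1])
    for i in range(len(trials))
]
threshold = 0.35  # tuned by eye for this synthetic data
acc = (
    sum(s > threshold for s in matched) + sum(s <= threshold for s in mismatched)
) / (2 * len(trials))
print(f"match/mismatch accuracy: {acc:.2f}")
```

A deep model replaces `match_score` with a learned non-linear comparison; inspecting its intermediate activations is what would then be compared against brain activation.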