Welcome to Sound and Music Computing Lab at National University of Singapore! The NUS Sound and Music Computing Lab strives to develop Sound and Music Computing (SMC) technologies, in particular Music Information Retrieval (MIR) technologies, with an emphasis on applications in e-Learning (especially computer-assisted music and language edutainment) and e-Health (especially computer-assisted music-enhanced exercise and therapy).
We seek to harness the synergy of SMC, MIR, mobile computing, and cloud computing technologies to promote healthy lifestyles and to facilitate disease prevention, diagnosis, and treatment in both developed countries and resource-poor developing countries.
[2022.06] Our lab member Yuchen Wang won the Outstanding Computing Project Prize from NUS School of Computing. Congrats to Yuchen!
[2022.01] New paper "Exploring Transformer's Potential on Automatic Piano Transcription" accepted to ICASSP 2022. Congrats to Longshen!
[2022.01] Our lab member Hengguan Huang won the Dean's Graduate Research Excellence Award from NUS School of Computing. Congrats!
[2022.01] Our lab member Xichu (Stan) Ma won the Research Achievement Award from NUS School of Computing. Congrats to Stan!
[2021.11] Prof. Ye Wang delivered a virtual talk entitled "Neuro-inspired SMC for Bilingualism & Human Potential" at Stanford University. Video recording is now available.
[2021.11] Prof. Ye Wang hosted a panel discussion on the topic of "MIR for Human Health and Potential" at ISMIR 2021. Video recording is now available.
[2021.08] Prof. Ye Wang delivered a keynote speech on "Music & Wearable Computing for Health and Learning" at the NUS Computing Research Week. Video recording is now available.
Ou, L., Guo, Z., Benetos, E., Han, J., & Wang, Y. (2022, May). Exploring Transformer’s Potential on Automatic Piano Transcription. In ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 776-780). IEEE.
Ma, X., Wang, Y., Kan, M. Y., & Lee, W. S. (2021, October). AI-Lyricist: Generating Music and Vocabulary Constrained Lyrics. In 2021 ACM Multimedia Conference (MM '21), Virtual Event, China. ACM.
Dai, S., Ma, X., Wang, Y., & Dannenberg, R. B. (2021). Personalized Popular Music Generation Using Imitation and Structure. arXiv preprint arXiv:2105.04709.
Huang, H., Liu, H., Wang, H., Xiao, C., & Wang, Y. (2021, July). STRODE: Stochastic Boundary Ordinary Differential Equation. In International Conference on Machine Learning (ICML-2021), 4435-4445. [code] [slides]
Huang, H., Xue, F., Wang, H. & Wang, Y. (2020, July). Deep Graph Random Process for Relational-Thinking-Based Speech Recognition. In Proceedings of the 37th International Conference on Machine Learning (ICML-20). [video] [supplementary] [code] [slides]
Wei, W., Zhu, H., Benetos, E., & Wang, Y. (2020, May). A-CRNN: A Domain Adaptation Model for Sound Event Detection. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 276-280). IEEE.
Sharma, B., & Wang, Y. (2019). Automatic Evaluation of Song Intelligibility using Singing Adapted STOI and Vocal-specific Features. IEEE/ACM Transactions on Audio, Speech, and Language Processing. [code] [data]
Gupta, C., Li, H., & Wang, Y. (2019). Automatic Leaderboard: Evaluation of Singing Quality without a Standard Reference. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 1–1. doi: 10.1109/taslp.2019.2947737
Anderson, B., Shi, M., Tan, V. Y., & Wang, Y. (2019). Mobile Gait Analysis Using Foot-Mounted UWB Sensors. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 3(3), 73.
Phaye, S. S. R., Benetos, E., & Wang, Y. (2019, May). SubSpectralNet–Using Sub-spectrogram Based Convolutional Neural Networks for Acoustic Scene Classification. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 825-829). IEEE.
Sharma, B.*, Gupta, C.*, Li, H., & Wang, Y. (2019, May). Automatic Lyrics-to-audio Alignment on Polyphonic Music Using Singing-adapted Acoustic Models. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 396-400). IEEE. (*equal contributors)
Wang, Y. (2019). Singing Voice Modelling for Language Learning (Dagstuhl Seminar 19052). Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik.
[2021.11] APSIPA Distinguished Lecture 1: Neuroscience-Inspired Sound and Music Computing (SMC) for Bilingualism and Human Potential – Ye Wang [video]
[2021.11] NUS Sound and Music Computing Lab Showcase at ISMIR 2021 [video]
[2021.11] Special Session on MIR for Human Health and Potential at ISMIR 2021 [video]
[2021.08] Wang, Y., Keynote at Computing Research Week Aug 2021, "Music & Wearable Computing for Health and Learning: a Decade-long Exploration on a Neuroscience-inspired Interdisciplinary Approach", National University of Singapore. [slides] [video]
Address: 11 Computing Drive, Singapore 117416
Tel: (65) 6516 2980
Fax: (65) 6779 4580
Office: AS6 #04-08
Lab Director: A/Prof. Ye Wang