Self-supervised audio-visual source separation leverages natural correlations between the audio and visual modalities to separate mixed audio signals. In this work, we first systematically analyze the performance of existing multimodal fusion methods on the audio-visual separation task, showing that the effectiveness of each fusion strategy is closely tied to the characteristics of the sound: middle fusion is better suited to short, transient sounds, while late fusion is more effective at capturing sustained and harmonically rich sounds. We therefore propose a hierarchical fusion strategy that effectively integrates both fusion stages. In addition, training can be made easier by incorporating high-quality external audio representations rather than relying solely on the audio branch to learn them independently. To this end, we propose a representation alignment approach that aligns the latent features of the audio encoder with embeddings extracted from pre-trained audio models. Extensive experiments on the MUSIC, MUSIC-21, and VGGSound datasets demonstrate that our approach achieves state-of-the-art results, surpassing existing methods under the self-supervised setting. We further analyze the impact of representation alignment on the audio features, showing that it reduces the modality gap between the audio and visual modalities.
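As a rough illustration of the representation-alignment idea, the sketch below shows one plausible way to align the audio encoder's latent features with embeddings from a frozen pre-trained audio model in PyTorch. The function name, tensor shapes, learnable projection, loss weight, and the cosine-based objective are our illustrative assumptions, not the paper's exact formulation.

    import torch
    import torch.nn.functional as F

    def representation_alignment_loss(audio_latent, teacher_embedding, proj):
        """Align audio-encoder latents with embeddings from a frozen pre-trained audio model.

        audio_latent:      (B, C, H, W) latent features from the separation model's audio encoder
        teacher_embedding: (B, D) embedding from a frozen pre-trained audio model
        proj:              learnable linear layer mapping pooled latents (C) to the teacher dimension (D)
        """
        pooled = audio_latent.mean(dim=(2, 3))    # global average pool over time-frequency -> (B, C)
        student = proj(pooled)                    # project into the teacher embedding space -> (B, D)
        # encourage high cosine similarity to the (detached) teacher representation
        return 1.0 - F.cosine_similarity(student, teacher_embedding.detach(), dim=-1).mean()

    # Illustrative usage: add the alignment term to the separation objective.
    # proj = torch.nn.Linear(512, 768)
    # total_loss = separation_loss + lambda_align * representation_alignment_loss(z_audio, z_teacher, proj)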
Audio example: our method (row "Ours") produces cleaner separated sounds than CLIPSep.
@inproceedings{hu2025audio,
  title={Audio-Visual Separation with Hierarchical Fusion and Representation Alignment},
  author={Hu, Han and Lin, Dongheng and Huang, Qiming and Hou, Yuqi and Chang, Hyung Jin and Jiao, Jianbo},
  booktitle={British Machine Vision Conference (BMVC)},
  year={2025}
}