Audio-Visual Separation with Hierarchical Fusion and Representation Alignment

The MIx Group, School of Computer Science, University of Birmingham
BMVC 2025


Abstract

Self-supervised audio-visual source separation leverages natural correlations between the audio and visual modalities to separate mixed audio signals. In this work, we first systematically analyze the performance of existing multimodal fusion methods for the audio-visual separation task, demonstrating that the effectiveness of a fusion strategy is closely linked to the characteristics of the sound: middle fusion is better suited to short, transient sounds, while late fusion is more effective for sustained, harmonically rich sounds. We therefore propose a hierarchical fusion strategy that effectively integrates both fusion stages. In addition, training can be made easier by incorporating high-quality external audio representations rather than relying solely on the audio branch to learn them from scratch. To this end, we propose a representation alignment approach that aligns the latent features of the audio encoder with embeddings extracted from pre-trained audio models. Extensive experiments on the MUSIC, MUSIC-21, and VGGSound datasets demonstrate that our approach achieves state-of-the-art results, surpassing existing methods under the self-supervised setting. We further analyze the impact of representation alignment on the audio features, showing that it reduces the modality gap between the audio and visual modalities.
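
For context, the self-supervised setting follows the common "mix-and-separate" recipe used in prior audio-visual separation work: audio from two different videos is summed into a synthetic mixture, and each original clip serves as the separation target for its own video stream, so no annotations are needed. A minimal sketch (the function name and tensor layout are our assumptions):

import torch

def make_mixture(audio_a, audio_b):
    # audio_a, audio_b: (B, ...) waveforms or spectrograms drawn from
    # two different videos. Summing them yields a synthetic mixture;
    # the unmixed clips become the ground-truth separation targets.
    mixture = audio_a + audio_b
    targets = torch.stack([audio_a, audio_b], dim=1)  # (B, 2, ...)
    return mixture, targets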

Relationship Between Acoustic Properties and Fusion


Relationship Between Acoustic Properties of Musical Instruments and Fusion Strategies. Instruments with short transients and simpler harmonic structures are better suited to middle fusion; conversely, instruments with sustained notes and complex harmonic structures benefit more from late fusion.
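
To make the two fusion stages concrete, the sketch below contrasts middle fusion (injecting the visual embedding at the audio U-Net bottleneck) with late fusion (modulating decoder features near the mask prediction head). This is a minimal illustration, not the paper's implementation; the module names and the FiLM-style channel scaling are our assumptions:

import torch
import torch.nn as nn

class HierarchicalFusion(nn.Module):
    # Hypothetical sketch of combining both fusion stages in one model.
    def __init__(self, audio_channels=512, visual_dim=512):
        super().__init__()
        self.mid_proj = nn.Linear(visual_dim, audio_channels)   # middle fusion
        self.late_proj = nn.Linear(visual_dim, audio_channels)  # late fusion

    def middle_fuse(self, bottleneck, v):
        # bottleneck: (B, C, H, W) audio U-Net features; v: (B, visual_dim)
        w = self.mid_proj(v).unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
        return bottleneck * w

    def late_fuse(self, decoder_feat, v):
        # decoder_feat: (B, C, H, W) features just before the mask head
        w = self.late_proj(v).unsqueeze(-1).unsqueeze(-1)
        return decoder_feat * w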

Model Overview


Pipeline of our proposed method. The pipeline consists of three key components: audio-visual feature extraction, hierarchical fusion, and representation alignment. It takes an audio mixture and corresponding video frames as input.
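
The representation alignment component can be summarized as matching the audio encoder's latents to embeddings from a frozen pre-trained audio model. A minimal sketch, assuming a learned projection head and a cosine objective (the choice of teacher model and distance are our assumptions):

import torch
import torch.nn.functional as F

def representation_alignment_loss(audio_latents, teacher_embeds, proj):
    # audio_latents: (B, D_enc) pooled features from the audio encoder
    # teacher_embeds: (B, D_teacher) embeddings from a frozen
    #   pre-trained audio model (hypothetical teacher choice)
    # proj: nn.Linear(D_enc, D_teacher), a learned projection head
    z = F.normalize(proj(audio_latents), dim=-1)
    t = F.normalize(teacher_embeds.detach(), dim=-1)
    # Maximizing cosine similarity == minimizing (1 - cos).
    return (1.0 - (z * t).sum(dim=-1)).mean()

In this form the teacher is kept frozen (note the detach), so the alignment term only shapes the audio encoder's latent space rather than updating the pre-trained model.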

Results

Qualitative Results

Qualitative comparison of separated audio examples. Our method (row “Ours”) produces cleaner separated sounds than CLIPSep.

Quantitative Results


Table 1: Audio-visual separation performance comparison on the MUSIC dataset.


Table 2: Audio-visual separation results on the VGGSound dataset.

BibTeX

@inproceedings{hu2025audio,
  title={Audio-Visual Separation with Hierarchical Fusion and Representation Alignment},
  author={Hu, Han and Lin, Dongheng and Huang, Qiming and Hou, Yuqi and Chang, Hyung Jin and Jiao, Jianbo},
  booktitle={British Machine Vision Conference (BMVC)},
  year={2025}
}