Generative Audio Extension and Morphing
Adobe Research
* Equal contribution
In audio-related creative tasks, sound designers often seek to extend and blend different sounds from their libraries. Generative audio models, capable of creating audio using examples as references, offer promising solutions. By masking the noisy latents of a DiT and applying a novel variant of classifier-free guidance to such masked latents, we demonstrate that: (i) given an audio reference, we can extend it both forward and backward for a specified duration, and (ii) given two audio references, we can blend them seamlessly for the desired duration. Furthermore, we show that fine-tuning the model on different types of stationary audio data mitigates potential hallucinations. Our method's effectiveness is validated through objective metrics, showing that the generated audio achieves Fréchet Audio Distances (FADs) comparable to those of the original, non-generated recordings. We further validate our results through a subjective listening test, in which participants rated the proposed model's generations positively. This technique paves the way for more controllable and expressive generative sound frameworks, enabling sound designers to focus less on tedious, repetitive tasks and more on their actual creative process.
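The masked-latent mechanism described above lends itself to an inpainting-style sampling loop: the reference region is re-imposed at each denoising step, so only the masked region is actually generated. The sketch below is a minimal illustration under several assumptions; the names (`denoiser`, `extend_with_masked_latents`), the Euler-style update, and the plain classifier-free guidance are placeholders, not the paper's actual DiT, sampler, or guidance variant.

```python
import torch

def denoiser(latents: torch.Tensor, t: torch.Tensor, cond) -> torch.Tensor:
    """Hypothetical stub standing in for the pretrained DiT.

    A real model would condition on the reference/metadata and return a
    prediction of the noise present in `latents`.
    """
    return torch.zeros_like(latents)

def extend_with_masked_latents(ref_latents, n_new, steps=50, guidance=3.0, forward=True):
    """Extend a reference latent sequence by n_new frames (sketch only).

    At every step, the frames covered by the reference are overwritten with a
    re-noised copy of the reference, so only the masked (empty) region is
    generated. Plain classifier-free guidance is used here; the paper's novel
    variant of CFG on masked latents is not reproduced.
    """
    b, c, n_ref = ref_latents.shape
    total = n_ref + n_new
    x = torch.randn(b, c, total)

    # keep == 1 over the reference, 0 over the region to generate.
    keep = torch.zeros(1, 1, total)
    ref_full = torch.zeros(b, c, total)
    if forward:                      # continue the audio after the reference
        keep[..., :n_ref] = 1.0
        ref_full[..., :n_ref] = ref_latents
    else:                            # generate audio leading into the reference
        keep[..., -n_ref:] = 1.0
        ref_full[..., -n_ref:] = ref_latents

    ts = torch.linspace(1.0, 0.0, steps + 1)
    for i in range(steps):
        t = ts[i]
        # Re-impose the reference at the current noise level.
        noisy_ref = (1 - t) * ref_full + t * torch.randn_like(x)
        x = keep * noisy_ref + (1 - keep) * x

        # Classifier-free guidance: push the conditional prediction away from
        # the unconditional one.
        eps_cond = denoiser(x, t, cond="reference")
        eps_uncond = denoiser(x, t, cond=None)
        eps = eps_uncond + guidance * (eps_cond - eps_uncond)

        # Toy Euler-style update; the real sampler depends on the model's
        # parameterization and noise schedule.
        x = x + (ts[i + 1] - t) * eps

    return x
```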
If you use our work, please cite it as follows:
@inproceedings{seetharaman2026genextend,
  author    = {Seetharaman, P. and Nieto, O. and Salamon, J.},
  title     = {{Generative Audio Extension and Morphing}},
  booktitle = {Proc. of the 51st International Conference on Acoustics, Speech, and Signal Processing (ICASSP)},
  year      = {2026},
  address   = {Barcelona, Spain}
}
Given an audio prompt, GenExtend seamlessly continues the audio forward or backward for a specified duration.
Example 1
Example 2
Given two audio prompts, GenMorph creates a seamless blend between them for the desired duration.
Example 1
Example 2
Example 3
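For GenMorph, the same masked-latent idea applies with the two references pinned at opposite ends of the latent sequence and only the middle region generated. The helper below is a hedged sketch of how that scaffold and keep-mask might be laid out; `build_morph_scaffold` and its shapes are illustrative assumptions, and the denoising loop itself would mirror the extension sketch above.

```python
import torch

def build_morph_scaffold(lat_a: torch.Tensor, lat_b: torch.Tensor, n_bridge: int):
    """Lay out [reference A | bridge to generate | reference B] in latent space.

    Returns an initial latent (references in place, noise in the bridge) and a
    keep-mask that is 1 over the references and 0 over the bridge, matching the
    masked-latent setup used for extension.
    """
    b, c, n_a = lat_a.shape
    n_b = lat_b.shape[-1]
    total = n_a + n_bridge + n_b

    x = torch.randn(b, c, total)
    keep = torch.zeros(1, 1, total)

    x[..., :n_a] = lat_a
    keep[..., :n_a] = 1.0
    x[..., -n_b:] = lat_b
    keep[..., -n_b:] = 1.0
    return x, keep
```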
Convolutional Noise Matching (CNM) — a non-generative baseline for audio extension.
Example 1
Example 2
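The page does not spell out how CNM works, so the sketch below is only a guess at what a convolution-based, noise-matching baseline could look like, not the authors' implementation: white noise is shaped by the reference's average magnitude spectrum (FFT-domain filtering, i.e., circular convolution with a matched impulse response) and crossfaded onto the reference. All function and parameter names are hypothetical, and the approach only makes sense for roughly stationary sounds, in line with the stationary audio data mentioned in the abstract.

```python
import numpy as np

def cnm_like_extend(ref, n_new, fade=2048, n_fft=4096, seed=0):
    """Hedged sketch of a non-generative, noise-matching extension baseline."""
    rng = np.random.default_rng(seed)

    # Average magnitude spectrum of the reference, used as a matching filter.
    frames = [ref[i:i + n_fft] for i in range(0, len(ref) - n_fft, n_fft // 2)]
    mag = np.mean([np.abs(np.fft.rfft(f * np.hanning(n_fft))) for f in frames], axis=0)

    # Shape white noise, frame by frame, with the matched magnitude (overlap-add).
    out = np.zeros(n_new + fade + n_fft)
    for start in range(0, n_new + fade, n_fft // 2):
        noise = rng.standard_normal(n_fft)
        spec = np.fft.rfft(noise * np.hanning(n_fft))
        out[start:start + n_fft] += np.fft.irfft(spec / (np.abs(spec) + 1e-8) * mag, n=n_fft)
    out = out[:n_new + fade]
    out *= np.std(ref) / (np.std(out) + 1e-8)  # match the overall level

    # Crossfade the synthetic tail onto the end of the reference.
    ramp = np.linspace(0.0, 1.0, fade)
    return np.concatenate([ref[:-fade],
                           ref[-fade:] * (1.0 - ramp) + out[:fade] * ramp,
                           out[fade:]])
```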