CoRN: Co-trained Full- and No-Reference Speech Quality Assessment

ICASSP 2024, to appear, April 2024

Pranay Manocha, Donald Williamson, Adam Finkelstein
Abstract

Perceptual evaluation constitutes a crucial aspect of various audio-processing tasks. Full-reference (FR) or similarity-based metrics rely on high-quality reference recordings, to which lower-quality or corrupted versions of a recording may be compared for evaluation. In contrast, no-reference (NR) metrics evaluate a recording without relying on a reference. Each approach has advantages and drawbacks relative to the other. In this paper, we present a novel framework called CoRN that amalgamates these dual approaches, training the FR and NR models concurrently. After training, the models can be applied independently. We evaluate CoRN by predicting several common objective metrics, using two different model architectures. The NR model trained using CoRN has access to a reference recording during training, and thus, as one would expect, it consistently outperforms baseline NR models trained independently. More remarkably, the CoRN FR model also outperforms its baseline counterpart, even though it relies on the same training data and the same model architecture. Thus, a single training regime produces two independently useful models, each of which outperforms its independently trained counterpart.
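The abstract does not specify how the FR and NR models are coupled during co-training, so the following is only a minimal toy sketch of one plausible reading: a shared encoder feeding two heads, a full-reference head that scores (reference, degraded) pairs and a no-reference head that scores the degraded signal alone, trained jointly with a single summed regression loss. The feature construction, linear architecture, target, and hyperparameters are all illustrative assumptions, not the paper's method.

```python
import numpy as np

# Hedged toy: co-train a full-reference (FR) and a no-reference (NR) quality
# predictor through a shared encoder. Everything here (linear encoder/heads,
# synthetic target, learning rate) is an illustrative assumption; the CoRN
# paper's actual architecture and loss are not given in the abstract.

rng = np.random.default_rng(0)
n, d, k = 256, 8, 4                        # examples, feature dim, encoder dim

ref = rng.normal(size=(n, d))              # features of clean references
deg = ref + 0.5 * rng.normal(size=(n, d))  # features of degraded versions
w_true = rng.normal(size=d)
quality = deg @ w_true                     # synthetic "objective metric" target

S = 0.1 * rng.normal(size=(d, k))          # shared encoder (used by both heads)
u = 0.1 * rng.normal(size=2 * k)           # FR head: scores (ref, deg) pairs
v = 0.1 * rng.normal(size=k)               # NR head: scores deg alone

def predict(S, u, v):
    z_ref, z_deg = ref @ S, deg @ S
    fr = np.concatenate([z_ref, z_deg], axis=1) @ u  # full-reference score
    nr = z_deg @ v                                   # no-reference score
    return fr, nr, z_ref, z_deg

mse0 = np.mean(quality ** 2)               # error of the all-zero predictor
lr = 0.01
for _ in range(2000):
    fr, nr, z_ref, z_deg = predict(S, u, v)
    e_fr, e_nr = (fr - quality) / n, (nr - quality) / n
    u_ref, u_deg = u[:k], u[k:]
    # One joint loss (FR MSE + NR MSE) drives both heads AND the shared
    # encoder, so each branch benefits from the other's gradient signal.
    grad_u = np.concatenate([z_ref, z_deg], axis=1).T @ e_fr
    grad_v = z_deg.T @ e_nr
    grad_S = (ref.T @ np.outer(e_fr, u_ref)
              + deg.T @ np.outer(e_fr, u_deg)
              + deg.T @ np.outer(e_nr, v))
    u -= 2 * lr * grad_u
    v -= 2 * lr * grad_v
    S -= 2 * lr * grad_S

fr, nr, _, _ = predict(S, u, v)
mse_fr = np.mean((fr - quality) ** 2)      # FR model error after co-training
mse_nr = np.mean((nr - quality) ** 2)      # NR model error after co-training
```

After training, either head can be used on its own: the NR head needs only the degraded features, while the FR head needs the (reference, degraded) pair, mirroring the paper's claim that one training regime yields two independently deployable models.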
Citation

Pranay Manocha, Donald Williamson, and Adam Finkelstein.
"CoRN: Co-trained Full- and No-Reference Speech Quality Assessment."
ICASSP 2024, to appear, April 2024.

BibTeX

@inproceedings{Manocha:2024:CCF,
   author = "Pranay Manocha and Donald Williamson and Adam Finkelstein",
   title = "{CoRN}: Co-trained Full- and No-Reference Speech Quality Assessment",
   booktitle = "ICASSP 2024, to appear",
   year = "2024",
   month = apr
}