LinearVC: Linear transformations of self-supervised features through the lens of voice conversion

Herman Kamper, Benjamin van Niekerk, Julian Zaïdi, Marc-André Carbonneau

[paper]   [arXiv]   [code]

We introduce LinearVC, a simple voice conversion method that sheds light on the structure of self-supervised representations. First, we show that simple linear transformations of self-supervised features effectively convert voices. Next, we probe the geometry of the feature space by constraining the set of allowed transformations. We find that just rotating the features is sufficient for high-quality voice conversion. This suggests that content information is embedded in a low-dimensional subspace which can be linearly transformed to produce a target voice. To validate this hypothesis, we finally propose a method that explicitly factorizes content and speaker information using singular value decomposition; the resulting linear projection with a rank of just 100 gives competitive conversion results. Our work has implications for both practical voice conversion and a broader understanding of self-supervised speech representations.

Comparing LinearVC to other systems

[Audio samples: Source · Target · FreeVC · SoundStorm · kNN-VC · LinearVC · LinearVC with content factorization]

Constrained linear transformations

We consider voice conversion results when we constrain the types of linear transformations allowed in LinearVC.

[Audio samples: Source · Target · Bias only · Orthogonal · No constraints · Content factorization with r = 6, 16, 100, 256]