Artificial neural networks combined with the principal component analysis for non-fluent speech recognition

Abstract

The presented paper introduces the application of principal component analysis (PCA) for dimensionality reduction of the variables describing the speech signal, and the applicability of the obtained results to the recognition of disturbed and fluent speech. A set of fluent speech signals and three types of speech disturbance (blocks before words starting with plosives, syllable repetitions, and sound-initial prolongations) was transformed using principal component analysis. The result was a model containing four principal components describing the analysed utterances. Distances between the standardised original variables and the elements of the observation matrix in the new coordinate system were calculated and then applied in the recognition process. A multilayer perceptron network was used as the classifying algorithm. The achieved results were compared with the outcomes of previous experiments in which speech samples were parameterised using a Kohonen network. The classifying network achieved an overall accuracy of 76% (from 50% to 91%, depending on the dysfluency type).
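For illustration only, the sketch below shows one way such a pipeline could be assembled in Python: standardised speech-feature vectors are projected onto four principal components, a reconstruction distance is appended, and a multilayer perceptron classifies the samples. The placeholder feature matrix, labels, hidden-layer size, and the specific distance measure are assumptions made for this sketch; they are not taken from the paper, which does not publish its implementation here.

# Illustrative sketch (not the authors' code): PCA dimensionality reduction
# of speech-feature vectors followed by MLP classification.
# Feature extraction from audio is assumed to happen elsewhere; X and y
# below are random placeholders standing in for real data.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))      # placeholder feature matrix (n_samples x n_features)
y = rng.integers(0, 4, size=200)    # assumed labels: 0 = fluent, 1-3 = dysfluency types

# Standardise the original variables, then project onto four principal components.
X_std = StandardScaler().fit_transform(X)
pca = PCA(n_components=4)
scores = pca.fit_transform(X_std)

# One simple distance-based representation: Euclidean distance between each
# standardised observation and its reconstruction from the four components.
# The paper's exact distance formulation may differ.
recon = pca.inverse_transform(scores)
dist = np.linalg.norm(X_std - recon, axis=1, keepdims=True)
features = np.hstack([scores, dist])

# Train a multilayer perceptron on the reduced representation.
X_tr, X_te, y_tr, y_te = train_test_split(features, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))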

Authors

Wiesława Kuniszyk-Jóźkowiak
article
SENSORS
English
2022
22
1
321
open access journal
CC BY 4.0 Attribution 4.0
final published version
at the time of publication
2022-01-01
100
3.9
0
5