Development of a Novel System for Speaker Verification
Abstract
In speaker verification, a person's identity is verified by extracting features from his or her voice and classifying them to decide whether the claimed identity is valid. It is one of the most useful and well-known biometric recognition techniques in fields where security is the first priority. In this paper we design a speaker verification system that yields good results when the classifier is mutated up to 50,000 times. We take voice samples from different speakers, extract characteristics using Mel-frequency cepstral coefficients (MFCC), and classify the features with a Cartesian genetic programming evolved artificial neural network (CGPANN). The designed system achieves better accuracy than existing speaker verification systems.
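The front end of the pipeline described above is standard MFCC extraction: frame the waveform, window it, take the power spectrum, apply a triangular mel filterbank, and decorrelate the log filterbank energies with a DCT. The sketch below is a minimal NumPy illustration of that chain, not the paper's implementation; the frame length, hop size, FFT size, and filter counts are assumed typical values (16 kHz audio, 25 ms frames, 10 ms hop), not taken from the paper.

```python
import numpy as np

def hz_to_mel(f):
    # Standard mel-scale mapping.
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sample_rate=16000, frame_len=400, hop=160,
         n_fft=512, n_filters=26, n_coeffs=13):
    """Return an (n_frames, n_coeffs) MFCC matrix for a 1-D signal.

    All parameter defaults are illustrative assumptions, not values
    from the paper.
    """
    # 1. Slice the signal into overlapping frames and apply a Hamming window.
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop: i * hop + frame_len]
                       for i in range(n_frames)])
    frames = frames * np.hamming(frame_len)

    # 2. Power spectrum of each frame (zero-padded FFT).
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft

    # 3. Triangular mel filterbank spanning 0 .. Nyquist.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sample_rate / 2.0),
                          n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sample_rate).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        lo, mid, hi = bins[i], bins[i + 1], bins[i + 2]
        fbank[i, lo:mid] = (np.arange(lo, mid) - lo) / max(mid - lo, 1)
        fbank[i, mid:hi] = (hi - np.arange(mid, hi)) / max(hi - mid, 1)

    # 4. Log filterbank energies (small floor avoids log of zero).
    log_energy = np.log(power @ fbank.T + 1e-10)

    # 5. DCT-II to decorrelate; keep the first n_coeffs coefficients.
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_coeffs),
                                  (2 * n + 1) / (2.0 * n_filters)))
    return log_energy @ dct.T

# Example: one second of a 440 Hz tone at 16 kHz.
sig = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
feats = mfcc(sig)   # shape (98, 13): one 13-coefficient vector per frame
```

Each row of the returned matrix is the feature vector for one frame; in the system described above, such vectors would be passed to the CGPANN classifier for the accept/reject decision.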