Speaker Identification in Multispeaker Environment using Deep Neural Networks

Abstract

Human listeners can focus on a single person's voice in an environment of simultaneous conversations. We have tried to emulate this skill with an artificial intelligence system. Our system first classifies an audio file as single-speaker or multi-speaker, then recognizes the speaker(s). The input audio is pre-processed through noise reduction, silence removal, framing, windowing and DCT calculation, and the Mel Frequency Cepstral Coefficients (MFCC) technique is used to extract features from it. The extracted features are then used to train the system via neural networks using the Error Back-Propagation Training Algorithm (EBPTA). Applications of our model include biometric systems such as telephone banking, authentication and surveillance.
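The feature-extraction pipeline described above (framing, windowing, a mel filterbank and a DCT) can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the frame length, hop size, filter count and the synthetic test tone are all illustrative assumptions.

```python
import numpy as np

def mfcc(signal, sample_rate=16000, frame_len=400, hop=160,
         n_fft=512, n_mels=26, n_ceps=13):
    """Sketch of MFCC extraction: framing, windowing, mel filterbank, log, DCT."""
    # Slice the signal into overlapping frames and apply a Hamming window.
    n_frames = 1 + (len(signal) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = signal[idx] * np.hamming(frame_len)

    # Power spectrum of each frame.
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft

    # Triangular mel-spaced filterbank.
    def hz_to_mel(hz): return 2595.0 * np.log10(1.0 + hz / 700.0)
    def mel_to_hz(mel): return 700.0 * (10.0 ** (mel / 2595.0) - 1.0)
    mel_pts = np.linspace(0.0, hz_to_mel(sample_rate / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sample_rate).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, left:center] = (np.arange(left, center) - left) / max(center - left, 1)
        fbank[m - 1, center:right] = (right - np.arange(center, right)) / max(right - center, 1)

    # Log mel energies, then a DCT-II to decorrelate them into cepstral coefficients.
    log_mel = np.log(power @ fbank.T + 1e-10)
    n = np.arange(n_mels)
    k = np.arange(n_ceps)
    dct_basis = np.cos(np.pi / n_mels * (n[None, :] + 0.5) * k[:, None])
    return log_mel @ dct_basis.T

# Example: one second of a 440 Hz tone as a stand-in for a speech recording.
t = np.arange(16000) / 16000.0
feats = mfcc(np.sin(2 * np.pi * 440 * t))
print(feats.shape)  # (frames, coefficients) = (98, 13)
```

Each row of the returned matrix is the 13-coefficient feature vector for one 25 ms frame, which is the kind of vector the network would be trained on.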

Keywords: speaker identification, neural network, multi-speaker, Mel Frequency Cepstral Coefficients (MFCC).
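The training stage, error back-propagation over the extracted features, can likewise be sketched. The network size, learning rate and the Gaussian toy clusters standing in for per-speaker MFCC vectors below are all hypothetical choices, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data standing in for MFCC feature vectors: 3 "speakers", each a
# well-separated Gaussian cluster in 13-dimensional feature space.
n_speakers, dim = 3, 13
X = np.vstack([rng.normal(loc=3.0 * i, scale=0.5, size=(50, dim))
               for i in range(n_speakers)])
y = np.repeat(np.arange(n_speakers), 50)
Y = np.eye(n_speakers)[y]                      # one-hot speaker targets

# One hidden layer; weights learned by error back-propagation.
W1 = rng.normal(0, 0.1, (dim, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, (16, n_speakers)); b2 = np.zeros(n_speakers)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.1
for epoch in range(500):
    # Forward pass: hidden sigmoid layer, then softmax over speakers.
    H = sigmoid(X @ W1 + b1)
    logits = H @ W2 + b2
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)

    # Backward pass: propagate the output error toward the input layer.
    dlogits = (P - Y) / len(X)                 # softmax + cross-entropy gradient
    dW2 = H.T @ dlogits; db2 = dlogits.sum(0)
    dH = dlogits @ W2.T * H * (1 - H)          # sigmoid derivative
    dW1 = X.T @ dH; db1 = dH.sum(0)

    # Gradient-descent weight update.
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

pred = P.argmax(axis=1)
print("training accuracy:", (pred == y).mean())
```

At prediction time the same forward pass is run on the features of an unseen recording, and the output unit with the highest probability names the speaker.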

Research paper published in a Springer journal.

For more details, download ResearchPaper.pdf and projectreport.