EfficientTDNN: Efficient Architecture Search for Speaker Recognition

Wang, Rui; Wei, Zhihua; Duan, Haoran; Ji, Shouling; Long, Yang; Hong, Zhen


Authors

Rui Wang

Zhihua Wei

Haoran Duan haoran.duan@durham.ac.uk
PGR Student Doctor of Philosophy

Shouling Ji

Yang Long

Zhen Hong



Abstract

Convolutional neural networks (CNNs), such as the time-delay neural network (TDNN), have shown remarkable capability in learning speaker embeddings. However, they also incur a large computational cost in storage size, processing, and memory. Discovering a specialized CNN that meets a specific constraint requires substantial effort from human experts. Compared with hand-designed approaches, neural architecture search (NAS) has emerged as a practical technique for automating the manual architecture design process and has attracted increasing interest in spoken language processing tasks such as speaker recognition. In this paper, we propose EfficientTDNN, an efficient architecture search framework consisting of a TDNN-based supernet and a TDNN-NAS algorithm. The proposed supernet introduces temporal convolutions with different receptive-field ranges and feature aggregation at various resolutions from different layers to the TDNN. On top of it, the TDNN-NAS algorithm quickly searches for the desired TDNN architecture via weight-sharing subnets, which surprisingly reduces computation while handling the vast number of devices with various resource requirements. Experimental results on the VoxCeleb dataset show that the proposed EfficientTDNN enables approximately 10^13 architectures with respect to depth, kernel, and width. Considering different computation constraints, it achieves a 2.20% equal error rate (EER) with 204 M multiply-accumulate operations (MACs), 1.41% EER with 571 M MACs, and 0.94% EER with 1.45 G MACs. Comprehensive investigations suggest that the trained supernet generalizes to subnets not sampled during training and obtains a favorable trade-off between accuracy and efficiency.
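The search space sketched in the abstract (variable depth, kernel size, and layer width, trained with weight-sharing subnets sampled from a single supernet) can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: the width choices {256, 384, 512}, kernel choices {1, 3, 5}, maximum depth of 4, and the simple mean pooling are illustrative assumptions, and the multi-resolution feature aggregation mentioned in the abstract is omitted for brevity.

# Minimal sketch (not the authors' code) of weight sharing over depth,
# kernel, and width in a TDNN-like supernet. Choice sets are assumptions.
import random
import torch
import torch.nn as nn
import torch.nn.functional as F


class DynamicConv1d(nn.Module):
    """1-D convolution whose kernel size and channel width are chosen at
    run time by slicing one shared weight tensor (the weight-sharing idea)."""

    def __init__(self, max_in, max_out, max_kernel):
        super().__init__()
        self.max_kernel = max_kernel
        self.weight = nn.Parameter(torch.randn(max_out, max_in, max_kernel) * 0.02)
        self.bias = nn.Parameter(torch.zeros(max_out))

    def forward(self, x, out_ch, kernel):
        in_ch = x.size(1)
        # Centred slice of the shared kernel, leading slice of the channels.
        start = (self.max_kernel - kernel) // 2
        w = self.weight[:out_ch, :in_ch, start:start + kernel]
        return F.conv1d(x, w, self.bias[:out_ch], padding=kernel // 2)


class TDNNSupernet(nn.Module):
    """Supernet with a maximum depth; a subnet is a tuple of
    (depth, kernel size per layer, width per layer)."""

    def __init__(self, feat_dim=80, max_depth=4,
                 width_choices=(256, 384, 512), kernel_choices=(1, 3, 5)):
        super().__init__()
        self.width_choices = width_choices
        self.kernel_choices = kernel_choices
        max_w = max(width_choices)
        self.layers = nn.ModuleList(
            DynamicConv1d(feat_dim if i == 0 else max_w, max_w, max(kernel_choices))
            for i in range(max_depth)
        )

    def sample_subnet(self):
        # Random subnet, as sampled once per training step in one-shot NAS.
        depth = random.randint(2, len(self.layers))
        return {
            "depth": depth,
            "kernels": [random.choice(self.kernel_choices) for _ in range(depth)],
            "widths": [random.choice(self.width_choices) for _ in range(depth)],
        }

    def forward(self, x, subnet):
        for i in range(subnet["depth"]):
            x = torch.relu(self.layers[i](x, subnet["widths"][i], subnet["kernels"][i]))
        return x.mean(dim=-1)  # temporal mean pooling -> utterance-level embedding


if __name__ == "__main__":
    net = TDNNSupernet()
    feats = torch.randn(8, 80, 200)   # batch of 80-dim frame-level features
    cfg = net.sample_subnet()         # a different subnet each step
    emb = net(feats, cfg)
    print(cfg, emb.shape)

Because every subnet reuses slices of the same weights, training the supernet amortizes the cost of evaluating many candidate architectures; the searched subnet can then be selected to fit a target MACs budget, which is the trade-off reported in the abstract.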

Citation

Wang, R., Wei, Z., Duan, H., Ji, S., Long, Y., & Hong, Z. (2022). EfficientTDNN: Efficient Architecture Search for Speaker Recognition. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 30, 2267-2279. https://doi.org/10.1109/taslp.2022.3182856

Journal Article Type Article
Online Publication Date Jun 17, 2022
Publication Date 2022
Deposit Date Sep 14, 2022
Publicly Available Date Sep 14, 2022
Journal IEEE/ACM Transactions on Audio, Speech, and Language Processing
Print ISSN 2329-9290
Electronic ISSN 2329-9304
Publisher Institute of Electrical and Electronics Engineers (IEEE)
Peer Reviewed Peer Reviewed
Volume 30
Pages 2267-2279
DOI https://doi.org/10.1109/taslp.2022.3182856
Public URL https://durham-repository.worktribe.com/output/1191908

Files

Accepted Journal Article (2.2 MB)
PDF

Copyright Statement
© 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.





