An accurate head-related transfer function (HRTF) can improve a subject's auditory localization performance. This paper proposes a deep neural network model that reconstructs the HRTF from anthropometric parameters and the orientation of the sound source. The proposed model consists of three subnetworks: a one-dimensional convolutional neural network (1D-CNN) that processes the anthropometric parameters as input features, and a second network that encodes the sound-source position. The outputs of these two networks are then merged and fed to a third network that estimates the HRTF. Both an objective and a subjective method are used to evaluate the proposed approach. For the objective evaluation, the root mean square error (RMSE) between the estimated and measured HRTFs is calculated; the results show that the proposed method outperforms both a database-matching method and a deep-neural-network-based method. In addition, a sound localization test performed for the subjective evaluation shows that the proposed method localizes sound sources more accurately than either the KEMAR dummy-head HRTF or the DNN-based method. Together, the objective and subjective results show that the personalized HRTFs obtained with the proposed method perform well in HRTF reconstruction.
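The three-subnetwork layout described above can be sketched as a single forward pass: one branch convolves the anthropometric parameters, a second branch encodes the source position, and a merge head maps the concatenated features to an HRTF estimate, which is then scored with the RMSE used in the objective evaluation. This is a minimal NumPy sketch with untrained random weights; all dimensions (8 anthropometric parameters, a 64-bin HRTF, 4 convolution filters of width 3) are hypothetical and do not come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def conv1d(x, kernels):
    # Valid-mode 1-D convolution of the input vector with each filter.
    return np.stack([np.convolve(x, k, mode="valid") for k in kernels])

# Hypothetical inputs: 8 anthropometric parameters and an
# (azimuth, elevation) pair in degrees.
anthro = rng.normal(size=8)
position = np.array([30.0, 0.0])

# Branch 1: 1D-CNN over the anthropometric parameters.
kernels = rng.normal(size=(4, 3))                    # 4 filters of width 3
feat_anthro = relu(conv1d(anthro, kernels)).ravel()  # 4 * (8-3+1) = 24 features

# Branch 2: dense layer encoding the (normalized) source position.
W_pos = rng.normal(size=(8, 2))
feat_pos = relu(W_pos @ (position / 90.0))           # 8 features

# Merge head: concatenate both branches and estimate a 64-bin HRTF.
merged = np.concatenate([feat_anthro, feat_pos])     # 32 features
W_out = rng.normal(size=(64, merged.size))
hrtf_est = W_out @ merged

# Objective evaluation: RMSE against a (here synthetic) measured HRTF.
hrtf_meas = rng.normal(size=64)
rmse = float(np.sqrt(np.mean((hrtf_est - hrtf_meas) ** 2)))
print(hrtf_est.shape, rmse >= 0.0)
```

In a trained model the weights would of course be learned end-to-end; the sketch only illustrates how the two input branches are merged before the final HRTF-estimation stage.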
Acoustics Australia – Springer Journals
Published: Nov 5, 2020