Loudspeaker System Identification Based on Adaptive Gradient Descent Algorithm
Abstract: In loudspeaker model parameter identification, the conventional fixed-step gradient descent algorithm is time-consuming and often unstable when the initial parameter error is large. A variable-step gradient descent algorithm is therefore proposed for identifying loudspeaker system parameters in the frequency domain. The method monitors the trend of the parameter identification process and adaptively adjusts the corresponding learning rate, eliminating the need for manual tuning. In addition, since directly computing the gradient of the complex model is difficult, a central-difference scheme is employed to approximate it. A moving-coil loudspeaker model is established, and under different initial values and iteration-error termination criteria the convergence and identification performance of the fixed-step method, the least-squares method, and the adaptive-step method are compared; micro-loudspeakers are then used for experimental verification. Simulations and experiments show that the proposed method is more efficient and more robust to initial errors.
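The scheme described above, with per-parameter learning rates adjusted from the trend of successive gradients and gradients approximated by central differences, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the impedance model here is a deliberately simplified stand-in (series resistance plus ideal voice-coil inductance) rather than the full moving-coil model, and all parameter names and values are illustrative.

```python
import numpy as np

# Toy frequency-domain "measurement": |Z(f)| = sqrt(Re^2 + (2*pi*f*Le)^2).
# Re and Le are the two parameters to identify (illustrative values only).
f = np.linspace(10.0, 1000.0, 200)        # frequency grid / Hz
true_theta = np.array([7.4, 4.7e-4])      # [Re/ohm, Le/H]

def model(theta):
    Re, Le = theta
    return np.sqrt(Re**2 + (2.0 * np.pi * f * Le)**2)

z_meas = model(true_theta)                # noiseless synthetic measurement

def cost(theta):
    # mean relative squared error between modeled and measured |Z|
    return np.mean(((model(theta) - z_meas) / z_meas) ** 2)

def central_diff_grad(theta, h=1e-6):
    # central-difference approximation of the cost gradient:
    # g_i ~= (E(theta + h*e_i) - E(theta - h*e_i)) / (2h)
    g = np.zeros_like(theta)
    for i in range(theta.size):
        step = h * max(abs(theta[i]), 1.0)
        tp, tm = theta.copy(), theta.copy()
        tp[i] += step
        tm[i] -= step
        g[i] = (cost(tp) - cost(tm)) / (2.0 * step)
    return g

def adaptive_gd(theta0, lr0=1e-3, up=1.2, down=0.5, lr_max=0.1,
                tol=1e-12, max_iter=5000):
    # Per-parameter learning rates: grown while the descent direction is
    # stable, shrunk when the gradient sign flips (Rprop-style trend
    # monitoring, standing in for the paper's adaptive-step rule).
    theta = np.asarray(theta0, dtype=float).copy()
    lr = np.full_like(theta, lr0)
    g_prev = np.zeros_like(theta)
    for _ in range(max_iter):
        g = central_diff_grad(theta)
        grew = g * g_prev > 0           # same direction: accelerate
        flipped = g * g_prev < 0        # overshoot: decelerate
        lr[grew] = np.minimum(lr[grew] * up, lr_max)
        lr[flipped] *= down
        theta -= lr * np.sign(g) * np.abs(theta)  # scale-aware signed step
        g_prev = g
        if cost(theta) < tol:
            break
    return theta

theta_hat = adaptive_gd(np.array([6.0, 6.0e-4]))  # deliberately wrong start
```

Stepping proportionally to each parameter's own magnitude keeps the update scale-aware, which matters here because Re (ohms) and Le (henries) differ by several orders of magnitude.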
Key words:
- metrology /
- loudspeakers /
- adaptive algorithms /
- gradient descent algorithm /
- Adam algorithm /
- system identification
Table 1. Initial parameter settings for the simulation model

|    | Bl/(N/A) | Cms/(m/N) | Le/H   | Rms/(kg/s) | Mms/kg | Re/Ω | R2/Ω | L2/H   | Creep | fmin/Hz |
| R1 | 0.8      | 5e-4      | 4.7e-5 | 0.1        | 1e-4   | 7.4  | 0.15 | 5e-5   | 0.2   | 280     |
| P1 | 0.6      | 2e-3      | 6e-5   | 0.3        | 5e-5   | 6.5  | 0.16 | 6e-5   | 0.15  | 300     |
| P2 | 0.4      | 2e-3      | 6e-5   | 0.3        | 5e-5   | 6    | 0.2  | 1e-5   | 0.1   | 300     |

Table 2. Identification parameter results using three methods

| Parameter         | Original | Adaptive gradient | Fixed step | Least squares |
| Bl/(N/A)          | 0.8      | 0.8007            | 0.6625     | 0.8027        |
| Cms/(m/N)         | 5e-4     | 4.99e-4           | 5.82e-4    | 5.42e-4       |
| Le/H              | 4.7e-5   | 4.70e-5           | 6.16e-5    | 4.70e-5       |
| Rms/(kg/s)        | 0.1      | 0.1001            | 0.2202     | 0.1006        |
| Mms/kg            | 1e-4     | 1e-4              | 8.35e-5    | 1e-4          |
| Re/Ω              | 7.4      | 7.4006            | 6.6173     | 7.4009        |
| R2/Ω              | 0.15     | 0.1488            | 0.1611     | 0.1494        |
| L2/H              | 5e-5     | 4.99e-5           | 2.94e-4    | 4.82e-5       |
| Creep             | 0.2      | 0.2004            | 0.1480     | 0.1896        |
| fmin/Hz           | 280      | 281.88            | 300        | 102.13        |
| $\xi_{Z_e}$/%     |          | 0.0132            | 12.98      | 0.0126        |
| $\xi_{H_x}$/%     |          | 0.0101            | 21.22      | 0.2080        |

Table 3. Initial parameter settings for the measurement identification

|    | Bl/(N/A) | Cms/(m/N) | Le/H | Rms/(kg/s) | Mms/kg | Re/Ω | R2/Ω | L2/H   | Creep | fmin/Hz |
| T1 | 0.9      | 3.33e-4   | 5e-5 | 0.1        | 6e-5   | 7    | 0.5  | 1.5e-5 | 0.12  | 800     |
| T2 | 0.9      | 2.86e-4   | 5e-5 | 0.1        | 6e-5   | 7    | 0.5  | 1.5e-5 | 0.12  | 800     |

Table 4. Identification results of measured parameters

| Method          | Bl/(N/A) | Cms/(m/N) | Le/H    | Rms/(kg/s) | Mms/kg  | $\xi_{Z_e}$/% |
| Proposed method | 1.1813   | 2.88e-4   | 6.25e-5 | 0.1174     | 1.25e-4 | 0.7751        |
| Least squares   | 1.1953   | 2.86e-4   | 6.12e-5 | 0.1195     | 1.32e-4 | 2.7983        |

| Method          | Re/Ω     | R2/Ω      | L2/H    | Creep      | fmin/Hz | $\xi_{H_e}$/% |
| Proposed method | 7.1960   | 0.8044    | 1.24e-5 | 0.1584     | 1360    | 0.6411        |
| Least squares   | 7.2040   | 0.9035    | 1.31e-5 | 0.2070     | 765     | 2.8394        |
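The ξ values in the tables above quantify how closely the identified model reproduces the measured electrical impedance and transfer function. This excerpt does not define ξ, so the sketch below uses one plausible convention (relative 2-norm error as a percentage); the actual paper may normalize differently.

```python
import numpy as np

def fit_error_percent(measured, modeled):
    # Assumed definition: xi = 100 * ||measured - modeled||_2 / ||measured||_2,
    # evaluated over the frequency grid (one common relative-error convention).
    measured = np.asarray(measured, dtype=float)
    modeled = np.asarray(modeled, dtype=float)
    return 100.0 * np.linalg.norm(measured - modeled) / np.linalg.norm(measured)
```

Under this convention a perfect fit gives ξ = 0 %, and the small ξ values reported for the adaptive method (below 1 %) would indicate near-overlapping measured and modeled curves.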