Gradient-based neuro-fuzzy learning algorithms for TS systems
Posted: October 12, 2021, 14:59

Speaker: Liu Yan (刘燕)

Venue: Tencent Meeting, ID: 889 864 8458

Time: Thursday, October 14, 2021, 14:30–15:30

Host: Xu Dongpo (徐东坡)

Abstract:

It has been proven that Takagi–Sugeno systems are universal approximators, and they are widely applied to classification and regression problems. The main challenges for these models are convergence analysis and computational complexity, owing to the large number of connections and the need to prune unnecessary parameters. The L1/2 regularizer has a particular capacity for sparsity and is representative of Lq (0 < q < 1) regularizations. In this report, we propose a gradient-based neuro-fuzzy learning algorithm with a smoothing L1/2 regularization for the first-order Takagi–Sugeno fuzzy inference system. The proposed approach performs better by pruning inactive connections, removing more redundant connections than the original L1/2 regularizer, and it is implemented through simultaneous structure and parameter learning. We focus explicitly on the theoretical convergence analysis of this learning method. We also provide a series of simulations to demonstrate that the smoothing L1/2 regularization can often obtain more compressed representations than the current L1/2 regularization.
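The following is a minimal, illustrative sketch of the kind of training step the abstract describes: a first-order Takagi–Sugeno system with Gaussian memberships whose consequent parameters are updated by gradient descent on a squared error plus a smoothed L1/2 penalty. The quartic smoothing of the absolute value, the names (smooth_abs, a, lam, lr), and the choice to keep the antecedent parameters fixed are assumptions made for brevity; the talk's algorithm learns structure and parameters simultaneously and comes with a convergence analysis that this toy code does not reproduce.

```python
import numpy as np

# Illustrative sketch (not the speaker's exact algorithm): gradient descent
# for a first-order Takagi-Sugeno system with a smoothed L1/2 penalty on the
# consequent parameters. The quartic smoothing of |w| below is one common
# choice; 'a', 'lam', and 'lr' are hypothetical hyperparameters.

def smooth_abs(w, a=0.05):
    """Smooth surrogate for |w|: quartic polynomial on |w| < a, |w| elsewhere."""
    poly = -w**4 / (8 * a**3) + 3 * w**2 / (4 * a) + 3 * a / 8
    return np.where(np.abs(w) >= a, np.abs(w), poly)

def smooth_abs_grad(w, a=0.05):
    """Derivative of the smooth surrogate for |w|."""
    poly_grad = -w**3 / (2 * a**3) + 3 * w / (2 * a)
    return np.where(np.abs(w) >= a, np.sign(w), poly_grad)

def ts_forward(x, centers, sigmas, B):
    """First-order TS output for one input x of shape (n,).

    centers, sigmas: (K, n) Gaussian membership parameters for K rules.
    B: (K, n+1) consequent coefficients [bias, linear terms]."""
    mu = np.exp(-((x - centers) ** 2) / (2 * sigmas ** 2))  # memberships (K, n)
    w = np.prod(mu, axis=1)                                  # firing strengths (K,)
    w_norm = w / (np.sum(w) + 1e-12)                         # normalized weights
    y_rules = B[:, 0] + B[:, 1:] @ x                         # rule outputs y_k
    return w_norm @ y_rules, w_norm

def train_step(X, y, centers, sigmas, B, lr=0.05, lam=1e-3):
    """One batch gradient step on the consequent matrix B only
    (antecedent parameters are kept fixed in this sketch)."""
    grad_B = np.zeros_like(B)
    for x_i, y_i in zip(X, y):
        y_hat, w_norm = ts_forward(x_i, centers, sigmas, B)
        err = y_hat - y_i
        # d y_hat / d B[k, 0] = w_norm[k],  d y_hat / d B[k, j] = w_norm[k] * x[j-1]
        grad_B[:, 0] += err * w_norm
        grad_B[:, 1:] += err * np.outer(w_norm, x_i)
    grad_B /= len(X)
    # Smoothed L1/2 penalty: d/dw sqrt(f(w)) = f'(w) / (2 sqrt(f(w))), with f(w) > 0.
    grad_B += lam * smooth_abs_grad(B) / (2.0 * np.sqrt(smooth_abs(B)))
    return B - lr * grad_B

# Toy usage: K = 4 rules on a 2-D regression problem.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]
centers = rng.uniform(-1, 1, size=(4, 2))
sigmas = np.full((4, 2), 0.5)
B = rng.normal(scale=0.1, size=(4, 3))
for _ in range(500):
    B = train_step(X, y, centers, sigmas, B)
# Consequent entries driven close to zero mark connections a pruning rule could remove.
```

After training, consequent entries that the penalty drives toward zero indicate the redundant connections that a pruning step would remove, which is the sparsity effect the abstract attributes to the smoothing L1/2 regularization.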

Meeting password: 115119

About the speaker:

Liu Yan is a professor and master's supervisor at Dalian Polytechnic University. Her recent research focuses on fuzzy systems, algorithm construction, and convergence analysis, with applications of these theoretical results to areas such as texture analysis of seafood products. Over the past five years her work has appeared in international journals including Fuzzy Sets and Systems and Neural Processing Letters. She has led a National Natural Science Foundation of China Young Scientists Fund project, a China Postdoctoral Science Foundation general project, and a Dalian Young Science and Technology Star project, and has participated in a sub-project of a National Key R&D Program and an NSFC–Liaoning Joint Fund project.
