Recently, we have witnessed the success of total variation (TV) in many imaging applications. However, traditional TV is defined on the original pixel domain, which limits its potential. In this work, we propose a new TV regularization defined on the neural domain. Concretely, the discrete data is continuously and implicitly represented by a deep neural network (DNN), and we use the derivatives of the DNN outputs w.r.t. the input coordinates to capture local correlations in the data. As compared with classical TV on the original domain, the proposed TV on the neural domain (termed NeurTV) enjoys two advantages. First, NeurTV is not limited to meshgrid data but is suitable for both meshgrid and non-meshgrid data. Second, NeurTV can more accurately capture local correlations across the data along any direction and at any order of derivatives, owing to the implicit and continuous nature of the neural domain. We theoretically reinterpret NeurTV under the variational approximation framework, which allows us to build a connection between classical TV and NeurTV and inspires us to develop variants (e.g., NeurTV with arbitrary resolution and space-variant NeurTV). Extensive numerical experiments with meshgrid data (e.g., color and hyperspectral images) and non-meshgrid data (e.g., point clouds and spatial transcriptomics) showcase the effectiveness of the proposed methods.
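As a rough illustration of the core idea only (not the authors' implementation), the sketch below represents data implicitly with a tiny MLP f(x, y) and penalizes the magnitude of its derivatives with respect to the input coordinates; finite differences stand in for automatic differentiation, and the network architecture, sampling, and all names are hypothetical. Because the penalty is evaluated at arbitrary coordinates, the same code applies to meshgrid and non-meshgrid samples alike.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny MLP f: R^2 -> R serving as a continuous, implicit representation.
# (Weights are random here; in practice they would be fit to the data.)
W1, b1 = rng.standard_normal((2, 32)), np.zeros(32)
W2, b2 = rng.standard_normal((32, 1)), np.zeros(1)

def f(coords):
    """coords: (N, 2) array of (x, y) inputs; returns (N,) output values."""
    h = np.tanh(coords @ W1 + b1)
    return (h @ W2 + b2).ravel()

def neurtv_penalty(coords, eps=1e-4):
    """Anisotropic TV-style penalty on the neural domain:
    mean of |df/dx| + |df/dy| at the sampled coordinates,
    with derivatives approximated by central finite differences."""
    dx = np.array([eps, 0.0])
    dy = np.array([0.0, eps])
    fx = (f(coords + dx) - f(coords - dx)) / (2 * eps)
    fy = (f(coords + dy) - f(coords - dy)) / (2 * eps)
    return np.mean(np.abs(fx) + np.abs(fy))

# Meshgrid samples (e.g., image pixel centers)...
xs, ys = np.meshgrid(np.linspace(0, 1, 8), np.linspace(0, 1, 8))
grid = np.stack([xs.ravel(), ys.ravel()], axis=1)
# ...and non-meshgrid samples (e.g., a point cloud) are handled identically.
cloud = rng.uniform(0, 1, size=(64, 2))

print(neurtv_penalty(grid), neurtv_penalty(cloud))
```

In a training loop, this penalty would be added to a data-fitting loss and minimized over the network weights; swapping the finite differences for autograd also gives access to arbitrary directions and higher-order derivatives, as the abstract describes.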
Xi-Le Zhao (赵熙乐) is a professor and doctoral supervisor at the University of Electronic Science and Technology of China (UESTC) and deputy secretary-general of the China Society for Industrial and Applied Mathematics. He has been selected for UESTC's Hundred Talents Program and as a reserve candidate for Sichuan Province's Academic and Technical Leaders. He has authored two chapters in academic monographs published by Elsevier and Science Press and, as first or corresponding author, has published more than 60 papers in high-level journals and conferences, including the leading SIAM journals in applied mathematics, the leading IEEE journals in image processing (TPAMI, TIP, TNNLS, TCYB, TCI, and TGRS), and top artificial intelligence conferences such as CVPR and AAAI. He has led General and Young Scientists projects of the National Natural Science Foundation of China and Applied Basic Research projects of Sichuan Province. His research has received a First Prize of the Sichuan Science and Technology Progress Award (in both the natural science and science-and-technology-progress categories), a Second Prize in the Excellent Young Researchers' Paper Competition of the China Society for Computational Mathematics, First Prizes for excellent papers at both the first and second Sichuan-Chongqing Science and Technology Conferences, and a Second Prize of the inaugural Applied Mathematics Award of the Sichuan Mathematical Society.