We consider relative error low rank approximation of tensors with respect to the Frobenius norm: given an order-q tensor A, output a rank-k tensor B for which |A-B|_F <= (1+eps)OPT, where OPT = inf_{rank-k A'} |A-A'|_F. Despite much success in obtaining relative error low rank approximations for matrices, no such results were known for tensors. One structural issue is that there may be no rank-k tensor A' achieving the above infimum. Another, computational, issue is that an efficient relative error low rank approximation algorithm for tensors would allow one to compute the tensor rank, which is NP-hard. We bypass these issues via (1) bicriteria and (2) parameterized complexity solutions. As an example, we give an algorithm which outputs a rank-(k/eps)^{q-1} tensor B for which |A-B|_F <= (1+eps)OPT in nnz(A) + n*poly(k/eps) time in the real RAM model for an n x n x ... x n tensor. Here nnz(A) is the number of non-zero entries in A. For outputting a rank-k tensor, or even a bicriteria solution with rank Ck for a certain constant C > 1, we show an exp(k^{1-o(1)}) time lower bound under the Exponential Time Hypothesis. Our results also give the first relative error low rank approximations for tensors for a large number of robust error measures for which nothing was known, as well as for column, row, and tube subset selection problems, generalizing and strengthening results for CUR decompositions of matrices. Based on joint work with Zhao Song (UT Austin) and Peilin Zhong (Columbia).
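To make the objective concrete, here is a minimal NumPy sketch of Frobenius-norm low rank tensor approximation. This is not the algorithm from the talk: it is a standard truncated HOSVD (Tucker-style multilinear rank truncation), which is only quasi-optimal rather than (1+eps)-relative-error, and the function name `hosvd_truncate` is illustrative.

```python
import numpy as np

def hosvd_truncate(A, k):
    """Truncate an order-3 tensor A to multilinear rank (k, k, k).

    Illustrative only: classical truncated HOSVD, not the paper's
    bicriteria algorithm. Projects each mode onto the top-k left
    singular subspace of the corresponding unfolding.
    """
    factors = []
    for mode in range(3):
        # Unfold A along `mode`: rows indexed by that mode.
        unfolding = np.moveaxis(A, mode, 0).reshape(A.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(U[:, :k])
    # Project A onto the leading singular subspace of each mode.
    B = A
    for mode, U in enumerate(factors):
        B = np.moveaxis(
            np.tensordot(U @ U.T, np.moveaxis(B, mode, 0), axes=1), 0, mode
        )
    return B

rng = np.random.default_rng(0)
n, k = 20, 3
# Build a tensor of multilinear rank (k, k, k) plus small noise.
G = rng.standard_normal((k, k, k))
U = [rng.standard_normal((n, k)) for _ in range(3)]
A = np.einsum('abc,ia,jb,kc->ijk', G, *U) + 1e-6 * rng.standard_normal((n, n, n))
B = hosvd_truncate(A, k)
err = np.linalg.norm(A - B) / np.linalg.norm(A)
```

On such a nearly low rank input, the relative Frobenius error `err` is tiny; for worst-case inputs, HOSVD only guarantees error within a sqrt(q) factor of OPT, which is exactly the gap the relative error results in the abstract close.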
David Woodruff received his bachelor's, master's, and Ph.D. degrees from the Massachusetts Institute of Technology (MIT). He is a tenured professor in the Computer Science Department at Carnegie Mellon University (CMU), where he founded and chairs the CMU data science program. He previously spent 10 years at the IBM Almaden Research Center and served as a chief scientist of the IBM Academy of Technology. He received Best Paper Awards at STOC 2013 and PODS 2010, as well as the EATCS Presburger Award (recognizing the most outstanding young scientist in computer science each year). His research interests include machine learning, numerical linear algebra, randomized algorithms, and sparse recovery.