PaperHub
Score: 7.2/10 · Poster · 4 reviewers
Reviewer ratings: 5, 3, 4, 3 (min 3, max 5, std 0.8)
ICML 2025

Tensorized Multi-View Multi-Label Classification via Laplace Tensor Rank

OpenReview · PDF
Submitted: 2025-01-13 · Updated: 2025-07-24

Abstract

Keywords
Multi-view learning · Multi-label learning

Reviews and Discussion

Review
Rating: 5

In this paper, the authors propose a novel approach that introduces a low-rank tensor classifier combined with the innovative Laplace Tensor Rank (LTR), which jointly captures high-order feature correlations and label dependencies. Extensive experiments across six benchmark datasets demonstrate TMvML’s superior performance.

Questions for the Authors

See the above weaknesses.

Claims and Evidence

Yes, the central claims are supported by evidence. Extensive experiments across six benchmark datasets demonstrate TMvML’s superior performance. The superiority of LTR over existing tensor rank approximations is validated through ablation studies.

Methods and Evaluation Criteria

Yes, the methodological design is reasonable. The rotation operation on the tensor classifier is a clever design choice, enabling the exploration of interactions between different views and labels through frontal slice comparisons.

Theoretical Claims

Yes, the proof of Theorem 3.1 is correct.

Experimental Design and Analysis

Yes, the experimental design is reasonable. The use of six widely adopted MVML datasets and five standard metrics ensures a comprehensive and fair comparison. The inclusion of statistical tests further strengthens the reliability of the results.

Supplementary Material

Yes, the code is provided in the supplementary material.

Relation to Prior Work

TMvML builds on prior work in tensor-based methods and multi-label classification. The paper extends the principles of low-rank representation to the MVML setting, leveraging a novel tensorized classifier to capture high-order correlations across views and labels.

Missing Essential References

All relevant works critical to understanding the main contribution of the method are cited in the paper.

Other Strengths and Weaknesses

Strengths:

(1) This work innovatively leverages a concise low-rank MVML tensor classifier to excavate cross-view feature correlations and characterize multi-label semantic relationships simultaneously. The whole paper is well organized and easy to understand.

(2) This paper designs a new Laplace Tensor Rank, which preserves larger singular values and discards smaller ones to obtain an accurate low-rank tensor representation. Such a component is a welcome contribution to the multi-view community.

Weaknesses:

(1) Figure 2 is ambiguous: the vertical and horizontal axes are unclearly labeled. For example, it is not explained why the true rank is 3 or why the singular values grow progressively from 0 to 9.

(2) Some expressions are imprecise. For example, the statement that the matrix S "is designed to ensure the multi-view representation is predictive corresponding to the known labels" is not clear.

Other Comments or Suggestions

See the above weaknesses.

Author Response

Thank you for the feedback on our paper. We appreciate the time and effort you have put into reviewing our work. In this rebuttal, we respond to the concerns raised in the reviews.

W1: In this figure, we tested the ability of multiple low-rank tensor norms (including TNN, ETR, LTSpN, and LTR) to approximate the true rank of three-dimensional tensors. Specifically, we constructed a series of three-dimensional tensors with a fixed true rank of 3, while varying singular values that progressively increase from 0 to 9. The horizontal axis represents the singular values, while the vertical axis represents the approximated value of the rank function. This setup allows us to evaluate how well each method approximates the true rank under different singular value distributions.

We will revise the figure to include a detailed caption explaining the experimental setup. We hope these changes address the reviewer’s concerns.
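To make this setup concrete, here is a minimal numerical sketch (synthetic singular values; the surrogate formula follows the LTR definition given later in this discussion, and δ = 1 is an illustrative choice, not the paper's setting) of how a Laplace-type rank surrogate stays near the true rank while the nuclear norm grows with the singular values:

```python
import numpy as np

def ltr_surrogate(singular_values, delta=1.0):
    """Sum of 1 - exp(-e^delta * sigma / delta) over the singular values."""
    s = np.asarray(singular_values, dtype=float)
    return float(np.sum(1.0 - np.exp(-(np.exp(delta) * s) / delta)))

true_rank = 3
results = {}
for scale in [1.0, 5.0, 9.0]:            # singular values growing from small to large, as in the figure
    sigma = np.full(true_rank, scale)    # a rank-3 spectrum
    results[scale] = {
        "tnn": float(sigma.sum()),       # nuclear norm: grows linearly with the scale
        "ltr": ltr_surrogate(sigma),     # stays close to the true rank as values grow
    }
```

The nuclear norm at scale 9 is three times its value at scale 3, whereas the Laplace surrogate saturates near the true rank of 3.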

W2: To clarify, our learning process follows a transductive learning paradigm, where the model leverages both labeled and unlabeled data during training but uses only the labels from the training set for supervision. The matrix $\mathbf{S}$ is a filtering matrix designed to ensure that the optimization process utilizes only the label information from the training set, while excluding any label information from the test set. Specifically, $\mathbf{S}$ is defined as a diagonal matrix:

$$\mathbf{S}_{ii}=\begin{cases}1 & \text{if the } i\text{-th sample belongs to the training set,}\\ 0 & \text{otherwise (the sample belongs to the test set).}\end{cases}$$
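As a minimal sketch of how such a filtering matrix operates (the variable names and toy sizes are illustrative, not from the paper's code):

```python
import numpy as np

def build_filter_matrix(is_train):
    """Diagonal filter S with S_ii = 1 for training samples and 0 for test samples."""
    return np.diag(np.asarray(is_train, dtype=float))

# toy transductive split: 5 samples, the first 3 labeled (training set)
is_train = [1, 1, 1, 0, 0]
S = build_filter_matrix(is_train)

# Y stacks label vectors for all samples; S @ Y zeroes out test-set rows,
# so the supervised loss only "sees" training labels
Y = np.arange(10.0).reshape(5, 2)   # 5 samples, 2 labels
masked = S @ Y                      # test-set rows become zero
```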

Review
Rating: 3

This paper proposes a method named TMvML for multi-view multi-label learning (MVML). The approach includes a low-rank tensor classifier that captures consistent correlations across views while modeling complex multi-label relationships. Additionally, a new Laplace Tensor Rank (LTR) is introduced to capture higher-order correlations within the tensor space. This approach leads to significant improvements in MVML, as demonstrated by extensive experiments.

Questions for the Authors

I would like to learn about the authors' response to the weaknesses listed above, which may give me a clearer perspective on the paper's contribution.

Claims and Evidence

The paper's main claims are supported by convincing evidence. Extensive experiments on six datasets demonstrate TMvML’s superiority over state-of-the-art methods across multiple metrics.

Methods and Evaluation Criteria

The proposed methods make sense for the problem. TMvML innovatively leverages a tensor classifier to encode high-order correlations across both views and labels, while the Laplace Tensor Rank (LTR) constraint effectively balances the preservation of critical semantic relationships with the suppression of noise. The experimental evaluation utilizes widely recognized MVML benchmark datasets and state-of-the-art baseline methods.

Theoretical Claims

I checked the correctness of the proofs for theoretical claims, including the theorems and proofs related to the effectiveness of LTR and closed-form solution in optimization.

Experimental Design and Analysis

I checked the validity of the experimental designs and analyses. Extensive experiments are conducted on six widely used MVML benchmark datasets, with results averaged over multiple runs to ensure statistical reliability. The remaining issues are listed below under Weaknesses.

Supplementary Material

I reviewed the supplementary material, which includes the code for the proposed method and the code can reproduce the experimental results.

Relation to Prior Work

The method TMvML is the first attempt to utilize a tensorized low-rank classifier for the MVML problem.

Missing Essential References

There are no related works that are not currently discussed in the paper.

Other Strengths and Weaknesses

The paper proposes a Tensorized Multi-View Multi-Label Classification method (TMvML), which is the first attempt to use a tensorized low-rank MVML classifier to extract high-order feature correlations and characterize multi-label semantic correlations simultaneously. Meanwhile, the paper designs a new Laplace Tensor Rank (LTR), which serves as a tighter surrogate of the tensor rank, effectively capturing high-order fiber correlations.

There are also some weaknesses:

  1. In tensor-based methods, the Tensor Nuclear Norm (TNN) is commonly used to capture the low-rank structure of tensors [1,2]. The proposed Laplace Tensor Rank (LTR) should be compared with TNN in the experiments to better highlight its effectiveness and potential advantages.

  2. The proposed tensor classifier construction involves merging view-specific mapping matrices and rotating the resulting tensor to align label-view interactions. While this rotating design is theoretically motivated, ablation studies would strengthen the claim that rotation is essential for capturing label consistency and view correlations. Such experiments would provide concrete evidence of the rotation operation’s contribution to the method’s overall performance.

  3. In Fig. 6, the axis-label font should be enlarged.

[1] Zhao S, Wen J, Fei L, et al. Tensorized incomplete multi-view clustering with intrinsic graph completion[C] // Proceedings of the AAAI Conference on Artificial Intelligence. 2023, 37(9): 11327-11335.

[2] Zhang C, Li H, Lv W, et al. Enhanced tensor low-rank and sparse representation recovery for incomplete multi-view clustering[C] // Proceedings of the AAAI conference on artificial intelligence. 2023, 37(9): 11174-11182.

Other Comments or Suggestions

I would like to learn about the authors' response to the weaknesses listed above, which may give me a clearer perspective on the paper's contribution.

Author Response

Thank you for the feedback on our paper. We appreciate the time and effort you have put into reviewing our work. In this rebuttal, we respond to the concerns raised in the reviews.

W1: We agree that comparing the proposed Laplace Tensor Rank (LTR) with the widely used Tensor Nuclear Norm (TNN) is essential to highlight the advantages of our method. In fact, we have already performed a comparison of the LTR function with the TNN function and the results are shown in Fig. 3. According to the figure, LTR provides a tighter approximation to the true rank function compared to TNN, especially for larger singular values. Theoretically, LTR’s nonconvex formulation more aggressively suppresses small (noise-corrupted) singular values while preserving larger (signal-carrying) ones, leading to a more accurate low-rank representation. This property is not fully captured by TNN, which tends to over-penalize larger singular values due to its convex nature.

However, we fully agree with the reviewer that comparing LTR with TNN in the experiments would better highlight its effectiveness. Thus, we compared TMvML with its variant TMvML-TNN, where the LTR was replaced by TNN to capture the low-rank tensor structure, and we report the results for the numerical experiments:

| Method | Metric | Emotions | Yeast | Corel5k | Plant | Espgame | Human |
| --- | --- | --- | --- | --- | --- | --- | --- |
| TMvML | AP | 0.811±0.020 | 0.771±0.008 | 0.440±0.008 | 0.608±0.007 | 0.306±0.001 | 0.631±0.010 |
| TMvML-TNN | AP | 0.738±0.014 | 0.747±0.014 | 0.382±0.015 | 0.601±0.017 | 0.270±0.020 | 0.621±0.007 |
| TMvML | Cov | 0.300±0.070 | 0.460±0.002 | 0.266±0.006 | 0.169±0.013 | 0.409±0.008 | 0.150±0.003 |
| TMvML-TNN | Cov | 0.344±0.032 | 0.467±0.009 | 0.279±0.003 | 0.171±0.012 | 0.452±0.009 | 0.162±0.005 |

Experimental results demonstrate that TMvML consistently outperforms TMvML-TNN across all datasets. This compelling evidence proves that our proposed LTR is more effective than traditional TNN in modeling the complex high-order correlations in multi-view multi-label learning tasks, particularly in preserving discriminative singular values while suppressing noise-corrupted ones.

W2: We agree that an ablation study validating the rotation operation in the tensor classifier construction would provide concrete evidence of the rotation's contribution to capturing label consistency and view correlations. We compared TMvML with a variant that removes the rotation operation (denoted as TMvML-NoRot), and the results are summarized below:

| Method | Metric | Emotions | Yeast | Corel5k | Plant | Espgame | Human |
| --- | --- | --- | --- | --- | --- | --- | --- |
| TMvML | AP | 0.811±0.020 | 0.771±0.008 | 0.440±0.008 | 0.608±0.007 | 0.306±0.001 | 0.631±0.010 |
| TMvML-NoRot | AP | 0.628±0.021 | 0.733±0.021 | 0.231±0.002 | 0.511±0.015 | 0.191±0.001 | 0.532±0.005 |
| TMvML | Cov | 0.300±0.070 | 0.470±0.002 | 0.266±0.006 | 0.169±0.013 | 0.409±0.008 | 0.150±0.003 |
| TMvML-NoRot | Cov | 0.473±0.005 | 0.485±0.004 | 0.350±0.002 | 0.210±0.007 | 0.498±0.000 | 0.185±0.002 |

The results show that TMvML consistently outperforms TMvML-NoRot across all datasets. This significant performance gap highlights the critical role of rotation in simultaneously extracting cross-view consistent correlations and multi-label semantic relationships.
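For intuition, here is a hypothetical sketch of the kind of rotation involved (the tensor layout, shapes, and variable names are our illustrative assumptions, not the paper's actual implementation): stacking the view-specific mapping matrices into a tensor and permuting its modes so that each frontal slice gathers all views' predictors for one label.

```python
import numpy as np

# hypothetical sizes: d features, c labels, V views
d, c, V = 8, 4, 3
rng = np.random.default_rng(0)

# view-specific mapping matrices W^v of shape (d, c), stacked into a tensor
W_views = [rng.standard_normal((d, c)) for _ in range(V)]
W = np.stack(W_views, axis=2)        # shape (d, c, V): frontal slices index views

# "rotation": permute modes so frontal slices index labels instead of views,
# letting a slice-wise low-rank penalty compare views within each label
W_rot = np.transpose(W, (0, 2, 1))   # shape (d, V, c): frontal slices index labels
```

Under this layout, a low-rank constraint on the rotated tensor's frontal slices directly couples the different views' predictors of the same label.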

W3: Thank you for pointing this out. We will enlarge the font of the coordinate axes to improve readability and ensure better clarity in the visualization.

Review
Rating: 4

This paper presents a method for Multi-View Multi-Label Learning (TMvML) which utilizes a tensorized MVML classifier to extract high-order feature correlations and characterize multi-label semantic relationships simultaneously. Moreover, a new Laplace Tensor Rank is designed to characterize a better low-rank tensor structure. Experiments show promising results.

update after rebuttal

Thank you for your response. The new explanations and experimental results have strengthened the evaluation. After reading the authors' response, I would like to raise my rating to "Accept".

Questions for the Authors

Are there formal proofs or conditions that ensure the convergence to a stationary point?

Claims and Evidence

The claims are supported by evidence. TMvML’s superiority is evident in its consistent outperformance of existing methods.

Methods and Evaluation Criteria

The method design makes sense. Motivated by the fact that tensors can characterize the low-rank structure of multi-dimensional data, a tensorized MVML classifier is a natural fit for the MVML problem.

Theoretical Claims

The proofs for theoretical claims in optimization are correct.

Experimental Design and Analysis

The experimental designs and analyses are reasonable, with thorough benchmark datasets of varying scales and complexities. The experiment also performs relevant ablation experiments, convergence analysis and hyperparametric analysis.

Supplementary Material

I reviewed the code in the supplementary material.

Relation to Prior Work

This approach aligns with recent efforts to enhance tensor rank approximations, but goes further by integrating multi-view and multi-label learning into a unified framework.

Missing Essential References

All related works are cited or discussed in the paper.

Other Strengths and Weaknesses

Strengths:

  • The article is well organized and well written.

  • The proposed LTR offers a non-convex surrogate for tensor rank.

  • Extensive experiments demonstrate TMvML’s superior performance.

Weaknesses:

  • Recent tensor-based MVML methods should be compared, to judge whether TMvML's gains stem from tensorization itself. A direct comparison with recent tensor-based MVML methods would provide clearer insights into the specific contributions of tensorization and help validate the effectiveness of the proposed framework.

  • The modified Laplace function used in LTR introduces an additional exponential term $e^{\delta}$ compared with the original Laplace function. The authors need to elaborate on the specific advantages of this modification.

  • The paper lacks a dedicated convergence analysis with formal theoretical proofs. Especially, are there formal proofs or conditions ensuring convergence to a stationary point? Addressing these points would enhance the theoretical rigor of the paper.

Other Comments or Suggestions

  • Recent tensor-based MVML methods should be compared, to judge whether TMvML's gains stem from tensorization itself. A direct comparison with recent tensor-based MVML methods would provide clearer insights into the specific contributions of tensorization and help validate the effectiveness of the proposed framework.

  • The modified Laplace function used in LTR introduces an additional exponential term $e^{\delta}$ compared with the original Laplace function. The authors need to elaborate on the specific advantages of this modification.

Author Response

Thank you for the feedback on our paper. We appreciate the time and effort you have put into reviewing our work. In this rebuttal, we respond to the concerns raised in the reviews.

W1(C1): Existing tensor-based methods have primarily been applied to multi-view clustering tasks for mining higher-order feature correlations, while some matrix-based methods employ low-rank constraints to capture label semantic relevance. To the best of our knowledge, our proposed TMvML represents the first attempt to utilize tensor structures for MVML tasks, designed to simultaneously model both multi-view high-order correlations and multi-label co-occurrence patterns. The tensor formulation provides a natural and effective framework for capturing the intrinsic multi-dimensional relationships in MVML data that conventional matrix-based approaches cannot fully characterize.

Although direct comparisons with other tensor methods are unavailable, we validate the effectiveness of our proposed Laplace Tensor Rank (LTR) by comparing TMvML with its variant TMvML-TNN, where we replace LTR with the traditional Tensor Nuclear Norm (TNN) for low-rank tensor structure approximation. Experimental results demonstrate that TMvML consistently outperforms TMvML-TNN across all datasets. This compelling evidence proves that our proposed LTR is more effective than traditional TNN in modeling the complex high-order correlations in multi-view multi-label learning tasks.

| Method | Metric | Emotions | Yeast | Corel5k | Plant | Espgame | Human |
| --- | --- | --- | --- | --- | --- | --- | --- |
| TMvML | AP | 0.811±0.020 | 0.771±0.008 | 0.440±0.008 | 0.608±0.007 | 0.306±0.001 | 0.631±0.010 |
| TMvML-TNN | AP | 0.738±0.014 | 0.747±0.014 | 0.382±0.015 | 0.601±0.017 | 0.270±0.020 | 0.621±0.007 |
| TMvML | Cov | 0.300±0.070 | 0.460±0.002 | 0.266±0.006 | 0.169±0.013 | 0.409±0.008 | 0.150±0.003 |
| TMvML-TNN | Cov | 0.344±0.032 | 0.467±0.009 | 0.279±0.003 | 0.171±0.012 | 0.452±0.009 | 0.162±0.005 |

W2(C2): The introduction of the additional exponential term $e^{\delta}$ in the modified Laplace function, $f_{\mathrm{LTR}}(x)=1-\exp\left(-\frac{e^{\delta} x}{\delta}\right)$, provides several key advantages over the original Laplace function, $f_{\mathrm{Laplace}}(x)=1-\exp\left(-\frac{x}{\delta}\right)$.

  • The modified function offers enhanced flexibility by allowing dynamic adjustment of the growth rate and magnitude through $e^{\delta}$. When $\delta$ is large, $e^{\delta}$ amplifies $x$, making the function grow faster for small singular values. When $\delta$ is small, the effect of $e^{\delta}$ is reduced, and the function behaves similarly to the original Laplace function. This adaptability makes the modified function more versatile in handling different data distributions.

  • The modified function exhibits faster convergence for large values of $x$ due to the exponential scaling $e^{\delta}$, improving optimization efficiency in tasks involving large-scale data or high-dimensional tensors.
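The two penalty functions above can be compared directly in a small numerical sketch (δ = 1 is chosen for illustration only):

```python
import numpy as np

def f_laplace(x, delta=1.0):
    """Original Laplace penalty: 1 - exp(-x/delta)."""
    return 1.0 - np.exp(-np.asarray(x, dtype=float) / delta)

def f_ltr(x, delta=1.0):
    """Modified Laplace penalty with the extra e^delta scaling."""
    return 1.0 - np.exp(-(np.exp(delta) * np.asarray(x, dtype=float)) / delta)

x = np.linspace(0.0, 9.0, 10)
# For delta > 0 we have e^delta > 1, so f_ltr rises faster for small x and
# saturates at 1 sooner, which is what lets it track rank(.) more tightly.
gap = f_ltr(x) - f_laplace(x)
```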

Thank you again for this valuable feedback. Please let us know if there is any additional information we can provide to assist with your evaluation.

W3(Q1): We agree that theoretical guarantees for convergence are critical for ensuring the reliability and robustness of the optimization framework. We have added formal theoretical proofs ensuring convergence to a stationary point and reported the convergence theorem and its detailed proof in our rebuttal to Reviewer miAu.

Review
Rating: 3

This paper proposes a Tensorized Multi-View Multi-Label Classification (TMvML) method to address the limitations of existing approaches that independently model cross-view consistent correlations and multi-label semantic relationships in MVML learning. The method reconstructs multi-view multi-label mapping matrices into a tensor classifier, where tensor rotation and low-rank constraints are jointly applied to unify view-level feature consistency and label-level semantic co-occurrence. Moreover, Laplace Tensor Rank is designed as a tight surrogate of tensor rank to capture high-order fiber correlations. The experimental results demonstrate the effectiveness of the proposed framework.

Questions for the Authors

Please see the points under Weaknesses above.

Claims and Evidence

In this paper, the claims are supported:

  1. TMvML’s superiority over SOTA is validated via experiments (Table 2) and statistical tests (Friedman/Bonferroni-Dunn).

  2. LTR’s effectiveness is justified theoretically and empirically (Figure 3, Figure 5).

Methods and Evaluation Criteria

This paper uses the tensorized classifier for MVML, as tensors naturally model multi-dimensional relationships. The rotation operation cleverly reorients the tensor to align label-view interactions, addressing a key limitation of matrix-based methods. This paper evaluates the proposed framework on widely-used MVML benchmark datasets with five standard evaluation metrics and the results demonstrate the effectiveness of the method.

Theoretical Claims

Theorem 3.1 is correct and clearly proven.

Experimental Design and Analysis

  1. This paper conducts extensive experiments on several datasets, and the experimental results demonstrate the effectiveness of the proposed framework.

  2. Ablation studies and parameter sensitivity tests rigorously validate the contributions of LTR and hyperparameter stability.

Supplementary Material

The code in supplementary material allows for reproducibility and further exploration of the method's implementation.

Relation to Prior Work

The paper builds on foundational works in tensor-based multi-view learning and multi-label classification, but it uniquely integrates these two paradigms through a unified tensorized framework. It applies the tensor framework to the MVML problem for the first time.

Missing Essential References

There are no essential references missing or overlooked in the paper's discussion of related work.

Other Strengths and Weaknesses

Strengths:

  • Originality: This paper applies the tensor framework to the MVML problem for the first time, advancing the field by unifying cross-view consistency and label semantics in a single tensor classifier. This addresses a critical gap in existing MVML methods, which often handle these aspects independently.

  • Experiments: The experiments are sufficient and the effectiveness of the proposed method is substantiated through these experiments.

  • Clarity: This paper is well organized and the proposed method is clearly described. All experimental details are provided and the code is released.

Weaknesses:

  • In Section 5.3, the authors provide a textual analysis of the convergence behavior of TMvML, supported by empirical convergence curves (Figure 7). While the empirical results demonstrate stable convergence across datasets, the paper would benefit from theoretical guarantees to further strengthen the credibility of the optimization process.

Other Comments or Suggestions

Please see the points under Weaknesses above.

Author Response

W1: The convergence of TMvML is guaranteed by Theorem 1, with a comprehensive and rigorous proof given below.

Theorem 1: Let $\{\mathcal{P}_k = (\mathbf{Z}_k^{v}, \mathbf{E}_k^{v}, \mathbf{A}_k^{v}, \mathbf{W}_k^{v}, \mathbf{B}_k^{v}, \mathcal{C}_k, \mathcal{G}_k)\}_{k=0}^{\infty}$ be the sequence generated by Algorithm 1. Then the sequence $\{\mathcal{P}_k\}$ satisfies the following two properties:

1) $\{\mathcal{P}_k\}$ is bounded;

2) any accumulation point of $\{\mathcal{P}_k\}$ is a KKT point of Algorithm 1.

To prove Theorem 1, we first introduce two lemmas.

Lemma 1 [1]: Let $\mathcal{H}$ be a real Hilbert space with inner product $\langle \cdot, \cdot \rangle$, norm $\|\cdot\|$, and dual norm $\|\cdot\|^{dual}$. For $y \in \partial\|x\|$, we have $\|y\|^{dual} = 1$ if $x \neq 0$ and $\|y\|^{dual} \leq 1$ if $x = 0$.

Lemma 2 [2]: Let $F: \mathbb{R}^{m \times n} \to \mathbb{R}$ be defined as $F(\mathbf{X}) = f(\sigma(\mathbf{X}))$, where $\mathbf{X} = \mathbf{U}\,\mathrm{Diag}(\sigma(\mathbf{X}))\,\mathbf{V}^T$ is the SVD of $\mathbf{X}$, $r = \min(m,n)$, and $f: \mathbb{R}^r \to \mathbb{R}$ is differentiable and absolutely symmetric at $\sigma(\mathbf{X})$. Then,

$$\frac{\partial F(\mathbf{X})}{\partial \mathbf{X}} = \mathbf{U}\,\mathrm{Diag}(\partial f(\sigma(\mathbf{X})))\,\mathbf{V}^T,$$

where $\partial f(\sigma(\mathbf{X})) = \left(\frac{\partial f(\sigma_1(\mathbf{X}))}{\partial \sigma_1}, \dots, \frac{\partial f(\sigma_r(\mathbf{X}))}{\partial \sigma_r}\right)$.

Proof of the first part: At iteration $k+1$, the updating rule of $\mathbf{E}_{k+1}^v$ implies the first-order optimality condition:

$$0 = \alpha\,\partial\big\|\mathbf{E}_{k+1}^v\big\|_{2,1} + \mu_k\Big(\mathbf{E}_{k+1}^v - \big(\mathbf{X}^v - \mathbf{Z}_{k+1}^v\mathbf{A}^v + \mathbf{B}_k^v/\mu_k\big)\Big) = \alpha\,\partial\big\|\mathbf{E}_{k+1}^v\big\|_{2,1} - \mathbf{B}_{k+1}^v.$$

Thus, we have

$$\frac{1}{\alpha}\big[\mathbf{B}_{k+1}^{v}\big]_{:,j} = \partial\big\|\big[\mathbf{E}_{k+1}^{v}\big]_{:,j}\big\|_{2}.$$

The $\ell_2$ norm is self-dual, so by Lemma 1 we have $\big\|\frac{1}{\alpha}[\mathbf{B}_{k+1}^{v}]_{:,j}\big\|_{2} \leq 1$. Hence the sequence $\{\mathbf{B}_{k+1}^{v}\}$ is bounded.

Next, according to the updating rule of $\mathcal{G}$, the first-order optimality condition holds:

$$\partial\big\|\mathcal{G}_{k+1}\big\|_{\mathrm{LTR}} = \mathcal{C}_{k+1}.$$

Let $\mathcal{U} * \mathcal{S} * \mathcal{V}^{T}$ be the t-SVD of the tensor $\mathcal{G}$. Based on Lemma 2 and the definition of LTR, we have:

$$\big\|\partial\|\mathcal{G}_{k+1}\|_{\mathrm{LTR}}\big\|_F^2 \leq \frac{e^{2\delta}\min(n_1,n_2)}{\delta^2 n_3^2}.$$

Thus, $\{\mathcal{C}_{k+1}\}$ is bounded.

Based on the iterative scheme of the algorithm, we can deduce:

$$\mathcal{L}_{k}(\mathbf{Z}_{k+1}^v, \mathbf{E}_{k+1}^v, \mathbf{A}_{k+1}^v, \mathbf{W}_{k+1}^v, \mathcal{G}_{k+1}, \mathbf{B}_k^v, \mathcal{C}_k) \leq \mathcal{L}_{k-1} + \frac{\rho_k + \rho_{k-1}}{2\rho_{k-1}^2}\big\|\mathcal{C}_k - \mathcal{C}_{k-1}\big\|_F^2 + \frac{\mu_k + \mu_{k-1}}{2\mu_{k-1}^2}\sum_v \big\|\mathbf{B}_k^v - \mathbf{B}_{k-1}^v\big\|_F^2.$$

Summing both sides shows that $\mathcal{L}_k$ is bounded, and consequently all of its components are bounded, including $\|\mathcal{G}_{k+1}\|_{\mathrm{LTR}}$. The boundedness of $\{\mathcal{G}_{k+1}, \mathbf{Z}_{k+1}, \mathbf{A}_{k+1}\}$ then follows easily. Therefore, the sequence $\{\mathcal{P}_k\}$ is bounded.

Proof of the second part: By the Bolzano–Weierstrass theorem [3], the sequence $\{\mathcal{P}_k\}_{k=1}^{\infty}$ has at least one accumulation point, denoted $\mathcal{P}_*$. Then we have:

$$\lim_{k\to\infty}(\mathbf{Z}_k^v, \mathbf{E}_k^v, \mathbf{A}_k^v, \mathbf{B}_k^v, \mathbf{W}_k^v, \mathcal{C}_k, \mathcal{G}_k) = (\mathbf{Z}_*^v, \mathbf{E}_*^v, \mathbf{A}_*^v, \mathbf{B}_*^v, \mathbf{W}_*^v, \mathcal{C}_*, \mathcal{G}_*).$$

From the update rules of $\mathbf{B}_k^v$ and $\mathcal{C}_k$, with $\{\mathbf{B}_k^v\}$ and $\{\mathcal{C}_k\}$ bounded and the fact that $\lim_{k\to\infty}\mu_k = \infty$, we obtain:

$$\lim_{k\to\infty}\big(\mathbf{X}^v - \mathbf{Z}_{k+1}^v\mathbf{A}_{k+1}^v - \mathbf{E}_{k+1}^v\big) = 0 \;\Rightarrow\; \mathbf{X}^v = \mathbf{Z}_*^v\mathbf{A}_*^v + \mathbf{E}_*^v,$$

$$\lim_{k\to\infty}\big(\mathcal{W}_{k+1} - \mathcal{G}_{k+1}\big) = 0 \;\Rightarrow\; \mathcal{W}_* = \mathcal{G}_*.$$

Combining the first-order optimality conditions of $\mathbf{E}_{k+1}^v$ and $\mathcal{G}_{k+1}$ and taking the limit, we obtain:

$$\mathbf{B}_*^v = \alpha\,\partial\big\|\mathbf{E}_*^v\big\|_{2,1}, \quad \mathcal{C}_* = \beta\,\partial\big\|\mathcal{G}_*\big\|_{\mathrm{LTR}}.$$

Therefore, the accumulation point $\mathcal{P}_*$ generated by TMvML satisfies the KKT conditions.

[1] The augmented lagrange multiplier method for exact recovery of corrupted low-rank matrices. arXiv, 2010.

[2] Nonsmooth analysis of singular values. Part I: Theory. Set-Valued Analysis, 2005.

[3] Introduction to real analysis, Wiley New York, 2000.

Final Decision

This paper proposes Tensorized Multi-View Multi-Label Classification (TMvML), introducing a novel Laplace Tensor Rank (LTR) to jointly model high-order feature correlations and multi-label dependencies. The method is evaluated on six benchmark datasets, demonstrating superior performance over existing approaches.

LTR provides a tighter low-rank surrogate than traditional tensor norms (e.g., TNN). Extensive experiments (ablation studies, sensitivity tests, statistical significance) support the claims. The paper makes a significant contribution to MVML by introducing a unified tensor framework with rigorous theory and empirical validation. All reviewers were satisfied with the rebuttal, and the revised manuscript (with minor clarifications) will strengthen the final version.