PaperHub

Rating: 7.3/10 · Poster · 4 reviewers
Scores: 4, 5, 5, 4 (min 4, max 5, std 0.5) · Confidence: 4.0
Novelty: 2.8 · Quality: 3.3 · Clarity: 2.8 · Significance: 2.8
NeurIPS 2025

Adversarial Graph Fusion for Incomplete Multi-view Semi-supervised Learning with Tensorial Imputation

OpenReview · PDF
Submitted: 2025-05-06 · Updated: 2025-10-29
TL;DR

We present the first framework to tackle the Sub-Cluster Problem in incomplete GMvSSL via adversarial graph fusion and tensor-based structure recovery.

Abstract

Keywords
Incomplete multi-view learning; graph-based semi-supervised learning

Reviews and Discussion

Review (Rating: 4)

This paper tackles the challenge of missing views in multi-view semi-supervised learning, which often disrupts local data structures and impairs classification accuracy. The authors introduce AGF-TI, a novel framework that combats structural discontinuities by generating a unified graph through adversarial learning. It leverages high-order consistency via tensor-based structure recovery and incorporates anchor points to enhance efficiency. Extensive experiments confirm that AGF-TI consistently outperforms existing methods across multiple benchmarks.

Strengths and Weaknesses

Strengths:

  1. The writing and presentation are good, so readers can quickly grasp the core idea of the paper.
  2. The experiments are sufficient to verify the effectiveness of the proposed method.

Weaknesses:

  1. Anchor learning and low-rank tensor decomposition have been widely studied in multi-view clustering; I cannot see insightful observations or interesting ideas here.
  2. The authors demonstrate the Sub-Cluster Problem on a synthetic dataset. However, this could be artificially crafted; hence, such verification should be conducted on real-world datasets.
  3. Most of the multi-view datasets used are too small; it is suggested to add more large-scale datasets.
  4. The authors claim that introducing anchors decreases the computational burden, so running times should be compared.

Questions

Please refer to the weaknesses.

Limitations

yes

Final Justification

The authors basically addressed my concerns, so I decide to raise my score.

Formatting Issues

N/A

Author Response

Thank you so much for the valuable comments! We are committed to addressing each question you have raised.


W1: Novelty of the Proposed AGF-TI

We acknowledge that both anchor learning and low-rank tensor decomposition are existing techniques in the multi-view clustering field. Actually, in this paper, the anchor strategy and tensor method are utilized to enhance efficiency and unify the framework. Beyond combining these techniques, our work mainly focuses on introducing a novel adversarial graph fusion (AGF) operator based on anchors to learn a robust consensus graph for semi-supervised classification, specifically designed to address the sub-cluster problem. The proposed AGF offers the following advantages:

  • Anchor graph fusion paradigm

Most existing multi-view semi-supervised learning (MvSSL) methods [10, 12] directly fuse the sample-level graphs $\mathbf{S}_v \in \mathbb{R}^{n \times n}$. In contrast, AGF-TI operates on anchor-based bipartite graphs of size $n \times m$, enabling fusion at a lower-dimensional level (a toy sketch of such a bipartite graph is given after this list). Besides, the subsequent tensor learning can also be conducted on an $m \times V \times n$ tensor instead of $n \times V \times n$. This paradigm significantly enhances the computational efficiency of both graph fusion and tensor imputation.

  • Adversarial min-max framework

We are the first to introduce a min-max optimization framework into graph-based MvSSL, enabling the learning of a robust consensus graph. This novel approach mitigates the impact of distorted local structures and improves the robustness of graph fusion. Unlike conventional min-min paradigms, our adversarial min-max cycle avoids collapse into a single dominant view and achieves a better exploration–exploitation balance, ultimately leading to improved classification performance.
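As referenced above, the following is a minimal, hypothetical sketch of constructing one view's $n \times m$ anchor-based bipartite graph (anchors taken as k-means centers, Gaussian weights to the $k$ nearest anchors). It only illustrates the kind of object AGF-TI fuses; the anchor selection and weighting shown here are common defaults, not necessarily those used in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances

def anchor_bipartite_graph(X, m=256, k=5, random_state=0):
    """Build a row-stochastic n x m bipartite graph from n samples to m anchors.

    Each sample is linked to its k nearest anchors with Gaussian weights.
    Illustrative only: not AGF-TI's exact construction.
    """
    anchors = KMeans(n_clusters=m, n_init=4,
                     random_state=random_state).fit(X).cluster_centers_
    D = pairwise_distances(X, anchors)                 # (n, m) distances
    rows = np.arange(len(X))[:, None]
    idx = np.argsort(D, axis=1)[:, :k]                 # k nearest anchors per sample
    sigma = D[rows, idx].mean()                        # global bandwidth heuristic
    Z = np.zeros_like(D)
    Z[rows, idx] = np.exp(-D[rows, idx] ** 2 / (2 * sigma ** 2))
    return Z / Z.sum(axis=1, keepdims=True)

Z = anchor_bipartite_graph(np.random.rand(1000, 50))   # toy data
print(Z.shape)   # (1000, 256): fusion operates on n x m, not n x n
```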

In addition to the proposed AGF, we also identify a severe yet previously neglected problem, the Sub-Cluster Problem, arising in view-missing scenarios.

Sub-cluster observation. We identify a novel challenge in graph-based MvSSL with missing views, called the Sub-Cluster Problem (SCP). Specifically, missing samples in individual views can violate the core smoothness assumption underlying label propagation, resulting in inaccurate graph fusion and poor label prediction. Moreover, we validate the presence of the SCP on real-world webpage and document classification datasets in W2 below.

Other contributions. We develop an effective optimization algorithm that combines the alternating direction method of multipliers (ADMM) with the reduced gradient descent method to tackle the challenges posed by the proposed objective function. Furthermore, we provide a theoretical convergence guarantee under a mild assumption, highlighting the soundness of our approach.


W2: SCP in Real-world Datasets

Thanks. To address the reviewer's concern, we validate the SCP using two real-world datasets, WebKB [R1] and Wikipedia [R2], commonly used in webpage and document classification tasks. Specifically, WebKB comprises 1,051 webpages across 2 classes (230 course and 821 non-course pages), with page and link views. Wikipedia contains 693 multimedia documents from 10 categories, represented by image and text views.

To quantify the number of category clusters in each view, we employ the density-based clustering algorithm DBSCAN [R3]. For each view, we calibrated DBSCAN at VMR=0% by setting $minpts = 5$ and tuning $\epsilon \in [0.1, 1]$ (step size 0.01) to approximate the true number of classes. To account for the reduction in sample size at higher VMRs, we decrement $minpts$ by one for each increase in VMR. The results are shown below; one can observe that the number of clusters detected by DBSCAN grows rapidly with VMR, indicating that class manifolds fragment into smaller sub-clusters. Notably, in the 'link' view of WebKB, DBSCAN identifies 11 clusters even at VMR=0% (more than the 2 classes), which suggests that SCP may not only be widely present in incomplete multi-view data but can also arise in complete-data scenarios. These findings support the validity and generality of SCP in real-world applications.

| Dataset (view) | #Classes | 0% | 30% | 50% | 70% |
| --- | --- | --- | --- | --- | --- |
| WebKB (page view) | 2 | 2 | 3 | 4 | 9 |
| WebKB (link view) | 2 | 11 | 15 | 19 | 27 |
| Wikipedia (image view) | 10 | 7 | 9 | 11 | 16 |
| Wikipedia (text view) | 10 | 10 | 11 | 15 | 22 |
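For reference, a minimal sketch of the calibration loop described above is given below. It assumes one view's features are available as an $(n, d)$ array; the data, masks, and helper names are illustrative stand-ins, and scikit-learn's DBSCAN is used in place of the authors' exact setup.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def count_clusters(X, eps, min_pts):
    """Number of clusters DBSCAN finds (noise label -1 excluded)."""
    labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(X)
    return len(set(labels) - {-1})

def calibrate_eps(X, n_classes, min_pts=5):
    """Tune eps over [0.1, 1] (step 0.01) so the cluster count at VMR=0%
    best approximates the true number of classes."""
    grid = np.arange(0.10, 1.01, 0.01)
    gaps = [abs(count_clusters(X, eps, min_pts) - n_classes) for eps in grid]
    return grid[int(np.argmin(gaps))]

# Illustrative data: one view's features and the sample subsets surviving each VMR.
rng = np.random.default_rng(0)
X = rng.random((500, 20))
masks = {0: rng.choice(500, 500, replace=False),
         30: rng.choice(500, 350, replace=False),
         50: rng.choice(500, 250, replace=False),
         70: rng.choice(500, 150, replace=False)}

eps = calibrate_eps(X[masks[0]], n_classes=10)
for i, vmr in enumerate(sorted(masks)):
    # minpts is decremented by one per VMR increment, as described above.
    print(f"VMR={vmr}%:", count_clusters(X[masks[vmr]], eps, min_pts=5 - i))
```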

W3: Additional Large-scale Datasets

Thanks. Actually, the datasets in our experiments are commonly used in the multi-view learning field and exhibit considerable variation in sample size (600 to 10,158). To address the reviewer's concern, we conduct additional experiments on the NoisyMNIST dataset (30k/70k samples, 2 views, 10 classes) [R4] to evaluate the classification performance of the compared algorithms at larger scales. With VMR and LAR set to 50% and 5%, respectively, the results below demonstrate that AGF-TI continues to achieve superior performance, confirming its scalability and effectiveness.

| Method | ACC (30K) | PREC (30K) | F1 (30K) | ACC (70K) | PREC (70K) | F1 (70K) |
| --- | --- | --- | --- | --- | --- | --- |
| AMMSS | 60.72 | 59.74 | 62.59 | - | - | - |
| AMGL | 51.8 | 51.02 | 52.11 | - | - | - |
| MLAN | - | - | - | - | - | - |
| AMUSE | 60.29 | 59.63 | 59.5 | - | - | - |
| FMSSL | 78.12 | 77.44 | 77.11 | - | - | - |
| FMSEL | 52.66 | 51.94 | 51.51 | - | - | - |
| CFMSC | 72.67 | 72.11 | 71.67 | - | - | - |
| MVAR | 61.81 | 61.15 | 60.42 | 68.21 | 67.39 | 67.35 |
| ERL-MVSC | 77.31 | 77.04 | 76.83 | - | - | - |
| AMSC | 72.04 | 71.34 | 73.09 | 71.45 | 70.88 | 72.75 |
| SLIM | 66.74 | 66.18 | 65.63 | - | - | - |
| Ours | 81.48 | 81.01 | 81.08 | 77.34 | 77.78 | 76.75 |

Note: Some results were not reported due to out-of-memory errors or because the computations had not been completed within the allotted time.


W4: Running Time

Thanks. To address the reviewer's concern, we compared algorithm running times on all datasets, setting VMR=50% and LAR=5%. For baselines requiring complete multi-view data, the total running time includes both the DMF completion process and their subsequent execution. The results are recorded below, and one can observe that AGF-TI maintains an acceptable running time. Although AGF-TI takes slightly longer than AMSC and SLIM due to its missing data imputation and shared bipartite graph learning, it consistently achieves significantly better performance across all datasets. Moreover, due to its anchor strategy, AGF-TI's running time scales efficiently with the sample size, ensuring its practicality for large-scale applications.

| Time (s) | CUB | UCI-Digit | Caltech101-20 | OutScene | MNIST-USPS | AwA |
| --- | --- | --- | --- | --- | --- | --- |
| AMMSS | 4.31 | 7.80 | 86.25 | 15.61 | 48.58 | 621.34 |
| AMGL | 4.16 | 5.51 | 65.99 | 9.60 | 39.89 | 587.51 |
| MLAN | 4.68 | 20.29 | 141.65 | 74.99 | 150.06 | 1068.73 |
| AMUSE | 6.37 | 31.77 | 102.76 | 55.59 | 209.93 | 1361.76 |
| FMSSL | 4.77 | 5.81 | 82.94 | 24.51 | 74.75 | 808.80 |
| FMSEL | 6.45 | 32.70 | 120.18 | 60.12 | 234.50 | 1359.60 |
| CFMSC | 6.78 | 20.36 | 106.95 | 26.29 | 159.53 | 815.56 |
| MVAR | 4.26 | 5.40 | 69.84 | 10.36 | 39.30 | 601.27 |
| ERL-MVSC | 4.31 | 5.68 | 67.71 | 10.55 | 42.66 | 605.83 |
| AMSC | 0.85 | 2.15 | 11.03 | 4.40 | 3.73 | 33.54 |
| SLIM | 0.33 | 4.46 | 35.59 | 14.12 | 11.43 | 128.83 |
| Ours | 1.06 | 15.35 | 45.03 | 35.84 | 31.06 | 354.54 |

References

[R1] Blum, Avrim, and Tom Mitchell. "Combining labeled and unlabeled data with co-training." Proc. Conf. Learn. Theory, 1998.

[R2] Wang, Hao, et al. "Multi-view clustering via concept factorization with local manifold regularization." IEEE ICDM, 2016.

[R3] Ester, Martin, et al. "A density-based algorithm for discovering clusters in large spatial databases with noise." Proc. KDD, 1996.

[R4] Yang, Mouxing, et al. "Robust multi-view clustering with incomplete information." IEEE TPAMI, 2022.

Comment

I appreciate the many efforts made by the authors. Although I still think the novelty is limited, I'm glad to raise my score.

Comment

Thank you for taking the time to review our rebuttal and raising the score. We sincerely appreciate your support and endorsement!

Review (Rating: 5)

This paper proposes AGF-TI, an innovative approach for incomplete multi-view semi-supervised learning, to address the Sub-Cluster Problem (SCP) caused by missing views. By integrating adversarial graph fusion (min-max optimization) and tensorial imputation (low-rank tensor learning), AGF-TI reconstructs robust consensus graphs and recovers missing structures. Combined with an anchor-based strategy for computational efficiency, AGF-TI demonstrates superior performance on various datasets, validating its effectiveness in handling both view incompleteness and label scarcity challenges.

Strengths and Weaknesses

Strengths:

  1. This paper identifies the Sub-Cluster Problem in graph-based multi-view semi-supervised learning, an important but previously under-explored challenge.
  2. AGF-TI innovatively combines a min-max scheme with tensorial imputation, forming a robust framework to address SCP.
  3. Clear and mathematically rigorous formulation of the optimization problem, with a solid convergence analysis in the appendix.
  4. Experimental results demonstrate a substantial improvement in classification performance. The paper also includes a detailed ablation study to demonstrate the effectiveness of each component in AGF-TI.

Weaknesses:

  1. Although the time complexity is discussed, it would be better to provide running time results for AGF-TI and the baselines for comparison.
  2. What does the parameter $\beta$ mean in Eq. (3)?
  3. How are the permutation matrices $\mathbf{T}_v$ initialized in practice?

Questions

See the weaknesses.

Limitations

yes

Final Justification

The authors' responses have addressed most of my concerns; I will maintain my score of 5.

Formatting Issues

No

Author Response

Thank you so much for your valuable comments! We are committed to addressing each question you have raised.


W1: Running Time

Thanks. To address the reviewer's concern, we compared algorithm running times on all datasets, setting VMR=50% and LAR=5%. For baselines requiring complete multi-view data, the total running time includes both the DMF completion process and their subsequent execution. The results are recorded below, and it can be observed that AGF-TI maintains an acceptable running time. Although AGF-TI takes slightly longer than AMSC and SLIM due to its missing data imputation and shared bipartite graph learning, it consistently achieves significantly better performance across all datasets. Moreover, due to its anchor strategy, AGF-TI's running time scales efficiently with the sample size, ensuring its practicality for large-scale applications.

| Time (s) | CUB | UCI-Digit | Caltech101-20 | OutScene | MNIST-USPS | AwA |
| --- | --- | --- | --- | --- | --- | --- |
| AMMSS | 4.31 | 7.80 | 86.25 | 15.61 | 48.58 | 621.34 |
| AMGL | 4.16 | 5.51 | 65.99 | 9.60 | 39.89 | 587.51 |
| MLAN | 4.68 | 20.29 | 141.65 | 74.99 | 150.06 | 1068.73 |
| AMUSE | 6.37 | 31.77 | 102.76 | 55.59 | 209.93 | 1361.76 |
| FMSSL | 4.77 | 5.81 | 82.94 | 24.51 | 74.75 | 808.80 |
| FMSEL | 6.45 | 32.70 | 120.18 | 60.12 | 234.50 | 1359.60 |
| CFMSC | 6.78 | 20.36 | 106.95 | 26.29 | 159.53 | 815.56 |
| MVAR | 4.26 | 5.40 | 69.84 | 10.36 | 39.30 | 601.27 |
| ERL-MVSC | 4.31 | 5.68 | 67.71 | 10.55 | 42.66 | 605.83 |
| AMSC | 0.85 | 2.15 | 11.03 | 4.40 | 3.73 | 33.54 |
| SLIM | 0.33 | 4.46 | 35.59 | 14.12 | 11.43 | 128.83 |
| Ours | 1.06 | 15.35 | 45.03 | 35.84 | 31.06 | 354.54 |

W2: $\beta$ in Eq. (3)

Thanks for pointing it out. In Eq. (3), $\beta$ is the trade-off parameter of the regularization term $\|\mathbf{P}\|_F^2$. In our final objective, it is multiplied by $\lambda$ and written as $\beta_\lambda$. We will clarify this in the revision.


W3: Initialization of $\mathbf{T}_v$

Thanks. In our experiments, the permutation matrix $\mathbf{T}_v$ is initialized as an identity matrix.

Lastly

Thank you again for your suggestions. We will include these details in the revised paper.

Comment

Thank you for taking the time to review our rebuttal. We're glad our clarifications addressed your concerns, and we sincerely appreciate your support and endorsement! We will carefully revise the paper to incorporate the discussed points.

Comment

The authors' responses have addressed most of my concerns; I will maintain my score of 5.

Review (Rating: 5)

The paper proposes Adversarial Graph Fusion with Tensorial Imputation (AGF-TI) for incomplete multi-view semi-supervised learning. This paper addresses a newly identified issue termed the Sub-Cluster Problem (SCP). SCP arises when missing views in multi-view data disrupt the smoothness assumption of label propagation by creating disconnected sub-clusters. AGF-TI addresses SCP through i) an adversarial graph fusion scheme leveraging a min-max optimization framework to fuse view-specific graphs and ii) a tensorial imputation mechanism that recovers missing similarity information using high-order correlations. It also employs an acceleration strategy to reduce the computational cost.

优缺点分析

Strength:

  1. Extensive experiments on six public datasets under various missing-view and label-scarcity settings show that AGF-TI consistently outperforms state-of-the-art methods in terms of accuracy, precision, and F1-score.
  2. The paper offers a comprehensive optimization algorithm combining reduced gradient descent and ADMM, supported by theoretical convergence analysis.

Weaknesses:

  1. The experimental protocol creates missing views by randomly selecting V% of the samples and randomly deleting 1 to V – 1 of their views. In Figure 1, however, the Sub-Cluster Problem is illustrated as arising when whole groups of neighboring samples are missing, leaving a "vacuum region" that fractures a class manifold into multiple sub-clusters and breaks the smoothness assumption of label propagation. If missing views are indeed independent and spatially random, contiguous voids should be rare, and SCP should occur far less frequently. Could the authors i) quantify how often SCP actually appears under the current random-missing setup, or ii) provide experiments with more realistic, block-missing/structured-missing patterns to show that AGF-TI remains effective in scenarios where SCP is genuinely likely to occur?
  2. What is the computational and memory footprint of the tensor nuclear-norm step when the number of views, anchors, and samples all scale simultaneously? Specifically, can the authors validate it on a truly large-scale (>100k samples) benchmark?
  3. The related-work section should discuss recent missing-view representation methods, such as "COMPLETER: Incomplete Multi-view Clustering via Contrastive Prediction" and "Decoupled Contrastive Multi-view Clustering with High-order Random Walks."
  4. It would be better to give the full name of TNN in Definition 1.
  5. The evaluation lacks deep learning baselines. Benchmarking only against classical or shallow methods limits the paper's relevance to modern multi-view semi-supervised learning.
  6. In Definition 1, please spell out TNN's full name, Tensor Nuclear Norm.

Questions

If the authors could address the above weaknesses during rebuttal, I would like to keep my rating.

Limitations

yes

Final Justification

The responses have addressed my concerns, I will increase my score.

Formatting Issues

NA

Author Response

Thank you for your acknowledgment and insightful comments! Your feedback is extremely helpful, and we are committed to addressing each question you have raised.


W1: Show SCP Appears under Current Random-missing Setup

Thanks. Our experimental setup replicates that of previous work [17] to ensure a fair comparison. To address the reviewer's concern, we adopt the density-based adaptive clustering method DBSCAN [R1] to validate that the current random-missing setup effectively mimics SCP as VMR increases.

For validation, we calibrated DBSCAN on the first view of each dataset at VMR=0% by setting $minpts = 5$ and tuning $\epsilon \in [0.1, 1]$ (step size 0.01) to approximate the true number of classes. To control for the sample-size reduction accompanying higher VMRs, we decrement $minpts$ by one for each increase in VMR. The results are shown below; one can observe that the number of clusters detected by DBSCAN grows rapidly with VMR, indicating that class manifolds fragment into smaller sub-clusters. This confirms that our experimental setup can effectively simulate SCP and validates the effectiveness of the proposed AGF-TI for addressing SCP.

| Dataset | #Classes | 0% | 30% | 50% | 70% |
| --- | --- | --- | --- | --- | --- |
| CUB | 10 | 9 | 9 | 13 | 17 |
| UCI-Digit | 10 | 10 | 11 | 18 | 32 |
| Caltech101-20 | 20 | 20 | 25 | 31 | 47 |
| OutScene | 8 | 8 | 7 | 10 | 18 |
| MNIST-USPS | 10 | 10 | 11 | 13 | 27 |
| AwA | 50 | 43 | 46 | 62 | 126 |

W2: Computational and Memory Footprint of the Tensor Nuclear-Norm Step

Thanks. Theoretically, we have discussed the time complexity of the tensor nuclear-norm (TNN) step in Appendix D.2, which is $\mathcal{O}(nmV\log(nV)+nmV^2)$. To address the reviewer's concern, we adopt the YTF50 dataset (~126k samples) [R2] with four views and empirically analyze the computational and memory footprint of the TNN step when the view number, anchor number, and sample size scale simultaneously. The results under VMR=50% and LAR=5% are shown below. As is evident, the empirical trend aligns well with our theoretical analysis. Thanks to the anchor strategy, the running time of the TNN step does not increase dramatically with the sample size and remains acceptable, indicating that AGF-TI is suitable for practical use.

#Samples = 10k

Time (s):

| #views \ #anchors | 128 | 256 | 512 | 1,024 |
| --- | --- | --- | --- | --- |
| 2 | 0.35 | 0.42 | 0.76 | 1.17 |
| 3 | 0.35 | 0.49 | 0.88 | 1.57 |
| 4 | 0.48 | 0.67 | 1.05 | 1.95 |

Memory (GiB):

| #views \ #anchors | 128 | 256 | 512 | 1,024 |
| --- | --- | --- | --- | --- |
| 2 | 0.095 | 0.13 | 0.38 | 0.76 |
| 3 | 0.14 | 0.29 | 0.57 | 1.15 |
| 4 | 0.19 | 0.38 | 0.76 | 1.53 |

#Samples = 70k

Time (s):

| #views \ #anchors | 128 | 256 | 512 | 1,024 |
| --- | --- | --- | --- | --- |
| 2 | 1.93 | 3.02 | 4.56 | 8.89 |
| 3 | 2.11 | 3.67 | 6.38 | 14.74 |
| 4 | 2.98 | 4.91 | 9.06 | 23.10 |

Memory (GiB):

| #views \ #anchors | 128 | 256 | 512 | 1,024 |
| --- | --- | --- | --- | --- |
| 2 | 1.17 | 2.18 | 4.28 | 8.56 |
| 3 | 1.61 | 3.21 | 6.42 | 12.87 |
| 4 | 2.14 | 4.28 | 8.56 | 17.13 |

#Samples = 126k

Time (s):

| #views \ #anchors | 128 | 256 | 512 | 1,024 |
| --- | --- | --- | --- | --- |
| 2 | 1.73 | 5.75 | 9.40 | 19.30 |
| 3 | 2.33 | 7.28 | 16.23 | 33.00 |
| 4 | 3.07 | 9.20 | 21.29 | 60.44 |

Memory (GiB):

| #views \ #anchors | 128 | 256 | 512 | 1,024 |
| --- | --- | --- | --- | --- |
| 2 | 1.93 | 3.89 | 7.71 | 15.48 |
| 3 | 3.02 | 6.03 | 11.59 | 23.26 |
| 4 | 3.85 | 7.71 | 15.42 | 30.86 |
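For context, the sketch below shows the standard t-SVD-based tensor nuclear norm that this step computes (FFT along the third mode, then an SVD per frontal slice in the Fourier domain). It is a plain NumPy illustration of the textbook definition, not the authors' optimized implementation, and the tensor sizes are hypothetical.

```python
import numpy as np

def tensor_nuclear_norm(A):
    """t-SVD tensor nuclear norm of A with shape (n1, n2, n3):
    FFT along mode 3, then sum the nuclear norms of the n3 frontal
    slices in the Fourier domain, divided by n3. For the m x V x n
    tensor used here (m anchors, V views, n samples), the FFT costs
    O(mVn log n) and the n SVDs of m x V slices cost O(nmV^2),
    consistent with the complexity quoted above.
    """
    n3 = A.shape[2]
    A_hat = np.fft.fft(A, axis=2)
    sv_sum = sum(np.linalg.svd(A_hat[:, :, i], compute_uv=False).sum()
                 for i in range(n3))
    return sv_sum / n3

# Toy sizes mirroring the anchor-based formulation (hypothetical values).
A = np.random.rand(256, 4, 2000)
print(tensor_nuclear_norm(A))
```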

W3: References

Thanks for pointing them out. We will cite and discuss these papers in the revision.


W4/6: Full Name of TNN in Definition 1

Thanks for pointing it out. We will revise TNN to its full name in Definition 1.


W5: Additional Deep Learning Baselines

Thanks. To address the reviewer's concern, we conduct additional experiments with two deep learning-based multi-view semi-supervised methods, i.e., IMvGCN [R3] and GEGCN [R4], on all datasets with VMR=50% and LAR=5%. We adopt the learning rates and network structures recommended in their papers. The table below shows that our AGF-TI outperforms them in almost all cases, further demonstrating the effectiveness of AGF-TI.

| Metric | Method | CUB | UCI-Digit | Caltech101-20 | OutScene | MNIST-USPS | AwA |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ACC | IMvGCN | 63.3 | 82.7 | 68.6 | 53.7 | 79.9 | 65.8 |
| ACC | GEGCN | 49.9 | 90.1 | 67.1 | 55.3 | 91.3 | 58.8 |
| ACC | Ours | 80.2 | 95.2 | 80.0 | 69.2 | 95.6 | 70.6 |
| PREC | IMvGCN | 67.7 | 83.5 | 51.2 | 53.7 | 81.1 | 60.8 |
| PREC | GEGCN | 56.5 | 90.6 | 34.0 | 56.2 | 91.4 | 57.0 |
| PREC | Ours | 80.2 | 95.2 | 53.1 | 68.3 | 95.6 | 60.0 |
| F1 | IMvGCN | 61.2 | 82.5 | 40.2 | 52.4 | 79.5 | 58.6 |
| F1 | GEGCN | 50.5 | 90.1 | 31.6 | 55.5 | 91.3 | 50.4 |
| F1 | Ours | 79.1 | 95.3 | 54.8 | 67.7 | 95.6 | 58.6 |

References

[R1] Ester, Martin, et al. "A density-based algorithm for discovering clusters in large spatial databases with noise." kdd, 1996.

[R2] Huang, Dong, et al. "Fast multi-view clustering via ensembles: Towards scalability, superiority, and simplicity." IEEE TKDE, 2023.

[R3] Wu, Zhihao, et al. "Interpretable graph convolutional network for multi-view semi-supervised learning." IEEE TMM, 2023.

[R4] Lu, Jielong, et al. "Generative essential graph convolutional network for multi-view semi-supervised classification." IEEE TMM, 2024.

Comment

The responses have addressed my concerns, I will increase my score.

Comment

Thank you for taking the time to review our rebuttal and raising the score. We're glad our clarifications addressed your concerns, and we sincerely appreciate your support and endorsement! We will carefully revise the paper to incorporate the discussed points.

Review (Rating: 4)

This paper introduces a novel method, Adversarial Graph Fusion with Tensorial Imputation (AGF-TI), to address the Sub-Cluster Problem (SCP) in graph-based multi-view semi-supervised learning (GMvSSL) where missing data can distort local structures and degrade classification performance. AGF-TI utilizes an adversarial min-max framework to learn a robust consensus graph that is resilient to these distortions. Simultaneously, it recovers missing structural information by stacking similarity graphs into a tensor and leveraging high-order correlations for imputation. The authors conduct extensive experiments to validate the effectiveness of AGF-TI in handling scenarios with both missing views and limited labeled data.

Strengths and Weaknesses

Strengths:

1. The paper is well-written, and the motivations are clearly described. The identification and formalization of a previously unaddressed challenge in this field, SCP, provide a strong motivation and a clear target for the proposed method.

2. The AGF-TI model is the first to adopt a min-max framework for graph fusion, which is a novel contribution to GMvSSL. It also incorporates an anchor-based strategy to reduce the computational cost, making AGF-TI scalable to larger datasets.

3. I enjoyed the significance tests in Tables 5, 6, and 7 in the Appendix. The provided code and sufficient information make it easy to reproduce the experimental results.

Weaknesses:

1. The AGF-TI model is inherently complex, involving a tri-level max-min-max optimization problem.

2. The theoretical convergence relies on the sufficient optimization assumption.

Questions

1. The central formulation is a max-min-max problem. Could the authors elaborate on the intuition behind this structure? Specifically, how does the outer max over the view weights $\alpha_v$, in conjunction with the inner min over the fused graph $\mathbf{P}$, lead to a more robust consensus graph than a simpler min-min formulation, as seen in methods like CFMSC?

2. Please clarify how the permutation matrix $\mathbf{T}_v$ is initialized.

Limitations

Yes

Final Justification

Thanks for the authors’ response. My concerns have been adequately addressed, and I decide to maintain my original score.

Formatting Issues

N/A

Author Response

Thank you so much for the valuable comments! We are delighted that you found the strengths of AGF-TI. Below, we address each of your questions in detail.


W1/Q1: Max-Min-Max Framework and Its Benefits

Thanks. Compared to the traditional minimization paradigm, the proposed AGF-TI introduces adversarial learning to optimize the shared bipartite graph, which leads to the max-min-max formulation. As discussed in Subsection 4.3, such adversarial learning enhances the performance and robustness of the approach. On the other hand, although the formulation appears intractable, it can in fact be solved by an efficient optimization algorithm built on ADMM and the reduced gradient descent method.

In the traditional min-min paradigm, if a particular view temporarily achieves better performance, the algorithm tends to increase its weight, thereby diminishing the influence of other views (please see Fig. 3). This can lead to the algorithm becoming overly reliant on a single view, which may compromise its final performance.

In contrast, our max-min-max framework creates an adversarial dynamic between maximizing the overall objective and minimizing the view weights $\boldsymbol{\alpha}$. This min-max cycle effectively prevents the algorithm from collapsing onto a single dominant view, achieving an exploration–exploitation balance. This mechanism enables the proposed AGF-TI to fully integrate complementary information from individual views, learn a stable and reliable fused bipartite graph, and improve the final performance. Furthermore, as supported by recent studies [33, 36, 48], this adversarial interplay across views also enhances the model's robustness when facing noisy views or data points.
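As a schematic illustration of the anti-collapse intuition (a generic two-level form from the robust multi-view literature, with one optimization level and all regularizers omitted; it is not the paper's exact objective), the adversarial weighting can be caricatured as

$$
\min_{\mathbf{P}} \; \max_{\boldsymbol{\alpha} \in \Delta^V} \; \sum_{v=1}^{V} \alpha_v \, \ell\!\left(\mathbf{P}, \mathbf{S}_v\right),
\qquad
\Delta^V = \Big\{ \boldsymbol{\alpha} \ge \mathbf{0} : \textstyle\sum_{v} \alpha_v = 1 \Big\},
$$

where $\ell$ measures the disagreement between the fused graph $\mathbf{P}$ and the view-specific graph $\mathbf{S}_v$. The adversarial max concentrates weight on the currently worst-fitting view, so the optimization over $\mathbf{P}$ must keep every view's loss small; a min-min formulation instead rewards shifting all weight onto the currently best-fitting view, which is exactly the collapse described above.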


W2: Sufficient Optimization Assumption

Thanks. According to Corollary 1 in Appendix D.1, Algorithm 1 converges to the global optimum of the Inner Step; hence the sufficient optimization assumption is actually a mild assumption and can be quickly satisfied in practice. To validate the reasonableness of this assumption, we track the error of both $\boldsymbol{\alpha}$ and $\mathbf{P}$ between iteration steps, i.e., $\|\boldsymbol{\alpha}^{(k+1)}-\boldsymbol{\alpha}^{(k)}\|_2^2$ and $\|\mathbf{P}^{(k+1)}-\mathbf{P}^{(k)}\|_F^2$, during Algorithm 1 to approximate the error between the numerical and exact solutions in Eq. (36). The results on all datasets with VMR=50% and LAR=5% are shown below. They show that the error of both $\boldsymbol{\alpha}$ and $\mathbf{P}$ rapidly decreases to a small value (e.g., $10^{-5}$), supporting the validity of the sufficient optimization assumption.

| Dataset | Error | k=1 | k=2 | k=3 | k=4 | k=5 | k=6 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| CUB | $\boldsymbol{\alpha}$ error | 1.02E-05 | - | - | - | - | - |
| CUB | $\mathbf{P}$ error | 2.83E-10 | - | - | - | - | - |
| UCI-Digit | $\boldsymbol{\alpha}$ error | 2.14E-06 | 4.34E-07 | 5.17E-08 | - | - | - |
| UCI-Digit | $\mathbf{P}$ error | 6.82E-05 | 1.31E-12 | 2.02E-17 | - | - | - |
| Caltech101-20 | $\boldsymbol{\alpha}$ error | 1.36E-04 | 6.71E-05 | 3.26E-05 | 1.53E-05 | 7.61E-06 | 3.22E-06 |
| Caltech101-20 | $\mathbf{P}$ error | 0.87 | 7.31E-09 | 7.56E-15 | 1.62E-20 | 1.34E-25 | 3.12E-30 |
| OutScene | $\boldsymbol{\alpha}$ error | 1.41E-04 | 4.67E-05 | 1.58E-05 | 5.20E-06 | 1.73E-06 | - |
| OutScene | $\mathbf{P}$ error | 1.88E-05 | 1.80E-10 | 3.48E-15 | 8.06E-20 | 2.02E-24 | - |
| MNIST-USPS | $\boldsymbol{\alpha}$ error | 6.33E-06 | - | - | - | - | - |
| MNIST-USPS | $\mathbf{P}$ error | 1.41E-04 | - | - | - | - | - |
| AwA | $\boldsymbol{\alpha}$ error | 0.016 | 8.52E-08 | 1.22E-07 | 8.52E-08 | 1.22E-07 | 8.52E-08 |
| AwA | $\mathbf{P}$ error | 0.004 | 3.84E-04 | 4.50E-05 | 7.46E-06 | 1.10E-06 | 3.31E-07 |

Note: '-' means converged.
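The tracked quantities themselves are straightforward to compute; a minimal sketch (with hypothetical iterate lists standing in for Algorithm 1's actual output) is:

```python
import numpy as np

def iterate_gaps(alphas, Ps):
    """Squared gaps between consecutive iterates of alpha (vectors) and
    P (matrices), used above as a proxy for the distance between the
    numerical and exact solutions of the Inner Step."""
    a_gaps = [float(np.sum((alphas[k + 1] - alphas[k]) ** 2))
              for k in range(len(alphas) - 1)]
    p_gaps = [float(np.linalg.norm(Ps[k + 1] - Ps[k], ord='fro') ** 2)
              for k in range(len(Ps) - 1)]
    return a_gaps, p_gaps

# alphas: view-weight vectors per outer iteration; Ps: fused bipartite graphs.
alphas = [np.random.rand(4) for _ in range(3)]
Ps = [np.random.rand(100, 64) for _ in range(3)]
print(iterate_gaps(alphas, Ps))
```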


Q2: Initialization of $\mathbf{T}_v$

Thanks. In our experiments, the permutation matrix $\mathbf{T}_v$ is initialized as an identity matrix. We will include this detail in the revised paper.

Comment

Dear Reviewer wnph,

Please provide your detailed response to the rebuttal.

Best regards,

The AC

Comment

I thank the authors for their detailed responses and the additional experiments, which have well addressed my concerns regarding the intuition of the max-min-max framework and the method details. Considering the contributions and quality of this work, I think it can make a significant contribution to the incomplete multi-view semi-supervised learning community. Therefore, I decide to maintain my original score.

Comment

Thank you for taking the time to review our paper and rebuttal. We're glad our clarifications addressed your concerns, and we sincerely appreciate your support and endorsement!

Final Decision

After the rebuttal, all reviewers acknowledged that their concerns had been addressed and were consistently positive about this submission. I recommend accepting it. The authors are encouraged to improve the camera-ready version according to the discussion.