Optimal Rates for Generalization of Gradient Descent for Deep ReLU Classification
We establish an optimal risk bound of $1/(\gamma^2 n)$ for GD with deep ReLU networks.
Abstract
Reviews and Discussion
Improved bounds on the excess risk rate for deep ReLU networks.
Strengths and Weaknesses
Strengths. Improved bounds, polynomial in depth, compared with those previously known. Weaknesses. No attempt at an experimental confirmation, though there is still time to present one during the rebuttal.
Questions
- gradient decent -> gradient descent, line 57
Limitations
The limitations are the assumptions of their theorems, which are clearly stated.
Justification for Final Rating
I confirm the positive evaluation. The point of saying that their contribution is only theoretical is weak, since to my knowledge no work in NeurIPS is only theoretical. The authors could have missed a mathematical step, that the reviewers could have also missed, so a few experimental confirmations never hurt. To my understanding however the article is strong enough to be accepted.
Formatting Issues
Correct.
Thanks for acknowledging the strengths of our work. Our point-by-point response to your comments follows below.
Strengths and Weaknesses. Strengths. Improved bounds, polynomial in depth, compared with those previously known. Weaknesses. No attempt at an experimental confirmation.
- The improvement over existing bounds is only part of our contribution. The main result is that we are the first to prove that deep ReLU networks trained by gradient descent can achieve the optimal rate ($\tilde{O}(1/(\gamma^2 n))$). Existing works either obtain bounds with a suboptimal rate or focus on smooth activations.
- This is a paper on learning theory. We focus on whether and how deep ReLU networks could achieve the optimal generalization error; hence, we do not include experiments. We believe we have improved the theoretical understanding of deep neural networks.
Q. gradient decent -> gradient descent, line 57
Thank you for pointing it out. We will revise it in the next version.
This paper studies the convergence and generalization of deep fully connected neural networks. Specifically, it proves that GD achieves a sublinear convergence rate. Furthermore, the derived generalization bound exhibits an optimal rate of $\tilde{O}(1/(\gamma^2 n))$ with respect to the sample size $n$. Additionally, the dependence on the depth $L$ has been improved from an exponential to a polynomial order by establishing a tighter Rademacher complexity bound.
Strengths and Weaknesses
Strengths:
- A tighter and more practically relevant generalization bound.
- A thorough comparison with existing works.
Weaknesses:
Overall, I feel that while this paper shows improvements over prior work, it focuses on a less meaningful direction. The insights gained from this improvement are either unclear or do not align well with what is observed in practice. As a result, I am not convinced that this setting is appropriate for studying the generalization error of deep neural networks.
- The assumption that the data is NTK-separable with a margin $\gamma$ lacks explanation in practical contexts. Specifically, what kinds of data will succeed or fail under this assumption? In addition, this assumption is extremely strong in the sense that it requires the network to be initialized in such a way that the data, when processed through this predefined model, is already linearly separable.
- Compared with prior work, the main difference I observe is that the generalization error bound has been improved from an exponential dependence to a polynomial dependence. However, the key insight remains the same as in previous studies: increasing the number of layers leads to worse generalization. I do not see any fundamentally new insights presented in this paper.
- The theoretical insights presented in this paper do not adequately explain the current success and widespread effectiveness of deep learning models that rely on large and deep architectures.
- The paper does not include any experiments to validate the theoretical results it presents.
Questions
Since the major contribution of this paper is the improvement in the dependence of the generalization bound with respect to the depth $L$, I have several questions:
- What are the key insights that this paper aims to convey through this improved bound?
- If Assumption 3 holds for real-world data, what would be the practical difference between using a deep neural network versus a shallow one? For example, how does $\gamma$ relate to $L$ in this context?
- Could you specify lines that reflect "novel control of activation patterns near a reference model"?
Limitations
Yes
Justification for Final Rating
I increased the score from 3 to 4 because the authors have addressed my concerns regarding the gap between the theoretical results and real applications by providing numerical experiments. However, I still have concerns about Assumption 3, which deviates significantly from real-world applications.
Formatting Issues
N/A
Thanks for acknowledging the strengths of our work. Our point-by-point response to your comments follows below.
W2.&Q1. Compared with prior work, the main difference I observe is that the generalization error bound has been improved from an exponential dependence to a polynomial dependence. However, the key insight remains the same as in previous studies: increasing the number of layers leads to worse generalization. I do not see any fundamentally new insights presented in this paper. What are the key insights that this paper aims to convey through this improved bound?
- The improvement over existing bounds is only part of our contribution. Our key insight is that we are the first to prove that deep ReLU networks trained by gradient descent can achieve the optimal rate ($\tilde{O}(1/(\gamma^2 n))$). Existing works either obtain bounds with a suboptimal rate or focus on smooth activations. For more details, please refer to Table 1 of our paper.
- Despite the success of deep learning in practice, understanding it in theory is often challenging due to its nonconvexity and non-smoothness. We address these problems by developing two novel techniques. Firstly, we derive a tighter Rademacher complexity bound for deep ReLU networks based on the control of activation patterns. Secondly, we prove the Lipschitzness of deep ReLU networks. These key improvements open the door to further analysis of deep learning, which is beyond the scope of this paper.
W1. The assumption that the data is NTK-separable with a margin $\gamma$ lacks explanation in practical contexts. Specifically, what kinds of data will succeed or fail under this assumption? In addition, this assumption is extremely strong in the sense that it requires the network to be initialized in such a way that the data, when processed through this predefined model, is already linearly separable.
We would like to clarify the motivation and interpretation of the NTK-separability assumption, which has been widely used in the literature [2,3,4,5].
- Role of the NTK separability assumption:
The NTK separability assumption is used in our paper as a tool to analyze the reference error term in Theorem 2, which reflects the performance achievable by the network class. This assumption enables a clean characterization of generalization in Theorem 2. Importantly, our main result (Theorem 2) does not require the data to be NTK-separable; it holds for any reference model with bounded complexity. Thus, NTK separability is not a limiting assumption of our theory, but rather a benchmark scenario that allows comparisons with the existing literature.
- We believe Assumption 3 is not a strong assumption.
NTK separability has been theoretically justified in a range of settings. For instance, [2] analyzes the margin through its dual form in different settings, including the linearly separable case and the noisy 2-XOR distribution. [8] also shows that the NTK can learn polynomials; hence, Assumption 3 holds for data that are separable by polynomials. In practice, it can be verified through random features by solving a regularized linear classification problem, as sketched below.
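Below is a hypothetical sketch (our illustration, not the authors' procedure or data) of such a check: random ReLU features stand in for the NTK feature map at initialization, and a regularized least-squares classifier plays the role of the regularized linear classification problem; the achieved normalized margin gives an empirical lower bound on the separability margin.

```python
# Hypothetical NTK-separability check via random ReLU features (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 200, 10, 2000                      # samples, input dimension, random features

# Toy labelled data on the unit sphere with an XOR-like rule on two coordinates.
X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)
y = np.sign(X[:, 0] * X[:, 1])

# Random ReLU features as a proxy for the NTK feature map at initialization:
# phi_r(x) = relu(w_r^T x) / sqrt(m), with w_r ~ N(0, I_d).
W = rng.standard_normal((m, d))
Phi = np.maximum(X @ W.T, 0.0) / np.sqrt(m)

# Regularized least-squares classifier on the features; the normalized margin of
# the resulting linear predictor lower-bounds the margin achievable in this space.
lam = 1e-3
v = np.linalg.solve(Phi.T @ Phi + lam * np.eye(m), Phi.T @ y)
v /= np.linalg.norm(v)
margins = y * (Phi @ v)
print(f"separated fraction: {(margins > 0).mean():.3f}, margin estimate: {margins.min():.4f}")
```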
Q2. If Assumption 3 holds for real-world data, what would be the practical difference between using a deep neural network versus a shallow one? For example, how does $\gamma$ relate to $L$ in this context?
This is a good question. In standard NTK theory, the margin $\gamma$ does not explicitly depend on $L$. However, we conjecture that deeper networks tend to induce larger NTK margins $\gamma$. Our intuition is based on the structure of the NTK and its associated reproducing kernel Hilbert space (RKHS).
- Let $\Theta^{(L)}$ denote the NTK of an $L$-layer ReLU network, and $\mathcal{H}_L$ its corresponding RKHS. As shown in [6], the NTK satisfies the recursive relation $\Theta^{(L)} = \Theta^{(L-1)} \cdot \dot{\Sigma}^{(L)} + \Sigma^{(L)}$, where $\dot{\Sigma}^{(L)}$ is a derivative covariance matrix and $\Sigma^{(L)}$ is the covariance matrix of a Gaussian process (a numerical sketch of this recursion is given after this discussion). This implies that $\Theta^{(L)}$ contains $\Theta^{(L-1)}$ as part of its structure. Consequently, the RKHSs satisfy a nested relationship: $\mathcal{H}_{L-1} \subseteq \mathcal{H}_{L}$.
- Consider the finite-width version: as the width $m \to \infty$, the empirical NTK of the finite network concentrates around $\Theta^{(L)}$ (see [7]). Therefore, the nested structure carries over approximately to finite width, and an $L$-layer deep network has a larger NTK margin than an $(L-1)$-layer network, according to the definition of $\gamma$.
Taken together, these suggest that deeper networks admit richer RKHSs, which can better separate the data and thus lead to a larger margin $\gamma$. Therefore, although $\gamma$ is not explicitly parameterized by $L$, it is indirectly affected through the expressiveness of the induced RKHS. We emphasize that this argument remains heuristic, and we leave a precise characterization of how $\gamma$ scales with $L$ as an important open question for future work.
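To make the recursion above concrete, here is a small numerical sketch of the infinite-width ReLU NTK recursion (following one standard normalization, as in Arora et al. [6]; the exact convention in the paper may differ). For a fixed pair of unit-norm inputs, the kernel value grows with the depth:

```python
# Infinite-width ReLU NTK via the recursion Theta^(l) = Theta^(l-1) * dotSigma^(l) + Sigma^(l).
# Normalization: unit-norm inputs, so Sigma^(l)(x, x) = 1 at every layer.
import numpy as np

def relu_ntk(cos_angle, L):
    """NTK value Theta^(L)(x, x') for unit-norm x, x' with x^T x' = cos_angle."""
    sigma = cos_angle                                   # Sigma^(0)(x, x') = x^T x'
    theta = sigma                                       # Theta^(0) = Sigma^(0)
    for _ in range(L):
        rho = np.clip(sigma, -1.0, 1.0)                 # correlation (variances stay 1)
        ang = np.arccos(rho)
        sigma_new = (np.sqrt(1.0 - rho**2) + (np.pi - ang) * rho) / np.pi   # Sigma^(l)
        sigma_dot = (np.pi - ang) / np.pi                                    # dot{Sigma}^(l)
        theta = theta * sigma_dot + sigma_new
        sigma = sigma_new
    return theta

for L in [1, 2, 4, 8]:
    print(f"L = {L}: Theta^(L)(x, x') = {relu_ntk(0.3, L):.4f}")
```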
W3. The theoretical insights presented in this paper do not adequately explain the current success and widespread effectiveness of deep learning models that rely on large and deep architectures.
Our theoretical framework cannot fully explain the effectiveness of large and deep models, but we provide a partial explanation here.
- In Theorem 2, we establish a generalization bound whose leading term is the risk of a well-structured (but fixed) reference model near initialization in the same network class. As long as there exists a deep network with a small reference risk, gradient descent is guaranteed to find a predictor with low generalization error.
- Importantly, this reference risk is directly tied to the expressive power of the network class. For more complex data distributions, a shallow network may not be able to achieve a small reference risk, while a deep architecture can do so due to its ability to capture hierarchical structures. In particular, existing results (e.g., [1]) show that deep networks can approximate certain function classes with exponentially fewer parameters than shallow ones. Therefore, our theory does capture the advantage of depth: deeper architectures can yield a smaller reference risk, and Theorem 2 ensures that gradient descent can generalize well in such cases. We believe this provides an explanation for why deep models succeed in practice.
- In addition, our analysis is conducted in the overparameterized regime, where the network width $m$ is large. Most of our results require a sufficiently large width to ensure good optimization behavior (e.g., descent direction, weak convexity). From this perspective, large models also help with optimization: they make gradient-based methods more effective, which partially explains why large-scale networks work well in practice.
Q3. Could you specify lines that reflect "novel control of activation patterns near a reference model"?
- In Lemma 22, we express the Rademacher complexity via products of sparse matrices, whose norms are tightly controlled using optimization-informed estimates. This helps to derive a sharper bound (line 704). Please also refer to Remark 2 (line 214).
- Specifically, we first leverage a useful expression of the Rademacher complexity in terms of products of activation matrices, and then control the norms of these matrices via Theorem 1 and Lemma 15. This leads to a tight bound on the Rademacher complexity.
- However, previous works use off-the-shelf bounds, ignoring how activation patterns affect the complexity. For example, [4] derives a bound that suffers from an exponential term in the depth $L$ and an explicit dependence on the width $m$.
W4. The paper does not include any experiments to validate the theoretical results it presents.
This is a paper on learning theory. We focus on whether and how deep ReLU networks could achieve the optimal generalization error; hence, we do not include experiments. We believe we have improved the theoretical understanding of deep neural networks.
[1] Telgarsky et al. Benefits of depth in neural networks.
[2] Ji et al. Polylogarithmic width suffices for gradient descent to achieve arbitrarily small test error with shallow relu networks.
[3] Taheri et al. Generalization and stability of interpolating neural networks with minimal width.
[4] Chen et al. How much over-parameterization is sufficient to learn deep relu networks?
[5] Taheri et al. Sharper guarantees for learning neural network classifiers with gradient methods.
[6] Arora et al. On Exact Computation with an Infinitely Wide Neural Net.
[7] Xu et al. Overparametrized multi-layer neural networks: Uniform concentration of neural tangent kernel and convergence of stochastic gradient descent.
[8] Arora et al. Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks.
Thanks for the rebuttal. I need to point out that I respectfully disagree with the last point. If the theoretical results cannot be justified in real applications, how can we trust and utilize these theoretical insights to genuinely push understanding or guide the design of practical systems? As you mentioned, your goal is to improve the theoretical understanding of deep neural networks (DNNs). However, any results derived under assumptions that significantly deviate from real-world conditions risk being detached from practice. For example, if applying gradient-based methods in real-world applications cannot practically achieve a generalization error that scales on the order of $1/(\gamma^2 n)$, then what is the point of establishing such a result? While the theoretical bound might be mathematically elegant, if it cannot be observed under realistic training conditions or architectures, its relevance becomes questionable.
Unless you can justify your assumptions and ensure that the model architecture aligns with real-world applications, it is essential to include numerical experiments to support any theoretical results based on those assumptions.
Thank you for your constructive comments. We agree that numerical experiments are valuable in supporting our theoretical analysis. Accordingly, we have conducted some experimental verifications to support our excess risk analysis.
Our excess risk analysis in Theorem 3 imposes an NTK separability assumption, which has been validated in the literature. For example, the work [1] demonstrates that Assumption 3 holds for a noisy 2-XOR distribution, where the dataset is structured as follows:
$$x \sim \mathrm{Unif}\left(\tfrac{1}{\sqrt{d}}\left(\{\pm 1\}^{2} \times \{\pm 1\}^{d-2}\right)\right), \qquad y = \mathrm{sign}(x_1 x_2).$$
Here, the factor $1/\sqrt{d}$ ensures that $\|x\|_2 = 1$, $\times$ above denotes the Cartesian product, and the label $y$ only depends on the first two coordinates of the input $x$. As shown in [1], this dataset satisfies Assumption 3 with a margin $\gamma$ of order $1/d$, which implies that our excess risk bound in Theorem 3 becomes $\tilde{O}(d^2/n)$ for this dataset. We conducted numerical experiments and observed that the test error indeed decays linearly with $d^2/n$. The population loss for the test error is computed over all points in the distribution.
- We train two-layer ReLU networks by gradient descent on noisy 2-XOR data, with the network width $m$ fixed across all runs.
- With a fixed dimension $d = 6$, we vary the sample size $n$ and obtain the following results:
| $n$ | $d^2/n$ | Test Error |
|---|---|---|
| 10 | 3.60 | 0.4625 |
| 12 | 3.00 | 0.4500 |
| 14 | 2.57 | 0.3625 |
| 16 | 2.25 | 0.3438 |
| 18 | 2.00 | 0.3219 |
| 20 | 1.80 | 0.3063 |
| 24 | 1.50 | 0.2469 |
| 28 | 1.29 | 0.2062 |
- With a fixed sample size $n = 64$, we vary the dimension $d$ and obtain the following results:
| $d$ | $d^2/n$ | Test Error |
|---|---|---|
| 7 | 0.77 | 0.0125 |
| 8 | 1.00 | 0.1313 |
| 9 | 1.27 | 0.2484 |
| 10 | 1.56 | 0.3080 |
| 11 | 1.89 | 0.3365 |
| 12 | 2.25 | 0.4190 |
In both experiments, we observe that the test error is approximately of the order $d^2/n$. This shows the consistency between our excess risk bound in Theorem 3 and the experimental results. We will include these numerical experiments in the revised version, and we will also conduct more experimental verifications.
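For reference, a minimal sketch of this type of experiment (our illustration, not the authors' code: the width, learning rate, number of steps, and the use of a large sampled test set instead of the full population are all illustrative assumptions) could look as follows.

```python
# Minimal sketch: GD on a two-layer ReLU network for noisy 2-XOR data (illustrative).
import numpy as np

rng = np.random.default_rng(0)

def xor_data(n, d):
    """Noisy 2-XOR data on the sphere: the label depends only on the first two coordinates."""
    X = rng.choice([-1.0, 1.0], size=(n, d)) / np.sqrt(d)
    y = np.sign(X[:, 0] * X[:, 1])
    return X, y

def train_two_layer_relu(X, y, m=1000, lr=1.0, steps=2000):
    """Full-batch GD on the logistic loss; the second layer is kept fixed for simplicity."""
    n, d = X.shape
    W = rng.standard_normal((m, d)) / np.sqrt(d)          # first-layer weights
    a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)      # fixed second-layer signs
    for _ in range(steps):
        H = X @ W.T                                       # (n, m) pre-activations
        f = np.maximum(H, 0.0) @ a                        # network outputs
        g = -y / (1.0 + np.exp(y * f))                    # derivative of the logistic loss
        grad_W = ((g[:, None] * (H > 0) * a).T @ X) / n   # chain rule through the ReLU
        W -= lr * grad_W
    return W, a

def test_error(W, a, d, n_test=5000):
    Xte, yte = xor_data(n_test, d)                        # fresh samples as a test proxy
    pred = np.sign(np.maximum(Xte @ W.T, 0.0) @ a)
    return float(np.mean(pred != yte))

d, n = 6, 20
Xtr, ytr = xor_data(n, d)
W, a = train_two_layer_relu(Xtr, ytr)
print(f"d={d}, n={n}, d^2/n={d * d / n:.2f}, test error={test_error(W, a, d):.3f}")
```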
[1] Ji, Z., & Telgarsky, M. (2019). Polylogarithmic width suffices for gradient descent to achieve arbitrarily small test error with shallow relu networks.
Thank you for the additional experiments. I have increased my score from 3 to 4.
Dear Reviewer usBV,
Thank you for recognizing the contribution of our paper. We really appreciate your constructive suggestions and comments.
Best regards, Authors
The authors establish a convergence rate for gradient descent applied to deep neural networks with ReLU activations that is polynomial in the depth $L$ of the network.
Strengths and Weaknesses
Strengths
- The paper is well written and easy to follow.
- The authors present their proof technique and the relation to prior work in a very clear manner.
- The result is significant and improves the exponential bound given in previous work.
Weaknesses
- My main concern is about the correctness of Lemma 16, which is a central component of the paper. While Lemma 15 is proved only for the data points $x_i$ in the training set, Lemma 16 applies it to points in the covering set $D$, which includes points outside the training set. Why is this use of Lemma 15 justified? Could the authors elaborate on this point? If I'm mistaken, I will revise my score accordingly. Another reason this transition may not be valid is that the network's weights depend on the training set. Therefore, a bound that holds for the training set may not necessarily apply to points outside it.
- Additionally, I am uncertain about the correctness of the proof of Lemma 15. In the inductive argument over the layers, it is crucial that the constant hidden in the big-$O$ notation remains uniform across all induction steps. Otherwise, for example, if the constant grows multiplicatively at each step, the resulting bound would be exponential in $L$. Could you clarify why this issue does not arise in the proof of Lemma 15?
- A minor concern: The authors obtain a bound only in the case where $m \geq d$.
Questions
Why is it necessary to assume that $m \geq d$?
Limitations
Yes.
Justification for Final Rating
The paper is clear and well written. The result is significant and improves results presented in previous work. For these reasons, I consider the paper to be above the acceptance threshold.
Formatting Issues
N.A
Thanks for acknowledging the strengths of our work. Our point-by-point response to your comments follows below.
W1. My main concern is about the correctness of Lemma 16, which is a central component of the paper. While Lemma 15 is proved only for the data points $x_i$ in the training set, Lemma 16 applies it to points in the covering set $D$, which includes points outside the training set. Why is this use of Lemma 15 justified? Another reason this transition may not be valid is that the network's weights depend on the training set. Therefore, a bound that holds for the training set may not necessarily apply to points outside it.
We appreciate the opportunity to clarify the connection between Lemma 15 and Lemma 16. We claim that all of our technical lemmas (including Lemma 15) hold for any finite set. In particular, Lemma 15 should be stated as follows:
- Lemma 15' (general version)
Given a finite set of points on the unit sphere $\mathbb{S}^{d-1}$, suppose the width $m$ is sufficiently large relative to the size of the set. Then, with probability at least $1-\delta$ over the randomness of initialization, the bound stated in Lemma 15 holds for every point in the set.
As the set varies, we obtain concentration inequalities on different sets.
- Applying Lemma 15' to the training set $\{x_i\}_{i=1}^{n}$
We obtain uniform control over the $n$ training points, which is exactly the original Lemma 15: provided $m$ is sufficiently large, with probability at least $1-\delta$ over the randomness of initialization, the bound holds for every training point; we refer to this statement as (2).
- Applying Lemma 15' to the covering set
Let $D$ be a $c$-cover of the unit sphere $\mathbb{S}^{d-1}$, where $c$ is a universal constant. Then, provided $m$ is sufficiently large relative to $|D|$, with probability at least $1-\delta$ over the randomness of initialization, the bound holds for every point of $D$; we refer to this statement as (3).
- Uniform control via covering
For any $x$ on the unit sphere, there exists $\hat{x} \in D$ such that $\|x - \hat{x}\| \le c$. The quantities controlled in Lemma 15' change only mildly when moving from $\hat{x}$ to $x$, so by the triangle inequality the bound in (3) extends, up to constants, to every point on the sphere (a schematic inequality is given at the end of this response).
- On training set dependence
- We use (2) to analyze the optimization and estimate the Rademacher complexity. This depends on the training set.
- We use (3) to prove Lemma 16, which depends on the covering set and is independent of the training set. Lemma 16 demonstrates the Lipschitzness of deep ReLU networks near initialization, which is independent of the optimization trajectory. The weights here are not updated via gradient descent; they remain within a neighborhood of the initialization. Lemma 16 is mainly used to derive a bound required in Lemma 1. It is crucial for obtaining a generalization bound that is only polynomial in $L$ (see Appendix C).
- By a union bound, (2) and (3) hold simultaneously with probability at least $1-2\delta$. Both of them are important, and they contribute to our theory from different aspects.
We will revise Lemma 15 accordingly and state (2) as a corollary in the next version.
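Schematically (in our notation, not the paper's), the covering step combines (3) with a Lipschitz-type estimate: if the controlled quantity $g$ changes by at most $C\,\|x-\hat{x}\|$ when moving from a cover point $\hat{x}$ to a nearby $x$, and (3) gives $g(\hat{x}) \le B$ for every $\hat{x} \in D$, then for any $x$ on the unit sphere,

```latex
\[
g(x) \;\le\; g(\hat{x}) \;+\; \bigl|\,g(x) - g(\hat{x})\,\bigr| \;\le\; B \;+\; C\,c ,
\]
```

so the high-probability bound on the finite cover extends, up to constants, to every point on the sphere.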
Q. Why is it necessary to assume that $m \geq d$?
The assumption $m \geq d$ is introduced for notational simplicity and does not affect the generality of our results. In particular, the bounds can be stated in terms of $\max\{m, d\}$: every occurrence of $m$ in our results can be replaced by $\max\{m, d\}$ once the assumption $m \geq d$ is removed.
W2. Additionally, I am uncertain about the correctness of the proof of Lemma 15. In the inductive argument over the layers, it is crucial that the constant hidden in the big-$O$ notation remains uniform across all induction steps. Otherwise, for example, if the constant grows multiplicatively at each step, the resulting bound would be exponential in $L$. Could you clarify why this issue does not arise in the proof of Lemma 15?
In our analysis, the constants hidden in the big-$O$ notation are uniform across all layers, i.e., they do not depend on the layer index. This is explicitly ensured by Lemma 12, which provides uniform bounds for all layers under our overparameterization assumptions.
- Specifically, in eq. (32) of our paper, the bound holds uniformly for all layers. In Lemma 15, we do not apply a direct recursive argument where each layer builds on the previous one with new constants. Instead, we rely on Lemma 12 to establish uniform control at each layer to finish the induction, assuming the preconditions hold (which are guaranteed by a sufficiently large width $m$).
- More concretely, the width requirement in Lemma 15, the universal constant in eq. (32), and the constants used within Lemma 12 are all fixed in advance: they only need to satisfy a finite set of inequalities that do not depend on the layer index. Hence, all constants are fixed prior to the induction and do not accumulate as the depth increases. This ensures that the final bounds in Lemma 15 remain polynomial in $L$, as claimed (a toy illustration follows after this list).
- Similar analyses have also appeared in other works [1,2], which adopt uniform constants to control inductive arguments across layers.
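As a toy illustration of this point (our example, not the paper's actual recursion): a per-layer factor fixed in advance keeps the induction bounded, whereas a fresh multiplicative constant at every step would give an exponential bound.

```latex
\[
a_\ell \le \Bigl(1 + \tfrac{c}{L}\Bigr) a_{\ell-1}
\;\Longrightarrow\;
a_L \le \Bigl(1 + \tfrac{c}{L}\Bigr)^{L} a_0 \le e^{c}\, a_0 ,
\qquad\text{whereas}\qquad
a_\ell \le 2\, a_{\ell-1}
\;\Longrightarrow\;
a_L \le 2^{L}\, a_0 .
\]
```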
W3. A minor concern: The authors obtain a bound only in the case where $m \geq d$.
- We study the generalization performance of deep ReLU networks in the overparameterized regime, which aligns with common practical settings. In this case, the landscape is non-smooth and nonconvex, though the models can generalize well. Our work improves the understanding of gradient descent methods for deep neural networks. This assumption has been widely adopted in the existing literature [1,2].
- Moreover, the assumption that the network has sufficiently many neurons is often necessary to ensure expressivity in high dimensions. [3] shows that roughly $d^k$ neurons are required for the NTK to learn a degree-$k$ polynomial. In the feature learning regime, even for simple data distributions (e.g., Gaussian or Boolean), [4,5] demonstrate that neural networks need a number of neurons polynomial in the dimension to learn single-index or multi-index models, where the exponent depends on the information exponent or leap complexity.
- We agree that it is an interesting question to study the performance of deep networks in the regime $m < d$. However, doing so might require additional structural assumptions, such as sparsity in the data distribution, low-dimensional manifold structure, or specific architectural biases. We view this as a promising direction for future research.
[1] Zou et al. Stochastic gradient descent optimizes over-parameterized deep ReLU networks.
[2] Chen et al. How much over-parameterization is sufficient to learn deep relu networks?
[3] Ghorbani et al. When Do Neural Networks Outperform Kernel Methods?
[4] Bietti et al. Learning single-index models with shallow neural networks.
[5] Abbe et al. SGD learning on neural networks: leap complexity and saddle-to-saddle dynamics.
Thank you for the clarification. The concerns I raised in my review have been addressed.
Following the rebuttal, I do have one additional question. The authors mention that “the landscape is non-smooth and non-convex.” However, the paper relies on Lemma 1 from Srebro (2010), which was originally proved under the assumption that the predictors are smooth. Could you please clarify why it is valid to apply this lemma in the non-smooth setting considered in the paper?
Thank you for your reply. We are glad to hear that the concerns raised in your previous review have been addressed. We also appreciate your additional comments regarding the smoothness.
To clarify, Lemma 1 from Srebro (2010) only requires the loss function to be smooth; it does not impose any smoothness assumptions on the model itself. The key idea in Srebro (2010) is that the Lipschitz constant of the loss can be bounded by the loss value itself—known as the self-bounding property of smooth losses. This Lipschitz constant plays a crucial role in relating the covering numbers of the loss function class to those of the model class.
By leveraging this self-bounding property, we can bound the Lipschitz constant in terms of the training errors, which then leads to the formulation of Lemma 1 from Srebro (2010). This explains why the training error appears in the bound presented in Srebro (2010). Importantly, this approach depends solely on the smoothness of the loss function, regardless of whether the model itself (e.g., a neural network) is smooth or not.
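For completeness, here is the standard one-line derivation of the self-bounding property for a nonnegative $H$-smooth loss $\ell$ (the constant may differ slightly from the form stated in Srebro (2010)): applying smoothness at the point $t - \ell'(t)/H$ together with nonnegativity gives

```latex
\[
0 \;\le\; \ell\!\Bigl(t - \tfrac{\ell'(t)}{H}\Bigr)
\;\le\; \ell(t) - \frac{\ell'(t)^2}{H} + \frac{H}{2}\cdot\frac{\ell'(t)^2}{H^2}
\;=\; \ell(t) - \frac{\ell'(t)^2}{2H},
\qquad\text{hence}\qquad
|\ell'(t)| \;\le\; \sqrt{2H\,\ell(t)} .
\]
```

That is, the local Lipschitz constant of the loss is controlled by the loss value itself, with no smoothness required from the network.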
In our work, we consider a smooth logistic loss. Therefore, Lemma 1 from Srebro (2010) remains applicable in our case. We will clarify this point in the revised version of the paper to avoid this confusion.
Thanks for the clarification regarding the smoothness assumption.
After carefully reading the authors’ responses to all reviewers, I find the improvements in the dependencies on and to be important contributions. I have therefore raised my score to support the acceptance of this work.
Dear Reviewer iHRn,
Thank you for your constructive suggestions and for recognising our contribution. We truly appreciate your efforts.
Best regards, Authors
This paper investigates the generalization error of $L$-layer ReLU neural networks trained in the NTK regime. For data that is separable with margin $\gamma$ by the neural tangent kernel, the authors provide an excess risk bound of order $\tilde{O}(1/(\gamma^2 n))$, which is faster than the well-known suboptimal rate and depends only polynomially on the network depth $L$. They also provide a global convergence guarantee for the network trained by gradient descent using the NTK framework. In the analysis, they carefully evaluate the dependence on $L$ in the convergence analysis and the Rademacher complexity bound, which enables them to obtain a sharper bound.
Strengths and Weaknesses
Strength
- As the authors mention in the paper, much of the existing literature provides generalization error bounds for deep neural networks that depend exponentially on the network depth. This work mitigates this dependence to a polynomial one, which is a significant refinement.
- This paper is easy to follow, with detailed explanations of theorems and concise proof overviews.
Weakness
- The primary concern is that this paper mainly focuses on NTK-separable data, which imposes a substantial restriction on the data structure. It appears that several refinements, such as the polynomial dependence on the network depth, do not require the separability assumption. Therefore, it would be more plausible to compare the results obtained in this work with other NTK-based results that do not focus on the separability condition.
Questions
- How is the initialization scheme defined in Assumption 1 crucial for obtaining the sharper bound? For example, while [Chen et al., 2021] considers the standard Gaussian initialization for the weights, can the same bound be derived for such settings?
- As the authors also mention as future work, extending the results to stochastic optimization, which is covered by [Chen et al., 2021], will be a significant topic. What is the difficulty of such an extension?
- Another question is about the extension to other models. For example, how is the ReLU activation essential for obtaining the results, and is it possible to extend the results to other activations? Moreover, I am curious about the applicability of this approach to different models, such as ResNet and CNN.
Limitations
The authors properly address the limitations.
Justification for Final Rating
I would like to maintain my original score.
Formatting Issues
N/A
Thanks for acknowledging the strengths of our work. Our point-by-point response to your comments follows below.
W1. This paper mainly focuses on NTK-separable data, which imposes a substantial restriction on the data structure.
Our main contribution is that we are the first to prove that deep ReLU networks trained by gradient descent can achieve the optimal rate ($\tilde{O}(1/(\gamma^2 n))$). Existing works either obtain bounds with a suboptimal rate or focus on smooth activations. Our results hold in general settings and are not limited to NTK-separable data.
- Specifically, in Theorem 2, we provide generalization bounds in the general case, without requiring NTK separability. Hence, as long as there exists a reference model near initialization with small error, the performance of the deep network trained with gradient descent is guaranteed.
- The NTK-separability assumption is only used to analyze this reference error in a concrete case and to better illustrate our main result. The assumption allows us to derive explicit estimates of the excess risk, rather than being a necessary condition for our general theory. We believe our general generalization bound can be applied to various settings and demonstrates the power of deep learning.
- The NTK separability assumption has been widely used in the literature [1,2,3,4]. It has been theoretically justified in a range of settings. For instance, [2] analyzes the margin through its dual form in different settings, including the linearly separable case and the noisy 2-XOR distribution.
W2. It would be more plausible to compare the results obtained in this work with other NTK-based results that do not focus on the separability condition.
Many NTK-based results assume that the NTK Gram matrix has a positive smallest eigenvalue. [5] points out that NTK-separable data is a milder assumption. In general, one needs to estimate the reference error, or to construct a predictor that achieves low training error under a structural assumption on the data. For example, this is closely related to the Neural Tangent Random Feature (NTRF) model, which has been well studied in the appealing work [3].
Q1. How is the initialization scheme defined in Assumption 1 crucial for obtaining the sharper bound? For example, while [Chen et al., 2021] considers the standard Gaussian initialization for the weights, can the same bound be derived for such settings?
The symmetric initialization assumed in Assumption 1 is a commonly used technique in the theoretical deep learning literature [6], primarily to simplify the analysis.
- Specifically, this initialization ensures that the network output is zero at the beginning of training, as illustrated in Lemma 7 (see the small illustration after this list). This significantly reduces the complexity of bounding the error dynamics.
- While our current proof uses this symmetric setup, it is not essential. Under standard Gaussian initialization, these quantities can still be shown to be concentrated near zero using standard concentration tools. For example, [3] shows that they can be bounded by lower-order terms, which are negligible in our final bounds.
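As a small illustration of the symmetric construction (one common variant, not necessarily the paper's exact Assumption 1): neurons are duplicated with opposite output signs, so the two halves cancel and the network output is exactly zero at initialization.

```python
# Symmetric initialization of a two-layer ReLU network: f(x) = 0 at initialization.
import numpy as np

rng = np.random.default_rng(0)
d, m_half = 20, 500

W_half = rng.standard_normal((m_half, d))
W0 = np.vstack([W_half, W_half])                        # duplicated first-layer weights
a0 = np.concatenate([np.ones(m_half), -np.ones(m_half)]) / np.sqrt(2 * m_half)  # opposite signs

x = rng.standard_normal(d)
x /= np.linalg.norm(x)
f0 = np.maximum(W0 @ x, 0.0) @ a0                       # the two halves cancel exactly
print(f"network output at initialization: {f0:.2e}")
```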
Q2. As the authors also mentioned as a future work, extending the results to stochastic optimization, which is covered by [Chen et al., 2021], will be a significant topic. What is the difficulty of such an extension?
- While [3] has studied the online SGD setting, we believe that our theoretical framework can be adapted to multi-pass SGD. However, a major difficulty lies in carefully controlling the trade-off between the optimization error and the generalization gap, for example, how the number of updates should grow with the dataset size $n$.
- Moreover, analyzing variants of SGD such as noisy SGD, decentralized SGD, or SGD with momentum introduces additional challenges. These algorithms involve extra sources of randomness, communication constraints, or memory terms, which make it harder to track the evolution of the parameters.
Q3. How is the ReLU activation essential for obtaining the results, and is it possible to extend the results to other activations?
- ReLU is widely used in practice for its efficient signal propagation and sparsity. Theoretically, it enjoys a variance-preserving property at initialization: for $z \sim \mathcal{N}(0, \nu^2)$, $\mathbb{E}[\sigma(z)^2] = \nu^2/2$, so a properly scaled Gaussian initialization keeps the hidden-layer norms stable across depth (see the numerical check after this list). This property makes it possible to control the hidden layers and activation patterns rigorously (Appendix A).
- For other activations, the core techniques in our paper can still be applied, but additional assumptions may be needed. For example, smoothness, convexity, Lipschitzness or other conditions to ensure signal propagation remains well-behaved.
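A quick numerical check of this variance-preserving property (with weights drawn as $\mathcal{N}(0, 2/m)$, a common convention in deep ReLU analyses; the paper's exact initialization may differ, and the width and depth below are illustrative): the hidden-layer norms stay close to 1 across depth.

```python
# Variance preservation of ReLU layers with N(0, 2/m) weights (illustrative check).
import numpy as np

rng = np.random.default_rng(0)
d, m, L = 50, 4000, 10

x = rng.standard_normal(d)
x /= np.linalg.norm(x)                                   # unit-norm input
h = x
for l in range(1, L + 1):
    W = rng.standard_normal((m, h.shape[0])) * np.sqrt(2.0 / m)
    h = np.maximum(W @ h, 0.0)                           # ReLU layer
    print(f"layer {l:2d}: ||h|| = {np.linalg.norm(h):.3f}")   # stays close to 1
```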
Q4. The applicability of this approach to different models, such as ResNet and CNN.
Thanks for raising this interesting point. We agree that the generalization analysis in different models is interesting for understanding deep ReLU networks. The core techniques such as controlling the Lipschitz constant and Rademacher complexity can be extended to other architectures. For example, [7] studies the NTK Gram matrix of ResNet and CNN. Building on their insights, it is interesting to explore how our framework can be used in these models, deriving architecture-specific generalization bounds.
[1] Ji et al. Polylogarithmic width suffices for gradient descent to achieve arbitrarily small test error with shallow relu networks.
[2] Taheri et al. Generalization and stability of interpolating neural networks with minimal width.
[3] Chen et al. How much over-parameterization is sufficient to learn deep relu networks?
[4] Taheri et al. Sharper guarantees for learning neural network classifiers with gradient methods.
[5] Nitanda et al. Gradient descent can learn less over-parameterized two-layer neural networks on classification problems.
[6] Xu et al. Overparametrized multi-layer neural networks: Uniform concentration of neural tangent kernel and convergence of stochastic gradient descent.
[7] Du et al. Gradient descent finds global minima of deep neural networks.
I sincerely appreciate the authors' response. The responses adequately address my concern, and then I would like to maintain my current score.
This paper proves optimal generalization rates for deep networks in the NTK regime trained on a classification task. A new method is introduced for handling the regularity of the outputs with respect to small changes of the weights around initialization, thereby replacing an exponential dependence on depth with a polynomial one. The optimal rates of convergence are then proven assuming that the classes are separable with a certain margin with respect to the NTK geometry. The paper mixes approaches from NTK analysis and uniform generalization bounds.
We are happy to accept this paper as a poster at NeurIPS, in line with the recommendations of the reviewers.