PaperHub
Rating: 6.3/10
Decision: Rejected (3 reviewers)
Scores: 5, 6, 8 (min 5, max 8, std 1.2)
Confidence: 4.0 · Correctness: 2.7 · Contribution: 2.7 · Presentation: 2.7
ICLR 2025

DAS-GNN: Degree-Aware Spiking Graph Neural Networks for Graph Classification

OpenReview · PDF
Submitted: 2024-09-26 · Updated: 2025-02-05
TL;DR

DAS-GNN applies spiking neural networks (SNNs) to graph classification using degree-aware group-adaptive neurons and an adaptive threshold mechanism, leading to significant performance improvements over existing methods.

Abstract

Keywords
Spiking Neural Network, Graph Neural Network, Graph Classification

Reviews and Discussion

Review
Rating: 5

This paper identifies the starvation problem of spiking neurons, which is widespread in existing spiking graph neural networks, and observes that the spike frequency of neurons is related to the connectivity of the graph and to entries in the node features. Based on these observations, a novel spiking graph representation learning method called DAS-GNN is proposed. DAS-GNN is composed of two modules, degree-aware group-adaptive neurons and a learnable inference base threshold, which respectively regulate the threshold voltages of neurons by group and reduce the influence of the base threshold voltages. Extensive experiments demonstrate that DAS-GNN outperforms its Artificial Neural Network (ANN) counterparts on several graph classification datasets. These findings highlight the potential of DAS-GNN for energy-efficient and accurate graph-based systems.

Strengths

  1. This study offers the interesting observation that the connectivity of the graph affects the receiving and firing patterns of spiking neurons.

  2. The authors carefully design two components, degree-aware group-adaptive neurons and the learnable inference base threshold, to effectively balance predictive performance and energy consumption.

  3. Extensive experimental evaluations are provided, which are informative.

Weaknesses

  1. Some descriptions of neurons and nodes are unclear. In Section 3, neurons are associated with the feature dimension, but in Section 4.2 the paper claims that neurons can be grouped by their degrees.

  2. The definition of the "degree group" is vague, and the paper lacks explanation of the groups $N_g$ and the grouping strategies.

  3. One of my main concerns is the technical quality. In the experiments, the baseline PGNN is a spiking neuron variant, referred to as the Parametric Leaky Integrate-and-Fire (PLIF) model in a previous study [1]. However, the other baselines are spiking graph neural networks, which can themselves adopt PLIF models as their spiking neurons. The categorization of baselines is therefore confusing. Besides, the paper lacks details on the settings of these baselines.

Questions

  1. For those graphs following the power-law distribution, simply dividing nodes by their degrees may result in an uneven number of nodes between different groups. How does the grouping strategy proposed in the paper alleviate this issue?

  2. In the experiment, the new and relevant spiking graph neural networks [2-3] and spiking neuron variants [4] can be introduced separately as baselines to demonstrate the model's effectiveness. Additionally, the authors should provide architectural details of baselines built from spiking neuron variants.

  3. As mentioned in the paper, the real-world graphs tend to exhibit an extremely skewed distribution of degrees. The issue also remains in many large-scale graphs. Providing extra experiment results on node classification tasks would enhance the persuasiveness of the paper.

  4. For spiking neural networks, there is a trade-off between the firing rate and the energy consumption. In this paper, the energy consumption of DAS-GNN is related not only to the firing rate but also to the grouping strategy. It would be clearer if the authors could provide the energy consumption as a formula and further discuss the impact of the aforementioned settings on it.

  5. As depicted in Figure 5 and Appendix H, PGNN also shows highly diversified spike distributions on the MUTAG and ENZYMES datasets, but MUTAG-PGNN-GCN and ENZYMES-PGNN-GAT show obvious performance gaps compared to other baselines. It would be interesting if the authors could provide further discussion of these unexpected situations.

[1] Wei Fang et al. Incorporating learnable membrane time constant to enhance learning of spiking neural networks. In ICCV, 2021.

[2] Jintang Li et al. A Graph is Worth 1-bit Spikes: When Graph Contrastive Learning Meets Spiking Neural Networks. In ICLR, 2024.

[3] Mingkun Xu et al. Exploiting spiking dynamics with spatial-temporal feature normalization in graph learning. In IJCAI, 2021.

[4] Xingting Yao et al. GLIF: A Unified Gated Leaky Integrate-and-Fire Neuron for Spiking Neural Networks. In NeurIPS, 2022.

Comment

Q3: As mentioned in the paper, the real-world graphs tend to exhibit an extremely skewed distribution of degrees. The issue also remains in many large-scale graphs. Providing extra experiment results on node classification tasks would enhance the persuasiveness of the paper.

→ We provide additional experimental results on large-scale graphs for both graph classification and node classification tasks. For graph classification we used the REDDIT-BINARY and COLLAB datasets, which are large enough to exhibit a skewed degree distribution. We show that DAS-GNN still outperforms the baselines when applied to large-scale graph classification.

In addition, we experimented with two node classification datasets: ogbn-arxiv and ogbn-mag. DAS-GNN still performs the best, but the gap is smaller compared to the graph classification tasks. We believe this is because node classification depends more on per-node properties than on graph structure, benefiting less from the proposed techniques.

| Method | REDDIT-BINARY | COLLAB | OGBN-ARXIV | OGBN-MAG |
| --- | --- | --- | --- | --- |
| **GCN** | | | | |
| ANN | 90.45 | 80.60 | 71.55 | 35.10 |
| SpikingGNN | 81.10 | 67.24 | 53.52 | 27.78 |
| SpikeNet | 82.60 | 68.78 | 60.18 | 28.82 |
| P-GNN | 82.35 | 68.46 | 58.45 | 27.52 |
| GGNN | 83.10 | 67.92 | 58.23 | 28.75 |
| Ours | 83.50 | 83.52 | 60.84 | 29.73 |
| Δ (Ours − best SNN) | +0.40 | +14.74 | +0.66 | +0.91 |
| **GAT** | | | | |
| ANN | 76.30 | 55.60 | 71.06 | 29.60 |
| SpikingGNN | 50.00 | 52.00 | 52.21 | 14.05 |
| SpikeNet | 50.05 | 52.00 | 53.67 | 17.30 |
| P-GNN | 52.05 | 52.04 | 53.77 | 16.97 |
| GGNN | 53.05 | 52.00 | 53.26 | 16.54 |
| Ours | 76.25 | 75.58 | 53.94 | 17.37 |
| Δ (Ours − best SNN) | +23.20 | +23.54 | +0.17 | +0.07 |
| **GIN** | | | | |
| ANN | 80.75 | 75.11 | 61.01 | 27.55 |
| SpikingGNN | 79.75 | 52.04 | 56.72 | 23.31 |
| SpikeNet | 79.70 | 53.66 | 55.79 | 23.37 |
| P-GNN | 83.30 | 53.02 | 50.88 | 21.64 |
| GGNN | 83.70 | 58.94 | 54.88 | 22.39 |
| Ours | 84.20 | 70.38 | 59.91 | 23.98 |
| Δ (Ours − best SNN) | +0.50 | +11.44 | +3.19 | +0.61 |

Q4: DAS-GNN's energy consumption is related not only to the firing rate but also to the grouping strategy. It would be clearer if the authors could provide the energy consumption as a formula and further discuss the impact of the aforementioned settings on it.

→ Yes, it is true that we need a small additional cost for updating the threshold per group. The overhead of the adaptive threshold operation is $2 \cdot E_{AC} \times \text{num\_groups} \times \text{hidden\_dimension}$, accounting for the two additional multiplications related to $\gamma$ per group in Eq. 10. We modified the energy consumption table to include the operations needed for updating the thresholds, which demonstrates the small overhead. Please also find it in Section 5.6 and Appendix J of the revision.

| Method (GCN) | MUTAG | PROTEINS | ENZYMES | NCI1 | IMDB-BINARY |
| --- | --- | --- | --- | --- | --- |
| ANN | 0.53 mJ | 6.92 mJ | 3.29 mJ | 21.41 mJ | 3.16 mJ |
| SpikingGNN | 0.02 mJ | 1.28 mJ | 0.36 mJ | 3.92 mJ | 0.10 mJ |
| SpikeNet | 0.06 mJ | 0.13 mJ | 0.25 mJ | 2.64 mJ | 0.35 mJ |
| PGNN | 0.10 mJ | 0.79 mJ | 0.84 mJ | 6.54 mJ | 0.28 mJ |
| GGNN | 0.08 mJ | 0.91 mJ | 0.48 mJ | 3.37 mJ | 0.21 mJ |
| +Additional cost | 0.02 μJ | 0.08 μJ | 0.05 μJ | 0.02 μJ | 0.30 μJ |
| DAS-GNN | 0.10 mJ | 0.94 mJ | 0.52 mJ | 5.28 mJ | 0.70 mJ |

Q5: PGNN also shows highly diversified spike distributions on the MUTAG and ENZYMES datasets, but MUTAG-PGNN-GCN and ENZYMES-PGNN-GAT show obvious performance gaps.

→ We believe this is related to a trend we found where the spike rate diversity of the ultimate layer has an especially high correlation with performance. In both PGNN-MUTAG-GCN and PGNN-ENZYMES-GAT, low diversity is observed in the ultimate layer (layer 3 for GCN, layer 2 for GAT). Although the exact reason requires further investigation, we believe this behavior is reasonable because the ultimate layer is directly used for the final output. We have discussed this in Section 5.5 and Figures 27-28 of the revision.
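Since "spike rate diversity" carries this explanation, a toy illustration of one plausible way to quantify it is sketched below: the standard deviation of per-neuron firing rates within a layer. This metric and the function name are our assumptions for illustration, not the paper's actual definition.

```python
import numpy as np

def spike_rate_diversity(spikes: np.ndarray) -> float:
    """Std of per-neuron firing rates in one layer (hypothetical metric).

    `spikes` has shape (T, N): binary spike trains over T timesteps for
    N neurons. A layer where all neurons fire at about the same rate
    (e.g., all starved or all saturated) scores near zero.
    """
    rates = spikes.mean(axis=0)  # per-neuron firing rate in [0, 1]
    return float(rates.std())

rng = np.random.default_rng(0)
uniform = rng.random((16, 100)) < 0.5  # every neuron fires ~50% of steps
skewed = rng.random((16, 100)) < np.linspace(0.05, 0.95, 100)  # varied rates
print(spike_rate_diversity(uniform))  # small (low diversity)
print(spike_rate_diversity(skewed))   # larger (high diversity)
```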

Comment

I would like to thank the authors for their response. The above response has resolved some of my concerns, and I am willing to raise my rating. However, I believe the current method still has several limitations:

  1. As mentioned in the response, the current grouping strategy results in uneven numbers of nodes across groups. Allowing low-degree nodes to have higher firing rates therefore seems to be a double-edged sword: in large-scale graphs, the many low-degree nodes generating numerous spikes would undoubtedly increase the energy consumption of the model. A more in-depth exploration of the grouping strategy is necessary, and the paper fails to delve into this intriguing issue.

  2. For the empirical experiments, many advanced spiking neuron variants have been proposed, including learnable threshold voltages and temporal-based encoding mechanisms. Baselines like LIF and Adaptive LIF are quite simple and outdated, making it difficult to determine whether the proposed method significantly improves spike diversity compared to more advanced neuron variants.

  3. The typical use cases provided in the paper do not involve node features, which limits the scalability of the proposed neurons.

Comment

Thank you for taking the time to share your thoughts. We genuinely appreciate your comments and feedback, and we're confident your insights will help us further enhance our work.

If you have any other questions or comments, please don’t hesitate to reach out to us at any time.

Comment

We appreciate the reviewer acknowledging the strengths of our work and providing detailed feedback.

W1: Some descriptions of neurons and nodes are unclear. In Section 3, neurons are related to the feature dimension. But, in Section 4.2, the paper claims that neurons can be grouped by their degrees.

→ The neurons belonging to vertices of the same degree and the same feature position are grouped together. This yields groups shaped as vertically long bars, as newly depicted with green boxes in Fig. 1(b) of the revised paper. We apologize for the confusion and have moved the description of feature splitting earlier in the revision.

W2: The definition of the “degree group” is vague and the paper lacks more explanations about the groups N_g or grouping strategies.

→ A degree group comprises multiple degrees that can be found within an input graph. It is a concept we introduced in Appendix E to represent the case where we want fewer groups than the number of unique degrees. We provided the definition in the revision. In addition, $N_g$ denotes the set of neurons within group $g$; we also clarified this definition in the revision.
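To make this concrete, here is a minimal sketch of one way degree groups could be formed. The quantile-based merging rule and the function name are our assumptions; the paper's Appendix E may define the grouping differently.

```python
import numpy as np

def assign_degree_groups(degrees: np.ndarray, num_groups: int) -> np.ndarray:
    """Bucket nodes into at most `num_groups` degree groups (hypothetical sketch).

    Nodes with similar degrees share a group, so the neurons attached to
    them can share one adaptive threshold. When there are fewer unique
    degrees than `num_groups`, each unique degree gets its own group.
    """
    unique_degrees = np.unique(degrees)  # sorted ascending
    if len(unique_degrees) <= num_groups:
        lookup = {d: g for g, d in enumerate(unique_degrees)}
        return np.array([lookup[d] for d in degrees])
    # Otherwise merge consecutive degrees into quantile-based buckets.
    edges = np.quantile(unique_degrees, np.linspace(0, 1, num_groups + 1))
    return np.clip(np.searchsorted(edges, degrees, side="right") - 1,
                   0, num_groups - 1)

degrees = np.array([1, 1, 2, 3, 3, 8, 50])  # skewed toy degree sequence
print(assign_degree_groups(degrees, num_groups=3))  # -> [0 0 0 1 1 2 2]
```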

W3: Confusion of categorizing baselines and lack of details on the settings of these baselines.

→ All baselines are spiking neuron variants that are compared against our method at the neuron level. To avoid confusion, we replaced their names in Table 1 with the type of neuron (e.g., vanilla LIF neuron for SpikingGNN and a neuron augmented with adaptive techniques for SpikeNet). We apologize for the concern caused by the ambiguous terminology.

We added architectural details on how we configured the baselines, such as the number of layers and the hidden dimension size, to the experimental settings in Appendix B and Section 5.1.

Q1: For those graphs following the power-law distribution, simply dividing nodes by their degrees may result in an uneven number of nodes between different groups. How does the grouping strategy proposed in the paper alleviate this issue?

→ Yes, DAS-GNN could result in an uneven number of nodes between groups. However, we believe this is not very problematic. Although the size of each group can be large, the neurons within a group still exhibit similar firing rates. Since the purpose of grouping is to share a threshold among neurons with similar firing rates, we found empirically that this does not cause a problem. Please also see Table 7 for results on larger graphs.

Q2: In the experiment, the new and relevant spiking graph neural networks and spiking neuron variants can be introduced separately as baselines to demonstrate the model’s effectiveness. Additionally, the authors should provide architectural details of baselines built from spiking neuron variants.

→ We added [4] to our main table (Table 1) as 'GLIF'. The overall tendency of the results did not change after adding this new baseline.

Additionally, we conducted experiments with the three techniques proposed in [2-3], with and without the techniques from DAS-GNN, as shown in the table below. The results show that DAS-GNN is orthogonal to these techniques and can further improve performance in all tested cases. The results can also be found in Appendix K.

| Method | MUTAG (Orig) | MUTAG (DASGNN) | PROTEINS (Orig) | PROTEINS (DASGNN) | ENZYMES (Orig) | ENZYMES (DASGNN) | NCI1 (Orig) | NCI1 (DASGNN) | IMDB-BINARY (Orig) | IMDB-BINARY (DASGNN) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SpikeGCL | 91.49 | 97.34 | 77.87 | 79.15 | 28.17 | 32.17 | 65.69 | 67.76 | 71.70 | 73.30 |
| GC-SNN | 88.33 | 93.13 | 72.33 | 73.32 | 43.50 | 53.00 | 63.36 | 65.38 | 69.90 | 77.20 |
| GA-SNN | 66.49 | 92.05 | 59.57 | 65.85 | 33.17 | 51.83 | 52.21 | 66.13 | 50.00 | 78.60 |

In addition, we added architectural details on how we configured the baselines in Appendix B.

Review
Rating: 6

This paper proposes degree-aware spiking graph neural networks with adaptive thresholds over groups of neurons for graph classification. The paper first attributes the poor performance to neurons under starvation caused by the graph structure. It then proposes adaptive thresholds among neurons partitioned by degree, as well as a learnable initial threshold and decay rate to reduce sensitivity. Experiments on several datasets show the superior performance of the proposed method and its potentially low energy costs.

Strengths

  1. This paper identifies the starvation problem of spiking graph neural networks that causes performance drop when adapting them to graph classification.

  2. This paper proposes a novel degree-aware group-adaptive technique to overcome the problem.

  3. Experiments show superior performance on several datasets, some outperforming ANNs.

Weaknesses

  1. This paper does not discuss the influence of the proposed method on deployment on potential neuromorphic hardware, although SNNs mainly target such hardware to obtain energy efficiency. The proposed degree-aware group-adaptive neurons require the thresholds to depend on other neurons. Is this plausible for potential neuromorphic hardware? Or will it introduce much more computation for communication between neurons?

  2. It is not clear if the energy consumption analysis takes the costs of this adaptive threshold operation (and potential communications between neurons) into account.

Questions

Some recent works also study SNNs for link prediction tasks in graphs beyond node-level classification [1], aiming to better leverage the temporal spike-timing property of SNNs beyond just their spiking nature. For the proposed method, is only the spike rate considered? Is it possible to better leverage these properties of SNNs beyond rate, which have been considered important features since the origin of SNNs [2]? These could be discussed.

[1] Temporal Spiking Neural Networks with Synaptic Delay for Graph Reasoning. ICML 2024.

[2] Networks of spiking neurons: the third generation of neural network models. Neural Networks, 1997.

Comment

We thank the reviewer for acknowledging the novelty of the work and providing constructive feedback. We have addressed the comments below.

W1: The proposed degree-aware group-adaptive neurons require the thresholds to depend on other neurons. Is it plausible for potential neuromorphic hardware? Or will it introduce much more computation for communications between neurons?

→ Yes. Our method would indeed be plausible for potential neuromorphic hardware [1-3] without introducing significant computational overhead for communication between neurons. Neuromorphic processors are typically designed with neuron cores as fundamental units, where each core handles neuron states (e.g., membrane potentials) stored in local memory (often an SRAM), updates them based on synaptic inputs, and generates spikes upon reaching a threshold. To implement DAS-GNN, each neuron group can be placed together in a single core. The sum of firing rates (Eq. 9) can then reside in the local memory and be updated by counting spikes from each neuron of the same group. Since the threshold adjustment happens locally at the core level, it does not need extensive communication between neurons (a toy sketch follows the references below). Please find this discussion in Section 5.6 of the revision.

[1] Merolla, Paul A., et al. "A million spiking-neuron integrated circuit with a scalable communication network and interface." Science 345.6197 (2014): 668-673.

[2] Akopyan, Filipp, et al. "TrueNorth: Design and tool flow of a 65 mW 1 million neuron programmable neurosynaptic chip." IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 34.10 (2015): 1537-1557.

[3] Davies, Mike, et al. "Loihi: A neuromorphic manycore processor with on-chip learning." IEEE Micro 38.1 (2018): 82-99.
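Returning to the core-local mapping described above: the following toy sketch illustrates how a group's shared threshold could be adapted using only core-local state. The specific update rule, class name, and γ usage are stand-ins of our own devising, not the paper's actual Eqs. 9-10 or any real neuromorphic SDK API.

```python
class NeuronCoreSketch:
    """Toy neuromorphic core holding one DAS-GNN degree group.

    Membrane potentials and the group's spike statistics live in
    core-local memory, so adapting the shared threshold requires no
    inter-core communication.
    """

    def __init__(self, num_neurons: int, v_th: float, gamma: float = 0.2):
        self.potentials = [0.0] * num_neurons  # per-neuron state (local SRAM)
        self.v_th = v_th                       # threshold shared by the group
        self.gamma = gamma                     # adaptation step size (assumed)

    def step(self, synaptic_inputs: list[float]) -> list[int]:
        spikes = []
        for i, x in enumerate(synaptic_inputs):
            self.potentials[i] += x            # integrate synaptic input
            fired = self.potentials[i] >= self.v_th
            if fired:
                self.potentials[i] = 0.0       # hard reset on firing
            spikes.append(int(fired))
        # Core-local adaptation: nudge the shared threshold toward the
        # group's observed firing activity (a stand-in for Eqs. 9-10).
        group_rate = sum(spikes) / len(spikes)
        self.v_th += self.gamma * (group_rate - 0.5)
        return spikes

core = NeuronCoreSketch(num_neurons=4, v_th=1.0)
print(core.step([0.4, 1.2, 0.9, 2.0]), core.v_th)  # [0, 1, 0, 1] 1.0
```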

W2: It is not clear if the energy consumption analysis takes the costs of this adaptive threshold operation (and potential communications between neurons) into account.

→ We updated the energy consumption analysis to include the adaptive threshold operation. As mentioned in W1, DAS-GNN does not incur any additional communication. The overhead of the adaptive threshold operation is $2 \cdot E_{AC} \times \text{num\_groups} \times \text{hidden\_dimension}$, accounting for the two additional multiplications related to $\gamma$ per group in Eq. 10. As shown in the table, this adds only a minimal amount of overhead to the energy consumption (a back-of-the-envelope calculation follows the table). This was also added to Section 5.6 of the revised version.

| Method (GCN) | MUTAG | PROTEINS | ENZYMES | NCI1 | IMDB-BINARY |
| --- | --- | --- | --- | --- | --- |
| ANN | 0.53 mJ | 6.92 mJ | 3.29 mJ | 21.41 mJ | 3.16 mJ |
| SpikingGNN | 0.02 mJ | 1.28 mJ | 0.36 mJ | 3.92 mJ | 0.10 mJ |
| SpikeNet | 0.06 mJ | 0.13 mJ | 0.25 mJ | 2.64 mJ | 0.35 mJ |
| PGNN | 0.10 mJ | 0.79 mJ | 0.84 mJ | 6.54 mJ | 0.28 mJ |
| GGNN | 0.08 mJ | 0.91 mJ | 0.48 mJ | 3.37 mJ | 0.21 mJ |
| +Additional cost | 0.02 μJ | 0.08 μJ | 0.05 μJ | 0.02 μJ | 0.30 μJ |
| DAS-GNN | 0.10 mJ | 0.94 mJ | 0.52 mJ | 5.28 mJ | 0.70 mJ |
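For concreteness, the overhead formula can be evaluated as below. The $E_{AC} \approx 0.9$ pJ figure is the widely cited 45nm estimate (Horowitz, ISSCC 2014); the group and dimension counts are illustrative assumptions, not the paper's actual settings.

```python
E_AC = 0.9e-12  # joules per accumulate op in 45nm CMOS (~0.9 pJ, Horowitz 2014)

def threshold_overhead_joules(num_groups: int, hidden_dim: int) -> float:
    """Extra energy for the adaptive threshold update: 2 * E_AC * groups * dim."""
    return 2 * E_AC * num_groups * hidden_dim

# Illustrative numbers only: with e.g. 64 groups and a 128-d hidden layer,
# the overhead lands in the same 0.01-0.1 uJ range as the table above.
print(f"{threshold_overhead_joules(64, 128) * 1e6:.4f} uJ")  # ~0.0147 uJ
```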

Q1. For the proposed method, is only the spike rate considered?

→ We agree that considering temporal information could further enhance the performance of DAS-GNN. For example, we could apply a similar idea by assigning synaptic delays based on the degree of each node, or encode community information as a temporal property. We added this discussion in Appendix K.

Comment

I thank the authors for the responses. I have an additional question regarding the GGNN results supplemented in the energy comparison table in the response. Why does it show significantly smaller costs than DAS-GNN? The percentage of its energy savings compared with DAS-GNN is larger than that of DAS-GNN compared with the ANN. From Table 1 in the revised version, its performance is also acceptable. This may undermine the claim that "DAS-GNN maintains energy consumption comparable to other baseline SNN models". Can the authors provide more discussion?

Comment

We sincerely apologize for our mistake in the rebuttal table regarding the GGNN values. The values were corrupted when we added a new baseline to our automated formula. We have replaced the table with the corrected values. With the corrected values, GGNN's energy consumption shows trends similar to those of the other SNN baselines.

Comment

Thank you for the clarification and I'm glad to keep the positive rating.

Comment

Thank you! We greatly appreciate the reviewer’s thorough consideration of our response and the valuable insights shared with us. If you have any additional questions or thoughts, please feel free to contact us.

Review
Rating: 8

The article introduces DAS-GNN, a novel approach to graph classification using spiking neural networks. The paper discusses the challenge of applying SNNs to graph classification, i.e., varying spike frequency. To overcome this challenge, the authors propose degree-aware group-adaptive neurons (DAG) that group neurons based on node degrees, and learnable inference base thresholds (LIBT) to reduce the sensitivity of DAG to inference thresholds. The proposed method shows significant improvements over baseline approaches and even outperforms traditional ANNs in several cases while maintaining energy efficiency.

Strengths

  1. In-depth analysis of spike frequency variation: a significant strength of the work is that the authors thoroughly study the spike frequency variation problem in graph networks and provide clear visualizations, which makes it easier to follow.

  2. Ablation studies: the authors conducted component-wise ablation studies, which help identify which component contributes most to the performance improvements. The authors also conducted additional experiments on sensitivity to hyperparameters such as the threshold and learning rate.

Weaknesses

  1. The paper lacks theoretical justification for why the proposed DAG method performs better than other methods.

  2. Simplifying the mathematical notation and providing detailed explanations of the degree-aware neuron adaptation would make the paper more accessible to a wider audience. The model architecture and spiking mechanism also need to be described in more detail.

Questions

  1. How would the proposed method scale to very large graphs with very skewed degree distributions?

  2. Did the authors perform sensitivity analyses for any other hyperparameters?

Comment

Q2: Sensitivity to other hyperparameters

→ We provide additional sensitivity studies for $\gamma$ from Eq. 10 and for the size of the GNN hidden dimension. We find that DAG is generally insensitive to $\gamma$, with minimal degradation within the given range. We use 0.2 as the default value, which is generally the best setting across different model architectures. For the GNN hidden dimension size, we find that our method is insensitive to varying sizes, consistently outperforming PLIF neurons across different hidden dimensions. Please also find the results in Table 3 and Appendices H and I of the revision.

Table: Sensitivity study on $\gamma$

| Dataset | Model | 0.05 | 0.10 | 0.15 | 0.20 | 0.25 | 0.30 | 0.35 | 0.40 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MUTAG | GCN | 95.79 | 96.84 | 96.32 | 97.37 | 93.13 | 96.32 | 96.29 | 95.76 |
| | GAT | 95.23 | 93.66 | 92.60 | 94.21 | 94.74 | 92.60 | 92.60 | 92.08 |
| | GIN | 96.84 | 95.26 | 95.79 | 96.32 | 95.26 | 95.26 | 95.26 | 95.79 |
| PROTEINS | GCN | 76.64 | 77.09 | 77.36 | 77.72 | 76.91 | 77.18 | 77.36 | 77.09 |
| | GAT | 70.62 | 70.53 | 71.70 | 72.14 | 71.70 | 74.40 | 72.60 | 71.34 |
| | GIN | 79.51 | 79.78 | 79.52 | 80.02 | 79.78 | 79.15 | 79.42 | 78.98 |
| ENZYMES | GCN | 63.17 | 61.50 | 61.33 | 60.17 | 61.67 | 60.83 | 61.17 | 62.33 |
| | GAT | 60.67 | 61.67 | 61.33 | 61.00 | 62.83 | 60.83 | 62.83 | 63.33 |
| | GIN | 55.50 | 54.50 | 54.00 | 57.83 | 56.50 | 57.00 | 51.83 | 50.83 |
| NCI1 | GCN | 76.91 | 76.62 | 76.59 | 77.25 | 76.52 | 76.50 | 75.01 | 74.60 |
| | GAT | 74.70 | 74.96 | 74.14 | 73.82 | 74.31 | 73.82 | 73.09 | 73.33 |
| | GIN | 78.61 | 77.47 | 77.25 | 77.45 | 75.64 | 75.13 | 75.04 | 73.41 |
| IMDB-B | GCN | 80.70 | 80.10 | 80.40 | 80.60 | 80.40 | 80.80 | 81.00 | 80.50 |
| | GAT | 75.40 | 75.80 | 76.60 | 77.80 | 78.20 | 77.10 | 77.10 | 78.10 |
| | GIN | 76.70 | 78.00 | 77.70 | 79.40 | 78.70 | 79.00 | 78.40 | 78.10 |

Table: Sensitivity to layer hidden dimension (DAS-GNN)

| Dataset | Model | 32 | 64 | 128 | 256 |
| --- | --- | --- | --- | --- | --- |
| MUTAG | GCN | 94.74 | 95.79 | 97.37 | 96.84 |
| | GAT | 94.74 | 94.21 | 93.68 | 93.68 |
| | GIN | 95.79 | 95.76 | 96.32 | 93.65 |
| PROTEINS | GCN | 76.64 | 77.99 | 77.72 | 77.99 |
| | GAT | 73.58 | 72.14 | 69.91 | 70.23 |
| | GIN | 80.14 | 79.33 | 80.02 | 79.70 |
| ENZYMES | GCN | 53.33 | 56.50 | 60.17 | 64.00 |
| | GAT | 58.33 | 61.00 | 54.83 | 55.83 |
| | GIN | 53.67 | 57.33 | 57.83 | 57.33 |
| NCI1 | GCN | 72.19 | 74.09 | 77.25 | 78.13 |
| | GAT | 75.88 | 73.82 | 75.16 | 75.72 |
| | GIN | 76.81 | 76.11 | 77.45 | 75.67 |
| IMDB-B | GCN | 80.30 | 80.30 | 80.60 | 80.90 |
| | GAT | 77.00 | 77.80 | 76.70 | 77.30 |
| | GIN | 80.00 | 78.90 | 79.40 | 76.20 |

Table: Sensitivity to layer hidden dimension (PLIF)

| Dataset | Model | 32 | 64 | 128 | 256 |
| --- | --- | --- | --- | --- | --- |
| MUTAG | GCN | 87.81 | 86.75 | 87.28 | 88.33 |
| | GAT | 83.01 | 82.49 | 83.01 | 83.01 |
| | GIN | 93.13 | 93.13 | 94.18 | 94.18 |
| PROTEINS | GCN | 75.11 | 76.82 | 77.72 | 76.46 |
| | GAT | 68.73 | 64.06 | 67.29 | 66.93 |
| | GIN | 79.07 | 78.44 | 79.16 | 77.81 |
| ENZYMES | GCN | 53.00 | 57.17 | 60.50 | 59.83 |
| | GAT | 37.17 | 38.50 | 37.83 | 36.67 |
| | GIN | 51.83 | 51.50 | 50.17 | 48.67 |
| NCI1 | GCN | 70.39 | 72.60 | 76.52 | 77.69 |
| | GAT | 60.40 | 68.32 | 60.42 | 58.52 |
| | GIN | 73.70 | 73.33 | 75.38 | 72.02 |
| IMDB-B | GCN | 66.80 | 68.80 | 71.60 | 71.30 |
| | GAT | 50.00 | 50.00 | 50.00 | 50.00 |
| | GIN | 74.90 | 76.30 | 72.80 | 69.80 |
Comment

We thank the reviewer for acknowledging our contributions and giving positive feedback. We try our best to faithfully address the comments below.

W1: Lack of the theoretical justification of the DAG method.

→ Although a complete analysis is left for future work, we provide a rough theoretical justification for grouping neurons by degree. Reformulating the message-passing part of Eq. 7 from the perspective of a single neuron, the membrane potential of neuron $i$ is computed as $U_i = \sum_{j \in \mathrm{active}(i)} W_{i,j}$, where $\mathrm{active}(i)$ denotes the firing neurons among the neighbors of $i$. Under the common assumption $W_{i,j} \sim N(0, \sigma)$, this leads to $U_i \sim N(0, \sigma \cdot |\mathrm{active}(i)|)$. Since $U_i$ determines the firing rate and $|\mathrm{active}(i)|$ is proportional to the degree of $i$, grouping neurons by their vertex degree has the effect of grouping neurons with similar membrane potential variance. This aligns with our findings in Section 3, and we added this analysis to Section 4.2 of the revised paper.
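The variance argument is easy to check numerically. Below is a minimal simulation of our own, assuming unit-variance Gaussian weights and all neighbors firing, that confirms Var($U_i$) grows linearly with the number of active inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
trials = 100_000

# U_i is the sum over d active neighbors of i.i.d. N(0, 1) weights,
# so Var(U_i) should scale linearly with the degree d.
for d in [2, 8, 32, 128]:
    U = rng.normal(0.0, 1.0, size=(trials, d)).sum(axis=1)
    print(f"degree {d:4d}: empirical Var(U_i) = {U.var():7.2f} (theory: {d})")
```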

W2: Simplifying mathematical notations and providing detailed explanations for spike mechanism and model architecture.

→ We simplified some mathematical notation in Eqs. 8-10 and clarified the explanations. In addition, we added model architecture details to the experimental settings. A major change is that we changed the symbol for the membrane potential from $V(t)$ to $U(t)$, as the former could be confused with the threshold voltage $V_{th}$. We also depicted our proposed grouping method in Figure 1(b) with green boxes for a more intuitive understanding.

Q1: Scalability to very large graphs with skewed degree distributions

→ We show additional experimental results on two large-scale graph classification datasets: REDDIT-BINARY and COLLAB. DAS-GNN still outperforms the other baselines on large graphs with skewed degree distributions. Please also find these results in Appendix D of the revision. As a further demonstration on large graphs, we extend our method to node classification datasets (ogbn-arxiv, ogbn-mag), where the results show that DAS-GNN maintains its competitive performance.

| Method | REDDIT-BINARY | COLLAB | OGBN-ARXIV | OGBN-MAG |
| --- | --- | --- | --- | --- |
| **GCN** | | | | |
| ANN | 90.45 | 80.60 | 71.55 | 35.10 |
| SpikingGNN | 81.10 | 67.24 | 53.52 | 27.78 |
| SpikeNet | 82.60 | 68.78 | 60.18 | 28.82 |
| PGNN | 82.35 | 68.46 | 58.45 | 27.52 |
| GGNN | 83.10 | 67.92 | 58.23 | 28.75 |
| Ours | 83.50 | 83.52 | 60.84 | 29.73 |
| Δ (Ours − best SNN) | +0.40 | +14.74 | +0.66 | +0.91 |
| **GAT** | | | | |
| ANN | 76.30 | 55.60 | 71.06 | 29.60 |
| SpikingGNN | 50.00 | 52.00 | 52.21 | 14.05 |
| SpikeNet | 50.05 | 52.00 | 53.67 | 17.30 |
| PGNN | 52.05 | 52.04 | 53.77 | 16.97 |
| GGNN | 53.05 | 52.00 | 53.26 | 16.54 |
| Ours | 76.25 | 75.58 | 53.94 | 17.37 |
| Δ (Ours − best SNN) | +23.20 | +23.54 | +0.17 | +0.07 |
| **GIN** | | | | |
| ANN | 80.75 | 75.11 | 61.01 | 27.55 |
| SpikingGNN | 79.75 | 52.04 | 56.72 | 23.31 |
| SpikeNet | 79.70 | 53.66 | 55.79 | 23.37 |
| PGNN | 83.30 | 53.02 | 50.88 | 21.64 |
| GGNN | 83.70 | 58.94 | 54.88 | 22.39 |
| Ours | 84.20 | 70.38 | 59.91 | 23.98 |
| Δ (Ours − best SNN) | +0.50 | +11.44 | +3.19 | +0.61 |
Comment

We sincerely thank all the reviewers for giving us insightful, valuable comments which have helped us improve our work.

We have revised our manuscript and the changes can be summarized as follows:

  • An additional baseline (GLIF [1]) in the main experiment table (Table 1).
  • A modified energy consumption analysis accounting for the adaptive threshold operations.
  • Additional experiments on large-scale graph classification datasets (REDDIT-BINARY, COLLAB) in Appendix D.
  • Additional sensitivity studies on the adaptive step size and hidden dimension.
  • An additional description of the spiking mechanism.
  • Additional architectural details for our baselines.

[1] Xingting Yao et al. GLIF: A Unified Gated Leaky Integrate-and-Fire Neuron for Spiking Neural Networks. In NeurIPS, 2022.

AC Meta-Review

Summary: DAS-GNN is a novel approach for graph classification using spiking neural networks (SNNs) that addresses the challenge of varying spike frequency by introducing Degree-aware Group Adaptive Neurons (DAG) and Learnable Inference Base Thresholds (LIBT). The method demonstrates significant improvements over baseline approaches and even outperforms traditional ANNs in some cases, while maintaining energy efficiency.

Strengths:

DAS-GNN provides an in-depth analysis of spike frequency variation in graph networks and offers a thorough understanding of the problem through clear visualizations and ablation studies.

The method shows superior performance on several datasets, outperforming ANNs and highlighting its potential for energy-saving and accurate graph-based systems.

Weaknesses:

The paper lacks a theoretical justification for the proposed DAG method and could benefit from simplifying mathematical notations and providing more detailed explanations of the degree-aware neuron adaptation.

There is a lack of discussion on the influence of the proposed method for deployment on neuromorphic hardware, and the energy consumption analysis may not fully account for the costs of the adaptive threshold operation and potential communications between neurons.

Given the mixed results and the concerns raised about the paper's clarity, theoretical justification, and practical implications for neuromorphic hardware, I must reject this work as it does not fully meet the acceptance criteria.

Additional Comments from the Reviewer Discussion

Concerns are not well-addressed.

Final Decision

Reject