PaperHub
Score: 8.3/10 (Oral, 4 reviewers)
Ratings: 4, 4, 4, 5 (min 4, max 5, std dev 0.4)
ICML 2025

On Differential Privacy for Adaptively Solving Search Problems via Sketching

OpenReview | PDF
Submitted: 2025-01-15 · Updated: 2025-07-24
TL;DR

We develop data structures for search problems such as regression and nearest neighbor search that are robust against an adaptive adversary.

Abstract

Recently differential privacy has been used for a number of streaming, data structure, and dynamic graph problems as a means of hiding the internal randomness of the data structure, so that multiple possibly adaptive queries can be made without sacrificing the correctness of the responses. Although these works use differential privacy to show that for some problems it is possible to tolerate $T$ queries using $\widetilde{O}(\sqrt{T})$ copies of a data structure, such results only apply to {\it numerical estimation problems}, and only return the {\it cost} of an optimization problem rather than the solution itself. In this paper we investigate the use of differential privacy for adaptive queries to {\it search} problems, which are significantly more challenging since the responses to queries can reveal much more about the internal randomness than a single numerical query. We focus on two classical search problems: nearest neighbor queries and regression with arbitrary turnstile updates. We identify key parameters to these problems, such as the number of $c$-approximate near neighbors and the matrix condition number, and use different differential privacy techniques to design algorithms returning the solution point or solution vector with memory and time depending on these parameters. We give algorithms for each of these problems that achieve similar tradeoffs.
Keywords
data structure, adaptive robustness, differential privacy

Reviews and Discussion

Review
Rating: 4

First, the authors consider the differentially private ANN problem. They reduce this problem to a differentially private selection problem, which they then solve via differentially private selection methods. Next, the authors consider the problem of differentially private linear regression. They solve this problem by applying matrix sketching techniques. The key idea is to use a number of standard random matrix sketches, and then to aggregate the solutions in a differentially private manner via an extension of the differentially private median approach of (Beimel et al., 2022). Finally, to bound the utility, they compute an $\ell_\infty$ bound on the error introduced by sketching. As the authors note, this is an interesting approach to bounding the utility, as it is an uncommon use of $\ell_\infty$ bounds in matrix sketching.
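A minimal sketch-and-solve illustration of this pipeline (several independent sketches, solve each sketched instance, aggregate coordinate-wise), assuming dense Gaussian sketches and a plain median as a stand-in for the paper's differentially private median; all parameter choices and names below are illustrative, not from the paper:

```python
import numpy as np

def sketch_and_solve_median(U, b, num_copies=7, sketch_rows=None, seed=0):
    """Approximately solve min_x ||Ux - b||_2 by solving several independently
    sketched instances and aggregating the candidate solutions with a
    coordinate-wise median.

    Illustrative and non-private: the paper replaces the plain median below
    with a differentially private median so that the sketching randomness
    stays hidden from an adaptive adversary.
    """
    n, d = U.shape
    m = sketch_rows or 4 * d  # sketch dimension, chosen loosely for the demo
    rng = np.random.default_rng(seed)
    candidates = []
    for _ in range(num_copies):
        S = rng.standard_normal((m, n)) / np.sqrt(m)  # dense Gaussian sketch
        x_hat, *_ = np.linalg.lstsq(S @ U, S @ b, rcond=None)
        candidates.append(x_hat)
    return np.median(np.stack(candidates), axis=0)

# Toy usage
rng = np.random.default_rng(1)
U = rng.standard_normal((500, 10))
b = U @ np.ones(10) + 0.01 * rng.standard_normal(500)
print(sketch_and_solve_median(U, b))
```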

Update after rebuttal

Thank you for the helpful clarifications. I have raised my score accordingly. Please include these clarifications in the final version.

Questions For Authors

It would be good if the authors could clarify how their work improves on prior works.

Claims And Evidence

Yes

Methods And Evaluation Criteria

Yes

Theoretical Claims

I read the technical overview, but did not carefully check the proofs.

Experimental Design And Analyses

There are no experiments in the paper.

Supplementary Material

Please see question about "proofs" above.

Relation To Broader Scientific Literature

The authors do a good job of discussing which previous techniques they build on.

However, as far as I can tell, the authors do not provide much comparison to prior related works. It would be good if the authors could clarify how their work improves on prior works, or point to where the improvement over prior works is discussed in the paper.

Essential References Not Discussed

Please see "Relation To Broader Scientific Literature" question above.

Other Strengths And Weaknesses

I think the strength of the paper is in the techniques, which are well explained in the technical overview. However, the comparison to prior works can be improved. Specifically, it would be good if the authors could clarify how their work improves on prior works.

Other Comments Or Suggestions

N/A

Author Response

We thank the reviewer for their question about prior work, which we address below:

  1. Prior works using differential privacy for adaptivity focus on numerical estimates [Hassidim et al., 2022; Beimel et al., 2022; Song et al., 2023; Cherapanamjeri et al., 2023]. Our work instead tackles the search problem—returning a point using an adaptive DP data structure. While differentially private heavy hitter algorithms (e.g., [Chadha et al., 2023]) relate to private search, they lack adaptivity. To our knowledge, this is the first work combining DP with adaptive search.

  2. Because of this, prior adaptive search methods either (a) build $T$ data structures (one per query), or (b) use a net argument over the space, requiring $\widetilde{O}(d)$ structures.

Below we compare our method to these baselines for ANN and regression. We omit the $\widetilde{O}(\cdot)$ notation for clarity.

Approximate Nearest Neighbor (ANN)

| Method | Space | Amortized Prep Time | Query Time | Update Time |
|---|---|---|---|---|
| $T$ copies | $T n^{1+\rho} + nd$ | $n^{1+\rho} d$ | $n^\rho d$ | $T n^\rho d$ |
| $d$ copies | $n^{1+\rho} d$ | $\frac{d}{T} n^{1+\rho} d$ | $n^\rho d$ | $d n^\rho d$ |
| Ours | $\sqrt{T} s n^{1+\rho} + nd$ | $\frac{s}{\sqrt{T}} n^{1+\rho} d$ | $s n^\rho d$ | $\sqrt{T} s n^\rho d$ |

For $T$ adaptive queries/updates, the exact method scans the dataset. If $T \leq d$, we outperform $T$-copies when $s \leq \min\{\sqrt{T}, n\}$; if $d \leq T$, we outperform $d$-copies when $s \leq \min\{\frac{d}{\sqrt{T}}, n\}$.


Regression

Let $U \in \mathbb{R}^{n \times d}$ and $b \in \mathbb{R}^n$, with the goal of computing $x$ minimizing $\|Ux - b\|_2$ up to a $(1+\alpha)$ factor. Each step updates $U$, $b$ via $v_t$; $\kappa$ bounds $U$'s condition number.

| Method | Space | Amortized Prep Time | Query Time | Update Time |
|---|---|---|---|---|
| $T$ copies | $\frac{T d^2}{\alpha^2}$ | $\text{nnz}(U,b)+d^3+\frac{d^2}{\alpha^2}$ | $\frac{d^{\omega+1}}{\alpha^2}$ | $T(\text{nnz}(v_t)+d^3+\frac{d^2}{\alpha^2})$ |
| $nd$ copies | $\frac{n d^3}{\alpha^2}$ | $\frac{nd}{T}(\text{nnz}(U, b)+d^3+\frac{d^2}{\alpha^2})$ | $\frac{d^{\omega+1}}{\alpha^2}$ | $nd(\text{nnz}(v_t)+d^3+\frac{d^2}{\alpha^2})$ |
| Ours | $\frac{\sqrt{T} d^{2.5} \kappa^2}{\alpha^2}$ | $\sqrt{\frac{d}{T}}(\text{nnz}(U, b)+d^3+\frac{d^2 \kappa^2}{\alpha^2})$ | $\frac{d^{\omega+1} \kappa^2}{\alpha^2}$ | $\sqrt{T d}(\text{nnz}(v_t)+d^3+\frac{d^2 \kappa^2}{\alpha^2})$ |

We outperform $T$-copies when $\kappa^2 \leq \sqrt{T/d}$, and $nd$-copies when $\kappa^2 \leq n \sqrt{d/T}$; these conditions are often met in practice.


Regression with Sparse Label Updates

If only $b$ is updated and each $v_t$ changes at most $s$ entries, the $\kappa^2$ dependence can be reduced to polylogarithmic.

| Method | Space | Amortized Prep Time | Query Time | Update Time |
|---|---|---|---|---|
| $T$ copies | $\frac{T d^2}{\alpha^2}$ | $\text{nnz}(U)+d^3+\frac{d^2}{\alpha^2}$ | $d^2$ | $T(s+d^3+\frac{d^2}{\alpha^2})$ |
| $n$ copies | $\frac{n d^2}{\alpha^2}$ | $\frac{n}{T}(\text{nnz}(U)+d^3+\frac{d^2}{\alpha^2})$ | $d^2$ | $n(s+d^3+\frac{d^2}{\alpha^2})$ |
| Ours | $\frac{\sqrt{T d} \cdot d^2}{\alpha^2}$ | $\sqrt{\frac{d}{T}}(\text{nnz}(U)+d^3+\frac{d^2}{\alpha^2})$ | $d^2$ | $\sqrt{T d}(s+d^3+\frac{d^2}{\alpha^2})$ |

Only $n$ copies are needed. Our method is faster when $T \leq n$ and $d \leq T$, or when $n \leq T$ and $\sqrt{T d} \leq n$. Since $T$ is user-chosen (rebuilding occurs every $T$ queries), trade-offs can be tuned (see Claims D.16–D.17).


Summary:
Our method leverages DP with advanced composition, achieving $\widetilde{O}(\sqrt{T})$ scaling and improved space, preprocessing, query, and update complexity. These advantages are especially notable when $T$ is large, or when the sparsity $s$ and condition number $\kappa$ are favorable. We will ensure these comparisons are clearly included in the revision.

Review
Rating: 4

The paper addresses the problem of hiding the internal randomness of data structures using differential privacy. Specifically, the authors focus on the nearest neighbor search and regression problems and aim to protect the internal randomness against an adaptive adversary using differentially private techniques.

Update after rebuttal

The author response provided during the rebuttal clearly addressed my questions regarding the interaction between the privacy mechanism and the underlying data structure, and the privacy loss accumulation. I have raised my rating.

Questions For Authors

See above

Claims And Evidence

A sequence of theorems presented in Section 1.1 clearly demonstrates improvements over existing algorithms. They provide probabilistic utility guarantees as well as time and memory complexity, but it is not immediately clear how differential privacy interacts with these bounds. While I understand that characterizing the interplay between data structure and differential privacy is not the main focus, it could be of interest to a broader audience.

Methods And Evaluation Criteria

  • Section 2 provides an explanation of how the search problem can be formulated as differentially private selection, which makes complete sense. The authors use the standard Laplace mechanism to achieve pure $(\epsilon, 0)$-DP. I wonder if the Gaussian mechanism can be used instead to achieve $(\epsilon, \delta)$-DP. In that case, how does it affect the guarantees stated in the theorems in Section 1?

  • Another aspect not discussed in the paper is the accumulation of privacy loss. In differential privacy, the total privacy loss accumulates as the mechanism releases private answers to successive queries, which means the mechanism must shut down the database and stop answering after a certain number of queries. However, it is unclear from the paper whether there is a limit to the number of queries the underlying data structure can handle.

Theoretical Claims

I didn’t thoroughly check the correctness of the proofs.

Experimental Design And Analyses

This is a theory paper that doesn't contain experimental results.

Supplementary Material

No supplementary materials are available.

Relation To Broader Scientific Literature

This paper develops efficient data structures that protect their internal randomness from adaptive adversaries, offering a secure and private foundation for machine learning tasks.

Essential References Not Discussed

I don't find any missing related work.

Other Strengths And Weaknesses

  • Strength
    • The paper presents a sequence of theorems that describe improvements over existing results.
    • The authors provide interpretations of theorems.
  • Weakness
    • This may not be a weakness, but rather a comment. While the paper is clear about the algorithmic properties of the underlying data structures, e.g., how their time and storage complexities scale with input size, it does not reveal much about the properties of differential privacy, such as composition and privacy loss accumulation.
    • I believe that including some figures or diagrams would have been helpful for better understanding the proposed technique.

Other Comments Or Suggestions

None

Author Response

We thank you for your very helpful comments. We would also like to address your questions and comments regarding differential privacy.

Let us start with a high-level overview of the framework, first introduced in [Hassidim et al., 2022; Beimel et al., 2022]. The rough idea is to treat the random seeds used by data structures as the database for which we want to preserve privacy. In the simplest setting, to achieve a target privacy level $\epsilon$, one might need to aggregate the responses of $T$ data structures in a differentially private manner. However, by the advanced composition theorem, the same level of privacy in terms of $\epsilon$ can be achieved using only $\widetilde{O}(\sqrt{T})$ independent outputs, at the cost of moving from pure DP to approximate DP with $\delta > 0$. As long as $\delta$ is small, the approximation is acceptable.
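As a rough quantitative illustration of where the $\sqrt{T}$ savings comes from (the parameter values and names below are arbitrary and not from the paper), the standard advanced composition bound $\epsilon_{\text{total}} = \epsilon_0\sqrt{2k\ln(1/\delta')} + k\epsilon_0(e^{\epsilon_0}-1)$ can be evaluated directly:

```python
import math

def advanced_composition_epsilon(eps0: float, k: int, delta_prime: float) -> float:
    """Total epsilon after composing k mechanisms, each eps0-DP, under the
    advanced composition theorem; the composed guarantee is
    (eps_total, k*delta + delta_prime)-DP, and here each mechanism is pure DP (delta = 0)."""
    return eps0 * math.sqrt(2 * k * math.log(1 / delta_prime)) + k * eps0 * (math.exp(eps0) - 1)

# If each of T answers is roughly (1/sqrt(T))-DP (up to log factors), the total
# epsilon stays O(1) -- this is the source of the sqrt(T) savings.
T = 10_000
delta_prime = 1e-5
eps0 = 1.0 / math.sqrt(T * math.log(1 / delta_prime))
print(advanced_composition_epsilon(eps0, k=T, delta_prime=delta_prime))  # ~1.5, a constant
```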

Responses to Specific Questions

Q: Can the Gaussian mechanism be used for $(\epsilon, \delta)$-DP? In that case, how does it affect the final guarantees?

A: Yes, the Gaussian mechanism is valid for achieving $(\epsilon, \delta)$-DP, provided that $\delta$ is not too large. Suppose we want the data structure to succeed with probability at least $1 - \beta$. Let $\delta_{\text{final}}$ denote the total privacy loss, composed of:

  • $\delta_{\text{Gaussian}}$: from the Gaussian mechanism
  • $\delta_{\text{advanced}}$: from advanced composition

Then:

$\delta_{\text{final}} = \delta_{\text{advanced}} + T \cdot \delta_{\text{Gaussian}}.$

In our original setting, we set $\delta_{\text{advanced}} = \beta / 100$ using an appropriate $\epsilon$. To use the Gaussian mechanism instead, we may set $\delta_{\text{Gaussian}} = \frac{\beta}{200T}$ and reduce $\delta_{\text{advanced}}$ to $\beta/200$. This still yields $\delta_{\text{final}} = \beta / 100$, matching the Laplace mechanism's privacy loss.
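Written out, this split indeed recovers the original total:

$\delta_{\text{final}} = \delta_{\text{advanced}} + T \cdot \delta_{\text{Gaussian}} = \frac{\beta}{200} + T \cdot \frac{\beta}{200T} = \frac{\beta}{200} + \frac{\beta}{200} = \frac{\beta}{100}.$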

We chose the Laplace mechanism for conceptual simplicity and because the task is essentially a counting problem, where Laplace noise is more natural.
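For intuition on why Laplace noise is natural for this counting-style selection task, here is a minimal, generic report-noisy-max sketch; this is the classical textbook mechanism, not our sparse argmax mechanism, and the names and noise scale are illustrative:

```python
import numpy as np

def report_noisy_argmax(counts, epsilon, rng=None):
    """Select the index of the largest count after adding independent
    Laplace noise to each count.

    For counts of sensitivity 1 (changing one underlying item shifts each
    count by at most 1), adding Laplace noise of scale 2/epsilon and releasing
    only the argmax is epsilon-differentially private -- a conservative
    instantiation of the classical report-noisy-max mechanism.
    """
    rng = rng or np.random.default_rng()
    counts = np.asarray(counts, dtype=float)
    noisy = counts + rng.laplace(scale=2.0 / epsilon, size=counts.shape)
    return int(np.argmax(noisy))

# Toy usage: counts could be, e.g., how many independent copies "vote" for each candidate.
votes = [12, 30, 29, 5]
print(report_noisy_argmax(votes, epsilon=1.0))
```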

Q: In DP, privacy loss accumulates with repeated queries, so mechanisms are often shut down after a limit. Does your adaptive data structure exhibit this behavior?

A: Yes, the adaptive data structure’s ability to handle $T$ queries hinges on privacy composition. Using the advanced composition theorem, we guarantee $(\epsilon, \delta)$-DP across $T$ interactions by using $\widetilde{O}(\sqrt{T})$ resources (e.g., data structure copies or random seeds), rather than $O(T)$.

However, this requires $T$ to be fixed in advance. For indefinite querying, a standard approach is to rebuild the data structure periodically after every $T$ queries. This permits optimization of $T$ to balance space, amortized preprocessing time, and query/update efficiency.
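Schematically, the periodic rebuilding can be pictured as a thin wrapper; the `build` and `answer` hooks below are placeholders rather than our actual interface:

```python
class PeriodicallyRebuilt:
    """Serve an unbounded stream of adaptive queries with a structure that
    only supports T of them, by rebuilding from fresh randomness every T queries."""

    def __init__(self, build, T):
        self.build = build   # callable returning a freshly randomized data structure
        self.T = T           # number of queries supported per epoch (fixed in advance)
        self._rebuild()

    def _rebuild(self):
        self.ds = self.build()
        self.answered = 0

    def query(self, q):
        if self.answered >= self.T:  # per-epoch budget exhausted
            self._rebuild()          # cost is amortized over the T queries of the epoch
        self.answered += 1
        return self.ds.answer(q)
```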

Additional Comments

It would be better to discuss differential privacy in more detail, such as composition and privacy loss accumulation.

A: Thank you for pointing this out. We will ensure that the final version includes a more detailed discussion on differential privacy, particularly covering composition theorems and cumulative privacy loss (please refer to our high-level overview and answers regarding loss accumulation above).

Including figures and diagrams would be helpful for explaining the techniques.

A: We appreciate the suggestion. We agree that visualizations—especially for illustrating how the sparse argmax mechanism is applied in our ANN algorithm—would enhance clarity and intuitively explain why our algorithm is both correct and efficient. We will include appropriate figures and diagrams to better illustrate the framework in the revision.

Review
Rating: 4

The paper explores using differential privacy for answering adaptively chosen queries (potentially by an adversary) for search problems. The authors consider approximate nearest neighbor and regression as the main problems in this work. The main contributions show that, under reasonable assumptions, it is possible to maintain $O(\sqrt{T})$ copies of a data structure to answer $T$ queries for both problems, matching the bounds from previous work that uses DP for adversarial streaming. For the ANN problem, they reduce the problem to private selection and provide a new mechanism that obtains better runtime for sparse vectors. For the regression problem with updates to both the design matrix and response vector, the approach involves maintaining and updating multiple sketch matrices, solving to generate multiple solution vectors, and using a DP median to generate the final solution. The authors also show applications to online weighted matching and computing terminal embeddings.

Questions For Authors

No additional questions.

Claims And Evidence

All the claims made in paper are accompanied with proofs. Reasoning and justification for the assumptions made are also provided.

Methods And Evaluation Criteria

The paper does not have experiments.

Theoretical Claims

The proofs are correct to the best of my knowledge. The contributions give a sublinear dependence on $T$ for the time/space complexity of answering queries. The baseline comparisons are the naive or existing bounds of $O(T)$ and $O(\text{dim})$. The accuracy guarantees follow from the success probability of the data structure or the generalization guarantee for DP algorithms.

Experimental Design And Analyses

This is a theoretical work, no experiments are provided.

Supplementary Material

The proofs in appendix are correct to the best of my knowledge.

Relation To Broader Scientific Literature

This work contributes to a line of prior work that explores the connection between differential privacy and robustness to adaptivity. The paper makes a novel contribution in this field by giving approaches that work for search problems.

Essential References Not Discussed

Not to the extent of my knowledge.

Other Strengths And Weaknesses

The sparse argmax mechanism is a nice contribution and might be of interest to the broader DP community. For the regression problem, the dependence on $\kappa$ is quadratic, which might be suboptimal; in addition, the suggested alternative approach obtains a dependence that is linear in $T$.

Other Comments Or Suggestions

No additional comments.

Author Response

We thank you for your very helpful comments, and we would like to address your comments on the quadratic dependence on $\kappa$ and the linear dependence on $T$ for the alternative.

For the $\kappa^2$ dependence, the main reason is that by using the $\ell_\infty$ guarantee together with the standard relative error guarantee of sketched regression, we can show that the regression cost of the private median estimator $g$ satisfies $\|Ug-b\|_2 \leq (1+\kappa \alpha')\|Ux^*-b\|_2$. Hence, to account for the blowup in condition number, we need to scale down the approximation factor by $\kappa$. Since the sketching dimension depends inverse quadratically on the approximation factor, we have a $\kappa^2$ dependence in the runtime. One could argue that known sketching-based methods based on the following framework need to incur such a dependence: our algorithm estimates each coordinate to high precision, and this inevitably incurs a dependence on the condition number for the error, and the sketching dimension in turn depends quadratically on the condition number. When there are no updates to $U$, we could use an iterative method instead to remove the polynomial condition number dependence. It is an interesting open question to develop an iterative algorithm that can remove the dependence on the condition number while also supporting updates to $U$. Therefore, while the $\kappa^2$ dependence is a consequence of our current sketching approach for worst-case guarantees, we believe the overall framework offers significant advantages, and reducing this dependence under updates is an important avenue for future work.
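In symbols, a compact restatement of the chain above is:

$\|Ug-b\|_2 \leq (1+\kappa \alpha')\|Ux^*-b\|_2, \qquad \alpha' = \frac{\alpha}{\kappa} \;\Longrightarrow\; \text{sketch dimension} = \Theta\!\left(\frac{1}{\alpha'^2}\right) = \Theta\!\left(\frac{\kappa^2}{\alpha^2}\right),$

which is where the $\kappa^2$ factor in the runtime comes from.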

The alternative method based on the bounded path technique can be treated as a refinement of the method of using a fresh sketch for each update and query. Specifically, the algorithm depends on $\log |{\cal P}|$, and $|{\cal P}|$ can be naively upper bounded by $(n\kappa)^{dT}$, making it less appealing. However, we note that $|{\cal P}|$ is a more fine-grained parameter that could capture the structure of the input, e.g., if only entry changes between queries are allowed, then the upper bound reduces to $(nd)^T$. If the updates are infrequent, e.g., only $\sqrt{T}$ updates are performed, then the bound becomes $(nd)^{\sqrt{T}}$. Hence, while the alternative is no more efficient in the most general sense, it provides more fine-grained control and a speedup in situations where the instance has extra structure.
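To make the three regimes concrete, the corresponding bounds on $\log |{\cal P}|$ (our restatement of the cases above) are:

$\log |{\cal P}| \leq dT \log(n\kappa) \ \text{(general)}, \qquad \log |{\cal P}| \leq T \log(nd) \ \text{(single-entry changes)}, \qquad \log |{\cal P}| \leq \sqrt{T} \log(nd) \ \text{(only } \sqrt{T} \text{ updates)}.$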

Review
Rating: 5

The paper studies the problem of adaptive algorithms: algorithms where an adversary interacts with a randomized algorithm with the goal of increasing its error probability.

It is known that differential privacy can be used to design such algorithms, but the existing applications were for algorithms computing numeric results; this paper generalizes the approach to vector-valued algorithms. As a result, it builds an algorithm for the nearest neighbor problem with better running and preprocessing time than previously known.

Questions For Authors

N/A

Claims And Evidence

The evidence sufficiently supports the claims.

Methods And Evaluation Criteria

Methods and evaluations are appropriate for the problem at hand.

Theoretical Claims

The proofs presented in the paper are correct.

Experimental Design And Analyses

There are no experiments in the paper.

Supplementary Material

I haven't reviewed supplementary material.

Relation To Broader Scientific Literature

For me, personally, the idea of using differential privacy for designing adaptive algorithms is one of the most exciting ideas in the last couple of years: it connects the fields that from the first glance have almost nothing in common. While this is not the first paper in this direction, it generalizes the idea to a much wider class of problems.

Essential References Not Discussed

I don’t think any specific reference is missing.

Other Strengths And Weaknesses

N/A

Other Comments Or Suggestions

Line 40: It would be better to rephrase the sentence about queries; it is a bit confusing in its current form.
Line 94: The assumption doesn’t say anything about neighbours; it only talks about $cr$ balls. Perhaps the explanation needs to be rewritten.
Line 258: It should be $\operatorname{argmax} \sum b_{(i), j}$; the $j$ index is missing in the paper.

Author Response

We appreciate your encouraging and helpful comments. Regarding your comments:

  • Line 40:
    We will clarify this sentence. Our intention is to express both ANN and regression in a unified way, and we will revise it to clearly explain the query and update models in each problem.

  • Line 94:
    The assumption states that the predicate function $f_{v_t}$ is sparse, and we define the predicate as the intersection of the dataset $U$ and the $cr$-ball around $v_t$—that is, all points in $U$ within distance $cr$ of the query. We acknowledge that the current formulation does not make this connection explicit and will revise it for clarity.

  • Line 258:
    We will fix the missing index $j$.

Final Decision

The paper extends the literature on using differential privacy as a tool to design randomized data structures that maintain their correctness properties against adaptive (rather than oblivious) adversaries. The main contribution is the development of techniques that allow solving search problems under adaptive queries, in particular approximate near neighbor search. The paper addresses important problems, and introduces interesting techniques. The reviewer consensus is that the paper would be a strong contribution to the ICML program.