
ICLR 2024

Optimal Rates for Convex Optimization with Multiway Preferences

Submitted: 2023-09-23 · Updated: 2024-03-26
TL;DR

We demonstrate that multiway and batched comparison feedback can be used to speed-up convergence for convex optimization and show optimality of our speed-up factor.

Abstract

We address the problem of convex optimization with preference feedback, where the goal is to minimize a convex function given only a weaker form of comparison queries. Each query consists of two points, and the dueling feedback returns a (noisy) single-bit comparison of the function values at the two queried points. We consider the sign-function-based comparison feedback model and analyze the convergence rates with batched and multiway (argmin of a set of queried points) comparisons. Our main goal is to understand the improved convergence rates owing to parallelization in sign-feedback-based optimization problems. Our work is the first to study convex optimization with multiway preferences and to analyze the optimal convergence rates. Our first contribution is the design of efficient algorithms with a convergence rate of $\smash{\widetilde O}(\frac{d}{\min\{m,d\} \epsilon})$ for $m$-batched preference feedback, where the learner can query $m$ pairs in parallel. We next study an $m$-multiway comparison (`battling') feedback model, where the learner observes the argmin of an $m$-subset of queried points, and show a convergence rate of $\smash{\widetilde O}(\frac{d}{\min\{\log m,d\}\epsilon})$. We show further improved convergence rates under an additional strong convexity assumption. Finally, we study convergence lower bounds for batched-preference and multiway-feedback optimization, showing the optimality of our convergence rates in terms of the parameter $m$.
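To make the batched sign-feedback setting concrete, the following is a minimal sketch (not the paper's algorithm; the objective, step sizes, and helper names are illustrative assumptions). It implements a noiseless dueling oracle that returns only the sign of $f(y)-f(x)$, estimates a descent direction from $m$ parallel pair queries $(x, x+\delta u_i)$ with random unit directions $u_i$, and runs normalized descent on a simple quadratic:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Illustrative convex objective (not from the paper): a quadratic.
    return 0.5 * np.dot(x, x)

def dueling_oracle(x, y):
    # Single-bit comparison feedback: +1 if f(y) > f(x), else -1.
    # (The paper allows this bit to be noisy; noise is omitted here.)
    return 1.0 if f(y) > f(x) else -1.0

def batched_sign_direction(x, m, delta=1e-3):
    # Query m pairs (x, x + delta*u_i) in parallel. Each returned bit is a
    # one-bit estimate of the directional derivative along u_i; averaging
    # the signed directions gives a rough descent direction, in the spirit
    # of sign-feedback gradient estimation.
    d = x.shape[0]
    U = rng.standard_normal((m, d))
    U /= np.linalg.norm(U, axis=1, keepdims=True)
    signs = np.array([dueling_oracle(x, x + delta * u) for u in U])
    return (signs[:, None] * U).mean(axis=0)

def optimize(x0, steps=200, eta=0.05, m=16):
    x = x0.copy()
    for _ in range(steps):
        g = batched_sign_direction(x, m)
        n = np.linalg.norm(g)
        if n > 0:
            x -= eta * g / n  # normalized descent step
    return x

x_final = optimize(np.ones(5))
```

Intuitively, larger batch size $m$ averages more one-bit measurements per step and so sharpens the direction estimate, which is the parallel speed-up the paper quantifies via the $\min\{m,d\}$ factor.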
Keywords

convex optimization, dueling, sign comparison, smooth, strongly convex, convergence rate, suboptimality gap, minimum, optimal

Reviews and Discussion

No review records yet