Advanced Black-Box Tuning of Large Language Models with Limited API Calls

Published in AAAI, 2026

Optimizing Large Language Models (LLMs) via Black-Box Tuning (BBT) is challenging because it relies on derivative-free optimization, which typically requires thousands of expensive API calls to estimate gradients. This paper addresses that inefficiency with Advanced Black-Box Tuning (ABBT), a method designed to achieve competitive performance under tight query budgets.

The authors propose two key innovations to accelerate convergence. First, Instruction-Aware Mapping (IAM) replaces the standard random initialization by leveraging open-source instruction-tuned models to generate semantically meaningful initial prompt embeddings. Second, Langevin Dynamics-based Evolution (LDE) guides the search process, offering a more efficient exploration-exploitation balance than the traditional Covariance Matrix Adaptation Evolution Strategy (CMA-ES). Experiments show that ABBT matches state-of-the-art performance while reducing API calls by 10$\times$ (e.g., converging in 500 calls versus the typical 8,000).
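To make the Langevin-dynamics idea concrete, the sketch below shows a generic Langevin-style update for a black-box objective: the gradient is estimated from two function evaluations via a simultaneous-perturbation (SPSA-style) scheme, and Gaussian noise is injected to keep the search exploratory. This is an illustrative sketch of the general technique, not the paper's LDE algorithm; all function names and hyperparameters (`langevin_blackbox_step`, `step_size`, `fd_eps`) are assumptions for demonstration.

```python
import numpy as np

def langevin_blackbox_step(theta, loss_fn, step_size=0.01,
                           noise_scale=None, fd_eps=0.01, rng=None):
    """One Langevin-style update using a derivative-free gradient estimate.

    `loss_fn` stands in for the expensive black-box API call; names and
    defaults here are illustrative, not taken from the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    # SPSA-style estimate: two loss evaluations per step, regardless of
    # dimensionality, instead of one evaluation per coordinate.
    delta = rng.choice([-1.0, 1.0], size=theta.shape)
    diff = loss_fn(theta + fd_eps * delta) - loss_fn(theta - fd_eps * delta)
    grad_est = diff / (2.0 * fd_eps) * delta
    # Langevin dynamics: gradient descent plus injected Gaussian noise,
    # which balances exploitation (descent) with exploration (noise).
    if noise_scale is None:
        noise_scale = np.sqrt(2.0 * step_size)
    return theta - step_size * grad_est + noise_scale * rng.standard_normal(theta.shape)

# Toy usage: minimize a quadratic using only function evaluations.
rng = np.random.default_rng(0)
theta = np.ones(8)
quad = lambda x: float(np.sum(x ** 2))
for _ in range(200):
    theta = langevin_blackbox_step(theta, quad, step_size=0.01, rng=rng)
```

Because the noise term never vanishes, the iterates hover around low-loss regions rather than converging to a point; in practice the noise scale is often annealed over time.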