Deep-Research agents, which integrate large language models (LLMs) with search tools, have proven effective at handling complex queries that require iterative search planning and reasoning over search results. Evaluations on current benchmarks like BrowseComp rely on black-box live web search APIs, which have notable limitations in (1) fairness: dynamic and opaque web APIs hinder fair comparisons and reproducibility of deep research methods; and (2) transparency: lack of control over the document corpus makes it difficult to isolate retriever contributions. In other words, current evaluations may compare a complete deep research system at a given point in time, but they do not support well-controlled experiments that provide insight into the capability of the underlying deep research LLMs. To address these challenges, we introduce BrowseComp-Plus, a benchmark derived from BrowseComp that employs a fixed, carefully curated corpus. Each query in BrowseComp-Plus includes human-verified supporting documents and mined challenging negatives, enabling controlled experimentation. The benchmark proves effective at distinguishing the performance of deep research systems. For instance, the open-source model Search-R1, when paired with the BM25 retriever, achieves 3.86% accuracy, whereas GPT-5 achieves 55.9%. Integrating GPT-5 with the Qwen3-Embedding-8B retriever further raises its accuracy to 70.1% with fewer search calls. This benchmark enables comprehensive evaluation and disentangled analysis of deep research agents and retrieval methods, fostering insights into retrieval effectiveness, citation accuracy, and context engineering in Deep-Research systems.
Accuracy vs. number of search calls for Deep-Research agents with different retrievers.
The figure shows that Deep-Research agents generally improve final accuracy at the cost of more search calls, whereas better retrieval systems improve overall accuracy while also reducing the number of search calls. That is, better retrievers yield both efficiency and effectiveness. For reference, GPT-5 achieves 59.9% accuracy when evaluated with the Google Search API.
BrowseComp-Plus contains 830 queries sourced from BrowseComp, each of which could take a human more than 2 hours to answer using a search engine. We carefully construct a corpus of ~100K web documents for these queries, designed to meet three criteria:
For each query, we collect the evidence documents in a two-stage process: (1) OpenAI's o3 retrieves candidate evidence documents from the web using the ground-truth question–answer pairs; (2) Human annotators verify the candidates and add missing documents to ensure the corpus contains all evidence needed to fully answer each query.
Positive collection pipeline: o3 searches for initial candidate evidence documents, which are then verified and augmented by human annotators
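A minimal sketch of the automated first stage of this pipeline, assuming the OpenAI Responses API with the hosted web-search tool; the tool name, prompt wording, and output handling are illustrative assumptions, and the human verification stage (2) is not shown.

```python
# Sketch of stage (1): ask o3 to surface candidate evidence pages for one
# ground-truth question-answer pair via hosted web search. The tool type and
# prompt are assumptions; human annotators verify the output in stage (2).
from openai import OpenAI

client = OpenAI()

def find_candidate_evidence(question: str, answer: str) -> str:
    prompt = (
        "Find web pages containing the evidence needed to answer this question.\n"
        f"Question: {question}\n"
        f"Ground-truth answer: {answer}\n"
        "List the URLs of the supporting documents you relied on."
    )
    response = client.responses.create(
        model="o3",
        tools=[{"type": "web_search_preview"}],  # hosted web-search tool
        input=prompt,
    )
    return response.output_text  # candidate URLs, to be verified by annotators
```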
In addition to evidence documents, annotators also label the documents that semantically contain the final answer, designated as gold documents. These labels are later used to perform retriever-only evaluation.
For example, a query might ask for the number of publications by an author, with the ground-truth answer being "7". A gold document could be the author's personal webpage listing their publications; while it may not contain the string "7" explicitly, it semantically contains the answer.
For the negative collection, we take each query from BrowseComp, and prompt GPT-4o to decompose the query into simpler, self-contained sub-queries. For each sub-query, we use a Google Search API provider to search the web, and scrape the results as hard negatives.
Negative collection pipeline: GPT-4o decomposes each query into sub-queries, which are issued to Google Search to collect hard negatives
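A minimal sketch of this negative-mining loop, where `google_search` and `scrape_page` are hypothetical stand-ins for the Search API provider and scraper; the decomposition prompt is illustrative rather than the exact one used.

```python
# Sketch of hard-negative mining: GPT-4o decomposes a BrowseComp query into
# self-contained sub-queries; each sub-query is searched and the hits scraped
# as hard negatives. `google_search` and `scrape_page` are hypothetical helpers.
from openai import OpenAI

client = OpenAI()

def mine_hard_negatives(query: str, google_search, scrape_page, k: int = 10) -> list[dict]:
    # (1) Decompose the query into simpler, self-contained sub-queries.
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": "Decompose this question into simpler, self-contained "
                       "web-search sub-queries, one per line:\n" + query,
        }],
    )
    sub_queries = [line.strip("-• ").strip()
                   for line in resp.choices[0].message.content.splitlines()
                   if line.strip()]

    # (2) Search each sub-query and scrape the results as hard-negative documents.
    negatives = []
    for sq in sub_queries:
        for url in google_search(sq, num_results=k):
            negatives.append({"url": url, "text": scrape_page(url), "sub_query": sq})
    return negatives
```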
| LLM | Retriever | Accuracy | Recall | Search Calls | Calibration Error |
|---|---|---|---|---|---|

End-to-end agent accuracy on BrowseComp-Plus across LLMs and retrievers
We evaluate popular Deep-Research agents paired with different retrievers on the following metrics: end-to-end answer accuracy, recall of the labeled evidence documents, number of search calls, and calibration error.
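Calibration error here compares the agent's self-reported confidence against its actual accuracy. Below is a minimal sketch of a standard binned expected-calibration-error computation, assuming each run yields a correctness flag and a stated confidence in [0, 1]; the equal-width binning and number of bins are illustrative choices, not necessarily the exact protocol used.

```python
# Sketch of binned expected calibration error (ECE) over agent runs.
# Assumes each run yields (is_correct, stated confidence in [0, 1]);
# equal-width bins and n_bins=10 are illustrative choices.
import numpy as np

def expected_calibration_error(correct, confidence, n_bins: int = 10) -> float:
    correct = np.asarray(correct, dtype=float)
    confidence = np.asarray(confidence, dtype=float)
    bin_ids = np.minimum((confidence * n_bins).astype(int), n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bin_ids == b
        if mask.any():
            gap = abs(correct[mask].mean() - confidence[mask].mean())
            ece += mask.mean() * gap  # weight each bin by its share of runs
    return ece

# Example: four runs with graded correctness and stated confidences.
print(expected_calibration_error([1, 0, 1, 1], [0.9, 0.8, 0.6, 0.95]))
```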
Stronger retrievers (e.g., Qwen3-Embedding-8B) consistently improve end-to-end accuracy of Deep-Research agents. They also reduce the number of search calls, likely because higher-quality initial retrievals reduce the need for follow-up searches; further, fewer search calls translate to fewer output tokens. That is, better retrievers deliver both efficiency and effectiveness gains.
In general, more search calls correlate with higher accuracy. Closed-source agents tend to make substantially more search calls than open-source ones; for instance, OpenAI's gpt-5 and o3 average over 20 search calls per query, while Qwen3-32B and SearchR1-32B make fewer than 2, despite being explicitly prompted to use the tool. This gap in the ability to interleave extensive search calls and reasoning likely contributes to the gap in end-to-end accuracy between closed- and open-source agents.
We analyze how the reasoning effort of LLMs influences answer quality and retrieval behavior. To isolate this effect, we focus on OpenAI's OSS family, which offers three reasoning modes: low, medium, and high. These modes differ in the amount of computational effort and deliberation the model applies before producing an answer, with higher modes generally involving longer intermediate reasoning steps. Across all model sizes and retrievers, increasing the reasoning effort consistently boosts accuracy.
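A minimal sketch of how such a reasoning-effort sweep might be wired up, assuming an OpenAI-compatible endpoint serving a gpt-oss model that honors the `reasoning_effort` parameter; the endpoint URL and model name are assumptions, and some serving stacks instead expect the effort level in the system prompt.

```python
# Sketch of sweeping reasoning effort (low / medium / high) for a gpt-oss model
# behind an OpenAI-compatible server. Endpoint, model name, and support for the
# `reasoning_effort` parameter are assumptions about the serving setup.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # assumed local server

def answer_with_effort(question: str, effort: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-oss-20b",        # assumed model name on the server
        reasoning_effort=effort,     # "low" | "medium" | "high"
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

for effort in ("low", "medium", "high"):
    print(effort, answer_with_effort("What year was the cited paper published?", effort))
```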
| Retriever | Recall@5 | Recall@100 | Recall@1000 | nDCG@10 |
|---|---|---|---|---|
We also evaluate the effectiveness of the retrievers in isolation, measuring each retriever's recall@k and nDCG@k against the labeled evidence and gold documents.
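A minimal sketch of these two metrics with binary relevance, assuming each query comes with the retriever's ranked list of document IDs and the set of labeled evidence or gold document IDs; the log2 discount is the standard nDCG formulation.

```python
# Sketch of retriever-only metrics: recall@k and nDCG@k with binary relevance.
# `ranking` is the retriever's ranked doc-ID list for a query; `relevant` is
# the set of labeled evidence / gold doc IDs for that query.
import math

def recall_at_k(ranking: list[str], relevant: set[str], k: int) -> float:
    return len(set(ranking[:k]) & relevant) / len(relevant)

def ndcg_at_k(ranking: list[str], relevant: set[str], k: int) -> float:
    dcg = sum(1.0 / math.log2(i + 2) for i, doc in enumerate(ranking[:k]) if doc in relevant)
    ideal = sum(1.0 / math.log2(i + 2) for i in range(min(len(relevant), k)))
    return dcg / ideal if ideal > 0 else 0.0

# Example: three labeled evidence docs, one of them retrieved in the top 5.
ranking, relevant = ["d9", "d2", "d7", "d1", "d4"], {"d2", "d3", "d8"}
print(recall_at_k(ranking, relevant, k=5))    # 1/3
print(ndcg_at_k(ranking, relevant, k=10))
```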
Coming soon...