[2025-06-05]: We release Swing-Arena!
We present SwingArena, an adversarial evaluation framework for Large Language Models (LLMs) that approximates real-world software development workflows. Unlike traditional static benchmarks, SwingArena models the collaborative process of software iteration by pairing LLMs as submitters, who generate patches, and reviewers, who create test cases and verify the patches through continuous integration (CI) pipelines. To support these interactive evaluations, we introduce a retrieval-augmented code generation (RACG) module that handles long-context challenges by providing relevant code snippets from large codebases across multiple programming languages (C++, Python, Rust, and Go). Our adversarial evaluation can surface limitations that are often overlooked by traditional evaluation settings. Our experiments, using over 400 high-quality real-world GitHub issues selected from a pool of 2,300 issues, indicate differing behavioral tendencies across models in patch generation versus validation. SwingArena offers a scalable and extensible approach to evaluating LLMs in CI-driven software development settings.
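To make the submitter–reviewer protocol concrete, below is a minimal sketch of one adversarial round in Python. The names used here (`IssueTask`, `generate_patch`, `generate_tests`, `apply_and_run`) are hypothetical illustrations under the description above, not the released SwingArena API.

```python
# Minimal sketch of one adversarial round; all names here are illustrative.
from dataclasses import dataclass

@dataclass
class IssueTask:
    repo: str               # target repository
    issue_text: str         # GitHub issue description
    retrieved_context: str  # code snippets supplied by the RACG module

def run_round(task: IssueTask, submitter, reviewer, ci) -> str:
    """Pit a submitter LLM against a reviewer LLM on a single issue."""
    # The submitter proposes a patch, conditioned on the retrieved code context.
    patch = submitter.generate_patch(task.issue_text, task.retrieved_context)
    # The reviewer writes test cases intended to expose flaws in candidate patches.
    tests = reviewer.generate_tests(task.issue_text, task.retrieved_context)
    # The CI pipeline applies the patch, runs the tests, and reports the outcome.
    result = ci.apply_and_run(task.repo, patch, tests)
    return "submitter" if result.all_passed else "reviewer"
```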
Swing-Arena
The data construction pipeline for Swing-Arena consists of several key stages: repository collection, pull request extraction, task instance creation, quality filtering, and multiple rounds of CI-based validation.
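For illustration, these stages can be read as the Python skeleton below. Every helper is a simplified placeholder that mirrors the stage names above, not the actual collection tooling.

```python
# Simplified skeleton of the data construction pipeline; placeholders only.
def collect_repositories(language: str) -> list[str]:
    # Stage 1: gather candidate repositories for a given language (C++, Python, Rust, Go).
    return [f"example-org/{language}-project"]

def extract_pull_requests(repo: str) -> list[dict]:
    # Stage 2: collect issue-linked, merged pull requests from each repository.
    return [{"repo": repo, "issue": "example issue", "patch": "diff ...", "tests": "test ..."}]

def create_task_instance(pr: dict) -> dict:
    # Stage 3: package issue text, gold patch, and gold tests into a task instance.
    return {"issue": pr["issue"], "gold_patch": pr["patch"], "gold_tests": pr["tests"]}

def passes_quality_filter(task: dict) -> bool:
    # Stage 4: discard low-quality instances (e.g. empty issues or trivial patches).
    return bool(task["issue"]) and bool(task["gold_patch"])

def passes_ci_validation(task: dict) -> bool:
    # Stage 5: keep only instances whose gold patch passes the CI pipeline.
    return True  # stands in for an actual CI run

def build_dataset(language: str) -> list[dict]:
    tasks = [create_task_instance(pr)
             for repo in collect_repositories(language)
             for pr in extract_pull_requests(repo)]
    return [t for t in tasks if passes_quality_filter(t) and passes_ci_validation(t)]
```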
Clarity and Difficulty Distribution
Length distributions in different languages
Experiment Results
Evaluation of Code Submission vs. Test Submission Capabilities Among Proprietary LLMs.
@article{xu2025swingarena,
  title={SwingArena: Competitive Programming Arena for Long-context GitHub Issue Solving},
  author={Xu, Wendong and Xiong, Jing and Zhao, Chenyang and Chen, Qiujiang and Wang, Haoran and Shen, Hui and Wan, Zhongwei and Dai, Jianbo and Wu, Taiqiang and Xiao, He and others},
  journal={arXiv preprint arXiv:2505.23932},
  year={2025}
}