SwingArena

Competitive Programming Arena for Long-context GitHub Issue Solving

1 The University of Hong Kong, 2 University of California, Los Angeles, 3 Tsinghua University,
4 University of Michigan, 5 The Ohio State University, 6 University of Edinburgh, 7 The Hong Kong University of Science and Technology (Guangzhou), 8 The Hong Kong Polytechnic University, 9 The Chinese University of Hong Kong, 10 LMSYS Org
* These authors contributed equally.
Dataset overview.

Illustration of the SwingArena adversarial evaluation framework.

🔔 News

🚀 [2025-06-05]: We release SwingArena! 🚀

Introduction

We present SwingArena, an adversarial evaluation framework for Large Language Models (LLMs) that approximates real-world software development workflows. Unlike traditional static benchmarks, SwingArena models the collaborative process of software iteration by pairing LLMs as submitters, who generate patches, and reviewers, who create test cases and verify the patches through continuous integration (CI) pipelines. To support these interactive evaluations, we introduce a retrieval-augmented code generation (RACG) module that handles long-context challenges by providing relevant code snippets from large codebases across multiple programming languages (C++, Python, Rust, and Go). Our adversarial evaluation can surface limitations that are often overlooked by traditional evaluation settings. Our experiments, using over 400 high-quality real-world GitHub issues selected from a pool of 2,300 issues, indicate differing behavioral tendencies across models in patch generation versus validation. SwingArena offers a scalable and extensible approach to evaluating LLMs in CI-driven software development settings.
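To make the evaluation flow concrete, below is a minimal Python sketch of one adversarial round as described above: a submitter model proposes a patch, a reviewer model writes tests, and a CI run decides the outcome. All names here (Issue, retrieve_context, run_ci, and the generate_patch / generate_tests methods) are illustrative placeholders, not the released SwingArena API.

from dataclasses import dataclass

@dataclass
class Issue:
    repo: str          # e.g. a GitHub "owner/name" slug
    description: str   # issue text shown to both models
    language: str      # one of "cpp", "python", "rust", "go"

def retrieve_context(issue: Issue, top_k: int = 5) -> list[str]:
    # RACG step (placeholder): return the top-k code snippets from the
    # repository that are most relevant to the issue description.
    raise NotImplementedError("plug in a retrieval backend")

def run_ci(repo: str, patch: str, tests: str) -> bool:
    # Placeholder: apply the patch, add the reviewer's tests, and run the
    # repository's CI pipeline; return True if all tests pass.
    raise NotImplementedError("plug in a CI runner")

def adversarial_round(issue: Issue, submitter, reviewer) -> str:
    # One submitter-vs-reviewer round, settled by CI.
    context = retrieve_context(issue)
    patch = submitter.generate_patch(issue.description, context)   # submitter role
    tests = reviewer.generate_tests(issue.description, context)    # reviewer role
    return "submitter" if run_ci(issue.repo, patch, tests) else "reviewer"

Swapping which model plays submitter and which plays reviewer over the same issue set is what exposes the differing patch-generation versus validation tendencies reported in the experiments.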

SwingArena

Overview of the SwingArena data construction pipeline.

The data construction pipeline for SwingArena consists of several key stages: repository collection, pull request extraction, task instance creation, quality filtering, and multiple CI-based validation passes.
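As a rough illustration of how these stages fit together, the sketch below strings them into a single loop. Every helper is a stub standing in for logic the real pipeline would implement, and all names are hypothetical rather than taken from the SwingArena codebase.

def extract_pull_requests(repo: str) -> list[dict]:
    # Placeholder: merged pull requests in `repo` that resolve an issue.
    return []

def create_task_instance(repo: str, pr: dict) -> dict | None:
    # Placeholder: pair the linked issue with the gold patch and its tests.
    return None

def passes_quality_filters(task: dict) -> bool:
    # Placeholder: heuristic filters, e.g. non-trivial patch and clear issue text.
    return True

def validates_in_ci(task: dict) -> bool:
    # Placeholder: the gold patch must pass the repository's CI pipeline.
    return True

def build_task_instances(repos: list[str]) -> list[dict]:
    instances = []
    for repo in repos:                              # repository collection
        for pr in extract_pull_requests(repo):      # pull request extraction
            task = create_task_instance(repo, pr)   # task instance creation
            if task and passes_quality_filters(task) and validates_in_ci(task):
                instances.append(task)              # kept only after quality + CI checks
    return instances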

Comparison with existing benchmarks.

Statistics

Experiment Results

Main results of SwingArena.


Evaluation of Code Submission vs. Test Submission Capabilities Among Proprietary LLMs.

BibTeX

@article{xu2025swingarena,
  title={SwingArena: Competitive Programming Arena for Long-context GitHub Issue Solving},
  author={Xu, Wendong and Xiong, Jing and Zhao, Chenyang and Chen, Qiujiang and Wang, Haoran and Shen, Hui and Wan, Zhongwei and Dai, Jianbo and Wu, Taiqiang and Xiao, He and others},
  journal={arXiv preprint arXiv:2505.23932},
  year={2025}
}