Amazon’s SWE-PolyBench just exposed the dirty secret about your AI coding assistant

Amazon Web Services today introduced SWE-PolyBench, a comprehensive multi-language benchmark designed to evaluate AI coding assistants across a diverse range of programming languages and real-world scenarios. The benchmark addresses significant limitations in existing evaluation frameworks and offers researchers and developers new ways to assess how effectively AI agents navigate complex codebases.
“Now they have a benchmark that they can evaluate on to assess whether the coding agents are able to solve complex programming tasks,” said Anoop Deoras, Director of Applied Sciences for Generative AI Applications and Developer Experiences at AWS, in an interview with VentureBeat. “The real world offers you more complex tasks. In order to fix a bug or do feature building, you need to touch multiple files, as opposed to a single file.”
The release comes as AI-powered coding tools have exploded in popularity, with major technology companies integrating them into development environments and standalone products. While these tools show impressive capabilities, evaluating their performance has remained challenging — particularly across different programming languages and varying task complexities.
SWE-PolyBench contains over 2,000 curated coding challenges derived from real GitHub issues spanning four languages: Java (165 tasks), JavaScript (1,017 tasks), TypeScript (729 tasks), and Python (199 tasks). The benchmark also includes a stratified subset of 500 issues (SWE-PolyBench500) designed for quicker experimentation.
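For teams that want to poke at the data themselves, a few lines of Python are enough to reproduce the language breakdown above. The sketch below is illustrative only: the Hugging Face dataset ID "AmazonScience/SWE-PolyBench", the split name, and the "language" field are assumptions, not details confirmed in this article.

```python
# Minimal sketch: inspecting SWE-PolyBench's task distribution by language.
# Assumptions (hypothetical, not confirmed here): the dataset ID
# "AmazonScience/SWE-PolyBench", the "test" split, and a "language" field.
from collections import Counter

from datasets import load_dataset

ds = load_dataset("AmazonScience/SWE-PolyBench", split="test")

# Count tasks per programming language to compare against the published breakdown.
counts = Counter(example["language"] for example in ds)
for language, n_tasks in counts.most_common():
    print(f"{language}: {n_tasks} tasks")
```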
“The task diversity and the diversity of the programming languages was missing,” Deoras explained about existing benchmarks. “In SWE-Bench today, there is only a single programming language, Python, and there is a single task: bug fixes. In PolyBench, as opposed to SWE-Bench, we have expanded this benchmark to include three additional languages.”
The new benchmark directly addresses limitations in SWE-Bench, which has emerged as the de facto standard for coding agent evaluation with over 50 leaderboard submissions. Despite its pioneering role, SWE-Bench focuses solely on Python repositories, predominantly features bug-fixing tasks, and is significantly skewed toward a single codebase — the Django repository accounts for over 45% of all tasks.
“Intentionally, we decided to have a little bit over representation for JavaScript and TypeScript, because we do have SWE-Bench which has Python tasks already,” Deoras noted. “So rather than over representing on Python, we made sure that we have enough representations for JavaScript and TypeScript in addition to Java.”
Why simple pass/fail metrics don’t tell the whole story about AI coding performance
A key innovation in SWE-PolyBench is its introduction of more sophisticated evaluation metrics beyond the traditional “pass rate,” which simply measures whether a generated patch successfully resolves a coding issue.
“The evaluation of these coding agents have primarily been done through the metric called pass rate,” Deoras said. “Pass rate, in short, is basically just a proportion of the tasks that successfully run upon the application of the patch that the agents are producing. But this number is a very high level, aggregated statistic. It doesn’t tell you the nitty gritty detail, and in particular, it doesn’t tell you how the agent came to that resolution.”
The new metrics include file-level localization, which assesses an agent’s ability to identify which files need modification within a repository, and Concrete Syntax Tree (CST) node-level retrieval, which evaluates how accurately an agent can pinpoint specific code structures requiring changes.
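To make the file-level metric concrete, here is a minimal sketch of how precision and recall could be computed for file-level localization, treating the agent's patch as a set of modified file paths and the ground-truth patch as the reference set. The function and the example paths are illustrative, not drawn from Amazon's evaluation harness.

```python
# Illustrative sketch of file-level localization scoring: precision and recall
# over the set of files an agent modifies versus the files changed in the
# ground-truth patch. Names and data shapes are assumptions, not Amazon's harness.

def file_localization_metrics(predicted_files: set[str], gold_files: set[str]) -> dict[str, float]:
    """Compute precision and recall for the files an agent chose to edit."""
    hits = predicted_files & gold_files
    precision = len(hits) / len(predicted_files) if predicted_files else 0.0
    recall = len(hits) / len(gold_files) if gold_files else 0.0
    return {"precision": precision, "recall": recall}


# Example: the agent edited two files, one of which matches the reference patch.
print(file_localization_metrics(
    predicted_files={"src/auth/session.py", "src/auth/utils.py"},
    gold_files={"src/auth/session.py", "tests/test_session.py"},
))  # {'precision': 0.5, 'recall': 0.5}
```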
“In addition to pass rate, we have the precision and recall. And in order to get to the precision and recall metric, we are looking at a program analysis tool called concrete syntax tree,” Deoras explained. “It is telling you how your core file structure is composed, so that you can look at what is the class node, and within that class, what are the function nodes and the variables.”
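The idea behind node-level scoring is that a repository file can be broken down into structural units such as classes and functions, and an agent can be judged on whether it touches the right ones. As a rough stand-in, the sketch below uses Python's built-in ast module, which produces an abstract rather than concrete syntax tree, purely to show what enumerating class and function nodes looks like; it does not reproduce SWE-PolyBench's actual CST tooling.

```python
# Rough illustration of node-level structure extraction. SWE-PolyBench scores
# retrieval against concrete syntax tree (CST) nodes; Python's built-in ast
# module builds an *abstract* syntax tree and is used here only as a stand-in
# to show the idea of listing the class and function nodes in a source file.
import ast

source = """
class SessionManager:
    def create(self, user_id):
        return {"user": user_id}

def helper():
    pass
"""

tree = ast.parse(source)
for node in ast.walk(tree):
    if isinstance(node, ast.ClassDef):
        print(f"class node: {node.name} (line {node.lineno})")
    elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
        print(f"function node: {node.name} (line {node.lineno})")
```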
How Python remains dominant while complex tasks expose AI limitations
Amazon’s evaluation of several open-source coding agents on SWE-PolyBench revealed clear patterns. Python remains the strongest language for all tested agents, likely due to its prevalence in training data and existing benchmarks. Performance degrades as task complexity increases, particularly when modifications to three or more files are required.
Different agents show varying strengths across task categories. While performance on bug-fixing tasks is relatively consistent, there’s more variability between agents when handling feature requests and code refactoring.
The benchmark also found that the informativeness of problem statements significantly impacts success rates, suggesting that clear issue descriptions remain crucial for effective AI assistance.
What SWE-PolyBench means for enterprise developers working across multiple languages
SWE-PolyBench arrives at a critical juncture in the development of AI coding assistants. As these tools move from experimental to production environments, the need for rigorous, diverse, and representative benchmarks has intensified.
“Over time, not only the capabilities of LLMs have evolved, but at the same time, the tasks have gotten more and more complex,” Deoras observed. “There is a need for developers to solve more and more complex tasks in a synchronous manner using these agents.”
The benchmark’s expanded language support makes it particularly valuable for enterprise environments where polyglot development is common. Java, JavaScript, TypeScript, and Python consistently rank among the most popular programming languages in enterprise settings, making SWE-PolyBench’s coverage highly relevant to real-world development scenarios.
Amazon has made the entire SWE-PolyBench framework publicly available. The dataset is accessible on Hugging Face, and the evaluation harness is available on GitHub. A dedicated leaderboard has been established to track the performance of various coding agents on the benchmark.
“We extended the SWE-Bench data acquisition pipeline to support these three additional languages,” Deoras said. “The hope is that we will be able to extrapolate this process further in the future and extend beyond four languages, extend beyond the three tasks that I talked about, so that this benchmark becomes even more comprehensive.”
As the AI coding assistant market heats up with offerings from every major tech company, SWE-PolyBench provides a crucial reality check on their actual capabilities. The benchmark’s design acknowledges that real-world software development demands more than simple bug fixes in Python—it requires working across languages, understanding complex codebases, and tackling diverse engineering challenges.
For enterprise decision-makers evaluating AI coding tools, SWE-PolyBench offers something invaluable: a way to separate marketing hype from genuine technical capability. After all, the true test of an AI coding assistant isn’t how well it performs on simplified demos, but whether it can handle the messy, multi-language complexity of actual software projects — the kind developers wrestle with every day.