Quantifying the Impact of GenAI Developer Tools
It’s widely agreed that GenAI will transform software development, and GenAI dev tools have emerged as cornerstones of 8VC’s portfolio and broader AI productivity thesis. Until now, however, hard data on the scale and specifics of this shift has been missing. In competitive industries, the speed and efficiency gains promised by GenAI coding tools could well mean the difference between market leadership and obsolescence. Companies can’t afford to select the wrong tools and end up on the wrong side of the AI adoption curve.
This challenge is hardly unique to the present day. Traditionally, managing software team performance has been more art than science, with little supporting quantitative data. As development leaders, we believe we know great work when we see it. We also think of development as inherently rigorous in a way that, say, sales is not. Ironically, sales teams have far more quantitative data and systems than development teams, because they are ultimately measured in dollars.
True believers rave about massive GenAI productivity gains, with claims of enabling everyone to become a 10x developer. However, most of the evidence is anecdotal or based on qualitative surveys, not quantitative data. And while AI developer tools have evolved at a dazzling pace, from autocomplete to agentic coding in just a few years, visibility into their effectiveness has remained fuzzy.
Without data, companies risk making decisions based on outdated assumptions and word of mouth, rather than firm evidence. Given the massive investments being made in AI, and the importance of software across industries and institutions, this risk is unacceptable.
That’s why we’re launching the first empirical study of its kind to measure how GenAI coding tools affect software development across participating companies, including:
- Productivity impacts of GenAI tools such as GitHub Copilot, Amazon Q, Cursor, and Cognition
- Which coding assistants provide the biggest impacts
- ROI for specific tools
This study will finally give engineering and business leaders the data and KPIs they need to choose the right tools for their teams, and adopt GenAI with greater confidence. We are partnering with Software.com to fully automate data collection, using their standardized KPIs:
- Productivity: New Deliveries per Developer
- Quality: Rework (vs. New)
- Velocity: Lead Time
- Cost Efficiency: Cost per New Delivery
- Scale: Active Developers
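To make one of these KPIs concrete: the source doesn't specify how Software.com computes Rework (vs. New), but a common approach is to mine Git metadata and treat edits to recently authored lines as rework, while brand-new lines and changes to long-stable code count as new work. Below is a minimal, purely illustrative sketch under that assumption, with a hypothetical 21-day rework window and made-up `LineChange` records; it is not Software.com's actual methodology.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List, Optional

# Assumption (not from the study): a changed line counts as "rework" if the
# line it replaced was authored within the last 21 days; otherwise it's "new".
REWORK_WINDOW = timedelta(days=21)

@dataclass
class LineChange:
    commit_time: datetime                  # when this change landed
    prior_author_time: Optional[datetime]  # when the replaced line was written (None = brand-new line)

def classify(change: LineChange) -> str:
    """Label a single changed line as 'new' or 'rework'."""
    if change.prior_author_time is None:
        return "new"  # a freshly added line is new work
    age = change.commit_time - change.prior_author_time
    return "rework" if age <= REWORK_WINDOW else "new"

def rework_ratio(changes: List[LineChange]) -> float:
    """Fraction of changed lines classified as rework (0.0 for an empty set)."""
    if not changes:
        return 0.0
    reworked = sum(1 for c in changes if classify(c) == "rework")
    return reworked / len(changes)

# Worked example with hypothetical commits on a single day.
now = datetime(2024, 6, 1)
changes = [
    LineChange(now, None),                      # brand-new line        -> new
    LineChange(now, now - timedelta(days=5)),   # edit to recent code   -> rework
    LineChange(now, now - timedelta(days=90)),  # refactor of old code  -> new
    LineChange(now, now - timedelta(days=2)),   # short-cycle churn     -> rework
]
print(rework_ratio(changes))  # 0.5
```

The window length and the treatment of refactors to old code are design choices that materially change the metric, which is one reason the study's promise of standardized, transparent KPI definitions matters.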
Participation requires no coding or data integration, and includes standard connectors for GitHub, GitLab, Bitbucket, and Azure DevOps. Their platform does not read code, requires only minimal permissions to analyze Git metadata, encrypts all data at rest, and is SOC 2 Type 2 compliant.
Participating companies will receive the final study free of charge, along with a complimentary analysis of their teams’ performance using AI coding tools as compared to industry benchmarks. We will share the findings from the study’s first iteration in a follow-up blog post.
In addition, we’ll be joined by Software.com’s CEO, Jedidiah Yueh, for an American Optimist episode to dive into the results, and discuss broader implications for development observability and productivity. As founding CEO of Avamar (acquired by EMC) and Delphix (acquired by Perforce), Jedidiah has led pioneering companies spanning two waves of data management. He’s earned the trust of both CIOs and development teams, and brings a unique perspective to the GenAI wave.
We should caveat that this study is very much a starting point. We will collect data over time, across a wide range of environments, until findings reach statistical significance, transparently comparing multiple methods and iterating based on community feedback. While a perfect quantitative approach is unlikely, computation has always pushed the frontier of what can be known as well as what can be achieved, and we expect this era to be no different.
At a time when zealots and skeptics dominate the conversation around AI, this study will provide the objective data needed to help identify the right tools, increase their adoption, and optimize their usage. Our goal is to help participants accelerate their journey into AI-assisted development with rigor and clarity.
Join this landmark study to measure the true impact of GenAI development tools.