
Codeium’s CPO and PCW Metrics: Measuring the Real Value of AI Code Completion

The article explains Codeium’s two new metrics—Characters per Opportunity (CPO) and Percentage of Code Written (PCW)—how they are calculated, why they better capture the true value of AI‑powered code completion compared to traditional benchmarks, and how they guide product decisions.


Codeium introduced two metrics, Characters per Opportunity (CPO) and Percentage of Code Written (PCW), to provide a more reliable measurement of the value delivered by AI code‑completion assistants.

CPO is defined as the average number of characters generated by the AI for each opportunity to suggest code. It is calculated by multiplying Attempt Rate, Feedback Rate, Acceptance Rate, the average number of tokens per acceptance, and the average number of characters per token:

Characters per Opportunity =
    Attempt Rate ×
    Feedback Rate ×
    Acceptance Rate ×
    (Avg Tokens per Acceptance) ×
    (Avg Characters per Token)

The metric breaks down into five factors: Attempt Rate (how often the AI tries to suggest), Feedback Rate (how often a suggestion reaches the developer), Acceptance Rate (how often a suggestion is accepted), average tokens per acceptance, and average characters per token.
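The multiplicative structure of the formula can be sketched in a few lines. The factor values below are illustrative assumptions, not Codeium's actual telemetry:

```python
# Hypothetical sketch of the CPO formula; all factor values below are
# illustrative assumptions, not real Codeium telemetry.
def characters_per_opportunity(
    attempt_rate: float,           # fraction of opportunities where the AI attempts a suggestion
    feedback_rate: float,          # fraction of attempts actually shown to the developer
    acceptance_rate: float,        # fraction of shown suggestions that are accepted
    avg_tokens_per_acceptance: float,
    avg_chars_per_token: float,
) -> float:
    return (attempt_rate * feedback_rate * acceptance_rate
            * avg_tokens_per_acceptance * avg_chars_per_token)

# Example with made-up numbers:
cpo = characters_per_opportunity(0.90, 0.95, 0.25, 12.0, 4.0)
print(round(cpo, 2))  # 0.9 * 0.95 * 0.25 * 12 * 4 = 10.26
```

Because the factors multiply, a weakness in any single one (e.g., latency suppressing feedback rate) drags down the whole metric, which is precisely why CPO is harder to game than any one factor alone.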

Automatic completion is highlighted as a dense, passive AI feature: it can surface thousands of suggestions per developer per day, so its aggregate value can far exceed that of chat or other sparser interaction modes, even when those modes feel more impressive per interaction.

Existing benchmark metrics (e.g., acceptance rate alone) are easily gamed, leading to misguided product decisions. CPO, being less susceptible to manipulation, offers a more trustworthy indicator of actual developer value.

Comparisons with other tools illustrate this point: GitHub Copilot shows a high acceptance rate but low attempt and feedback rates; Sourcegraph Cody suffers from high latency that reduces feedback rate; TabNine inflates acceptance by offering very short suggestions. All of these tools can look strong on traditional metrics while delivering less real value.

Codeium’s current numbers show a line‑end CPO of 1.78 and an inline CPO of 0.30, resulting in a weighted CPO of 1.27. Its PCW stands at 44.6%, meaning that 44.6% of newly committed code originates from Codeium suggestions.
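The article does not state the mix of line-end versus inline opportunities, but the published weighted figure lets us back one out. The share below is an assumption reverse-engineered to reproduce the 1.27 result:

```python
# Per-mode CPO figures from the article; the opportunity mix is NOT given,
# so line_end_share is an assumed value chosen to match the published 1.27.
line_end_cpo = 1.78
inline_cpo = 0.30
line_end_share = 0.655  # assumed fraction of opportunities that are line-end

weighted_cpo = line_end_cpo * line_end_share + inline_cpo * (1 - line_end_share)
print(round(weighted_cpo, 2))  # 1.27
```

In other words, if roughly two thirds of completion opportunities occur at line ends, the per-mode figures are consistent with the reported weighted CPO.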

The CPO metric directly influences product decisions. For example, a larger completion model increased acceptance rate but also raised latency, which lowered feedback rate enough to offset the CPO gain, demonstrating the trade‑off between model size, latency, and overall value.
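This trade-off falls directly out of the multiplicative formula. The numbers below are illustrative, not Codeium's measurements; they show how a latency-driven drop in feedback rate can cancel an acceptance-rate gain:

```python
# Illustrative numbers (not Codeium's) showing how a latency hit can
# erase an acceptance-rate gain in the multiplicative CPO formula.
def cpo(attempt: float, feedback: float, acceptance: float,
        tokens: float, chars: float) -> float:
    return attempt * feedback * acceptance * tokens * chars

base  = cpo(0.90, 0.95, 0.25, 12.0, 4.0)  # smaller, faster model
# Larger model: acceptance rises 25% -> 30%, but added latency means
# fewer suggestions arrive in time, so feedback drops 95% -> 79%.
large = cpo(0.90, 0.79, 0.30, 12.0, 4.0)

print(round(base, 2), round(large, 2))  # 10.26 vs 10.24: the gain is wiped out
```

Under these assumed numbers the larger model actually delivers slightly *less* value per opportunity, despite its higher acceptance rate, which is exactly the kind of result that acceptance rate alone would hide.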

The article distinguishes between actual value (captured by CPO) and perceived value (driven by acceptance rate), noting that developers feel satisfaction when they press Tab to accept suggestions, even if the underlying metric does not reflect true productivity gains.

PCW measures the proportion of new code characters that can be attributed to accepted AI suggestions, providing a concrete view of how much of the codebase is AI‑generated.
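A minimal sketch of the PCW ratio, assuming the inputs are simply character counts (the exact attribution method is not specified in the article):

```python
# Hedged sketch of PCW: characters from accepted AI suggestions divided
# by all newly committed characters. The attribution method and names
# are assumptions for illustration.
def percentage_of_code_written(accepted_ai_chars: int, total_new_chars: int) -> float:
    if total_new_chars == 0:
        return 0.0
    return 100.0 * accepted_ai_chars / total_new_chars

# A period in which 4,460 of 10,000 new characters came from accepted
# suggestions would yield the 44.6% figure quoted in the article.
print(percentage_of_code_written(4460, 10000))  # 44.6
```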

In conclusion, Codeium emphasizes building products that deliver genuine value rather than optimizing for marketing‑friendly metrics, confidently stating that nearly half of new code from typical users is written with Codeium’s assistance.

software engineering, product metrics, Codeium, AI code completion, CPO metric, PCW metric
Written by Continuous Delivery 2.0: tech and case studies on organizational management, team management, and engineering efficiency.
