Qodo Secures $70M to Verify AI-Generated Code as Adoption Accelerates
The rapid expansion of AI-powered code generation is creating a critical new challenge: ensuring the reliability and security of the resulting software. Qodo, a New York-based startup, is addressing this bottleneck with AI agents designed for code review, testing, and governance. The company just raised a $70 million Series B round led by Qumra Capital, bringing total funding to $120 million.

The Emerging Crisis of Trust in AI Code

AI tools like OpenClaw and Claude Code are producing unprecedented volumes of code, but output speed doesn’t guarantee quality. Many enterprises are discovering that AI-generated code requires rigorous verification to prevent bugs, security vulnerabilities, and compliance issues. This is not a theoretical problem. A recent survey shows that 95% of developers don’t fully trust AI-generated code, yet nearly half (48%) fail to consistently review it before deployment.

This discrepancy between awareness and practice reveals a critical gap. Companies are pushing ahead with AI code generation, but they lack the tools to manage the inherent risks. The core problem is that AI models lack the contextual understanding needed to evaluate code within a real-world organizational framework.

Qodo’s Approach: Beyond Simple LLM Checks

Qodo differentiates itself by focusing on systemic impact rather than superficial changes. Instead of just identifying what has changed, Qodo assesses how those changes affect the entire system. This includes factoring in company-specific standards, historical context, and risk tolerance.
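To make the distinction concrete, here is a purely hypothetical sketch of the difference between reviewing only a diff and assessing its systemic impact. All names, thresholds, and data structures are invented for illustration and do not describe Qodo's actual implementation:

```python
# Hypothetical contrast between diff-only review and "systemic impact"
# review. The call graph, org rules, and thresholds are invented examples.

def diff_only_review(changed_lines):
    """Looks only at what changed in the diff (the superficial approach)."""
    return [line for line in changed_lines if "TODO" in line]

def systemic_review(changed_functions, call_graph, org_rules):
    """Also weighs blast radius and company-specific risk tolerance."""
    findings = []
    for fn in changed_functions:
        callers = call_graph.get(fn, [])
        # A change to a widely-used function carries more systemic risk,
        # so org policy (an assumed rule) escalates it for extra review.
        if len(callers) >= org_rules["max_safe_callers"]:
            findings.append(f"{fn}: {len(callers)} callers, escalate review")
    return findings

# Toy example data: parse_invoice is depended on by three modules.
call_graph = {"parse_invoice": ["billing", "reports", "audit"], "fmt_date": []}
rules = {"max_safe_callers": 2}

print(systemic_review(["parse_invoice", "fmt_date"], call_graph, rules))
# prints ['parse_invoice: 3 callers, escalate review']
```

The point of the sketch is that the same one-line diff can be low-risk or high-risk depending on organizational context the diff itself never shows.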

Founder Itamar Friedman explains, “Quality is subjective. It depends on organizational standards, past decisions, and tribal knowledge. An LLM can’t fully understand that context.” Friedman’s background is key to Qodo’s strategy. His experience at Mellanox (later acquired by Nvidia) showed him that verification systems require fundamentally different tools than generation systems. Later, at Alibaba’s Damo Academy, he saw AI evolve toward reasoning over human language.

Leading Performance and Enterprise Adoption

Qodo 2.0, the startup’s multi-agent code review system, is currently ranked No. 1 on Martian’s Code Review Bench, scoring 64.3%—significantly outperforming competitors like Claude Code Review. This performance advantage is critical in a crowded market.

The company has already secured major enterprise clients, including NVIDIA, Walmart, Red Hat, Intuit, Texas Instruments, Monday.com, and JFrog. This early adoption demonstrates that Qodo is solving a real pain point for organizations that are seriously integrating AI into their software development pipelines.

“We’re entering a new phase: moving from stateless AI to stateful systems—from intelligence to ‘artificial wisdom.’ That’s what Qodo is built for.” – Itamar Friedman

Qodo’s success hinges on the realization that AI-generated code is not inherently trustworthy. The next stage of software development will be defined by verification, and Qodo is positioned to lead this space by combining AI-powered automation with the contextual awareness that LLMs alone cannot deliver.