One of the defining shifts in modern software development has been the gradual automation of feedback. We no longer wait for quarterly reviews to understand performance; we instrument systems, collect telemetry, run experiments and close loops continuously. Yet, one feedback loop has remained stubbornly manual: user feedback.
From a Radical and continuous value delivery perspective, this is a glaring inconsistency. If value is defined by user outcomes, but insight into user behavior arrives late, sparsely and expensively, then user feedback becomes the slowest, and often weakest, signal in the system.
Generative AI may finally change that. Not by ‘simulating users’ in a superficial sense, but by turning user feedback into something that increasingly resembles code: executable, versioned, composable and continuously evaluated.
Most software-intensive organizations already operate several AI-enabled feedback loops. Operational loops, including telemetry, logs, traces and anomaly detection, provide continuous input for reliability and performance decisions. Business loops, such as conversion metrics, funnel analysis, pricing experiments and revenue signals, guide prioritization. And product-behavior loops, such as feature usage, clickstreams and A/B testing, inform incremental optimization.
What’s missing is a user intent and experience loop that’s equally fast, scalable and automated. Traditional user testing simply doesn’t fit the cadence of modern development. This is the gap GenAI-based virtual users aim to fill.
Classic personas are static representations of intended users. In Radical terms, they’re hypotheses, but passive ones. They influence design discussions but don’t actively test value. The technical shift enabled by GenAI is the transformation of personas into executable specifications.
Platforms like Inamo and Speqs.io are building systems where a persona is no longer a document, but a configuration for an autonomous agent. This configuration contains attributes that define who the user is, behavioral models that define how the user acts, memory that defines what the user believes they’ve learned and, finally, test scenarios that define what the user is trying to achieve.
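To make the shift concrete, a persona-as-configuration might look something like the sketch below. The field names and structure are illustrative assumptions, not the actual schema of Inamo or Speqs.io; they simply mirror the four elements named above: attributes, behavioral model, memory and scenarios.

```python
from dataclasses import dataclass, field

@dataclass
class PersonaConfig:
    """An executable persona: configuration for an autonomous virtual-user agent.

    All field names here are hypothetical; real platforms define their own schemas.
    """
    name: str
    attributes: dict                    # who the user is (role, context, tech comfort)
    behavioral_model: dict              # how the user acts (patience, exploration style)
    memory: list = field(default_factory=list)     # what the user believes they've learned
    scenarios: list = field(default_factory=list)  # what the user is trying to achieve

# A persona is now data that can be versioned, reviewed and instantiated,
# rather than a document that sits in a design folder.
novice_shopper = PersonaConfig(
    name="novice_shopper",
    attributes={"role": "first-time buyer", "tech_comfort": "low"},
    behavioral_model={"patience": "low", "reads_labels": False},
    memory=[],
    scenarios=["find a product under $20 and complete checkout"],
)
```

Because the persona is plain structured data, it can live in the same repository as the product, with changes to assumed user behavior showing up in diffs and code review like any other change.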
When instantiated, these agents interact with a product much like real users do, such as navigating flows, making mistakes, forming expectations and reacting to friction. In effect, personas become runnable artifacts. This is the moment where user feedback starts to look like code.
Framing this as “user feedback as code” is useful because it highlights several important properties. First, it becomes executable as feedback is generated by running agents, not by scheduling studies. Second, it’s sufficiently deterministic to be automated. While not perfectly predictable, results are consistent enough to be used as signals in development workflows. Third, it’s composable, allowing personas, behaviors and scenarios to be combined, reused and extended. Finally, it can be continuously evaluated as feedback can be generated on every design iteration, every release or even every commit. In Radical terms, this turns user feedback from an event into a loop. And loops, not artifacts, are what drive continuous value delivery.
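The loop described above can be sketched as a control flow: every persona runs every scenario on every build, and the aggregated friction becomes a pass/fail signal, exactly as a test suite would. The `run_agent` function below is a hypothetical stand-in for a virtual-user platform call, with hard-coded results so the control flow is runnable.

```python
# Sketch of "user feedback as code": run persona agents against a build and
# turn their outcomes into a release signal. run_agent() is a hypothetical
# stand-in for a real virtual-user platform; results are faked for illustration.

def run_agent(persona: str, scenario: str) -> dict:
    # In reality, an agent would navigate the product and report friction events.
    known_issues = {("novice_shopper", "checkout"): ["unclear shipping step"]}
    friction = known_issues.get((persona, scenario), [])
    return {"completed": not friction, "friction": friction}

def evaluate_release(personas: list, scenarios: list, max_friction: int = 0) -> dict:
    """Composable loop: every persona x scenario pair runs on every iteration."""
    report = [(p, s, run_agent(p, s)) for p in personas for s in scenarios]
    failures = [r for r in report if len(r[2]["friction"]) > max_friction]
    return {"passed": not failures, "failures": failures}

# Run the loop the way a CI pipeline would, on every commit or release.
signal = evaluate_release(["novice_shopper", "power_user"], ["checkout"])
```

The design point is that the output is a structured signal a pipeline can gate on, not a report a human has to read, which is what lets user feedback join the same automation culture as testing and deployment.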
A natural question is whether virtual users are ‘as good as’ real users. That question is understandable but slightly misguided. Inamo’s background as a human-based user testing platform allows it to benchmark AI-generated results against real-world data. Its stated ambition is to reach parity within roughly 0.5–2 percent for certain classes of tests. That’s impressive, but also beside the deeper point. Modern product development doesn’t require perfect signals; it requires fast and directionally correct ones. If virtual users consistently surface the same usability breakdowns, the same confusing flows or the same mismatches between intent and implementation, then they become extremely valuable, even if they occasionally miss nuance. Low-latency feedback beats high-fidelity feedback that arrives too late to matter.
Seen in this light, GenAI-based user testing isn’t about replacing humans; it’s about changing where humans add the most value. Virtual users excel at continuous validation of assumptions, broad coverage across personas and edge cases and early detection of UX regressions. Humans remain essential for emotional resonance and trust, social and cultural interpretation and discovering entirely new behaviors and needs. The result is a layered feedback system: AI handles breadth and speed; humans handle depth and surprise.
Radical organizations are defined by their ability to learn faster than their competitors. That learning speed is constrained by the slowest feedback loop. By turning user feedback into something executable and continuous, what we call “user feedback as code,” GenAI closes one of the last major gaps in continuous value delivery. User intent, experience and friction stop being qualitative anecdotes and start becoming structured, repeatable signals. And once user feedback participates in the same automation culture as testing, deployment and monitoring, it fundamentally reshapes how products evolve. Not because AI understands users better than humans, but because it lets organizations listen continuously. To end with Arie de Geus: “Learning faster than your competitors is the only sustainable competitive advantage.”