Deriv Reimagines Software Engineering With AI-Centric SDLC

Anand Naidu is a seasoned leader in the fintech space, currently spearheading the transition from manual coding to AI-driven intent at the highest levels of engineering management. Equally at home in frontend and backend work, he has seen firsthand how the traditional bottleneck of human bandwidth is being dismantled by smarter, integrated workflows. In this conversation, we explore the radical shift in the developer’s role, moving away from being a primary executor of code to becoming a strategic director of systems. We delve into the mechanics of building a deterministic pipeline on top of non-deterministic AI and how global teams can achieve 24/7 operational autonomy without the exhaustion of constant on-call alerts.

The following discussion examines the evolution of performance metrics in an AI-centric environment and the specific training required to keep engineers grounded in technical fundamentals. We also cover the practicalities of embedding steering documents into repositories to ensure consistency, the technical architecture that allows automated agents to resolve service errors independently, and the high-level judgment calls that remain the sole domain of human leadership. Finally, we look at the “last mile” of deployment and how real-time feedback loops are fundamentally altering the long-term accumulation of technical debt.

Traditional development cycles often face bottlenecks during code reviews and documentation handoffs. When engineers shift from being primary executors to directors of intent, how do you redefine their daily performance metrics, and what specific training ensures they maintain a deep understanding of the underlying code?

The shift from executor to director is a profound psychological and operational change that requires moving away from measuring raw output, like the number of tickets closed or lines of code written. Instead, we look at the clarity of the intent defined and the rigor of the evaluation performed on the AI’s output. Our performance metrics now prioritize the speed and quality of the entire system’s throughput rather than just individual human bandwidth. To ensure our engineers don’t lose their edge, we emphasize a “fundamentals-first” training approach where they are expected to dissect why an AI made a specific decision. It is the difference between a junior developer who blindly accepts a block of code and a director who understands the underlying logic well enough to spot a subtle architectural flaw.

AI is inherently non-deterministic, yet deployment pipelines require absolute consistency to be trustworthy. How do you integrate steering documents directly into a repository to standardize output, and what are the practical steps for balancing AI-driven discovery with the need for rigid, deterministic CI/CD gates?

We tackle the non-deterministic nature of AI by separating the creative discovery process from the rigid execution phase. We bake steering documents, best practices, and specific quality gates directly into the repository so that the standards travel with the code itself. When the AI generates a solution, it isn’t working in a vacuum; it is reading these local documents and coding against them to ensure the output matches our organizational DNA. This allows the AI to be flexible and creative in problem-solving while the CI/CD pipeline remains a cold, hard gatekeeper of consistency. The practical balance is achieved by ensuring that every AI-generated piece of code must still pass through a deterministic testing suite that produces the exact same result every single time.
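To make the idea concrete, here is a minimal sketch of what a repository-local steering document and a deterministic gate might look like. The rule names and checks are illustrative assumptions, not Deriv's actual configuration: the point is that the rules live with the code, and the gate returns identical verdicts for identical input on every run.

```python
import re

# Hypothetical steering rules; in a real repo these might live in a
# checked-in file such as .steering/rules.json so they travel with the code.
STEERING_RULES = {
    "forbidden_patterns": [r"\bprint\(", r"\bTODO\b"],  # no debug prints or TODOs
    "required_header": "# Copyright",                   # every file starts with this
    "max_line_length": 100,
}

def check_file(text: str, rules: dict) -> list[str]:
    """Deterministic gate: the same input always yields the same violations."""
    violations = []
    if not text.startswith(rules["required_header"]):
        violations.append("missing required header")
    for pattern in rules["forbidden_patterns"]:
        if re.search(pattern, text):
            violations.append(f"forbidden pattern: {pattern}")
    for i, line in enumerate(text.splitlines(), start=1):
        if len(line) > rules["max_line_length"]:
            violations.append(f"line {i} exceeds {rules['max_line_length']} chars")
    return violations

# The AI is free to be creative in *how* it solves the problem;
# the gate is a cold, hard pass/fail on the result.
good = "# Copyright Acme\ndef add(a, b):\n    return a + b\n"
bad = "def add(a, b):\n    print(a)\n    return a + b\n"
print(check_file(good, STEERING_RULES))  # []
print(check_file(bad, STEERING_RULES))
```

The separation matters: discovery stays flexible because the rules constrain the output, not the path the AI takes to reach it.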

Automated agents can now detect service errors, log tickets with full context, and generate permanent tests without human intervention. Could you walk through the technical architecture required to enable this level of autonomy and share how this capability changes the operational demands on global, 24/7 engineering teams?

The architecture relies on a specialized QA agent that monitors our services in real-time across various regions, such as our operations in Sri Lanka. When an error occurs, the agent doesn’t just send a vague alert; it identifies the issue, captures full context including screenshots, and logs a detailed ticket immediately. Crucially, it then generates the specific tests required to ensure that exact class of problem can never slip through the pipeline again. For a global team, this is a game-changer because it eliminates the need for middle-of-the-night pages or frantic morning handovers where an engineer has to work backward to reconstruct a failure. We are moving toward a world where the system heals itself during off-hours, allowing our human talent to focus on innovation rather than fire-fighting.

High-level architecture and strategic trade-offs still require human judgment to determine if a solution will hold up at scale. What specific criteria should leaders use to evaluate AI-generated technical approaches, and how do you encourage engineers to investigate the “why” behind an AI’s decision rather than just accepting the output?

Human judgment is most critical when deciding between competing technical approaches and evaluating how a single component will interact with a massive, scaling ecosystem. Leaders should evaluate AI-generated work based on its long-term maintainability and how it aligns with the broader strategic trade-offs of the organization, such as choosing between latency and consistency. We foster a culture of investigation by making it clear that an engineer’s primary value is their ability to critique the AI’s logic. If an engineer cannot explain the “why” behind a generated solution, they haven’t actually performed their job as a director of intent. This scrutiny keeps our team sharp and ensures that we aren’t just generating more output, but actually building more robust systems.

The final stage of a modern pipeline involves moving from tested code to live production without manual infrastructure interference. What milestones must an organization reach to automate this “last mile,” and how does real-time crash detection feeding back into the testing loop impact the long-term accumulation of technical debt?

To reach that “last mile” of total automation, an organization must first achieve a state where every piece of code is reviewed and tested by AI-driven gates that are more rigorous than a manual check. We are working toward a milestone where our QAClaw system triggers automatically on every single release, moving away from scheduled nightly tests to instantaneous validation. When you combine this with real-time crash detection that feeds directly back into the testing loop, you create a self-correcting cycle that catches bugs before the majority of users even see them. This drastically reduces technical debt because we are standardizing how code is written and tested from the jump, preventing the messy variations that usually pile up and slow down development teams over the long term.
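The self-correcting cycle described above can be sketched as a crash-to-test feedback loop. This is an assumption about the general pattern, not the internals of QAClaw: each production crash adds its triggering input to a corpus that every future release must survive, so the same bug cannot reach users twice.

```python
# Inputs every release must survive; grows each time production crashes.
test_corpus: set[str] = {"valid_input"}

def validate_release(handler) -> list[str]:
    """Deterministic pre-release gate: replay every known-bad input."""
    failures = []
    for case in sorted(test_corpus):
        try:
            handler(case)
        except Exception:
            failures.append(case)
    return failures

def on_production_crash(crashing_input: str) -> None:
    """Real-time crash detection feeds straight back into the testing loop."""
    test_corpus.add(crashing_input)

def handler_v1(data: str) -> str:
    if data == "":
        raise ValueError("empty input")  # the bug a user just hit
    return data.upper()

on_production_crash("")                  # crash detected, corpus grows
print(validate_release(handler_v1))      # [''] - v1 can never ship again

def handler_v2(data: str) -> str:
    return data.upper() if data else ""  # fixed

print(validate_release(handler_v2))      # [] - gate passes
```

Standardizing the loop this way is what keeps technical debt down: the corpus of known failures only grows, and validation runs on every release rather than on a nightly schedule.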

What is your forecast for the future of AI-driven software engineering?

I believe we are entering an era where skill is the new programming language, and the traditional ceremony of manual execution will soon be seen as an expensive relic of the past. In the near future, the gap between a feature idea and live production will shrink to almost nothing, as systems become capable of handling everything from infrastructure provisioning to real-time optimization without human intervention. The competitive edge will go to teams that have spent the time building the right guardrails and foundational principles, while those who simply use AI to churn out code faster will find themselves buried under a mountain of inconsistent, unmaintainable output. Ultimately, the engineer’s role will evolve into that of a high-level architect and strategist, where the quality of their input determines the success of the entire automated engine.
