Is AI a Systemic Risk for Tech Portfolios?

In the rapidly evolving world of financial technology, the disruptive power of artificial intelligence is no longer a distant forecast—it’s a present-day reality impacting balance sheets. To navigate this new landscape, we sat down with Anand Naidu, a leading development expert with deep proficiency in both frontend and backend systems, to discuss how major financial players are rethinking risk. He sheds light on the shift from traditional credit analysis to a more “forensic” examination of software investments, the new variables being plugged into “severe” stress tests, and how firms are distinguishing between market panic and genuine, long-term threats to established SaaS revenue models.

Financial firms are now treating AI disruption as a balance-sheet risk. How does this new stress testing for software businesses differ from traditional credit analysis, and what specific AI-driven scenarios, such as those affecting seats-based revenue models, are being modeled? Please share some practical steps.

It’s a fundamental shift in perspective. Traditional credit analysis often looks at historical performance, debt-to-equity ratios, and market position. But when you’re dealing with AI, the past becomes a less reliable guide. We’re now stress testing the very core of a software company’s business model. A key practical step is modeling the erosion of “seats-based revenue,” which has been a cash cow for SaaS companies for years. We simulate scenarios where a sophisticated AI platform, like Anthropic’s Claude, can perform the tasks of 10, 50, or even 100 employees, drastically reducing a client’s need for individual user licenses. It’s no longer just about whether a company can pay its bills; it’s about whether its entire value proposition can be made obsolete in a matter of months.
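The seat-erosion scenarios described above can be sketched as a toy model. Everything here is illustrative: the seat count, the per-seat price, and the assumption that one AI deployment absorbs a fixed block of seats are hypothetical parameters, not a real stress-testing framework.

```python
def stressed_arr(seats: int, price_per_seat: float,
                 seats_per_ai_agent: int) -> float:
    """Annual recurring revenue after an AI agent absorbs the work of
    `seats_per_ai_agent` human seats (hypothetical, illustrative model)."""
    # Each AI deployment replaces a block of seats; the client keeps
    # at least one license per surviving block of work.
    surviving_seats = max(1, seats // seats_per_ai_agent)
    return surviving_seats * price_per_seat

baseline = 1000 * 1200.0             # 1,000 seats at $1,200/year
for multiplier in (10, 50, 100):     # AI doing the work of 10/50/100 employees
    stressed = stressed_arr(1000, 1200.0, multiplier)
    print(f"1 AI agent ~ {multiplier} seats -> ARR falls "
          f"{1 - stressed / baseline:.0%}")
```

Even this crude arithmetic makes the point: the revenue haircut scales with the agent's capability multiplier, so a 10x agent already removes 90% of seat-based ARR in the worst case.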

When conducting a “forensic” analysis of software investments over a seven-year horizon, what does this process involve? Could you walk us through the key metrics used to evaluate the resilience of cash flows against potential, long-term AI disruption and platform dependency?

A seven-year forensic analysis is about peeling back the layers of a company’s revenue to find what is truly durable. The first thing we do is move beyond surface-level growth metrics and look at the “stickiness” of the customer base. We analyze churn rates not just in aggregate, but within specific cohorts exposed to AI disruption. A key metric is the cost of customer replacement versus the lifetime value, but with an AI-risk discount applied. We’re also intensely focused on platform dependency—how much of their functionality is built on a defensible, proprietary data set versus something that a large language model could easily replicate? We’re essentially looking at the resilience of their cash flows under extreme pressure, imagining a future where their competitive edge is constantly under assault.
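The lifetime-value-with-an-AI-risk-discount metric can be written as a small formula sketch. The margin-over-churn LTV approximation is a standard SaaS shorthand; the discount values and cohort labels here are hypothetical assumptions, not the firm's actual model.

```python
def ai_adjusted_ltv(monthly_gross_margin: float,
                    monthly_churn: float,
                    ai_risk_discount: float) -> float:
    """Customer lifetime value with an AI-risk haircut applied.

    Standard SaaS LTV ~ margin / churn; `ai_risk_discount` (0-1) is a
    hypothetical parameter expressing how easily a large language model
    could replicate the product's functionality.
    """
    base_ltv = monthly_gross_margin / monthly_churn
    return base_ltv * (1.0 - ai_risk_discount)

# A cohort heavily exposed to AI substitution gets a steeper haircut
# than one anchored by a proprietary data set.
exposed = ai_adjusted_ltv(500.0, 0.02, ai_risk_discount=0.40)
defensible = ai_adjusted_ltv(500.0, 0.02, ai_risk_discount=0.05)
print(exposed, defensible)  # 15000.0 23750.0
```

Comparing the adjusted LTV against the cost of replacing the customer then gives a per-cohort view of which revenue is truly durable.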

Given that historical data is becoming less reliable for modeling software risk, what specific variables are now being used in more “severe” stress tests? Can you provide an example of a scenario being applied to a firm’s private credit and equity book?

That’s the core of the challenge; we’re modeling for events we’ve never seen before. Instead of relying on past market downturns, we’re introducing new variables like “technology substitution velocity” and “margin compression potential.” For example, we might take a company in our private credit book that provides automated customer service software. A “severe” stress test would model a scenario where a new, more powerful generative AI platform enters the market and offers a similar, or even superior, service for 50% of the cost. We then simulate the impact on our portfolio company’s revenue, its ability to service its debt, and the knock-on effect on the valuation of our equity stake. It’s about being prepared for a level of disruption that historical data simply doesn’t account for.
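The severe scenario above can be expressed as a minimal simulation. The company figures, the 25% substitution velocity, and the 50% price cut are hypothetical inputs chosen to match the example; this is a sketch of the mechanics, not a production risk model.

```python
from dataclasses import dataclass

@dataclass
class PortfolioCompany:
    revenue: float          # annual revenue
    ebitda_margin: float    # EBITDA as a share of revenue
    debt_service: float     # annual interest + amortization

def severe_stress(co: PortfolioCompany,
                  substitution_velocity: float,
                  price_cut: float) -> dict:
    """One 'severe' scenario: a rival AI platform offers a comparable
    service at (1 - price_cut) of the incumbent's price, and
    `substitution_velocity` is the share of customers defecting in
    year one. All parameters are hypothetical."""
    lost = co.revenue * substitution_velocity
    # Retained customers are kept only by matching the rival's price.
    retained = (co.revenue - lost) * (1.0 - price_cut)
    ebitda = retained * co.ebitda_margin
    return {
        "stressed_revenue": retained,
        "dscr": ebitda / co.debt_service,  # debt service coverage ratio
    }

co = PortfolioCompany(revenue=100e6, ebitda_margin=0.30, debt_service=12e6)
result = severe_stress(co, substitution_velocity=0.25, price_cut=0.50)
print(result)  # a DSCR below 1.0 flags default risk under this scenario
```

A DSCR below 1.0 in the stressed state is exactly the kind of output that feeds back into the valuation of the equity stake alongside the credit exposure.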

Exposure to enterprise software embedded in regulated industries like insurance and education is seen as more defensible. What specific characteristics or “moats” make these businesses less vulnerable to AI disruption, and how do you quantify that resilience during a portfolio assessment?

The “moats” in these sectors are built from regulatory complexity and deep integration. Think about insurance broking or educational administration software. These aren’t just tools; they are deeply woven into workflows that must comply with a web of legal and industry-specific standards. An AI can’t just come in and replicate that overnight, because the value isn’t just the code, it’s the years of navigating compliance and building trust. To quantify this resilience, we look at factors like the cost and time it would take a competitor—AI-powered or not—to achieve the same level of regulatory certification. We also assess how much of the software is a “system of record,” a single source of truth that is incredibly difficult and risky for a client to replace. These businesses have a gravity that makes them far less vulnerable to the shiny new object syndrome.
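The two quantification factors named above, replication time for regulatory certification and system-of-record status, can be combined into a crude score. The 50/50 weighting and the ten-year cap are hypothetical choices for illustration, not a house scoring model.

```python
def moat_score(years_to_replicate: float,
               system_of_record_share: float,
               max_years: float = 10.0) -> float:
    """Crude 0-1 resilience score: how long a competitor (AI-powered or
    not) would need to match the regulatory certifications, and what
    share of the product acts as a system of record. Equal weights are
    an illustrative assumption."""
    time_component = min(years_to_replicate, max_years) / max_years
    return 0.5 * time_component + 0.5 * system_of_record_share

print(moat_score(8.0, 0.9))   # deep moat: long certification path, core system of record
print(moat_score(1.0, 0.1))   # shallow moat: easily replicated tooling
```

In practice the inputs would come from diligence work (certification timelines, client integration depth) rather than guesses, but the shape of the comparison is the same.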

Market sentiment around software valuations has shifted abruptly from “darling” to “concern.” How does your team separate immediate market noise from fundamental, long-term risks to entrenched SaaS revenue models? Please describe the process for ensuring loan loss rates remain stable amid this volatility.

The market can swing from euphoria to panic in a matter of days, as we saw recently. Our job is to keep a level head and focus on the fundamentals. The process starts with a rigorous, bottom-up analysis of each company in our portfolio. We ignore the daily headlines and focus on our own internal stress tests and the cash flow resilience I mentioned. For example, while the market is panicking about AI replacing all software, we’re looking at a specific company and asking: Is their revenue tied to regulated processes? How high are their customer switching costs? This detailed, forensic approach allows us to remain confident that our loan loss rates will hold steady, just as they have historically, because our lending decisions were based on durable business models, not on fleeting market sentiment.
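The two screening questions above, regulated revenue and switching costs, can be phrased as a simple triage rule. The thresholds and labels here are illustrative assumptions, not a real credit policy.

```python
def fundamentals_screen(company: dict) -> str:
    """Triage a portfolio company on the two fundamentals questions:
    regulated revenue share and customer switching costs. Thresholds
    (50% and 12 months) are hypothetical."""
    regulated = company["regulated_revenue_share"] >= 0.5
    sticky = company["switching_cost_months"] >= 12
    if regulated and sticky:
        return "hold: durable model, treat sell-off as noise"
    if regulated or sticky:
        return "monitor: partial protection"
    return "review: fundamental AI risk"

print(fundamentals_screen({"regulated_revenue_share": 0.7,
                           "switching_cost_months": 18}))
```

The point of even a toy rule like this is discipline: the same questions get asked of every name, regardless of what the headlines say that week.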

What is your forecast for the intersection of AI, software valuations, and financial risk management over the next five years?

Over the next five years, I believe we’ll see a great divergence. The current market-wide anxiety will be replaced by a much more nuanced understanding of risk. Software companies with genuine “moats”—like proprietary data or deep regulatory integration—will see their valuations stabilize and even grow, as investors recognize their defensibility. Conversely, companies offering generic, easily replicable services will face immense pressure, and many won’t survive. For risk management, AI itself will become the primary tool for assessing AI-related risks. We will use sophisticated models to simulate disruption scenarios with incredible speed and accuracy, turning what is now a “forensic” art into a data-driven science. The key will be to continuously adapt, because the pace of change will only accelerate.
