For years, Anand Naidu has been a leading voice in enterprise architecture, advocating for a pragmatic, best-of-breed multicloud strategy long before it was fashionable. While major cloud providers pushed for vendor loyalty, Anand championed the real-world needs of businesses, arguing that true innovation happens when you use the right tool for the job, regardless of the brand. With AWS’s recent, dramatic shift in its multicloud strategy, his long-held perspective has been vindicated.
We sat down with Anand to discuss the implications of this industry-shaking move. Our conversation delves into the real-world costs of the single-cloud dogma that dominated the last decade and explores the technical workarounds that innovative teams were forced to build. We’ll also map out a practical path forward for leaders looking to implement a true best-of-breed architecture and examine how native interoperability finally solves the frustrating security and networking challenges of the past. Finally, we’ll look beyond mere connectivity to understand the next set of hurdles enterprises will face in this new, truly multicloud era.
You opened with a powerful story about a financial firm hurt by single-cloud dogma. Can you share another real-world anecdote of the “missed opportunity costs” you mentioned and quantify the specific business impact, whether in dollars, time-to-market, or competitive advantage?
Absolutely. I worked with a large e-commerce company that went all-in on the AWS ecosystem about five or six years ago. They built their entire product recommendation engine on the native AWS machine learning services available at the time. The problem was that their direct competitor, a more agile digital-native startup, was using Google Cloud’s more advanced AI and ML platforms. While the e-commerce giant was spending millions fine-tuning suboptimal models and wrestling with its data, the competitor was delivering hyper-personalized recommendations that felt almost psychic to the customer. The quantifiable impact was stark: over an 18-month period, our client saw a measurable decline in cart conversion rates and customer lifetime value, while the competitor’s market share grew. The opportunity cost wasn’t just the extra millions spent on AWS services; it was the ceding of a competitive edge in customer experience that they are still fighting to win back today.
Your article claims AWS’s past warnings about multicloud were for “market control, not customer value.” What were the most persuasive technical arguments they used to promote this single-vendor narrative, and how did innovative companies technically work around those challenges before Interconnect-multicloud existed?
The arguments were always wrapped in a blanket of operational safety. The most common one I heard in presentations was the specter of “added complexity.” They would paint a picture of engineers needing to learn two or three different sets of APIs, IAM policies, and networking constructs, suggesting it would slow everything down. Another persuasive point was security; they’d argue that creating a consistent security posture across different clouds was nearly impossible and would inevitably open up vulnerabilities. Of course, this was a self-fulfilling prophecy, as they made interoperability so difficult. To get around this, innovative companies had to become masters of brute-force integration. We built complex and fragile solutions using third-party SD-WAN overlays to create a virtual network fabric, and we deployed intricate security information and event management (SIEM) systems to try to unify logs and threat detection. It was a frustrating, patchwork effort that required immense engineering talent, but it was the only way to access best-of-breed services.
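To make that patchwork concrete, here’s a minimal sketch of the kind of log unification teams had to build by hand, stitching AWS CloudWatch Logs and Google Cloud Logging into a single timeline. This is an illustration under my own assumptions, using the boto3 and google-cloud-logging clients with default credentials; the log group name and filter are hypothetical placeholders, not anyone’s production setup.

```python
# A hand-rolled cross-cloud log view, the pre-Interconnect way.
# Assumes boto3 and google-cloud-logging with default credentials;
# the log group and filter below are hypothetical placeholders.
from datetime import datetime, timezone

import boto3
from google.cloud import logging as gcp_logging


def fetch_aws_events(log_group: str, limit: int = 50) -> list[dict]:
    """Pull recent events from a CloudWatch Logs group."""
    logs = boto3.client("logs")
    resp = logs.filter_log_events(logGroupName=log_group, limit=limit)
    return [
        {
            "cloud": "aws",
            # CloudWatch timestamps are epoch milliseconds; normalize to UTC.
            "ts": datetime.fromtimestamp(e["timestamp"] / 1000, tz=timezone.utc),
            "message": e["message"],
        }
        for e in resp["events"]
    ]


def fetch_gcp_entries(log_filter: str, limit: int = 50) -> list[dict]:
    """Pull recent entries from Google Cloud Logging."""
    client = gcp_logging.Client()
    return [
        {"cloud": "gcp", "ts": entry.timestamp, "message": str(entry.payload)}
        for entry in client.list_entries(filter_=log_filter, max_results=limit)
    ]


if __name__ == "__main__":
    unified = fetch_aws_events("/ecs/checkout-service") + fetch_gcp_entries(
        "severity>=WARNING"
    )
    # The single timeline a SIEM gives you, stitched together manually.
    for event in sorted(unified, key=lambda e: e["ts"]):
        print(event["cloud"], event["ts"].isoformat(), event["message"][:120])
```

And that little script is the easy part; keeping credentials, filters, and schemas aligned across two clouds was where the real engineering time went.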
You advocate for a “best-of-breed” strategy, citing AWS for compute and Google for AI. Could you walk us through a specific architecture where this combination is ideal and outline the first three practical steps a CTO should take to implement such a hybrid solution?
A perfect example is a media streaming service. They could use AWS’s incredibly elastic and cost-effective EC2 instances for the heavy lifting of video transcoding—a pure, massive-scale compute task where AWS excels. But for the recommendation engine that decides what video to show a user next, they need top-tier AI. In this architecture, raw video files are processed on AWS, and as they are transcoded, metadata is streamed directly over a high-speed link to Google Cloud’s AI Platform for analysis and model training. The results are then fed back to the application front-end running on AWS. For a CTO looking to implement this, the first step is to clearly define the workload and data flow—don’t just go multicloud for its own sake. Second, they must design a unified security and identity model from day one to ensure data is protected in transit and at rest in both environments. Finally, start with a contained proof-of-concept. Use the new AWS Interconnect to link a single AWS VPC with a Google Cloud project and prove the performance, cost, and security benefits on a small scale before going all-in.
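To make that handoff concrete, here’s a minimal sketch of the pattern under some assumptions of mine: the transcoder’s output lands in S3, and the metadata travels to Google Cloud as Pub/Sub messages for the training side to consume. It uses the boto3 and google-cloud-pubsub clients; the project, topic, bucket, and key names are all hypothetical, and this illustrates the shape of the flow, not any firm’s actual pipeline.

```python
# The AWS-to-Google handoff from the streaming example: when a rendition
# finishes transcoding, stream its metadata to Google Cloud for training.
# Assumes boto3 and google-cloud-pubsub; every name here is hypothetical.
import json

import boto3
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
# Hypothetical Google Cloud project and topic on the training side.
TOPIC = publisher.topic_path("acme-recs-training", "transcode-metadata")


def on_transcode_complete(bucket: str, key: str) -> None:
    """Read the finished rendition's metadata from S3 and publish it to GCP."""
    s3 = boto3.client("s3")
    head = s3.head_object(Bucket=bucket, Key=key)
    event = {
        "object": f"s3://{bucket}/{key}",
        "size_bytes": head["ContentLength"],
        "content_type": head.get("ContentType"),
        "last_modified": head["LastModified"].isoformat(),
    }
    # Publish over the cross-cloud link; Pub/Sub feeds the training jobs.
    future = publisher.publish(TOPIC, json.dumps(event).encode("utf-8"))
    future.result()  # block until Pub/Sub acknowledges the message


if __name__ == "__main__":
    on_transcode_complete("video-renditions", "episode-01/1080p.mp4")
```

The design point is that each cloud only handles what it’s best at: the heavy bytes stay on AWS, and Google Cloud sees a clean event stream it can train on.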
The new AWS Interconnect-multicloud promises to reduce what once took weeks of work to a “single click.” Can you detail the most common security and networking complexities of the old DIY approach and explain how this new native interoperability directly solves those specific pain points for an architect?
The old DIY approach was a nightmare of complexity. From a networking perspective, you were often managing brittle VPN tunnels or paying exorbitant fees for a third-party fabric. You had to manually configure routing tables, deal with overlapping IP address spaces, and constantly monitor for latency and packet loss. It felt like you were holding it all together with duct tape. On the security front, you had to manage disparate identity systems, manually replicate firewall rules, and struggle to get a single, coherent view of traffic flowing between clouds. It was an enormous operational burden. This new native interoperability solves this by treating a connection to another cloud as a first-class citizen within the AWS environment. An architect can now extend their AWS Transit Gateway directly to Azure or Google Cloud. This means you get a private, dedicated, and resilient connection without the patchwork. The “single click” isn’t just marketing; it represents the abstraction of all that painful network engineering and allows security teams to apply consistent policies to a known, reliable connection point.
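For anyone who never lived through it, here’s roughly what just the AWS half of one of those brittle tunnels looked like in boto3. Every value here is a hypothetical placeholder, and a mirror-image configuration still had to be built and kept in sync by hand on the other cloud; this entire ceremony is what the native connection now abstracts away.

```python
# Standing up one leg of a DIY IPsec VPN from AWS toward another cloud.
# The ASN, peer IP, VPC ID, and CIDR are hypothetical placeholders, and
# route-table propagation and security-group changes are not even shown.
import boto3

ec2 = boto3.client("ec2")

# 1. Tell AWS about the remote cloud's VPN endpoint.
cgw = ec2.create_customer_gateway(
    BgpAsn=65010, PublicIp="203.0.113.10", Type="ipsec.1"
)["CustomerGateway"]

# 2. Create a virtual private gateway and attach it to the VPC.
vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
ec2.attach_vpn_gateway(VpcId="vpc-0abc1234", VpnGatewayId=vgw["VpnGatewayId"])

# 3. Create the tunnel itself, with a static route to the remote range.
vpn = ec2.create_vpn_connection(
    CustomerGatewayId=cgw["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGatewayId"],
    Type="ipsec.1",
    Options={"StaticRoutesOnly": True},
)["VpnConnection"]
ec2.create_vpn_connection_route(
    VpnConnectionId=vpn["VpnConnectionId"],
    DestinationCidrBlock="10.200.0.0/16",  # must not overlap any VPC range
)
```

I deliberately won’t sketch the new Interconnect call itself, because the API surface is too new to guess at responsibly. The point is that all of the above, on both sides, collapses into that one managed connection.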
Now that AWS has acknowledged the multicloud reality, what is your forecast for the future of multicloud management?
I predict the next major hurdle for enterprises will be moving from connectivity to true optimization. The conversation is about to shift from “How do we connect our clouds?” to “How do we intelligently manage, govern, and secure our workloads across them?” We’re going to see a surge in demand for a unified control plane—a new generation of management and security tools that can provide a single pane of glass for cost management, compliance enforcement, and workload orchestration across all major providers. Simply connecting the clouds is just the first step. The real value will come from tools that can, for example, automatically shift a workload from AWS to Azure to take advantage of spot pricing, or enforce a single data sovereignty policy that applies seamlessly to storage buckets in Google Cloud Storage and Amazon S3. The race is no longer about building walls; it’s about building the best universal remote for the entire cloud ecosystem.
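To give a flavor of what that unified control plane has to do, here’s a minimal sketch of a single data-sovereignty rule checked against buckets in both Amazon S3 and Google Cloud Storage, assuming the boto3 and google-cloud-storage clients with default credentials; the allowed-region list is a hypothetical EU-only policy. A real platform would run checks like this continuously and remediate automatically rather than just report.

```python
# One sovereignty policy, two clouds: flag any bucket outside the allowed
# regions. Assumes boto3 and google-cloud-storage with default credentials;
# the allowed-region list is a hypothetical EU-only policy.
import boto3
from google.cloud import storage

# AWS regions are lowercase; GCS locations are reported uppercase.
ALLOWED = {"eu-west-1", "eu-central-1", "EU", "EUROPE-WEST1"}


def s3_violations() -> list[str]:
    s3 = boto3.client("s3")
    bad = []
    for bucket in s3.list_buckets()["Buckets"]:
        loc = s3.get_bucket_location(Bucket=bucket["Name"])["LocationConstraint"]
        region = loc or "us-east-1"  # S3 reports None for us-east-1
        if region not in ALLOWED:
            bad.append(f"s3://{bucket['Name']} ({region})")
    return bad


def gcs_violations() -> list[str]:
    client = storage.Client()
    return [
        f"gs://{b.name} ({b.location})"
        for b in client.list_buckets()
        if b.location not in ALLOWED
    ]


if __name__ == "__main__":
    for violation in s3_violations() + gcs_violations():
        print("data-sovereignty violation:", violation)
```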