Treat AI as an ERP Stakeholder to Ensure Success

Anand Naidu is a seasoned development expert with a deep proficiency in both frontend and backend engineering, specializing in the intersection of complex coding languages and enterprise logic. As a leading voice at Atigro, he helps organizations navigate the high-stakes environment of ERP transformations by treating technology as a strategic partner rather than a simple utility. His approach focuses on the structural integrity of business processes, ensuring that digital tools are integrated into the core architecture of a company to drive genuine intelligence and scalable growth.

In this conversation, we explore the shift from viewing AI as a peripheral tool to recognizing it as a critical stakeholder in business operations. We discuss the necessity of structured onboarding for technology, the dangers of deferring AI integration, and the rigorous data governance required to avoid “confident errors” at scale. Furthermore, we delve into the evolving roles of employees who must transition from users to managers of AI and how current architectural decisions will define organizational success for the next decade.

High-level executive hires require extensive onboarding to understand business logic and culture. How do you define the “onboarding” process for AI within an ERP framework, and what specific feedback mechanisms prevent it from producing confident errors at scale?

Onboarding AI is remarkably similar to integrating a C-suite executive; you cannot simply hand it a login and expect it to understand your unique business culture by Friday. I define this process as providing “structured context,” where we feed the system specific business priorities, process logic, and clearly defined boundaries of authority. To prevent the “confident errors” that occur when AI hallucinates based on vague data, we must build in calibration checkpoints where outputs are measured against known successful outcomes. This involves creating a continuous feedback loop where the system is corrected in real time by subject matter experts, effectively sharpening its performance until its “judgment” aligns with the organization’s strategic goals.
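The calibration loop described above can be sketched in a few lines: outputs are scored for agreement with known-good outcomes, and anything below a threshold is escalated to a subject matter expert rather than accepted. This is a minimal illustration, not any specific ERP product's API; the names `AiOutput`, `route`, and `CALIBRATION_THRESHOLD` are assumptions made for the example.

```python
from dataclasses import dataclass

# Governance-set minimum agreement with known successful outcomes
# before an AI decision may pass without human review (illustrative value).
CALIBRATION_THRESHOLD = 0.9

@dataclass
class AiOutput:
    decision: str
    agreement: float  # fraction of matching calibration cases this agrees with

def route(output: AiOutput) -> str:
    """Accept calibrated outputs; escalate the rest to an SME for correction."""
    if output.agreement >= CALIBRATION_THRESHOLD:
        return "accept"
    # The SME's correction feeds back into the structured-context store.
    return "escalate_to_sme"

print(route(AiOutput("approve_po", 0.95)))  # accept
print(route(AiOutput("approve_po", 0.60)))  # escalate_to_sme
```

The point of the sketch is that the threshold is a governance parameter, owned by the business, rather than something buried in model code.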

Many organizations treat AI as a peripheral tool rather than a core component of process architecture. What are the specific risks of deferring AI integration until after the design phase, and how does this delay impact long-term data governance?

When you treat AI as a bolt-on feature, you are essentially building a house and trying to install the plumbing after the concrete has dried. The primary risk is that your core process architecture and decision rights are established using legacy logic, leaving AI to function merely as an expensive search engine rather than a force multiplier. This delay creates a massive governance gap because the data models aren’t built to the level of granularity AI requires, forcing the system to operate within ambiguity. Over the long term, this leads to fragmented master data that is nearly impossible to retrofit, ensuring that your ERP remains a rigid, outdated system rather than a flexible, intelligent platform.

Successful implementations distinguish between autonomous AI actions and those requiring human judgment. How do you determine which procurement or financial exceptions AI should handle independently, and where should the final liability sit when a human overrides the system?

Determining autonomy requires a granular breakdown of transaction types—for instance, allowing AI to handle high-volume, low-risk procurement within strictly defined parameters while escalating complex financial exceptions to a human. This isn’t a technical choice; it is a governance decision that must be mapped out during the design phase to define exactly where the AI’s agency ends. The final liability must always sit with the human owner of the process, which is why the “override” is a critical management discipline. We need to ensure that when a human steps in, the system captures that decision as a learning data point, but the accountability for the outcome remains firmly with the person who signed off on the deviation.
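The autonomy boundary and override discipline above can be made concrete with a small sketch: high-volume, low-risk transactions are handled automatically inside hard limits, everything else escalates to a human, and each override is captured as a learning data point with a named accountable owner. The threshold value and field names are illustrative assumptions.

```python
# Governance decision, not a technical one: the hard limit for autonomous action.
AUTO_APPROVE_LIMIT = 5_000  # currency units (illustrative)

# Every human override is retained as both audit trail and training data.
override_log: list[dict] = []

def route_transaction(amount: float, exception_flag: bool) -> str:
    """AI acts alone only within strictly defined parameters."""
    if not exception_flag and amount <= AUTO_APPROVE_LIMIT:
        return "ai_autonomous"
    return "human_review"

def record_override(txn_id: str, ai_decision: str,
                    human_decision: str, owner: str) -> None:
    """Liability stays with the human who signed off on the deviation."""
    override_log.append({
        "txn": txn_id,
        "ai": ai_decision,
        "human": human_decision,
        "accountable_owner": owner,
    })

print(route_transaction(1_200, exception_flag=False))   # ai_autonomous
print(route_transaction(1_200, exception_flag=True))    # human_review
record_override("PO-1001", "approve", "reject", owner="j.doe")
```

Note that the override record names a person, not a system: the design encodes the principle that accountability cannot be delegated to the machine.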

Data debt is often an afterthought in large-scale transformations. What specific levels of master data granularity are necessary for AI to function effectively, and what are the immediate consequences of expecting AI to compensate for fragmented or ungoverned data?

AI is a reflection of the environment it inhabits, and if that environment is cluttered with “data debt,” the system will only amplify those errors at a much faster scale. For AI to be effective, master data must be governed at the most granular level, with clear definitions for every business rule and process workflow. If you expect AI to “clean up” or compensate for fragmented data, the immediate consequence is a surge in “confident errors” where the system makes incorrect decisions with 100% certainty. This can lead to catastrophic supply chain failures or financial discrepancies that cost millions because the foundation was built on inconsistent, ungoverned information.
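One way to picture the granularity requirement above is a governance gate: each master-data record is validated against explicit rules before AI is allowed to act on it, so the system is never asked to “compensate” for fragmented data. The field names and rules here are hypothetical examples, not a real ERP schema.

```python
# Hypothetical governance rules for a procurement master-data record.
REQUIRED_FIELDS = {"material_id", "unit_of_measure", "supplier_id", "lead_time_days"}

def validate_master_record(record: dict) -> list[str]:
    """Return a list of governance violations; an empty list means AI-ready."""
    issues = [f"missing:{field}"
              for field in sorted(REQUIRED_FIELDS - record.keys())]
    if record.get("lead_time_days", 0) <= 0:
        issues.append("lead_time_days must be positive")
    return issues

clean = {"material_id": "M-42", "unit_of_measure": "EA",
         "supplier_id": "S-7", "lead_time_days": 14}
dirty = {"material_id": "M-42"}

print(validate_master_record(clean))  # []
print(validate_master_record(dirty))  # lists the missing fields
```

Records that fail validation are routed to data stewards instead of being fed to the AI, which is cheaper than unwinding a confident error downstream.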

Integrating AI requires staff to shift from being mere users to becoming managers of the technology. What does a modern “human-AI” role design look like in practice, and how do you train employees to critically review and sharpen AI outputs?

A modern role design shifts the employee from a “doer” of tasks to a “manager” of digital stakeholders, where their primary responsibility is the oversight and optimization of automated workflows. In practice, this looks like a procurement specialist who no longer enters data but instead audits AI-generated reports for anomalies and provides the “human-in-the-loop” feedback necessary for the system to evolve. We train employees by treating AI output as a draft that requires critical review, encouraging them to question the “why” behind an AI’s recommendation. This requires an investment in new capabilities, where people are evaluated on their ability to sharpen the technology’s performance and recognize exactly when a machine-led process needs a human touch.

Architectural decisions made today will lock in AI performance for the next decade. When redesigning workflows, how do you ensure the operating model remains flexible enough for future AI advancements without requiring a total system retrofit?

To ensure flexibility, we must treat the ERP transformation as a structural moment to redesign how the organization works at a fundamental level, rather than just upgrading software. This involves building a modular operating model where process logic is decoupled from the specific software version, allowing us to plug in more advanced AI capabilities as they emerge. By prioritizing data governance and clean process definitions today, you create a “future-proof” foundation that can absorb new technology without a total retrofit. The goal is to design an architecture that is “AI-native” from the start, ensuring that your workflow logic is robust enough to handle the next decade of innovation without breaking.
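The decoupling idea above can be sketched as follows: process logic lives in plain data, and the decision engine sits behind a narrow interface, so a future AI-backed engine can be plugged in without rewriting the rules. The `DecisionEngine` protocol and rule format are assumptions made for this sketch.

```python
from typing import Protocol

# Process logic as version-independent data, not code tied to one ERP release.
APPROVAL_RULES = [
    {"max_amount": 5_000, "route": "auto"},
    {"max_amount": 50_000, "route": "manager"},
]

class DecisionEngine(Protocol):
    def decide(self, amount: float, rules: list[dict]) -> str: ...

class RuleEngine:
    """Today's deterministic engine; a future AI-backed engine that
    satisfies the same protocol can replace it without touching the rules."""
    def decide(self, amount: float, rules: list[dict]) -> str:
        for rule in rules:
            if amount <= rule["max_amount"]:
                return rule["route"]
        return "executive_review"

def process(engine: DecisionEngine, amount: float) -> str:
    return engine.decide(amount, APPROVAL_RULES)

print(process(RuleEngine(), 1_000))    # auto
print(process(RuleEngine(), 20_000))   # manager
print(process(RuleEngine(), 100_000))  # executive_review
```

Because the rules are data and the engine is swappable, upgrading the “intelligence” later is a plug-in, not a retrofit.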

What is your forecast for the future of AI in ERP transformations?

My forecast is that the “successful” ERP implementations of the next few years will not necessarily be the ones with the most AI features, but the ones with the best human-AI governance models. We are moving toward a reality where AI is viewed as a legitimate stakeholder in the business, and the companies that win will be those that invested tens of millions of dollars in their data and organizational design rather than just the software license. I believe we will see a massive divide between companies that used AI as a force multiplier for good processes and those that used it to amplify their existing mistakes at an uncontrollable scale. Ultimately, the future of ERP is not about the technology itself; it’s about how well we manage the relationship between human intelligence and machine speed.
