In the rapidly evolving world of enterprise software, the line between human expertise and artificial intelligence is blurring. To navigate this new landscape, we sat down with Anand Naidu, a leading development expert with deep proficiency in both frontend and backend architecture. His work focuses on integrating AI into the core of ERP systems, not as a bolt-on feature, but as a foundational element that enhances user control and business agility.
This conversation explores the practicalities of building trust in AI, moving from simple assistance to true autonomous operations. We delve into the technical safeguards that make AI-driven finance and inventory management safe, the real-time data mechanics behind industry-specific tools like Project 360, and the architectural principles that ensure deep customizations can survive major upgrades. Anand also shares valuable lessons learned from beta testing and provides a look ahead at how data privacy and the human-in-the-loop model will define the next generation of intelligent ERP.
Your roadmap moves from “assistive AI” to more autonomous operations. Could you walk us through the practical steps and trust-building milestones a customer would experience as they gradually reduce human oversight, and what specific safeguards, such as rollback, ensure they always remain in control?
That’s a fantastic question because it gets to the heart of our design philosophy: automation with control. The journey begins with assistive AI embedded directly in the UI—think of it as a smart assistant suggesting the next step or completing a routine task. A customer first experiences this in low-risk scenarios, maybe generating a summary for a report. As they see it working reliably, they build confidence. The next milestone is to enable AI-driven workflows that still require explicit approval. For example, the system might process an entire batch of AP invoices but will hold them for a manager’s final click. This is where safeguards are critical. From a development standpoint, every AI-assisted action is logged as a distinct, reversible transaction. If a user grants more autonomy and an automated process yields an unexpected result, the rollback safeguard allows them to revert the entire set of changes with a single action, as if it never happened. This builds trust, allowing them to gradually reduce oversight and move toward true autonomous operations at a pace that feels comfortable for their business.
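To make that rollback safeguard concrete, here is a minimal sketch of the pattern Anand describes, using hypothetical names rather than Acumatica’s actual API: every AI-assisted action is recorded together with its inverse, so an entire batch can be reverted in one call.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ReversibleAction:
    """One AI-assisted change, paired with the inverse that undoes it."""
    description: str
    apply: Callable[[], None]
    undo: Callable[[], None]

@dataclass
class AiTransactionLog:
    """Records every AI-assisted action so a whole batch can be rolled back."""
    actions: list[ReversibleAction] = field(default_factory=list)

    def execute(self, action: ReversibleAction) -> None:
        action.apply()
        self.actions.append(action)  # record only after the action succeeds

    def rollback(self) -> None:
        # Undo in reverse order, leaving the system as if the batch never ran.
        for action in reversed(self.actions):
            action.undo()
        self.actions.clear()

# Example: an automated AP run posts an invoice, then the batch is reverted.
ledger: dict[str, str] = {}
log = AiTransactionLog()
log.execute(ReversibleAction(
    description="Post invoice INV-0042",
    apply=lambda: ledger.update({"INV-0042": "posted"}),
    undo=lambda: ledger.pop("INV-0042"),
))
log.rollback()  # ledger is empty again, as if it never happened
```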
You describe AI Studio as a proxy connecting to a customer’s own LLM. Could you explain the step-by-step process for a user to connect their private OpenAI endpoint? Also, how do your structural safeguards, like human-in-the-loop approvals, mitigate risks like hallucinations in financial scenarios?
Absolutely. We designed AI Studio to be an open connector rather than a black box. For a user, connecting their private OpenAI endpoint is a straightforward configuration process. Within their Acumatica instance, they navigate to the AI Studio settings, select OpenAI from the list of providers, and simply enter their private endpoint URL and API key. That’s it. The platform handles the secure connection, and from that moment on, any prompt initiated through AI Studio is routed through their own controlled LLM deployment. This is crucial because it means we aren’t dictating model behavior. Instead, we provide the structural framework for safety. In a sensitive financial scenario, a user might use AI to draft a dunning letter. The model could, theoretically, hallucinate an incorrect overdue amount. However, our safeguards prevent that error from ever reaching the customer. The generated text appears in the UI for review, and the workflow can be configured to require a manager’s approval before the letter is sent. This human-in-the-loop checkpoint is a non-negotiable part of the architecture for critical processes, ensuring that while the AI is a powerful tool for efficiency, the human user always has the final say and maintains accountability.
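As a rough illustration of the proxy pattern Anand outlines, the sketch below forwards a prompt to a customer-supplied, OpenAI-compatible endpoint and holds the result for approval. The function names, the model name, and the approval structure are assumptions for illustration, not AI Studio’s actual interfaces.

```python
import requests

def route_prompt(endpoint_url: str, api_key: str, prompt: str) -> str:
    """Forward a prompt to the customer's own OpenAI-compatible endpoint."""
    response = requests.post(
        f"{endpoint_url}/v1/chat/completions",
        headers={"Authorization": f"Bearer {api_key}"},
        # "gpt-4o" is a placeholder; the customer's deployment picks the model.
        json={"model": "gpt-4o",
              "messages": [{"role": "user", "content": prompt}]},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

def draft_for_approval(endpoint_url: str, api_key: str, prompt: str) -> dict:
    """Human-in-the-loop: the generated draft is held, never sent automatically."""
    return {"draft": route_prompt(endpoint_url, api_key, prompt),
            "status": "pending_manager_approval"}
```

The design point to notice is that the platform stores only the connection details; every prompt flows through the customer’s own deployment.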
For Project 360 in construction, you cite “real-time visibility” from embedded dashboards. Can you describe the specific data sources feeding this—like RFIs and field data—and the mechanism that prevents conflicting signals from creating stale project views for a site manager?
The magic behind Project 360’s real-time visibility is an embedded analytics layer we developed that fundamentally changes how data is presented. Instead of forcing a site manager to jump between a project screen and a separate reporting module, we embed interactive dashboards directly within their data entry screens. The data sources are comprehensive, pulling from financial modules, change orders, and, critically, date-sensitive information that captures progress over time. When a crew in the field updates a task or submits an RFI via their mobile device, that data is instantly shared and reflected in the dashboard. The core mechanism preventing stale views is the elimination of data lag. Because the analytics are embedded, they refresh as the underlying data changes, not on a nightly batch schedule. This creates a single source of truth, so a project manager in the office and a site manager in the field are always looking at the exact same, up-to-the-minute information, preventing decisions from being made on conflicting or outdated reports.
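A toy observer-pattern sketch of that “no data lag” idea, with illustrative names that are not Project 360’s internals: each write from the field notifies every embedded dashboard immediately, so there is no nightly batch to go stale.

```python
from collections import defaultdict
from typing import Callable

class ProjectDataStore:
    """Push-based refresh: dashboards update the moment data changes."""

    def __init__(self) -> None:
        self.records: dict[str, list[dict]] = defaultdict(list)
        self.subscribers: list[Callable[[str, dict], None]] = []

    def subscribe(self, callback: Callable[[str, dict], None]) -> None:
        self.subscribers.append(callback)

    def submit(self, source: str, record: dict) -> None:
        self.records[source].append(record)
        for notify in self.subscribers:
            notify(source, record)  # every embedded dashboard sees it instantly

store = ProjectDataStore()
store.subscribe(lambda src, rec: print(f"Dashboard refresh: new {src} -> {rec}"))
# An RFI submitted from a crew's mobile device triggers an immediate refresh.
store.submit("rfi", {"id": 101, "status": "open"})
```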
You mentioned fixing an unexpected AP automation failure for private cloud customers during the beta. Could you share an anecdote about how this issue surfaced and the specific technical refinements made? How did this experience influence the design of other features like Order Orchestration?
That was a very important learning moment for our team during beta. The issue surfaced not as a complete failure, but as an inconsistent performance problem. For our SaaS customers, AP automation was flying. But we started getting feedback from private cloud testers about slow processing times and, in some cases, timeouts during document recognition. We dug in and found that the technical architecture required for the feature to perform reliably at scale—specifically around secure, high-speed communication with the Azure Form Recognizer service—couldn’t be consistently guaranteed across the diverse hardware and network configurations found in private cloud deployments. The refinement we made was a tough but necessary one: we limited the feature to SaaS customers to ensure a reliable and scalable experience for everyone using it. This experience was a direct influence on how we approached Order Orchestration. We built in more robust performance and scalability checks from the very beginning of the development cycle, ensuring that the architecture could handle complex fulfillment logic consistently across all deployment options, not just in our ideal lab environment.
With Order Orchestration, how does the system resolve a conflict where the lowest-cost warehouse violates an SLA or inventory risk threshold? Please provide a step-by-step example of how a user would see and understand the logic behind the final fulfillment choice.
This is a perfect example of how we translate business logic into automated, yet transparent, decisions. Imagine a company has an orchestration plan that prioritizes fulfillment from the warehouse with the lowest shipping cost. A customer places an order. Step one, the system identifies the lowest-cost warehouse based on the user’s predefined rankings. Step two, before assigning the order, it runs a series of checks against the rules in the orchestration plan. It confirms whether the warehouse can meet the SLA by checking shipping zones. Then, it checks inventory levels. Let’s say it discovers that fulfilling this order from the cheapest warehouse would drop a key item below its safety stock threshold—a violation of the inventory risk rule. The system automatically disqualifies that warehouse for this specific order line. Step three, it immediately moves to the next warehouse in the priority list, re-runs the checks, and if all rules are met, it assigns the order there. The user sees this entire decision process clearly. In the order details, the system would show the assigned warehouse and, in an audit log or explanation screen, it would explicitly state why the first-choice warehouse was bypassed, noting “Violated safety stock threshold.” This explainability is key; it ensures users trust the automation because they can always understand the ‘why’ behind its decisions.
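The decision walk Anand describes maps naturally onto code. Here is a hedged sketch, with made-up warehouse data and rule names, of how the cost-ranked list could be filtered while an audit trail records each bypass:

```python
from dataclasses import dataclass

@dataclass
class Warehouse:
    name: str
    shipping_cost: float
    meets_sla: bool          # e.g. derived from a shipping-zone lookup
    on_hand: int
    safety_stock: int

def assign_warehouse(warehouses: list[Warehouse],
                     qty: int) -> tuple[Warehouse | None, list[str]]:
    """Walk the cost-ranked list, disqualifying any warehouse that breaks a rule.

    Returns the chosen warehouse plus an audit trail explaining each bypass.
    """
    audit: list[str] = []
    for wh in sorted(warehouses, key=lambda w: w.shipping_cost):
        if not wh.meets_sla:
            audit.append(f"{wh.name} bypassed: violated SLA")
            continue
        if wh.on_hand - qty < wh.safety_stock:
            audit.append(f"{wh.name} bypassed: violated safety stock threshold")
            continue
        audit.append(f"{wh.name} assigned")
        return wh, audit
    return None, audit

choice, log = assign_warehouse(
    [Warehouse("Cheap-East", 4.10, True, on_hand=12, safety_stock=10),
     Warehouse("Mid-Central", 5.25, True, on_hand=80, safety_stock=10)],
    qty=5,
)
# Cheap-East is bypassed (12 - 5 falls below its safety stock of 10), so
# Mid-Central is assigned; the log mirrors the explanation screen described above.
```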
The new UI promises upgrade-safe customizations. Beyond automated form migration, what specific design principles or schema enforcement prevents “UX drift”? Could you provide a metric, like the typical time saved during an upgrade for a mid-market organization with moderate customizations?
Upgrade safety was a foundational principle from day one of the new UI project. The core design principle is that customizations are stored as metadata, not as direct modifications to the base code. Think of it as a separate layer of instructions that tells the system how to alter a standard screen. When an upgrade happens, we update the base application, and our automated migration tool simply re-applies that customization layer to the new version. This prevents customizations from being overwritten. To prevent “UX drift,” where different parts of the application start to look and feel inconsistent after customizations, we enforce a strict UI schema. Any custom field or new section a partner or customer adds must conform to our design system’s rules for elements, spacing, and behavior. This ensures a cohesive user experience. While it’s hard to give a single metric, for a mid-market organization that would have previously spent weeks manually refactoring their moderately customized screens for a major upgrade, we’re now seeing that effort reduced to a matter of days. The bulk of the work is now automated, with human effort focused on testing and validating the migrated customizations rather than rebuilding them from scratch.
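A simplified sketch of that metadata-layer idea, with hypothetical screen and field names: the customization is a list of declarative instructions that is validated against a design-system schema and re-applied to each new base version, rather than being a patch to base code.

```python
BASE_SCREEN = {"fields": ["CustomerID", "OrderTotal"], "version": "2025R1"}

# Customizations live as metadata, never as edits to the base screen.
CUSTOMIZATION_LAYER = [
    {"op": "add_field", "name": "RegionCode", "widget": "dropdown"},
]

ALLOWED_WIDGETS = {"text", "dropdown", "date"}  # the design-system schema

def apply_customizations(base: dict, layer: list[dict]) -> dict:
    """Re-apply the metadata layer to whatever base version ships."""
    screen = {**base, "fields": list(base["fields"])}
    for change in layer:
        if change["op"] == "add_field":
            # Schema enforcement is what prevents "UX drift".
            if change["widget"] not in ALLOWED_WIDGETS:
                raise ValueError(f"{change['name']}: widget violates UI schema")
            screen["fields"].append(change["name"])
    return screen

# After an upgrade, the new base ships and the same layer is re-applied.
upgraded_base = {**BASE_SCREEN, "version": "2026R1"}
print(apply_customizations(upgraded_base, CUSTOMIZATION_LAYER))
```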
You noted that for AI Studio, only prompt text leaves the tenant boundary. Could you detail the technical process for how data masking will work in 2026 to anonymize PII? What specific audit artifacts can a compliance officer export to verify this process?
The data masking process, planned for 2026, is designed for maximum security and verifiability. When a user writes a prompt containing personally identifiable information (PII), like “Generate a summary for customer John Doe,” our system will intercept the prompt before it’s sent to the external LLM. An internal service will identify “John Doe” as PII and replace it with a non-identifiable, temporary token, like “[CUSTOMER_NAME_1]”. The prompt that actually leaves the tenant boundary would be “Generate a summary for customer [CUSTOMER_NAME_1]”. Once the LLM returns a response, our system receives it, and before presenting it to the user, it maps the token back to “John Doe”. The external model never sees the actual PII. For a compliance officer, the audit trail will be explicit. They can export a log that shows the original user-submitted prompt, the fully anonymized and tokenized prompt that was sent externally, the timestamp of the transaction, and the final response shown to the user. This artifact provides concrete, exportable proof that sensitive data never left the secure tenant, allowing them to easily verify compliance with data residency and privacy frameworks.
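Since the masking feature is planned for 2026 and not yet shipped, the following is only a toy round trip of the tokenize-then-restore flow described above; a production version would detect PII with an entity-recognition service rather than a known-names list.

```python
def mask_prompt(prompt: str, known_names: list[str]) -> tuple[str, dict[str, str]]:
    """Replace PII with placeholder tokens before the prompt leaves the tenant."""
    mapping: dict[str, str] = {}
    masked = prompt
    for i, name in enumerate(known_names, start=1):
        token = f"[CUSTOMER_NAME_{i}]"
        if name in masked:
            masked = masked.replace(name, token)
            mapping[token] = name
    return masked, mapping

def unmask_response(response: str, mapping: dict[str, str]) -> str:
    """Swap the tokens back after the LLM responds; the model never saw the PII."""
    for token, name in mapping.items():
        response = response.replace(token, name)
    return response

masked, mapping = mask_prompt("Generate a summary for customer John Doe",
                              ["John Doe"])
# masked == "Generate a summary for customer [CUSTOMER_NAME_1]"
# An audit record would store the original prompt, the masked prompt, the
# timestamp, and the final response -- the artifact a compliance officer exports.
```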
What is your forecast for the evolution of autonomous ERP over the next five years, and what role will the human-in-the-loop play as systems become more intelligent and proactive in managing business operations?
My forecast is that over the next five years, autonomous ERP will shift from automating discrete tasks to proactively managing entire business processes. Instead of just automating an invoice payment, the system will monitor cash flow, supply chain lead times, and sales forecasts to proactively suggest optimal payment schedules or recommend placing a purchase order for raw materials before a stockout becomes a risk. It will move from being reactive to being predictive and prescriptive. In this world, the role of the human-in-the-loop becomes even more critical, but it evolves significantly. The human will no longer be a simple approver in a workflow. Instead, they will become a strategic overseer—the one who sets the goals, defines the risk tolerances, and manages the exceptions that the AI can’t handle. Their job will be to train, guide, and audit the AI’s strategies, ensuring its autonomous decisions align perfectly with the broader, nuanced goals of the business. The human becomes the conductor of an increasingly intelligent orchestra, not just another musician in it.
