How Is Databricks Simplifying AI Development and Governance for Enterprises?

March 12, 2025

Databricks has recently rolled out a slate of updates that significantly ease the creation of generative AI-based applications and AI agents. These enhancements provide enterprises with greater control and simplified management of their AI initiatives, marking a strategic move to deepen enterprise user engagement. Through comprehensive updates aimed at governance, deployment, performance evaluation, and data accessibility, Databricks demonstrates a commitment to enabling smoother AI integration for enterprises.

Bringing Order to AI Governance

Centralized Governance: A Game Changer

One of the standout features of Databricks’ recent updates is the introduction of Centralized Governance, now in public preview. This capability is engineered to streamline the governance of large language models (LLMs), regardless of their source, through Databricks’ Mosaic AI Gateway. For enterprises, managing such complex AI governance processes can be daunting, often involving intricate security protocols, access controls, and compliance measures. Centralized Governance aims to simplify these multifaceted tasks, providing a unified platform where enterprises can oversee model operations while ensuring conformity with industry regulations.

David Menninger, executive director at ISG, highlights the advantages of Centralized Governance, emphasizing how it reduces the complexity of overseeing AI initiatives by consolidating security and compliance protocols into a single, manageable interface. Doug MacWilliams, director of technology and experience at consulting firm West Monroe, agrees, calling the capability a substantial simplification for enterprises. By cutting duplicate effort and reducing licensing fees, Centralized Governance lowers operational costs while making it easier to monitor and resolve issues such as drift or bias in AI models. The unified approach also benefits legal, compliance, and security teams, which can review and approve models through a single, more efficient process.

Simplifying Governance Complexity

Analysts see Centralized Governance easing the day-to-day burden of AI governance. The tasks that make AI initiatives difficult to manage, such as keeping security consistent, controlling access, and meeting compliance requirements, are consolidated in one place. A single interface for model review and approval cuts redundant effort and duplicate licensing costs.

The capability also addresses model monitoring, particularly drift and bias, which can degrade model performance over time. Centralized Governance provides tools to consistently track and correct such problems, improving model reliability and accuracy. For legal, compliance, and security teams, that translates into a faster, less cumbersome approval process. Through these enhancements, Databricks not only simplifies governance but also helps enterprises manage their AI initiatives more effectively.

Revolutionizing AI Deployment

Provision-Less Batch Inference

Another pivotal advancement in Databricks' portfolio is the Provision-Less Batch Inference feature, aimed at simplifying AI deployment. Traditionally, setting up infrastructure for batch inference has required substantial resources and technical expertise. The new serverless functionality, currently in public preview, removes those prerequisites: enterprises can run batch inference operations with a single SQL query and pay only for the infrastructure they use, a cost model that scales with enterprise needs.

Doug MacWilliams commends Provision-Less Batch Inference, acknowledging its transformative impact on AI deployment. He points out how the serverless nature of this feature simplifies the scaling of AI processes, ensuring that resources are utilized only when necessary, which ultimately saves costs. For enterprises, this means they can deploy AI solutions more efficiently without the need for extensive setup or maintenance. Additionally, data analysts without MLOps expertise can now access and perform batch inference tasks, democratizing AI capabilities across various levels of an organization. This opens the door to numerous applications, such as overnight processing of customer support tickets, regular compliance checks, enriching product catalog data, and scoring customer databases for churn risk.
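To make the "single SQL query" claim concrete, the sketch below builds such a query around Databricks' `ai_query()` SQL function, which sends each row's text to a model serving endpoint and returns the response as a new column. The table name, column name, and prompt here are hypothetical placeholders; the endpoint name follows Databricks' naming convention for its hosted Llama models but should be checked against your workspace.

```python
# Minimal sketch: batch inference over a whole table expressed as one
# SQL statement, using Databricks' ai_query() SQL function.
# Table, column, and prompt are illustrative assumptions.

def batch_inference_sql(source_table: str, text_column: str,
                        endpoint: str, prompt: str) -> str:
    """Build a one-statement batch-inference query: every row's text is
    sent to the serving endpoint; the model reply comes back as a column."""
    return (
        f"SELECT {text_column}, "
        f"ai_query('{endpoint}', CONCAT('{prompt}', {text_column})) AS response "
        f"FROM {source_table}"
    )

query = batch_inference_sql(
    source_table="support.tickets",                       # hypothetical table
    text_column="ticket_text",
    endpoint="databricks-meta-llama-3-3-70b-instruct",    # example endpoint name
    prompt="Classify the sentiment of this support ticket: ",
)
print(query)
```

An analyst could run the resulting statement from any SQL editor or a scheduled job, which is what makes use cases like overnight ticket classification feasible without MLOps involvement.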

Accessible AI for All

Provision-Less Batch Inference also broadens who can use AI within an enterprise. Because a batch job is just a SQL query, data analysts without specialized MLOps expertise can run inference directly, with no infrastructure to provision. The potential applications are vast, ranging from customer support and compliance checks to product data enrichment and churn-risk scoring.

MacWilliams notes that the serverless model lets enterprise teams scale their AI processes efficiently, consuming resources only when needed. Jobs that once required dedicated setup, such as overnight processing of customer support tickets or scheduled compliance checks, can now run without infrastructure management. And by putting the tooling within reach of non-experts, Provision-Less Batch Inference fosters a collaborative environment where departments across an organization can apply AI to their own operations.

Enhancing AI Agent Performance

Upgraded Agent Evaluation Review App

Databricks continues to refine AI agent performance with an upgrade to its previously released Agent Evaluation Review App. Domain experts can now offer evaluations, send traces for labeling, and define custom evaluation criteria without resorting to external spreadsheets or custom-built applications. This streamlines the process of gathering structured feedback, making it easier for enterprise teams to systematically refine their AI agents' accuracy and effectiveness.

The upgraded app facilitates a more efficient feedback loop, crucial for improving AI agent performance. By enabling domain experts to offer real-time evaluations and customize criteria directly within the app, Databricks removes the dependency on fragmented tools and processes. This streamlined approach not only accelerates the feedback collection but also enhances the accuracy and reliability of the evaluations. Enterprise teams can leverage these insights to make iterative improvements to their AI agents, ensuring that performance continues to improve with each cycle of assessment and refinement.

Seamless Feedback Gathering

By letting domain experts customize criteria and send traces for labeling directly within the app, Databricks addresses the need for efficient, systematic feedback gathering. Consolidating these steps in one place removes the reliance on external spreadsheets and custom-built applications, shortening the feedback loop and enabling quicker improvements.

For enterprises, structured feedback is central to refining AI agent performance. The upgraded app makes gathering it more intuitive and accessible, so teams can focus on accuracy and effectiveness rather than on process overhead. As a result, agents can be fine-tuned more efficiently and more reliably aligned with the enterprise's specific requirements, paving the way for more sophisticated AI applications in enterprise settings.

Making Data Accessible

AI/BI Genie Conversation APIs

Databricks has also launched a suite of AI/BI Genie Conversation APIs in public preview, aimed at making data more accessible. The APIs embed natural language chatbots directly into custom-built apps or productivity tools such as Microsoft Teams, SharePoint, and Slack. Genie is a no-code tool: users query data in plain language and receive visualizations that explain the underlying data. The API maintains state across multiple follow-up questions within a conversation thread, making data interaction more intuitive and fluid.

Arnal Dayaratna, research vice president at IDC, points out that the Genie API extends conversational assistants with Databricks data while bridging the gap between data availability and data accessibility. Business users can interact with data without SQL expertise, significantly lowering technical barriers. Developers, meanwhile, get pre-built conversational features rather than building such interfaces from scratch, saving development time and accelerating the path from data to actionable insight.
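The stateful, multi-turn behavior described above can be sketched as a thin client that tracks a conversation ID. The endpoint paths and payload shapes below are assumptions based on the public-preview Genie Conversation API and should be verified against Databricks' API reference; the host, space ID, and questions are hypothetical, and sending the requests (and reading the conversation ID from the response) is left out.

```python
# Sketch of a thin client for the Genie Conversation APIs.
# Paths and payloads are assumptions, not verified signatures.
import json


class GenieClient:
    """Builds Genie requests for one space. A stored conversation_id
    keeps follow-up questions in the same thread, so Genie can resolve
    references like "and the previous month?" in context."""

    def __init__(self, host: str, space_id: str):
        self.base = f"{host}/api/2.0/genie/spaces/{space_id}"
        self.conversation_id = None  # set from the start-conversation response

    def start_conversation(self, question: str) -> tuple[str, str]:
        # The first question opens a new conversation thread.
        return f"{self.base}/start-conversation", json.dumps({"content": question})

    def follow_up(self, question: str) -> tuple[str, str]:
        # Later questions reuse the stored conversation id (state).
        assert self.conversation_id, "call start_conversation first"
        return (f"{self.base}/conversations/{self.conversation_id}/messages",
                json.dumps({"content": question}))


client = GenieClient("https://example.cloud.databricks.com", "abc123")
url, body = client.start_conversation("Which region had the most churn last month?")
client.conversation_id = "conv-001"  # would come from the API response
url2, _ = client.follow_up("And how does that compare to the previous month?")
```

The design point is simply that state lives in the conversation resource on the server side; the caller only carries an ID, which is what makes embedding Genie in a Teams or Slack bot straightforward.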

Bridging Data Gaps

Taken together, these updates, spanning governance, deployment, performance evaluation, and data accessibility, show Databricks working to close the gap between the data enterprises hold and the AI applications they want to build on it. Centralized Governance and the upgraded evaluation tooling help AI initiatives run in a controlled, measurable way, while Provision-Less Batch Inference and the Genie Conversation APIs put AI and data within reach of users who are not specialists. The common thread is a push to make enterprise AI easier to deploy, easier to govern, and easier to use, supporting enterprises as they navigate the complexities of AI integration and, ultimately, driving innovation and growth in the AI sector.
