
The Next Frontier for AI Will Be Access to Internal Databases

Damon Danieli
CEO @ Ekaya

With yesterday's launch of Claude Sonnet 4.5, Anthropic has taken the ability of AI to read and write Excel spreadsheets, PowerPoints, and PDFs (you know, the real tools of business users) to the next level. The “Vision” is that business users will analyze data without needing a dedicated data analyst, create visualizations without needing an artist, and automate processes without needing a programmer.

The reality is much more sobering.

All of this, of course, depends on AI being able to access internal databases safely and securely, and in a format that Large Language Models (emphasis on Language) can understand and reason about.

The Problem: The AI-Data Gap

Most enterprise data does not live in documents. It lives in transactional databases, data warehouses, and proprietary systems — the crown jewels of the modern organization.

The challenge is not whether AI can analyze it. The challenge is getting that data into AI in the first place without compromising security, governance, or meaning.

The Semantic Gap: Turning Schemas into Meaning

The solution lies in a business semantic layer (also known as an ontology) that acts as a map, translating raw database structures into business concepts. Instead of pointing AI at “cust_tbl”, you give it “Customer.” Instead of exposing “inv_dt”, you give it “Invoice Date.” Information that is not in the database at all, such as the fiscal year end or computed metrics, can be added to the semantic layer so that when AI queries the data, it understands why you might ask for fourth-quarter financials in June.

An ontology acts as the bridge between technical structure and human understanding.
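To make this concrete, here is a minimal sketch of what one slice of such a semantic layer might look like. The table and column names (cust_tbl, inv_dt), the fiscal-year setting, and the mapping logic are illustrative assumptions, not a reference to any particular product.

```python
from dataclasses import dataclass, field

@dataclass
class Concept:
    """Maps a business-friendly name onto a physical table or column."""
    business_name: str      # what the AI (and the user) sees
    physical_name: str      # what the database actually calls it
    description: str = ""   # context the schema alone can't provide

@dataclass
class SemanticLayer:
    """A tiny ontology: concepts plus business rules not stored in any table."""
    concepts: list[Concept] = field(default_factory=list)
    fiscal_year_end_month: int = 6   # assumption: fiscal year ends in June

    def resolve(self, business_name: str) -> str:
        """Translate a business term into the physical identifier to query."""
        for c in self.concepts:
            if c.business_name.lower() == business_name.lower():
                return c.physical_name
        raise KeyError(f"No mapping for {business_name!r}")

# Illustrative mappings based on the examples above.
layer = SemanticLayer(concepts=[
    Concept("Customer", "cust_tbl", "One row per billed customer"),
    Concept("Invoice Date", "inv_dt", "Date the invoice was issued"),
])

print(layer.resolve("Invoice Date"))  # -> "inv_dt"
```

With the fiscal-year end recorded alongside the mappings, a request for fourth-quarter financials in June can be translated into the correct date range before a single SQL statement is generated.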

The Access Gap: Connecting AI Chat to a Database Safely and Securely

The protocol that gives AI access to external tools is the Model Context Protocol (MCP). It is insecure by default, and there are plenty of security landmines if it is not implemented correctly.

Single Sign-On (SSO) is the first requirement. AI systems need to integrate seamlessly with enterprise identity providers — SAML or OIDC via Active Directory, Okta, Ping Identity, or Entra ID — so that authentication flows through the same portal employees already use. This ensures MFA policies, directory sync, and automated provisioning all apply without creating new accounts or shadow credentials.
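As a rough illustration, an MCP server sitting in front of the database might validate the SSO-issued access token on every request before doing anything else. The sketch below assumes an OIDC provider that publishes its signing keys at a JWKS endpoint (the exact path varies by provider) and uses the PyJWT library; the issuer URL and audience are placeholders.

```python
import jwt  # PyJWT
from jwt import PyJWKClient

# Assumptions: the identity provider (Okta, Entra ID, etc.) exposes a JWKS
# endpoint, and access tokens are RS256-signed JWTs.
ISSUER = "https://idp.example.com"     # placeholder issuer URL
AUDIENCE = "mcp-data-gateway"          # placeholder audience
jwks_client = PyJWKClient(f"{ISSUER}/.well-known/jwks.json")

def authenticate(bearer_token: str) -> dict:
    """Verify the SSO-issued token and return its claims (who the user is)."""
    signing_key = jwks_client.get_signing_key_from_jwt(bearer_token)
    claims = jwt.decode(
        bearer_token,
        signing_key.key,
        algorithms=["RS256"],
        audience=AUDIENCE,
        issuer=ISSUER,
    )
    return claims  # e.g. claims["sub"], claims["email"], group memberships
```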

Then the authorization token must find its way from the AI generating the request all the way to the data access layer, where access can be controlled and audited. Each query should run under the authenticated user's own database role, not a shared service account, so existing row-level security, role-based access controls, and audit logging continue to work as designed.
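One way to honor that requirement is to map the verified identity onto the user's own database role before running the generated SQL. The sketch below assumes PostgreSQL (where SET ROLE plus row-level security does the enforcement) and the psycopg driver; the convention of naming database roles after the SSO email claim is an assumption for illustration.

```python
import psycopg
from psycopg import sql

def run_as_user(conninfo: str, user_claims: dict, query: str) -> list[tuple]:
    """Execute an AI-generated query under the authenticated user's own role,
    so existing row-level security and audit logging still apply."""
    # Assumption: database roles are provisioned to match the SSO identity,
    # e.g. the 'email' claim maps 1:1 onto a Postgres role name.
    role = user_claims["email"]

    with psycopg.connect(conninfo) as conn:
        with conn.cursor() as cur:
            # Drop from the service account's privileges down to the user's.
            cur.execute(sql.SQL("SET ROLE {}").format(sql.Identifier(role)))
            cur.execute(query)
            return cur.fetchall()
```

Because the role switch happens inside the same connection, every statement the AI generates is subject to exactly the grants, row-level security policies, and audit trail the data team already maintains.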

The Personnel Gap: Business Users and Their Supporting Data Teams

Lastly, there is a gap between business users and their supporting data teams. In our customer discovery interviews, we have yet to find a data team that didn't rank managing and responding to ad hoc queries from business users as one of its top pain points.

It is equally frustrating for business users who need queries, visualizations, and reports that their standard dashboards and BI tooling do not provide. They rarely know the actual database schema, nor do they normally have access to it. According to our respondents, getting an answer to such a request can take days, if not weeks.

The AI Data Liaison

I'm calling the solution that fills these gaps an AI Data Liaison. It doesn't replace the need for data analytics, engineering, and governance; it simply fills in the gaps identified above.