A read-only ontology is just a data lake with better marketing
The core problem: Telco software vendors claim they have an “ontology” for artificial intelligence (AI), but most just give AI context about your data; their systems can’t take autonomous action. You need an ontology that lets AI execute decisions, not just understand data. Here’s a 5-question test to help you tell the difference.
It used to take Totogi customer Zain Sudan 48 hours to detect and fix a dormant cell. Now, after deploying the Totogi Ontology, it takes 30 minutes. Last month, the operator told me what changed.
“We used to make decisions based on intuition and experience. Now we’re making them based on real operational data connected across systems that never talked to each other before.”
“Connected across systems that never talked to each other before.” Makes your ears perk up, doesn’t it? Because that’s not what most vendors mean when they say “ontology.”
Here’s what I mean. In June, Amdocs released a whitepaper saying its ontology “provides AI agents with a contextual understanding of service plans, technical specifications, billing structures, and customer interactions.” In February, it announced aOS, an “agentic operating system” that “operates on top of any BSS/OSS stack.” Better vocabulary, same read-only architecture. Netcracker promises “dynamic access” to “ontologies and APIs.” Ericsson describes ontologies as techniques that “describe concepts, entities, and the relationships between them.”
Understanding, access, describing, operating on top of: that’s not what Totogi means by ontology. What these vendors describe are semantic data layers dressed up in transformation vocabulary—a fancy new name for the same old data access. I get it: “ontology” does sound more sophisticated, but if it sits on top of the same read-only architecture, nothing has changed. These semantic layers will give AI context about your data, but they won’t give it any actual control over what happens next.
And what happens next is the whole point of AI.
The difference that matters
Semantic layers map your data. They know that “subscriber” in billing means “customer” in CRM. That’s useful for grounding LLMs and preventing hallucinations.
But this kind of read-only access isn’t enough. When AI from these vendors identifies a high-churn-risk subscriber, can it apply a retention offer? Update billing? Trigger provisioning? A semantic layer can’t. It can only read your data. Taking action requires writing back to operational systems.
Real ontology gives AI read and write access. It doesn’t just understand your data. It also executes decisions and changes things.
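The read-versus-write distinction is easy to sketch in code. The classes and method names below are purely illustrative (no vendor ships this API): a semantic layer can only answer questions about data, while an ontology also holds live handles to operational systems and can change their state.

```python
# Illustrative sketch, not any vendor's actual API: the semantic layer
# can resolve terms across systems (read), while the ontology can also
# execute a change against an operational system (write).

class SemanticLayer:
    """Maps terms across systems -- read access only."""
    def __init__(self, mappings):
        self.mappings = mappings

    def resolve(self, system, term):
        # e.g. "subscriber" in billing -> "customer" in CRM
        return self.mappings.get((system, term))

class Ontology(SemanticLayer):
    """Adds write access: it can execute decisions, not just explain data."""
    def __init__(self, mappings, systems):
        super().__init__(mappings)
        self.systems = systems  # live handles to operational systems

    def execute(self, system, account, change):
        # write-back: this changes state in a production system
        self.systems[system][account] = change

mappings = {("billing", "subscriber"): ("crm", "customer")}
billing = {}
onto = Ontology(mappings, {"billing": billing})

assert onto.resolve("billing", "subscriber") == ("crm", "customer")  # read
onto.execute("billing", "A-7", {"plan": "retention_offer"})          # write
assert billing["A-7"] == {"plan": "retention_offer"}
```

The point of the toy hierarchy: everything a semantic layer does, an ontology can also do, but only the ontology can move from resolving a term to changing an account.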
Real ontology also compounds. For Zain Sudan, each system the operator connects and each process it encodes builds a more complete model of how the business operates. Anybody can buy GPT-5 the day it launches. But only Zain has Zain’s ontology—the accumulated operational intelligence of how that specific business works.
The 5-question test
Now that you know the difference, how do you ask insightful questions to see if your vendor has the real deal? The next time a vendor dares to call something an ontology, run them through these five questions. Any “no” means they’re selling you a semantic layer, not an operational transformation.
1. Can the system actually perform actions on its own?
Ask: When your ontology identifies an issue or opportunity, can it directly update billing, trigger provisioning, or modify a customer account? Or does it generate a ticket, send an alert, or create a recommendation for a human to act on?
What to listen for: Direct integrations where the ontology writes back to operational systems—not read-only access, but actual write permissions. You want the ability to change state in production.
Red flag: “The ontology provides context to our workflow engine” or “generates recommendations for the team to review.” If humans or other systems are required to execute every action, you’re buying a recommendation engine, not AI that can transform your business.
2. Does the architecture prevent invalid actions?
Ask: If an AI agent tries to provision a premium upgrade for a customer with a past-due balance, what happens? Does the system flag it for review? Or is that action literally unrepresentable in the ontology?
What to listen for: “Correct by construction.” The ontology encodes valid state transitions, so the AI can only express actions that are permissible. Invalid requests can’t even be formed.
Red flag: “Human review before execution” or “validation layer catches errors.” That’s a safety net for a system that can still attempt bad actions. You want a system that can’t make a mistake.
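“Correct by construction” can be made concrete with a small state-machine sketch. This is a hypothetical illustration, not Totogi’s implementation: each account state is its own type, and only the actions valid in that state exist as methods, so an agent working through this interface cannot even express “upgrade a past-due account.”

```python
from dataclasses import dataclass

# Hypothetical sketch of "correct by construction": the invalid action
# (premium upgrade on a past-due account) is unrepresentable, because
# the method only exists on the state where it is permissible.

@dataclass
class GoodStanding:
    account_id: str

    def provision_premium_upgrade(self) -> "GoodStanding":
        # the write-back to provisioning would happen here
        return self

@dataclass
class PastDue:
    account_id: str
    balance_due: float

    def apply_payment(self, amount: float):
        remaining = self.balance_due - amount
        if remaining <= 0:
            return GoodStanding(self.account_id)   # valid state transition
        return PastDue(self.account_id, remaining)
    # deliberately NO provision_premium_upgrade here: there is nothing
    # for a validation layer to catch, because the call cannot be formed

acct = PastDue("A-100", balance_due=20.0)
acct = acct.apply_payment(20.0)      # balance cleared -> state changes
acct.provision_premium_upgrade()     # only callable after the transition
```

Contrast this with a runtime `if balance_due > 0: reject()` check: that is a safety net for a system that can still attempt the bad action, which is exactly the red flag described above.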
3. Does the system get smarter with each deployment?
Ask: After five deployments, can the ontology perform actions it couldn’t perform before? Does it understand edge cases it didn’t understand before? Or does it just cover more data?
What to listen for: Every deployment makes the ontology more capable—more actions, more edge cases, more operational understanding. The vendor should describe how capability deployments compound on one another.
Red flag: “Reusable data models” or “leverage existing definitions” signify data reuse, not capability compounding. If context is contained within each agent, you’re not building a compounding asset.
4. How fast from insight to action?
Ask: When your system identifies a high-churn-risk customer, how long until a retention offer is applied? Walk me through the steps.
What to listen for: Seconds, not weeks. The ontology queries available actions, checks eligibility and margin constraints, orchestrates billing validation, triggers the offer, updates CRM—without human translation in between.
Red flag: “Improved decision-making” or “actionable insights.” Those are dashboard words. Ask for the timeline from identification to execution—and count every handoff. If the answer involves manual coordination across teams or systems, or a change request, or a ticket, you’re looking at insights dressed up as automation. AI doesn’t get credit for making humans faster. It gets credit for replacing the steps humans used to do.
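The seconds-not-weeks path above can be sketched as one straight-line function. Every name here is an illustrative stand-in, not a real vendor API; the stubs simulate the systems an ontology would actually write to. The point is structural: each step is a direct call, with no ticket, change request, or human handoff between identification and execution.

```python
# Hypothetical, self-contained sketch of the insight-to-action pipeline.
# All function names are illustrative assumptions, not a published API.

def available_actions(sub):
    # what the ontology can do for this subscriber right now
    return {"retention_offer"} if sub["churn_risk"] > 0.8 else set()

def best_offer(sub):
    # eligibility and margin constraints checked in one pass
    return {"discount_pct": 20} if sub["margin"] > 0.3 else None

def apply_retention_offer(sub, billing, crm):
    if "retention_offer" not in available_actions(sub):
        return "no_action_available"
    offer = best_offer(sub)
    if offer is None:
        return "no_eligible_offer"
    billing[sub["id"]] = offer           # write-back: billing state changes
    crm[sub["id"]] = "offer_applied"     # CRM updated in the same pass
    return "applied"

billing, crm = {}, {}
result = apply_retention_offer(
    {"id": "S-42", "churn_risk": 0.9, "margin": 0.5}, billing, crm
)
# billing and crm now reflect the executed decision, not a recommendation
```

When you count handoffs in a vendor’s walkthrough, you are asking how many of these calls are replaced by a human or a ticket queue.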
5. Can business users see what it knows?
Ask: Can a business analyst view what rules are encoded? If I ask “what happens when we downgrade a premium customer,” can the ontology show me the complete process?
What to listen for: Visual tools or clear documentation that let business users—not just paid consultants—understand what the ontology knows and can do. The ontology should be self-documenting.
Red flag: “Our data scientists can explain the models” or “the documentation describes the APIs.” If only technical experts can understand what’s encoded, you’d just be trading one expensive consultant for another.
How Totogi passes the test
At Totogi, where I’m CEO, we built the Totogi AI platform from the ground up as a decision and action layer, not a semantic layer bolted onto existing architecture.
Actions are built-in concepts. When our ontology detects a dormant cell at Zain Sudan, it doesn’t generate an alert for someone to manually investigate. It triggers automated remediation. Zain now goes from detection to resolution in 30 minutes, down from 48 hours. That’s the difference between a system that KNOWS and a system that DOES.
Invalid actions are architecturally impossible. The Totogi Ontology encodes valid state transitions, so AI can only express what’s permissible. There’s no validation layer catching errors after the fact. Errors can’t be formed in the first place.
Decisions execute through the ontology, not around it. The ontology encodes how your business actually operates, so AI doesn’t just understand your systems; it operates them.
And critically, the Totogi Ontology compounds. Every system you connect, every process you encode, every edge case it resolves makes the ontology more valuable as an operational asset. Zain’s ontology gets smarter with every deployment and becomes more capable across the entire organization, not just within individual departments.
Business users can see exactly what’s encoded. When someone asks “what happens when we downgrade a customer?”, the ontology shows the complete process—no data scientist translation required.
AI capabilities run on the Totogi platform—not alongside it, not informed by it, but actually executing through the Totogi Ontology as their operational environment. That’s the difference between a data layer that helps humans execute better and a system that you trust to make decisions and take actions on your behalf.
Why it matters now
Foundation models—the LLMs like GPT, Claude, Gemini, etc.—are commoditizing fast. Everyone has access to the same AI capabilities within months of any major release. You and your competitors can run the exact same models.
What sets you apart? The operational intelligence encoded in your ontology: how your business actually runs, what actions are possible, what decisions need to be made. This is your sustainable differentiation, and it compounds over time in a way that a semantic layer can never support. The vendors who understand this are building operational environments for AI to execute in. The vendors who don’t are rebranding their dashboards and hoping you don’t ask too many tough questions.
And here’s what your vendor REALLY won’t admit: a real ontology makes them REPLACEABLE. When semantic translation lives in the ontology, swapping one BSS for another becomes a configuration change. That’s why the other guys would rather sell you a read-only layer that keeps you dependent on them forever.
Next time a vendor tries to impress you with its “ontology,” whip out these five questions. Then ask for demos, timelines, and customer references. And remember: AI that can’t act isn’t transformative. It’s just a very expensive recommendation engine that requires humans to do all the work.
Frequently Asked Questions
What is an ontology in telecom?
An ontology in telecom is more than just a data model or semantic layer. It’s an operating system that gives AI the ability to actually execute actions, not just understand data. While most vendors use “ontology” to describe systems that provide context about data relationships, a real ontology encodes operational intelligence, valid state transitions, and direct controls over production systems. Think of it as the difference between AI that can read your business and AI that can run your business.
How is a semantic data layer different from a real ontology?
A semantic data layer gives AI context—it knows that “subscriber” in billing means the same as “customer” in CRM. But when AI identifies a high-churn customer, a semantic layer can’t apply a retention offer or update systems. It only has copies of data, not controls. A real ontology gives AI write permissions to operational systems, enabling it to directly trigger provisioning, modify accounts, and execute decisions without human translation. It’s the difference between a dashboard that shows problems and a system that fixes them.
How does a real ontology prevent invalid actions?
A properly built ontology is “correct by construction”—it encodes valid state transitions so AI can only express permissible actions. For example, if an AI tries to provision a premium upgrade for a customer with a past-due balance, the ontology wouldn’t flag it for review—that action would be literally unrepresentable in the system. Invalid requests can’t be formed in the first place. This is different from validation layers that catch errors after the fact; a real ontology prevents them from being attempted.
How fast can an ontology go from insight to action?
With a real ontology, the timeline is seconds, not days. For example, Zain Sudan reduced its issue-to-resolution time from 48 hours to 30 minutes using Totogi’s ontology. When the system identifies a dormant cell, it doesn’t generate an alert for manual investigation—it triggers automated remediation immediately. The ontology queries available actions, checks constraints, orchestrates validation, and executes changes without human coordination. If a vendor’s timeline involves manual steps between teams, you’re looking at analytics, not automation.
Why does an ontology matter now?
Foundation models like GPT and Claude are commoditizing rapidly—your competitors will have access to the same AI capabilities within months of any release. What differentiates you is the operational intelligence encoded in your ontology: how your business actually runs, what actions are possible, what decisions matter. This compounds over time as each deployment, process, and edge case makes the ontology smarter. The AI model is a commodity; your ontology is your unique operational asset that can’t be copied.