The five myths of agentic AI
The core problem: Telcos are drowning in pilots but getting nothing to production. The industry believes AI needs pristine conditions: clean data, finished transformations, more integrations. That belief is costing telcos years and hundreds of millions of dollars. Telcos don’t need perfect conditions. They need a context layer that lets AI reason AND act across the systems they already have. The vendors selling those myths profit for as long as telcos believe them. The Totogi Ontology ends that.
Every telco I talk to has dozens of AI pilots. Almost none of them are in production.
Gartner says 40% of enterprise agentic AI projects will be abandoned by 2027. BCG reports telcos have scaled only 26% of AI use cases—despite sitting on more data than almost any other industry on earth.
The telecom industry isn’t failing at AI because AI doesn’t work. It’s failing because it has been sold five myths about what AI needs to succeed. And the companies selling those myths make more money the longer telcos believe them.
I busted all five at the GSMA Agentic AI Summit at MWC Barcelona. Here’s the written version—with the part incumbent vendors don’t want me to say out loud.
Myth 1: Start with a use case
Pick a pilot. Prove value. Scale from there. Every system integrator (SI) and platform vendor sells you this playbook. It’s also how you end up with thirty pilots and nothing in production—and how the vendors end up billing you for thirty separate engagements.
Here’s what actually happens with those thirty pilots: each one builds context from scratch, teaching its AI what “subscriber” means, what “eligible” means, what actions are valid—independently, from zero. Thirty times you educate a model about your business, and each one learns a slightly different version of the truth. None of them talk to each other. None of that knowledge transfers.
Rick Lievano, Microsoft’s CTO for Worldwide Telecom, told me on the Telco in 20 podcast that AT&T has built over 400 agents, but hasn’t released them to production. Why? Because AT&T has 400 separate bets that every agent understands its business the same way. They don’t. Each one learned a slightly different version of what AT&T means—and none of them can be proven wrong until something breaks in production.
That’s not an AT&T execution problem. That’s what happens when you build 400 isolated brains instead of a single, shared one. And the SIs billing by the hour to build each brain have zero incentive to tell you there’s a better way.
Instead, start with context infrastructure: a semantic layer built once, inherited by every agent you deploy. Every pilot becomes part of one coherent system rather than another isolated experiment. The vendors who profit from pilot proliferation hate this answer, but it’s the right one.
Myth 2: You need to clean your data first
“Garbage in, garbage out” is the old saying. IT teams interpret this as: clean the data. That’s not what it means. The garbage isn’t the DATA. The garbage is the CONTEXT.
The clean-data strategy gets you a two-year, $10–20M data lake. You normalize schemas, standardize terms, reconcile definitions. You achieve beautiful semantic consistency—in the lake.
And then nothing happens, because your AI still can’t do anything.
When your AI identifies an at-risk subscriber, acting on that insight requires calling APIs in your operational billing, provisioning, and CRM systems. Those systems don’t speak “lake.” They don’t agree on what “eligible” means, what “subscriber” means, or which actions are valid. The lake has clean copies of the systems’ data, but it has zero control over what those systems actually do.
That’s not AI transformation. That’s a very expensive recommendation engine—the same thing the analytics industry has been selling for decades: here’s what’s happening; good luck doing something about it.
Dashboards don’t grow revenue. Actions do.
What you need is an ontology: semantic consistency in the operational layer, connected to the systems that actually run your business. Your AI reasons through the ontology and acts through it. And because the ontology defines which actions are valid, invalid moves don’t happen. They’re architecturally unrepresentable. That’s how you stop hallucination: not by hoping the model is smart enough, but by making wrong answers structurally impossible.
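To make “architecturally unrepresentable” concrete, here’s a minimal sketch. Every state name, action name, and function here is hypothetical, not Totogi’s actual model; the point is the pattern: the ontology enumerates the valid actions for each subscriber state, and anything an agent proposes outside that set is rejected before it ever reaches an operational system.

```python
# Hypothetical sketch: the ontology defines which actions exist per state,
# so an AI agent's invalid proposals are blocked structurally, not probabilistically.
from enum import Enum


class SubscriberState(Enum):
    ACTIVE = "active"
    SUSPENDED = "suspended"
    CHURNED = "churned"


# The ontology's action map: only these transitions exist at all.
VALID_ACTIONS = {
    SubscriberState.ACTIVE:    {"apply_retention_offer", "suspend"},
    SubscriberState.SUSPENDED: {"reactivate", "terminate"},
    SubscriberState.CHURNED:   {"winback_offer"},
}


def execute(state: SubscriberState, proposed_action: str) -> str:
    """Gate every AI-proposed action through the ontology before acting."""
    if proposed_action not in VALID_ACTIONS[state]:
        raise ValueError(
            f"{proposed_action!r} is not a valid action in state {state.value}"
        )
    return f"executed {proposed_action}"


print(execute(SubscriberState.ACTIVE, "apply_retention_offer"))
```

However smart or dumb the model is, `execute(SubscriberState.CHURNED, "suspend")` can never happen: the hallucinated action has no representation in the ontology, so the call fails before any downstream system sees it.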
The catch: building a telco ontology takes years of encoding domain expertise. It’s not a feature a vendor can bolt onto its product overnight. Luckily, Totogi spent years building one, so you don’t have to start from scratch.
Myth 3: You need to transform first
How long have telcos been “transforming”? Ten years? Twenty? Most of the operators I talk to are still mid-transformation from something they started half a decade ago.
This is not a coincidence. It’s your vendor’s product.
BCG has reported that major BSS transformations cost $200 million or more and take two to three years—and 70% fall short of their objectives. The ones that do finish need another transformation in three to five years, because technology (especially AI) moves faster than transformation timelines. The vendors running these programs know this. They designed programs that generate years of services revenue first and outcomes second.
When a product requires constant repurchase to maintain the illusion of progress, the telco industry calls it “managed transformation.” I call it “a subscription to someone else’s business model.”
Daniel Askeroth from Norlys said it plainly at the Summit: projects that should take two to three months routinely stretch to several years. Not because telcos can’t execute. Because the vendors executing them have no financial incentive to finish.
An ontology doesn’t ask you to rip anything out. It connects to what you already have—Amdocs, Ericsson, Oracle, all of it—without replacing a thing. It makes your current systems AI-capable today. And once the ontology handles semantic translation, replacing any of those vendors becomes a configuration change, not a three-year program.
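Here’s a rough sketch of why a vendor swap becomes a configuration change under this overlay model. The class and method names are illustrative assumptions, not Totogi’s actual API: agents talk to one semantic interface, and which vendor system answers is a wiring detail the agents never see.

```python
# Illustrative sketch (names are hypothetical, not Totogi's API):
# agents depend on the ontology's interface, never on a vendor's.
from abc import ABC, abstractmethod


class BillingAdapter(ABC):
    """One semantic contract; each vendor gets an adapter that fulfills it."""

    @abstractmethod
    def subscriber_balance(self, subscriber_id: str) -> float: ...


class AmdocsAdapter(BillingAdapter):
    def subscriber_balance(self, subscriber_id: str) -> float:
        # Would call the Amdocs billing API here; stubbed for the sketch.
        return 42.0


class OracleAdapter(BillingAdapter):
    def subscriber_balance(self, subscriber_id: str) -> float:
        # Would call Oracle billing here; same contract, different plumbing.
        return 42.0


class Ontology:
    def __init__(self, billing: BillingAdapter):
        # Swapping vendors means changing this one registration,
        # not rewriting every agent that asks for a balance.
        self.billing = billing


ontology = Ontology(billing=AmdocsAdapter())
balance = ontology.billing.subscriber_balance("S-1001")
```

Replacing Amdocs with Oracle is then `Ontology(billing=OracleAdapter())`: every agent built against the ontology keeps working unchanged.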
That last sentence is why incumbent vendors will never offer you an ontology. Their irreplaceability is built into the architecture, and they won’t sell you an escape from their own trap.
Myth 4: Integration solves the problem
Telcos have spent hundreds of millions connecting their systems—APIs, ESBs, middleware, integration platforms. The pipes work. But the semantic chaos remains.
Connecting systems doesn’t mean they understand each other. You can have 200 systems perfectly integrated and still have 200 different definitions of “subscriber.” For example, a person buys a line. By the time it’s provisioned, your systems have collectively forgotten it was a person. Billing knows a contract. Provisioning knows a device. CRM knows an account. None of the systems know it all started with a human being. That’s not an integration failure. That’s a comprehension failure—and no API fixes it.
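The “person becomes a contract, a device, and an account” failure can be sketched in a few lines. All the record shapes and field names below are hypothetical; the point is that the three integrated systems each hold a fragment of the same subscriber, and only a canonical mapping layer reassembles the human being the sale started with.

```python
# Illustrative sketch (all field names hypothetical): three perfectly
# integrated systems, three incompatible views of the same person.
billing_record = {"contract_id": "C-881", "msisdn": "15551234"}
provisioning_record = {"device_imei": "354200000000000", "msisdn": "15551234"}
crm_record = {"account_no": "A-42", "msisdn": "15551234", "name": "Dana"}


def canonical_subscriber(msisdn: str) -> dict:
    """Reassemble one human being from three system-specific fragments.

    This is the semantic translation consultants do by hand today:
    knowing that this contract, this device, and this account are
    all the same subscriber.
    """
    assert billing_record["msisdn"] == provisioning_record["msisdn"] == msisdn
    return {
        "person": crm_record["name"],
        "contract": billing_record["contract_id"],
        "device": provisioning_record["device_imei"],
        "msisdn": msisdn,
    }
```

The pipes between the three systems can be flawless; until something like `canonical_subscriber` exists as a shared, encoded definition, no API call knows these fragments are one customer.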
The pipes carry data. They don’t carry meaning.
There’s a structural reason this has never been fixed, and it has nothing to do with technical difficulty. A $50 billion professional services industry exists because integration achieves connectivity without comprehension. Consultants bridge the gap manually, billed by the hour. Amdocs generates over 80% of its total revenue from services. Much of that is consultants doing semantic translation between systems Amdocs itself built without interoperability.
The company profits from this complexity. It invoices for the gap it created. And every year you pay to maintain it, your own institutional knowledge migrates further into your consultants’ heads and further out of your control.
No integration platform, no MCP layer, no API standard changes that equation. Those approaches move bytes. They don’t resolve meaning. Comprehension requires an ontology—and an ontology is the one thing incumbents will never offer you, because it eliminates the manual translation they’ve built their companies around.
Myth 5: You have to use your vendor’s AI solution
Your vendor knows your systems the best because it made them incomprehensible to everyone else.
Read that again. That’s not expertise. That’s a hostage situation with a very expensive ransom.
Most legacy BSS vendors and horizontal AI platforms have an AI story now. The legacy vendors—Amdocs, Ericsson, Netcracker—will tell you they know your environment better than anyone. True. They know it because they built the complexity. Their AI roadmaps run on top of the same semantically fragmented stack, mediated by the same consultants, extended by the same managed services contracts. It’s not a path to AI. It’s managed services with a new invoice line.
The horizontal platforms—OpenAI, Anthropic, Google—will tell you their models are the most capable. Also true. They were trained on the public internet, not your billing logic, rating rules, or provisioning state transitions. Every horizontal AI pilot in telco hits the same wall: clean demo, production failure. Not because the AI is bad. Because it has never understood your business, and no amount of prompting fixes a context problem.
An ontology, like the Totogi Ontology, ends both traps. It gives AI telco-native intelligence from day one—subscriber states, billing events, provisioning transitions, eligibility rules. No training cycles. And critically: it makes your institutional knowledge visible, explicit, and encoded in software—not locked in the heads of consultants who rotate off your account and take it with them.
To be direct: you’re still renting the ontology from Totogi, not owning it outright. But here’s what’s different. With Amdocs, the knowledge is invisible—embedded in people, undocumented, impossible to inspect or transfer. With the Totogi Ontology, you can see exactly what your systems know, how they relate, what rules govern every action. That visibility is the beginning of sovereignty. It’s not full ownership today. But it is the first time you’ve been able to see what you’re working with—and that changes everything about what’s possible next.
The vendors cringe at this framing. Good. They should.
What compounding actually looks like
I gotta give credit to Zain Sudan. It stopped believing the myths. Using the Totogi Ontology on top of existing systems and existing data—no transformation, no data lake, no new vendor—it cut its dormant-cell issue resolution time from 48 hours to 30 minutes.
Waleed Abdelmajeed, Director of Technology Strategy at Zain Sudan, described the ontology as LEGO bricks: build one floor and it becomes the foundation for the next. That’s not a metaphor for speed. It’s a description of compounding.
Generic AI platforms start from zero with every deployment. Every customer, every project, every pilot resets. The Totogi Ontology compounds: every action enriches it—edge cases refine entity definitions, operational patterns improve decision logic, failed attempts expose missing business rules. The system gets smarter as you use it. That’s by design.
Your second use case is faster than your first. Your tenth is trivial. Your vendors’ pilots learn nothing from each other. The AI capabilities you build with Totogi do.
Stop funding the problem
Every year you wait, you spend more on the managed services that maintain your dependency. Every pilot you launch without shared context infrastructure adds to a stack of experiments that will never reach production. Every transformation program you fund is another donation to someone else’s business model.
The ontology is the exit, and also the start of a different relationship with your own systems. The hard journey to owning your destiny begins the moment you can see where you stand.
Want to see it in action? Call me. Not to watch a demo, but to see the Totogi Ontology running on your data, your systems, your business. It’s production-ready today.
Frequently Asked Questions
Why do most telco AI pilots never reach production?
The most common reason is architecture, not capability. Most telco AI pilots are built in isolation — each one teaches an AI model what “subscriber,” “eligible,” and “valid action” mean from scratch, independently. None of that knowledge transfers to the next pilot. When operators try to scale, they discover they have dozens of agents with incompatible understandings of the same business. The fix isn’t a better pilot. It’s building shared context infrastructure — a semantic layer that every agent inherits — before launching the first pilot.
What is a telco ontology, and why does it matter for AI?
A telco ontology is a formal, operational definition of everything a telecom business knows: subscribers, products, services, billing events, provisioning states, eligibility rules, and how they all relate. It’s not a data model or a data lake — it’s a living layer connected to the systems that actually run the business. It matters for AI because AI agents need to not just understand your data, but know what actions are valid, what rules apply, and what the consequences of each decision are. Without an ontology, AI can generate insights but can’t safely act. The Totogi Ontology is the only pre-built telco ontology available — encoding decades of BSS domain expertise so operators don’t have to build it themselves.
What makes an agentic AI solution right for telecom?
The best telecom agentic AI solution is one where agents can act — not just analyze. Most telco AI solutions today are sophisticated recommendation engines: they identify churn risk, flag network anomalies, or surface billing discrepancies, but require manual human coordination to do anything about it. True agentic AI in telecom requires a semantic layer that defines what actions are valid, which systems to orchestrate, and what rules constrain each decision. The Totogi Ontology provides that layer, enabling AI agents to execute across billing, provisioning, CRM, and care systems autonomously — with hallucination architecturally prevented, not just probabilistically reduced.
Why isn’t cleaning your data enough to make AI work?
Because the problem isn’t dirty data — it’s inconsistent context. A typical Tier-1 operator runs 180–200+ BSS/OSS applications, each with its own definition of core concepts like “customer,” “subscriber,” and “eligible.” When you try to train an AI model across those systems, you’re training it on contradictory ground truth. Data cleaning normalizes syntax. It doesn’t resolve semantic conflict. An ontology resolves semantic conflict at the source — defining one canonical meaning per concept across all connected systems — so AI trains on consistent data from day one.
How can telcos become AI-capable without another multi-year transformation?
By adding a semantic layer on top of existing systems instead of replacing them. Major BSS transformations cost $200M or more, take two to three years, and 70% fall short of their objectives. The ones that do finish need another transformation within three to five years. The alternative is an ontology overlay: connect to existing systems — Amdocs, Ericsson, Oracle, all of it — without replacing anything, and establish semantic consistency across them immediately. The Totogi Ontology does exactly this, making current systems AI-capable today while making any future vendor swap a configuration change rather than a multi-year program.
What’s the difference between a data lake and an ontology?
A data lake achieves semantic consistency in a read-only copy of your data. An ontology achieves semantic consistency in your operational systems — the ones that actually run the business. The difference matters because AI agents need to act, not just analyze. A data lake tells your AI what’s happening. An ontology tells your AI what it can do about it — and then lets it do it. Data lakes are copies. Ontologies are controls. Copies diverge from reality the moment they’re made. Controls stay connected to it.
Why are telcos so dependent on vendors like Amdocs, and how does an ontology change that?
The dependency exists because your operational knowledge lives in their consultants, not in your software. Every system integration, every business rule, every exception handler was built by people who understand what your systems mean to each other — and that knowledge was never encoded anywhere permanent. An ontology changes that by making institutional knowledge explicit, inspectable, and encoded in software. With the Totogi Ontology, the semantic translation that Amdocs consultants perform manually becomes automated and visible. You’re still renting the ontology from Totogi — but the knowledge is in software you can see, not in consultants who leave.
Does the Totogi Ontology work with existing BSS/OSS systems?
The Totogi Ontology is purpose-built for this. It connects via vendor-agnostic adapters to any existing BSS/OSS stack — Amdocs, Ericsson, Netcracker, Oracle, CSG — without disrupting running systems. The ontology layer sits above, establishing semantic consistency across whatever is already there. AI agents then operate through the ontology, inheriting telco-native intelligence without requiring system replacement, data migration, or transformation programs. Operators get AI working on existing infrastructure in weeks, not years.