Context is EVERYTHING

Artificial intelligence (AI) agents are having their iPhone moment in telecom. Everyone wants them. Everyone’s building them. And everyone’s running into the same brick wall: agents are easy to demo, but brutally hard to deploy at scale.

The problem isn’t the agents themselves. It’s context. At scale, AI needs up-to-date, accurate, cross-domain context to work. Without it, agents fail—like skyscrapers built on sand. Agents can’t make decisions, they can’t be trusted, and you’re stuck keeping humans in the loop to babysit them. That’s not autonomy. That’s not scale. And it’s certainly not the billion-dollar ROI telco CEOs are promising their boards.

The only way forward is context engineering—creating a common, authoritative foundation that every agent, chatbot, and system can access in real time. With it, agents can act, decide, and deliver outcomes on their own. Without it, AI stays stuck as a demo—never scaled, never systemized.

That’s the difference between playing with AI and actually deploying it at scale.

The industry has it backwards. Everyone’s asking “which agents should we build?” when they should be asking “what context infrastructure do our agents need?” 

Why does context matter for enterprise-grade AI? Let’s dig in.

Why context matters

According to a recent MIT study, 95% of AI projects fail to deliver measurable results—nearly double the failure rate of traditional IT initiatives. One of the top reasons is the lack of context engineering: most organizations do not have systems in place to provide AI with the relevant data, business knowledge, and operational context it needs for effective deployment. Without this foundation, even technically advanced AI projects are likely to collapse before reaching production or showing ROI.

Andrej Karpathy—AI visionary, former Tesla AI lead, and OpenAI co-founder—puts it bluntly:

“People associate prompts with short task descriptions you’d give an LLM in your day-to-day use. When in every industrial-strength LLM app, context engineering is the delicate art and science of filling the context window with just the right information for the next step.”

He’s talking about your applications, telcos. If you’re going to build AI applications at scale across your organization, ones that can actually orchestrate your network, optimize operations, and deliver measurable business outcomes, you’re going to need context engineering—the infrastructure that separates failed science experiments from enterprise-grade deployments.
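Karpathy's point can be made concrete. In a hedged, hypothetical sketch (all names here — `ContextItem`, `assemble_context` — are invented for illustration, not any real product's API), context engineering means selecting only the facts relevant to the agent's next step, under a hard token budget, rather than dumping everything into the prompt:

```python
# Minimal sketch of context-window assembly for an LLM call.
# All names are hypothetical; relevance scores would come from a
# retriever or heuristic in a real system.
from dataclasses import dataclass

@dataclass
class ContextItem:
    source: str       # e.g. "billing", "network", "crm"
    text: str
    relevance: float  # 0..1, higher = more relevant to the next step
    tokens: int       # estimated token cost of including this item

def assemble_context(items, budget_tokens):
    """Greedily fill the context window with the most relevant items
    that fit within the token budget, most relevant first."""
    chosen, used = [], 0
    for item in sorted(items, key=lambda i: i.relevance, reverse=True):
        if used + item.tokens <= budget_tokens:
            chosen.append(item)
            used += item.tokens
    return chosen

items = [
    ContextItem("network", "Cell site 42 congested since 09:00", 0.9, 12),
    ContextItem("crm", "Customer is on the premium tier", 0.8, 8),
    ContextItem("billing", "Full 2023 invoice history (500 rows)", 0.2, 900),
]
context = assemble_context(items, budget_tokens=100)
# The low-relevance, high-cost invoice dump never reaches the model.
```

The point of the sketch: the hard part isn't the selection loop, it's having trustworthy relevance signals and fresh data to select from — which is exactly what context infrastructure provides.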

Get it right, you win. Get it wrong, you join the 95% of projects that fail.

AI won’t scale without it

Without proper context infrastructure, your AI apps break. Here’s what goes wrong at scale:

  • Context chaos: Agents act on stale or mismatched data. One reroutes traffic on old congestion info, another bills on yesterday’s tables. Suddenly you have revenue leaks and angry customers.
  • Inconsistent definitions: “Premium customer” means one thing for retention, another for upsell. Agents give contradictory offers, your data conflicts, and the micro-targeted marketing that was supposed to increase revenue ends up eroding customer trust instead.
  • LLM confusion: With no intelligent filtering, agents accumulate irrelevant info, bloating context windows. Your fraud detection agent starts analyzing every customer’s every transaction, making worse decisions because the signal gets lost in the noise.
  • Human bottleneck: Without reliable context, you can’t trust the AI agents to make autonomous decisions, so humans become expensive quality gatekeepers for every action. Instead of AI agents improving productivity, they’re like digital toddlers that need constant supervision. 

The agents themselves aren't the problem, any more than siloed humans are. They just lack a foundation of context infrastructure. Without it, AI can't be systemized to impact your bottom line.
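The human-bottleneck failure mode above suggests a simple design principle: gate autonomy on context quality. Here's a hedged sketch (the function, field names, and the 5-minute threshold are all assumptions for illustration) of an agent that only acts on its own when its context is fresh and internally consistent, and escalates to a human otherwise:

```python
# Hypothetical freshness/consistency gate for agent autonomy.
# Thresholds and record shapes are invented for illustration.
import time

MAX_AGE_SECONDS = 300  # assumed freshness threshold: 5 minutes

def can_act_autonomously(context_records, now=None):
    """Return True only if every record is fresh and all records agree
    on the shared 'customer_tier' field; otherwise escalate to a human."""
    now = now or time.time()
    fresh = all(now - r["fetched_at"] <= MAX_AGE_SECONDS
                for r in context_records)
    tiers = {r["customer_tier"] for r in context_records}
    return fresh and len(tiers) == 1

now = 1_000_000
records = [
    {"source": "crm", "customer_tier": "premium", "fetched_at": now - 60},
    {"source": "billing", "customer_tier": "basic", "fetched_at": now - 60},
]
# Retention and billing disagree on the tier -> don't act, escalate.
decision = can_act_autonomously(records, now=now)
```

The gate is trivial; the value is in the shared definitions behind it. If "premium customer" means the same thing in every system, the check passes and the human drops out of the loop.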

Context engineering in action

While incumbent vendors chase hype, the AI research community—and now Totogi (where I’m acting CEO)—has moved on to what actually works in production.

Totogi’s BSS Magic is the first industrial-strength implementation of context engineering in telecom: a Palantir-inspired, AI-first platform built for telcos.

The “magic” is a telco-specific ontology that extends TM Forum’s information framework into a universal translator for telecom. It doesn’t just connect systems—it makes them intelligible to AI. BSS Magic links billing, network, customer, and operational relationships and data across all your BSS vendors. It knows that a network fault affects billing accuracy, that customer tier drives network prioritization, and that compliance rules shape service delivery. It’s connected intelligence, not isolated point solutions.

And it works without having to rip and replace a thing. BSS Magic sits on top of your existing stack—Amdocs, Ericsson, Huawei, homegrown—creating semantic interoperability without costly migrations. You also don’t need to move data to a lake or normalize it first. Use it in place, as-is, and start getting AI value immediately.
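One way to picture a semantic layer like this — purely as a hypothetical sketch, with invented field names that don't reflect any real vendor schema — is a set of per-vendor mappings into one common, TM Forum-inspired vocabulary, applied to data in place rather than after a migration:

```python
# Hypothetical ontology-style translation layer: each vendor's records
# are exposed through one shared vocabulary without moving the data.
# All field names are invented for illustration.
AMDOCS_TO_COMMON = {"cust_id": "customerId", "svc_tier": "customerTier"}
HOMEGROWN_TO_COMMON = {"id": "customerId", "tier": "customerTier"}

def to_common(record, mapping):
    """Translate one vendor-specific record into the shared vocabulary."""
    return {mapping[k]: v for k, v in record.items() if k in mapping}

amdocs_row = {"cust_id": "C-1", "svc_tier": "premium"}
homegrown_row = {"id": "C-1", "tier": "premium"}

a = to_common(amdocs_row, AMDOCS_TO_COMMON)
b = to_common(homegrown_row, HOMEGROWN_TO_COMMON)
# Two different systems now answer the same question the same way.
```

A real ontology also carries relationships and rules (fault → billing accuracy, tier → prioritization), not just field renames — but the principle is the same: agents query one vocabulary, not N vendor schemas.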

Where telcos once had to spend years and millions on massive integration programs (like Vodafone’s Neuron project) to even approach this level of interoperability, BSS Magic’s context-first architecture delivers the same outcomes in weeks, not years.

The difference is architectural: legacy vendors bolt AI on as an afterthought. Totogi built BSS Magic as AI-native, with context engineering at the core, not added on later.

Compare Totogi’s approach to the dinosaurs, and the production results speak for themselves:

  • Semantic migration, not manual mapping: A Tier-1 North American telco cut a billing migration from 8 months to 14 days because BSS Magic’s ontology automatically generated the integration logic. Legacy vendors would’ve needed armies of consultants writing fragile mappings by hand.
  • Ontological compliance acceleration: CloudSense achieved TM Forum API certification across 13 APIs in just four weeks instead of 26 months, because the ontology natively understood TM Forum’s data model and auto-aligned the APIs. No custom normalization, no reinvention.
  • Cost collapse through ontology reuse: By leveraging the ontology across multiple systems, one telco switched to a lower-cost billing system, reducing annual costs by 75% ($280,000 → $70,000) and cutting project hours by 90%. Future integrations will connect through the ontology layer rather than point-to-point—making BSS modules truly interchangeable for the first time.

Stop chasing demos. Start building context.

Smart telcos are recognizing the pattern: context infrastructure first, then agents.

Don’t know where to start? Then give BSS Magic a try. We’re not selling chatbot hype; we build the context foundation that makes every AI application across your enterprise actually work and deliver results to your bottom line.

The winners in the next decade of telecom will be the ones who master context engineering—and use it to deploy AI at scale across their operations. And that is what Totogi is already delivering.
