Why Your BSS Vendor Won't Build AI-Native
I spend my days watching the business support system (BSS) competitive landscape. Over the past few years, I’ve been tracking vendor AI announcements, analyst briefings, and strategic pivots—and they’re all executing the same AI playbook: bolt it on and call it innovation.
I get why they’re doing it. It’s the fastest path to AI on their roadmap. It’s the easiest to execute. And it lets them avoid the one thing they can’t stomach: throwing away their products and rewriting from scratch.
But here’s the problem: bolt-on AI doesn’t solve YOUR hardest BSS problems. It makes them worse.
Once you understand why they won’t rebuild—and what that leaves on your plate—you’ll understand why Totogi built AI-native from day one.
The bolt-on pattern
Look at what the major BSS players are actually announcing:
Amdocs amAIz is “a modular suite of practical AI platforms”—for example, AI agents for customer service and GenAI for network operations. Its own architecture diagram shows an AI layer sitting on top of its existing customer relationship management (CRM), billing, and catalog systems.
Ericsson Intelligent Automation Platform promises AI and machine learning for network operations and BSS processes. It’s an AI layer on top of Ericsson’s existing charging and policy systems.
Netcracker claims AI is “at the core” and “deeply embedded.” But all the AI capabilities in customer engagement and process optimization are bolted onto the same core BSS platform it’s been selling for years.
Some vendors are even talking about “telecom-specific ontology” in their materials—yet their architecture diagrams still show AI sitting on top of their legacy systems. Everyone has AI chatbots, AI analytics, GenAI process automation, “AI-powered” this, “intelligent” that. These aren’t bad features. But they’re siloed within each vendor’s system, unable to reason across your entire BSS stack. They can’t see the full picture, so they can’t solve enterprise-wide problems.
Bolt-on makes your BSS mess WORSE
You already know your BSS is a monolith. Not in the technical sense of “one big application built by one vendor,” but in the architectural sense that everything is hardcoded together.
You can’t swap out components. You can’t test alternatives. You can’t respond to market changes without triggering a cascade of dependencies.
And bolt-on AI makes this worse, not better.
You still can’t swap anything out. Your billing system is tightly coupled to your CRM, which is tightly coupled to your product catalog, which has custom integrations to your provisioning system. Want to evaluate a new rating engine? You can’t just plug it in. You need six months of integration work, regression testing across every downstream system, and professional services to handle the edge cases. The AI sitting on top can analyze your billing data brilliantly—but it can’t change the fact that your billing system is welded to everything else.
You still have no agility on product launches. The cruel irony of bolt-on AI is that your AI agent can tell you exactly which product bundles would reduce churn for high-value segments. The agent can generate tailored marketing copy. It can even predict which offers will hit revenue targets. But you still need 6-12 months to actually launch that product because the changes have to propagate through dozens of systems, each with its own data model, each requiring custom development. The AI makes you smarter about what to build—but doesn’t help you build it faster.
Your semantic chaos gets worse. You have 200+ systems, and seventeen definitions of “subscriber.” Billing thinks a subscriber is an account ID. CRM thinks it’s a household. Provisioning thinks it’s a service instance. Now you add AI on top that’s supposed to provide “unified insights,” but it’s learning from contradictory data. When you train your churn model on that, it learns noise, not patterns. You get recommendations you can’t trust, based on data that doesn’t actually align.
Your integration complexity becomes exponential. At 200 systems, you’re managing thousands of potential integration paths. Forrester research shows IT departments spend as much as 80% of their budget on maintenance, leaving just 20% for innovation—and most of that maintenance is keeping integrations from breaking. Every schema change breaks multiple integrations. Every new system multiplies failure modes. Bolt-on AI can monitor these integrations, alert you when they break, and even auto-retry failed transactions. But it can’t collapse the underlying complexity. You’re still spending the vast majority of your IT budget maintaining point-to-point connections.
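The arithmetic behind that complexity is simple: point-to-point integration scales quadratically with system count, while connecting each system once to a shared layer scales linearly. A minimal sketch (the 200-system figure is from the paragraph above; the function names are illustrative):

```python
# Point-to-point: every pair of systems is a potential integration path,
# so the count grows quadratically: n * (n - 1) / 2.
def point_to_point_paths(n: int) -> int:
    return n * (n - 1) // 2

# Hub-and-spoke via a shared semantic layer: each system connects once.
def semantic_layer_adapters(n: int) -> int:
    return n

systems = 200
print(point_to_point_paths(systems))    # 19900 potential paths to maintain
print(semantic_layer_adapters(systems)) # 200 adapters to maintain
```

At 200 systems, that is nearly 20,000 potential paths versus 200 adapters—a hundredfold difference in surface area to keep from breaking.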
Your vendor lock-in deepens. When AI capabilities are tied to specific vendor platforms, your switching costs don’t decrease; they INCREASE. Amdocs amAIz works with Amdocs systems. Now you’re locked into not just their billing platform, but also the AI layer that sits on top of it. It just became even HARDER to move off Amdocs.
Why legacy vendors won’t make the switch to AI-native
Your vendors COULD rebuild as AI-native. The technology exists. Claude Sonnet 4.5 scored 77.2% on a software engineering benchmark, matching human performance. AI coding tools build production applications today. Modern cloud-native architecture enables rapid evolution.
So why aren’t they doing it?
Their business model won’t support it. Legacy vendors make billions from 18-36 month implementations, customization projects, professional services, upgrade cycles, and change request backlogs. AI-native architecture enables rapid deployment, self-service customization, continuous evolution, and AI-assisted changes. Amdocs alone makes over $3 billion annually in professional services. If AI-native architecture could reduce that by 70%, the company isn’t going to kill $2 billion in revenue. Look at how it prices AI features: as additive services on top of existing contracts, not replacements.
Even if they wanted to change that business model, the sunk-cost fallacy is institutional. They have decades and billions invested in their old products. Millions of lines of code. Hundreds of customer implementations. The codebase is literally on their balance sheet as value. Going AI-native means starting over—not refactoring. It means their engineers need completely different skills. It means moving existing customers to a fundamentally different platform. These vendors have done the math: incremental improvement on a declining asset is less scary and less risky than pivoting to a rebuild.
And even if they could overcome the economics and sunk costs, they’re not AI-first organizations. Building AI-native requires rethinking how software works. It’s a culture shift traditional enterprise software companies can’t make. Look at what they’re actually building: AI chatbots (known pattern from customer service), analytics dashboards (standard business intelligence with machine learning), recommendation engines (ecommerce playbook). What you don’t see from them: context engineering platforms, ontology-first architecture, knowledge graphs as core infrastructure, and AI as the foundation rather than a feature.
Why you must select AI-native products
Truly AI-native products are the only path out of BSS vendor lock-in. Here’s what you get with an AI-native architecture.
True composability through semantic consistency: Instead of 200 systems with 200 different data models creating thousands of integration paths, you have one canonical semantic layer. Every system connects once. “Subscriber” means the same thing everywhere—not because you’ve forced data standardization across legacy platforms, but because the architecture enforces semantic consistency from day one. When vendors change their schemas, you update one adapter—not hundreds of downstream integrations. When you want to swap out your rating engine, you can, because the new engine speaks the same semantic language as everything else.
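The adapter idea can be sketched in a few lines. This is a hypothetical illustration, not Totogi’s implementation: the `Subscriber` fields and the vendor record shapes are invented for the example.

```python
from dataclasses import dataclass

# One canonical definition: "subscriber" means the same thing everywhere.
@dataclass(frozen=True)
class Subscriber:
    subscriber_id: str
    household_id: str

# Each vendor system gets exactly one adapter into the canonical model.
def from_billing(record: dict) -> Subscriber:
    # Billing thinks a subscriber is an account ID.
    return Subscriber(record["account_id"], record.get("household", ""))

def from_crm(record: dict) -> Subscriber:
    # CRM thinks a subscriber is a member of a household.
    return Subscriber(record["member_id"], record["household_id"])

# Two systems, two schemas, one canonical entity.
billing_view = from_billing({"account_id": "A-1001", "household": "H-42"})
crm_view = from_crm({"member_id": "A-1001", "household_id": "H-42"})
print(billing_view == crm_view)  # True: both describe the same subscriber
```

If billing renames `account_id` tomorrow, only `from_billing` changes—every downstream consumer keeps working against the canonical `Subscriber`.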
AI-assisted development that actually works: Here’s what nobody tells you about AI coding tools: they only work when the underlying architecture allows rapid change. If your architecture requires manual integration mapping, regression testing across coupled systems, and professional services for every modification, AI-generated code doesn’t help. You still need humans to handle the integration complexity. But when your architecture provides semantic consistency and true modularity, AI can generate working code because the interfaces are clean and the definitions are unified.
Real agility on product launches: When semantic definitions are unified and AI can work with that unified model, product changes happen once and propagate automatically. You don’t redefine the concept of “unlimited data” in billing, then provisioning, then CRM, then the customer portal, then care systems. You define it once in the semantic layer. AI generates the implementation across all systems because they all understand the same definition. You go from launch cycles counted in years or months to launches that happen in weeks. Not because AI writes code faster, but because the architecture enables AI to write code that works everywhere.
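“Define it once, generate it everywhere” can also be sketched concretely. Everything here is illustrative—the field names and the per-system output shapes are assumptions, not any vendor’s actual schema:

```python
# Hypothetical: one canonical product definition in the semantic layer.
UNLIMITED_DATA = {
    "product_code": "UNL-DATA",
    "name": "Unlimited Data",
    "data_cap_gb": None,   # None means unlimited
    "monthly_price": 30.0,
}

# Per-system representations are generated from the single definition,
# instead of being redefined by hand in billing, provisioning, CRM, etc.
def to_billing(product: dict) -> dict:
    return {"rate_plan": product["product_code"],
            "price": product["monthly_price"]}

def to_provisioning(product: dict) -> dict:
    cap = product["data_cap_gb"]
    return {"service": product["product_code"],
            "quota_mb": None if cap is None else cap * 1024}

print(to_billing(UNLIMITED_DATA))       # billing sees a rate plan
print(to_provisioning(UNLIMITED_DATA))  # provisioning sees a quota
```

Change `monthly_price` in one place and every generated representation picks it up—no parallel edits across systems that each hold their own copy of the definition.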
AI that actually learns from your data: When your churn model trains on semantically consistent data—where “subscriber” has one canonical definition regardless of source—it learns actual patterns. Your revenue assurance AI can spot leakage across billing, provisioning, and mediation because all three systems describe the same entities the same way. Your network optimization AI can correlate service degradation with customer experience because “service” means the same thing in operational support systems (OSS) and BSS.
Vendor optionality instead of vendor lock-in: When your architecture abstracts vendor specifics through a semantic layer, switching costs approach zero. You can evaluate alternatives. You can run parallel operations to compare vendors. You can replace systems one module at a time.
Your vendor negotiations shift from “how much will you charge to customize this?” to “can your system speak our semantic language?” You negotiate from strength. Do you think traditional vendors will ever enable this kind of optionality? They won’t. Their entire business model depends on keeping you locked in.
Totogi went AI-native from the start
We were building a full BSS suite to complement our Charging-as-a-Service product when AI capabilities accelerated. We looked at what suddenly became possible and made a call: throw away what we’d built and start over as AI-native.
Not an easy decision. But the right one.
What emerged is BSS Magic—our AI-native platform built on context engineering. It’s not middleware that sits between your systems. It’s a semantic foundation that makes every system speak the same language. Critically, it does not require that you rip and replace your existing BSS. You connect what you have today—your Amdocs billing, your Salesforce CRM, your legacy provisioning systems—to BSS Magic’s semantic layer. Your systems don’t just integrate. They become composable. The semantic layer enforces consistent definitions across everything. AI can reason across your entire BSS estate because it’s working with unified knowledge, not fragmented data.
BSS Magic doesn’t create new vendor lock-in because we built it on open TM Forum standards. The semantic layer uses industry-standard ontologies, not proprietary Totogi schemas. If you want to move off BSS Magic tomorrow, your systems keep speaking the same standardized language. We’re not trapping you; we’re FREEING you. Because the architecture is built for AI from day one, you can build, modify, and deploy BSS modules at the speed of your business—not your vendor’s roadmap.
At MWC24, I said operators would soon be able to build their own BSS modules—customized to their business, not constrained by vendor roadmaps. The skeptics in the audience thought I was overselling the technology.
I wasn’t. AI kept advancing. And now we can prove it.
Forty-eight hours after this blog drops, at The AI-Native Telco Forum in Düsseldorf, Totogi will prove this AI-native approach works for BSS. Live. In real time. Not slideware. Not a canned demo. We’re shipping working BSS functionality to production during the conference—and letting operator attendees use it immediately.
This is what becomes possible when you architect for AI from day one instead of bolting it onto decades-old platforms.
Legacy vendors won’t rebuild as AI-native because their business models, sunk costs, and organizational culture won’t let them. But that doesn’t mean you’re stuck with architectural rigidity disguised as innovation.
The alternative exists. We’re proving it in Düsseldorf.
TelecomTV’s AI-Native Telco Forum is sold out, but you can join virtually. I’m speaking at 12:00 Central Europe Summer Time on Thursday, October 23, 2025—and you’ll want to see what we’re building live.