Frequently Asked Questions
Find answers to the most common questions about TelcoDR, public cloud and Gen AI innovation in telco.
Telcos: Becoming an AI-first telco starts here.
TelcoDR
Danielle Rios, often referred to as DR, is a Stanford-trained computer scientist, entrepreneur, and the founder and CEO of TelcoDR. She also serves as the acting CEO of Totogi, TelcoDR’s flagship SaaS venture. With over three decades in enterprise software and more than 15 years as a CEO specializing in corporate turnarounds, DR is widely recognized as telecom’s leading public cloud evangelist. Her career highlights include scaling start-ups, taking Optiva public, and spearheading large-scale cloud migrations for tier-1 operators.
DR earned this title through her relentless advocacy for the public cloud at major industry events such as Mobile World Congress (MWC), AWS re:Invent, and TM Forum DTW. She passionately argues that hyperscalers offer unmatched elasticity, security, and a faster pace of innovation compared to traditional private data centers. Her provocative keynote, “Why fight a 5-cent battle for a 2-cent war?”—which urged telcos to divest from infrastructure management—solidified her reputation as the industry’s most outspoken and clear advocate for cloud adoption.
DR envisions a fully software-defined telecom industry where operators no longer manage proprietary data centers or bespoke integrations. Instead, they operate cloud-native stacks on hyperscaler infrastructure, leverage multiple AI models on demand, and expose rich APIs for developers to build new revenue-generating services. Her mantra, “Move fast, stay open, leverage cloud economics,” aims to transform telcos from slow, capital expenditure-heavy utilities into agile “tech-cos” capable of competing effectively with digital-native players.
Prior to founding TelcoDR, Danielle Rios served as the CEO of Optiva from 2017 to 2020. During her tenure, she orchestrated a significant pivot, transforming the Canadian BSS vendor from a license-plus-hardware delivery model to a multi-tenant SaaS model hosted on Google Cloud. This initiative marked the industry’s first tier-1 BSS cut-over fully hosted on a hyperscaler, significantly reducing operating costs for early adopters and establishing her as a specialist in corporate turnarounds and cloud-native transformations.
DR describes her leadership style as that of a “benevolent contrarian.” She sets ambitious, often challenging targets, and empowers her teams with clear Objectives and Key Results (OKRs), high autonomy, and radical transparency. She encourages a “fail-fast” experimentation approach but holds teams accountable for data-driven decisions. Her proven track record in corporate turnarounds demonstrates that high expectations combined with data-driven coaching can unlock significant potential in both people and products.
Beyond her prominent conference keynotes, DR actively shares her insights through various channels:
> Maintains an active LinkedIn & X presence.
> Publishes the bi-weekly “Telco in 20” newsletter, with over 25,000 subscribers.
> Hosts the “Telco in 20” podcast.
> Authors the “Telco in 20” blog.
DR’s forthright commentary and insights have been featured in numerous influential publications, including:
> Forbes
> Fast Company
> Harvard Business Review
> Fierce Wireless
> Light Reading
> TelecomTV
> Mobile World Live
> Capacity Media
> TM Forum Inform
> SiliconANGLE’s theCUBE
Her willingness to challenge conventional wisdom and call out “fake cloud” marketing makes her a highly sought-after voice on topics of public cloud adoption, AI ethics, and disruptive monetization models.
TelcoDR is a specialized telecom-focused investment and software company founded in early 2020 by Danielle Rios. It operates with a dedicated US$1 billion evergreen capital pool to accelerate the telecom industry’s migration to cloud-native, AI-first architectures. TelcoDR’s strategy involves acquiring undervalued assets, sponsoring open-source projects, and incubating new SaaS products, most notably Totogi’s charging and monetization platform. Beyond capital, the firm provides deep operational expertise in turnarounds, cloud economics, and go-to-market strategy, acting as both an investor and a “hands-on” operator.
TelcoDR was launched in early 2020 by Danielle Rios, following her departure from her CEO role at Optiva. Recognizing the industry’s slow pace in adopting cloud technologies, she established TelcoDR as a dedicated vehicle to inject capital, expertise, and urgency into the sector’s cloud transformation. The company made its public debut at Mobile World Congress 2021 with the impactful “Cloud City” pavilion, immediately signaling its bold, cloud-first worldview.
TelcoDR has secured a US$1 billion evergreen fund from a syndicate of family offices and long-term institutional investors. This capital is exclusively earmarked for telecom software opportunities that drive public cloud adoption and AI monetization. Approximately 60 percent is allocated to strategic acquisitions, such as CloudSense, while the remaining 40 percent finances research and development (R&D), community outreach, and scale-up initiatives like Totogi. The fund’s mandate is to deliver both financial returns and significant industry impact by demonstrating the superior economics of modern, SaaS-based models.
TelcoDR’s mission is to fundamentally reshape telco software by injecting public-cloud scale, AI intelligence, and SaaS business models into every layer of the stack. The company aims to prove that true cloud-native, multi-tenant platforms can deliver superior performance, lower total cost of ownership (TCO), and faster feature velocity than legacy on-premise solutions. This approach frees operators to focus on delivering customer value rather than engaging in infrastructure maintenance.
TelcoDR employs a strategic “Buy, Build, Teach” approach:
> Buy: Acquire solid but stagnant codebases and refactor them into true cloud-native microservices.
> Build: Invest aggressively in green-field SaaS development, with Totogi’s charging engine being the flagship example.
> Teach: Utilize thought-leadership channels (podcasts, newsletters, conference keynotes) to demystify cloud economics and share practical playbooks with the wider industry.
By combining capital with education, TelcoDR aims to catalyze a tipping point where cloud and AI become mainstream across telecom.
TelcoDR primarily serves the telecommunications industry, focusing on operators and service providers seeking to modernize their software infrastructure. Within this sector, TelcoDR specifically targets:
> Business Support Systems (BSS) and Convergent Charging systems
> AI-driven BSS interoperability
> Data and AI workloads
These areas are chosen because they benefit disproportionately from the scale, agility, and innovation offered by hyperscale public cloud infrastructure.
Totogi is a subsidiary incubated by TelcoDR in 2021 to address a critical market gap: a truly cloud-native, hyperscale charging and monetization platform. Built from scratch on AWS, Totogi separates policy from pricing, enabling unlimited plan experimentation, instant scaling, and full AI integration. Danielle Rios serves as its acting CEO. TelcoDR provides Totogi with capital, cloud expertise, and go-to-market support, while Totogi serves as the tangible product proof that DR’s cloud-first, AI-first ethos delivers measurable business value.
When Ericsson withdrew from Mobile World Congress (MWC) 2021 due to COVID, TelcoDR seized the opportunity to take over the vendor’s massive 6,000-square-meter hall and re-branded it “Cloud City.” This pavilion became the event’s unofficial hub for public-cloud innovation, hosting live demos from AWS, Google, Totogi, and dozens of cloud-native start-ups. The initiative not only significantly raised TelcoDR’s profile overnight but also signaled a crucial cultural shift: the public cloud had arrived at the heart of telecom’s premier trade show.
A notable example is Zain Sudan, which replaced its decade-old charging system with Totogi in an astonishing 18 days—a stark contrast to the industry norm of 9-12 months. This rapid deployment was achieved by leveraging AWS CDK to spin up the environment in hours, migrating data via parallel ETL pipelines, and configuring new offers through Totogi’s API-driven catalog. This project vividly validates TelcoDR’s claim that cloud-native SaaS can compress time-to-revenue by an order of magnitude.
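The parallel-pipeline idea behind that 18-day cut-over can be sketched in a few lines. This is a generic illustration, not Totogi’s migration code: the table names, row counts, and transform are invented, and a real pipeline would read from the legacy database and write to the target platform.

```python
# Toy sketch of a parallel ETL migration, assuming each source table
# can be extracted, transformed, and loaded independently.
from concurrent.futures import ThreadPoolExecutor

def etl_table(table):
    # Extract: a real pipeline would read from the legacy database here.
    rows = [{"table": table, "id": i} for i in range(3)]
    # Transform: normalize each record into the target schema.
    transformed = [{**r, "migrated": True} for r in rows]
    # Load: a real pipeline would write to the new platform here.
    return table, len(transformed)

# Illustrative table list; a real BSS migration has many more.
tables = ["subscribers", "balances", "usage_history", "offers"]

# Run one pipeline per table concurrently instead of serially.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(etl_table, tables))

print(results)  # {'subscribers': 3, 'balances': 3, 'usage_history': 3, 'offers': 3}
```

Running the pipelines concurrently, rather than table by table, is what compresses the wall-clock time of the data migration.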
AI is treated as a first-class citizen, not a bolt-on feature. The company embeds machine-learning pipelines into its products for anomaly detection, dynamic pricing, and predictive maintenance. TelcoDR also champions the use of generative-AI agents to automate customer support, marketing campaigns, and even network design. Its philosophy: the highest ROI in AI comes when models are delivered as simple SaaS APIs running on hyperscale GPUs—making advanced capabilities accessible to every operator, regardless of size.
DR wears two executive hats. She is the full-time CEO of TelcoDR, guiding investment strategy and portfolio governance. Simultaneously, she serves as acting CEO of Totogi, the group’s flagship SaaS business that offers an AI-powered, cloud-native monetization platform covering charging, rating, and real-time marketing for 4G, 5G, and IoT. The dual role ensures tight alignment between TelcoDR’s investment thesis and product execution.
Launched in 2020, “Telco in 20” distills complex industry shifts into twenty-minute episodes aimed at busy executives. DR interviews CTOs from Vodafone, Telstra, and Bell, product chiefs from AWS or Google Cloud, and start-up founders disrupting OSS/BSS. Episodes tackle cloud migration war stories, AI governance, network-as-code strategies, and monetization of 5G APIs—making the show a practical guide for anyone plotting a telecom transformation roadmap.
TelcoDR contends that private clouds replicate all the cost and complexity of legacy data centers without delivering hyperscaler innovation cycles. DR argues that “fake cloud” solutions—single-tenant software merely re-hosted—lock operators into high opex and slow upgrades. The firm therefore refuses to back vendors that won’t commit to true multi-tenant SaaS running on public infrastructure from AWS.
In 2024 TelcoDR acquired CloudSense, a CPQ specialist with a strong Tier-1 customer base but legacy architecture. The goal is to refactor CloudSense into a composable microservice that complements Totogi’s charging and offers operators an end-to-end, cloud-native quote-to-cash flow—furthering TelcoDR’s vision of modular, interoperable BSS.
Locked-in, monolithic stacks stifle innovation and bloat integration budgets. TelcoDR mandates support for TM Forum Open APIs and GraphQL endpoints across its portfolio, making it easy for operators to “plug and play” best-of-breed modules. This approach mirrors successful cloud ecosystems—think AWS Lambda or Stripe—and positions telcos to prototype new services in weeks rather than years.
Public Cloud for Telcos
Telcos are moving to the public cloud to unlock significant cost savings, enhance performance, and speed up service delivery compared to traditional on-premises setups. The public cloud enables lower total cost of ownership (TCO) and gives operators the agility to respond quickly to changing market demands, driving both financial and operational benefits.
Public cloud platforms offer automated scaling, self-service provisioning, and managed services that streamline operations. Automation and flexible service management allow telcos to deploy new capabilities quickly and reduce manual intervention, leading to lower ongoing operational costs.
Cloud-native solutions, built on microservices, enable better resource utilization and lower maintenance costs. They eliminate the need for over-provisioning hardware and allow telcos to scale resources precisely as needed, reducing both capital and operating expenses.
Public cloud shifts spending from capital expenditures (capex) to operational expenditures (opex). Telcos no longer need to purchase or maintain physical hardware, instead paying only for the resources they use, which improves financial flexibility and predictability.
Hyperscaler platforms offer global availability zones and elastic compute resources, allowing telcos to expand capacity instantly to meet demand. This eliminates the physical constraints and long lead times associated with expanding on-premise data centers.
Cloud-native architectures and automation enable telcos to provision and launch new services in near real-time, significantly shortening product development cycles and speeding up innovation.
Yes. Virtualized environments and “infrastructure as code” allow telcos to experiment, test, and roll out new services quickly and safely, reducing risk and accelerating learning.
Cloud platforms integrate with edge computing, enabling ultra-low-latency and real-time responsiveness for applications like 5G, IoT, and immersive media. This is critical for supporting next-generation telecom services.
O2 Telefónica Germany, DISH Network, AT&T, and e& (UAE) are among the major operators adopting the public cloud for core network workloads, demonstrating its viability for even the most demanding telco environments.
Key concerns include protecting sensitive subscriber and transactional data, ensuring compliance with regulations, and managing risks in multi-tenant environments. Telcos must also address potential vulnerabilities from misconfiguration.
Security is a shared responsibility: hyperscalers secure the infrastructure, while telcos are responsible for configuring their environments correctly, managing access, and protecting data within their cloud workloads.
Cloud providers offer region-specific data centers, robust encryption, data masking, and compliance certifications to help telcos meet local regulatory and data sovereignty requirements.
Hyperscalers provide built-in encryption (at rest and in transit), advanced identity and access management, monitoring, and compliance reporting. These tools help telcos establish strong security postures.
By adopting standardized security patterns, leveraging automated configuration management tools, and following best practices for access controls and monitoring, telcos can reduce the risk of vulnerabilities arising from misconfiguration.
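As a minimal illustration of such an automated check, the sketch below scans firewall-style rules for sensitive ports exposed to the whole internet. The rule format and port list are assumptions for the example, not any specific cloud provider’s API.

```python
# Minimal sketch of an automated misconfiguration check, assuming
# security-group rules are available as plain dicts (field names are
# illustrative, not a real provider's schema).
def find_open_ingress(rules):
    """Flag rules that expose sensitive ports to the whole internet."""
    sensitive_ports = {22, 3389, 3306}  # SSH, RDP, MySQL
    return [
        r for r in rules
        if r["cidr"] == "0.0.0.0/0" and r["port"] in sensitive_ports
    ]

rules = [
    {"port": 443, "cidr": "0.0.0.0/0"},    # public HTTPS: fine
    {"port": 22, "cidr": "0.0.0.0/0"},     # SSH open to the world: flag
    {"port": 3306, "cidr": "10.0.0.0/8"},  # DB restricted to VPC: fine
]
violations = find_open_ingress(rules)
print(violations)  # [{'port': 22, 'cidr': '0.0.0.0/0'}]
```

In practice such checks run continuously in CI/CD or policy-as-code tooling, so misconfigurations are caught before deployment rather than after an incident.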
Business Support Systems (BSS), 5G core functions, and data analytics workloads are top candidates, thanks to their scalability requirements and the need for rapid innovation.
Migrating BSS/OSS to the cloud enables flexible customer management, scalable offerings, and faster service innovation, while also benefiting from the agility and scalability of hyperscaler infrastructure.
Yes. Leading telcos are already migrating or building 5G core networks on public cloud platforms, leveraging the cloud’s agility, cost efficiency, and scalability to meet 5G demands.
It provides on-demand compute and storage, advanced analytics tools, and AI/ML services that accelerate data processing and insights, helping telcos unlock value from their vast data assets.
Hyperscalers offer unparalleled infrastructure scale, advanced cloud services, and compliance tools. Partnerships help telcos accelerate cloud transformation, reduce costs, and access cutting-edge capabilities.
Their extensive network of regions and availability zones enables telcos to deploy services closer to users, reduce latency, and comply with local residency laws, thus improving performance and regulatory alignment.
By leveraging hyperscalers’ R&D investments and rapid product cycles, telcos can access the latest technologies and launch innovative services faster, staying ahead of market trends.
Yes. Hyperscalers’ economies of scale, managed services, and continuous infrastructure investment lead to lower operational and capital expenses for telcos compared to maintaining private infrastructure.
Lift-and-shift moves legacy applications to the cloud with minimal changes, missing many cloud benefits. Cloud-native means redesigning apps to fully exploit microservices, containers, and managed services, delivering agility and scalability.
Cloud-native architectures allow telcos to maximize the benefits of public cloud—rapid innovation, elastic scaling, and operational efficiency—beyond what lift-and-shift migrations can offer.
Multi-cloud provides resilience by avoiding vendor lock-in, allows optimization of workloads across different providers, and strengthens telcos’ bargaining power with hyperscalers.
A multi-cloud strategy also has real disadvantages for telcos: managing and securing multiple cloud environments adds complexity and operational cost, data traveling between providers can hurt performance and latency, and integrating legacy systems becomes harder. It also expands the security risk surface and requires specialized expertise to maintain availability and compliance, making overall operations more difficult and potentially less efficient.
Yes. Multi-cloud introduces operational complexity, requiring sophisticated orchestration, monitoring, and security tools to manage diverse environments effectively, but the benefits in resilience and flexibility often outweigh the challenges.
Totogi’s decision to choose AWS as its sole cloud partner is a strategic move that enables the company to leverage the full potential of the cloud and deliver innovative, cloud-native telecom solutions. By building its products on AWS from the ground up, Totogi can take advantage of AWS’s technical innovation, agility, scalability, security, and reliability, aligning with its mission to transform telecom operations by moving them to the public cloud.
By building natively on AWS, Totogi achieves exceptional scalability, efficiency, and cost savings. AWS managed services like Amazon DynamoDB enable dynamic capacity adjustments and autoscaling, ensuring high availability and optimal resource use. This means Totogi can support thousands of customer workloads efficiently, minimize operational complexity, and deliver reliable, high-performance SaaS to telcos of all sizes.
Building a dedicated, cloud-native SaaS on AWS enables Totogi to offer enhanced security, reliability, and sustainability. The platform’s multi-tenant architecture leverages AWS autoscaling and load balancing, reducing total cost of ownership and carbon footprint. This approach supports rapid innovation, faster upgrades, and seamless integration with modern telecom networks, allowing telcos to stay competitive in a dynamic market.
With AWS’s advanced AI and machine learning services, Totogi unlocks powerful network insights that enable telecom operators to create personalized offers and automate customer engagement. This drives subscriber acquisition, reduces churn, and increases ARPU. The scalability and flexibility of AWS infrastructure empower Totogi to deliver innovative features quickly, helping telcos adapt to evolving customer needs and market opportunities.
AI in telecom & OSS/BSS modernization
Becoming AI-first is about lifting intelligence “above all your systems” so algorithms can coordinate pricing, network use and customer journeys end-to-end. Instead of swapping every legacy box, you inject an enterprise brain that sees the whole business, turns silo data into real-time insight and lets you launch new offers in minutes. The result: lower OPEX, higher ARPU and freedom from multi-year integration cycles.
Why are AI agents a game-changer for telcos?
In Episode 112, Microsoft’s Kevin Shatzkamer explains that agents can “self-heal, self-learn and self-improve,” handling tasks that once needed whole departments. Think care bots resolving complex tickets, network agents rerouting traffic before an outage or billing agents negotiating personalized offers. Because agents reason and act autonomously, they remake processes instead of merely speeding them up.
How fast is AI adoption compared with public-cloud projects?
AI is an OpEx, not CapEx, conversation: you don’t touch radios or the user plane, so boards green-light pilots quickly. That’s why Shatzkamer predicts AI will be “the fastest-adopted technology the telecom industry has ever seen,” eclipsing years of cloud debates.
BSS Magic transforms telecom operations by eliminating integration complexity and enabling seamless system interoperability. Unlike vendors that apply AI to isolated processes, BSS Magic abstracts business logic into an AI-driven layer, allowing telcos to modernize without costly rip-and-replace projects.
Episode 112 notes that legacy workflows follow rigid, pre-set paths; agentic flows let autonomous agents decide, adapt and improve. Rather than bolt AI onto human hand-offs, telcos redesign the process so agents take the primary role, with humans supervising exceptions and strategy.
Shatzkamer cites Telkomsel, Vodafone and Korea Telecom, plus “96 % of Microsoft’s Tier-1 telco customers,” that have agentic AI running in customer and employee experience today—proof that the tech is moving from hype to live value.
Episode 112 describes network agents that anticipate faults and reroute traffic automatically, delivering the self-healing operations telcos have chased for decades without endless rule-writing.
Appledore’s John Abraham warns (Ep 109) that disconnected BSS/OSS stacks block intelligent automation. Totogi solves it with a telco ontology that maps every system to a common model, so new AI agents work across vendors from day one.
Episode 115’s Mark Sanders calls ontology “100 % what we’re trying to do”—replacing brittle, hard-coded rules with an AI that truly understands telecom concepts. That shared language lets agents reason across charging, CRM and network data without custom glue code.
DR’s blog “The AI-first telco won’t wait 5 years for Amdocs” shows AI-powered mapping can slash a timeline from years to months, cut integration spend 60 % and free the 80 % of IT budget stuck in change requests. Waiting half a decade is now strategic suicide.
BSS Magic’s code generation cuts CR costs tenfold and delivers results ten times faster, letting business teams update screens, rules or flows with a sentence instead of SOWs and sprints.
DR’s guide to spotting fake agents warns more than 80 % of AI demos at MWC25 are glorified scripts. True agents gather data, reason, act and learn; vendors should happily let you type unscripted prompts without fear.
Microsoft says a telco data fabric is essential so many agents can share context, avoid conflicting actions and observe outcomes—turning isolated bots into an enterprise nervous system.
TelcoDR’s AWS re:Invent pick notes Contact Center Intelligence now transcribes calls, tags sentiment and auto-summarizes interactions. That removes the 3-7 minutes agents once spent on wrap-up, boosting productivity instantly.
The demo shows BSS Magic wires marketing, ordering, charging and support systems automatically, eliminating multi-vendor integration projects that previously cost millions.
DR’s MVNO Summit keynote proves an MVNO can talk to an avatar and generate a full BSS mid-presentation—turning what used to be a multi-month onboarding into an eight-minute conversation.
The “10 examples” blog highlights MobileX using AI to analyze usage and create individualized plans, while Totogi’s PlanAI clusters subscribers to lift ARPU by 10 % with micro-offers created by machine learning.
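The clustering idea behind such micro-offers can be illustrated with a toy one-dimensional k-means over monthly data usage. This is a generic sketch, not PlanAI’s actual algorithm, and the usage figures are invented.

```python
# Toy usage-based subscriber clustering for micro-offers (generic
# 1-D k-means, not PlanAI's real implementation).
def kmeans_1d(values, k=2, iters=20):
    centers = [min(values), max(values)]  # simple init for k=2
    clusters = []
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            # Assign each subscriber to the nearest cluster center.
            idx = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[idx].append(v)
        # Recompute centers; keep the old center if a cluster empties.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Monthly data usage in GB for a handful of subscribers (made up).
usage = [0.5, 1.2, 0.8, 20.0, 25.0, 22.5]
centers, clusters = kmeans_1d(usage)
# Light users might get a top-up micro-offer; heavy users an upsell.
print(sorted(len(c) for c in clusters))  # [3, 3]
```

A production system would cluster on many features (usage, recharge cadence, device, location) and attach a tailored offer to each segment.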
Episode 110 notes hyperscalers’ CapEx is “going nuclear” and data-center power already tops 1-2 % of global electricity, so telcos must plan an energy strategy as AI workloads double every few months.
DR warns many operators still lack “software chops” to run large-scale AI, making up-skilling and hyperscale partnerships critical.
DR shows marketers typing “create a holiday bundle…” and BSS Magic updates catalogs, charging rules and eligibility across every stack automatically—compressing weeks of cross-system work into minutes.
The “10 examples” blog cites Three UK using Azure Operator Insights and Ooredoo running Ericsson RAN digital twins to optimize capacity, proving AI delivers measurable network gains beyond chatbots.
DR’s “AI doesn’t suck; you suck at AI” blog argues results hinge on prompt quality and domain context: treat the model as a brilliant intern—give clear goals, constraints and examples—and it delivers multi-million-dollar insights.
Shatzkamer says the era of “why” is over; start with small pilots, embrace experimentation and scale successes fast. Waiting for perfect clarity only delays value while competitors learn.
AWS CTO Ishwar Parulkar notes operators already on cloud adopt gen-AI faster, while cloud providers supply managed tooling, sovereignty controls and edge continuum options that lower barriers for laggards.
Its LLM ingests every table from the legacy stack, runs language-model similarity on names and values, then auto-maps “BAN” → BillingAccount, “SUBS_ID” → Subscriber, etc., to a standard telco ontology. A second pass maps that ontology to the target system, generating ready-to-run ETL code—no Excel mapping marathons.
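The name-matching step can be sketched with standard-library string similarity standing in for the LLM. The ontology field list is illustrative, and the small alias table substitutes for the semantic knowledge a language model would supply.

```python
# Hedged sketch of the name-similarity mapping step: difflib stands in
# for the LLM, and the ontology entities below are illustrative.
from difflib import SequenceMatcher

ONTOLOGY_FIELDS = ["BillingAccount", "Subscriber", "ProductOffer", "UsageEvent"]

# Tiny alias table standing in for the LLM's semantic knowledge of
# common telco abbreviations.
EXPANSIONS = {"BAN": "billing account number", "SUBS_ID": "subscriber id"}

def best_match(legacy_field):
    """Map a legacy column name to the closest ontology entity."""
    name = EXPANSIONS.get(legacy_field, legacy_field).lower()
    return max(
        ONTOLOGY_FIELDS,
        key=lambda f: SequenceMatcher(None, name, f.lower()).ratio(),
    )

print(best_match("BAN"))      # BillingAccount
print(best_match("SUBS_ID"))  # Subscriber
```

A real mapper would also compare column values (formats, cardinality, sample data), not just names, before committing a mapping.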
It stores the meaning behind data, not just labels. Whether one app says “subscriber,” another “customer,” and a third “account,” the ontology knows they all represent the same business entity, letting AI merge, validate and enrich records across vendors without brittle, rule-based middleware.
BSS Magic first builds a live digital twin of every subscriber, product and workflow. Teams test new schemas or pricing rules inside the twin, watch downstream effects, then push to production—eliminating big-bang rollbacks.
During DR’s Gen-AI-Summit demo, one prompt linked marketing, ordering, charging and home-grown apps in 30 seconds—work that normally needs a six-month integration project.
Yes. After mapping, BSS Magic autogenerates ETL that deduplicates, normalizes formats and quarantines outliers before loading the target BSS, removing the separate “data-cleansing phase.”
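A minimal version of that cleanup stage might look like the following. The field names, normalization rule, and outlier threshold are assumptions for illustration, not BSS Magic’s actual logic.

```python
# Illustrative post-mapping cleanup: deduplicate, normalize formats,
# and quarantine outliers before loading the target BSS.
def clean(records, max_balance=10_000):
    seen, loadable, quarantine = set(), [], []
    for r in records:
        key = r["subscriber_id"]
        if key in seen:
            continue                      # deduplicate on subscriber_id
        seen.add(key)
        r["msisdn"] = r["msisdn"].replace("-", "")  # normalize format
        if r["balance"] > max_balance:
            quarantine.append(r)          # outlier: hold for human review
        else:
            loadable.append(r)
    return loadable, quarantine

records = [
    {"subscriber_id": 1, "msisdn": "249-912-000111", "balance": 35.0},
    {"subscriber_id": 1, "msisdn": "249-912-000111", "balance": 35.0},  # dupe
    {"subscriber_id": 2, "msisdn": "249-912-000222", "balance": 1e9},   # outlier
]
loadable, quarantine = clean(records)
print(len(loadable), len(quarantine))  # 1 1
```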
DR calls it “silo AI”: the model can’t see beyond a single app, so it breaks at the first cross-system hand-off. The ontology layer lets agents reason across the whole estate, mirroring how employees actually work.
Telstra’s Chief Architect says true autonomy needs telco knowledge “codified in an ontology.” Once that exists, LLM agents plan and act across charging, OSS and network domains instead of automating yesterday’s silo workflows.
Because every downstream system now speaks ontology, a single update ripples through catalog, charging and eligibility layers—cutting CR budgets by up to 75 % and turning months of paperwork into minutes of prompting.
How are subscriber-to-product relationships preserved?
Because the ontology encodes business semantics, the migration keeps the chain subscriber → devices → usage → revenue intact—no orphaned balances, no lost tickets—while moving data between stacks.
Legacy apps become just another data source. Agents sync deltas through the ontology, keeping both worlds consistent until you’re ready to flick the final cut-over switch—so no risky “all-in-one-weekend” events.
During ingest, the LLM labels unique fields, refactors them into optional ontology facets and carries them forward—so Huawei- or Ericsson-only attributes migrate intact without custom code each time.
An Asian telco asked Totogi to cut a 5-minute SD-WAN order flow. Two weeks later BSS Magic delivered a 2-minute flow—over 50 % faster—and triggered a queue of new automation requests.
Agentic AI describes software “agents” that plan, act, and learn across the stack—auto-configuring, healing, and optimizing networks without tickets or scripts. Kevin Shatzkamer told Telco in 20 that agents let business teams change code “without an army of developers,” turning OSS/BSS into a living system that improves itself.
In her newsletter “The promise of AI is no UI,” Danielle argues that conversational agents will replace today’s dense OSS/BSS screens. Staff will simply describe desired outcomes; AI handles the workflows, slashing training and swivel-chair effort while exposing rich capabilities through natural language.
Yes. In “Unlock the true value of your BSS with AI,” DR shows operators saving 30-50 % OPEX: fewer ticket escalations, automated charge verification, and predictive scaling on spot instances. One MVNO cut care calls by 40 % within six months of launching AI-driven support and charging.
Jason Hogg told Telco in 20 that AIOps starts by “getting data cleaned and into the right databases.” Microsoft helps telcos lake all OSS/BSS logs, then uses AI to deduplicate entities, auto-tag PII, and surface lineage—building a single source that analytics and agents can trust.
DR’s blog “If you focus on business value, AI pays for itself” walks through a payback model: $10 M OPEX saved on care + $8 M retained revenue from fraud detection vs. $6 M cloud and license spend—18-month breakeven, 4× NPV over five years.
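The arithmetic can be reproduced in a few lines. The annual figures come from the blog as summarized above; the upfront cost and discount rate are assumptions added here so that breakeven and NPV are computable.

```python
# Toy version of the payback model. Annual figures are from the blog;
# the upfront cost and discount rate are assumed for illustration.
annual_benefit = 10e6 + 8e6   # care savings + retained fraud revenue
annual_cost = 6e6             # cloud and license spend
upfront = 18e6                # assumed one-time build/migration cost

net_monthly = (annual_benefit - annual_cost) / 12
breakeven_months = upfront / net_monthly
print(breakeven_months)  # 18.0

# Five-year NPV at an assumed 10% discount rate.
rate = 0.10
npv = -upfront + sum(
    (annual_benefit - annual_cost) / (1 + rate) ** y for y in range(1, 6)
)
print(round(npv / 1e6, 1))  # 27.5 ($M)
```

Swapping in an operator’s own cost and benefit estimates turns this into a quick first-pass business case.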
“Your single-model AI strategy is costing you millions” shows how different models excel at fraud, CX, or forecasting. Mixing them raises accuracy 5-15 % and lets telcos shift workloads to cheaper GPUs. DR calls it “best-brain routing,” analogous to carrier least-cost routing.
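A toy dispatcher makes the “best-brain routing” idea concrete. The model names and task table are invented for illustration; a production router would also weigh cost, latency, and measured accuracy per workload.

```python
# Sketch of "best-brain routing": send each task type to the model
# class that handles it best (names are illustrative, not real models).
MODEL_TABLE = {
    "fraud": "tabular-gbm",         # e.g. gradient-boosted trees
    "cx": "dialogue-llm",           # conversational model for care
    "forecast": "timeseries-model", # demand/traffic forecasting
}

def route(task_type, default="general-llm"):
    """Pick a model for a task, falling back to a general model."""
    return MODEL_TABLE.get(task_type, default)

print(route("fraud"))    # tabular-gbm
print(route("unknown"))  # general-llm
```

The analogy to least-cost routing holds: the routing table is cheap to change, so models can be swapped as prices and benchmarks shift.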
Telco monetization strategies: 5G, IoT and Network APIs
Danielle explains that 5G’s real money lies in enterprise use cases—quality-on-demand, network slicing, and ultra-reliable low-latency links for factories, mining and healthcare. Instead of charging consumers a premium, telcos should treat 5G as a cloud-like platform and sell differentiated SLAs to businesses willing to pay for throughput, jitter and latency guarantees.
McKinsey’s Ferry Grijpink tells DR that 70 % of the $700 billion 5G upside sits with industrial automation, smart infrastructure and mission-critical IoT, not consumers. Telcos that master vertical-specific solutions will recoup spectrum and CapEx far faster than those chasing ARPU bumps.
Open Gateway defines common, CAMARA-aligned APIs—location, quality-on-demand, SIM-swap, number-verify—so developers can program networks the way they call Stripe or Twilio today. By “selling clicks” on these APIs, operators turn dormant capabilities into metered revenue and share them across 30+ federated carriers.
Mats Granryd stresses two success factors: (1) a vibrant developer community with docs, SDKs and sandboxes, and (2) commercial models that let an enterprise reach all subscribers, not just one carrier’s base. Federating APIs and incentivizing developers are as important as the tech itself.
Danielle argues the answer is owning the developer relationship. If telcos hand exposure to an aggregator, that provider can swap in competing APIs later. Platforms like Totogi’s Whoosh! let carriers host Twilio-compatible endpoints themselves—capturing margin while meeting developers where they are.
Twilio proved that a friction-free portal, tiny code snippet and transparent pricing beat raw connectivity every time. Telcos should copy Twilio’s self-serve model—usage billing, pay-as-you-go, great docs—then layer unique 5G and identity capabilities unavailable to OTT rivals.
Open Gateway’s QoD API lets apps request bandwidth, jitter and latency guarantees for a device or flow. Gaming studios, tele-surgery firms and live broadcasters can pay per-session for premium performance—turning the RAN scheduler into a real-time revenue engine.
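The shape of a per-session QoD request can be sketched as JSON. The field names below approximate the published CAMARA QualityOnDemand API but should be checked against the current spec; no network call is made here.

```python
# Illustrative quality-on-demand session request. Field names
# approximate the CAMARA QualityOnDemand shape; verify against the
# current spec before use.
import json

session_request = {
    "qosProfile": "QOS_E",  # requested performance profile
    "device": {"ipv4Address": {"publicAddress": "203.0.113.10"}},
    "applicationServer": {"ipv4Address": "198.51.100.7"},
    "duration": 3600,       # seconds of guaranteed performance
}

# A client would POST this JSON to the operator's QoD endpoint and be
# billed per session; here we only serialize it.
payload = json.dumps(session_request)
print(payload)
```

Because the request is metered (per session, per duration), the same payload doubles as the billing record for the premium-performance charge.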
Think of slicing-as-a-service as “AWS VPCs for radio.” Instead of one-size-fits-all APNs, an enterprise (or an MVNO on an MVNE) calls a slice API to spin up its own isolated, policy-controlled lane through the RAN and core—complete with bandwidth, latency and security parameters it defines. Working Group Two’s Erlend Prestgard tells DR that their multi-tenant, cloud-native core already delivers this commercially: every tenant gets a logically separate slice without waiting for 3GPP Release-17 features. Usage-based pricing—per slice-hour, device or gigabyte—turns what used to be static engineering into a real-time, cloud-like revenue stream.
By placing GPU and CPU at distributed cloud zones, telcos sell compute plus guaranteed backhaul—ideal for video analytics, industrial AI and real-time graphics. Packaging connectivity + edge together locks out OTT cloud rivals and boosts total contract value.
How does LPWA fit into massive-IoT monetization?
Low-power wide-area tech (NB-IoT, LTE-M) enables decade-long battery life for sensors. Operators succeed by offering global eSIM, one-cent data bursts and bulk-provisioning APIs that slash onboarding overhead for manufacturers.
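Bulk provisioning matters because a manufacturer may need to activate thousands of eSIMs at once. A sketch of how a batch for such an API might be assembled (the plan code and batch shape are hypothetical):

```python
def build_bulk_provision_batch(iccids, plan="nbiot-10kb-day", batch_size=500):
    """Split a list of eSIM ICCIDs into plan-tagged batches for a bulk-activation API."""
    batches = []
    for i in range(0, len(iccids), batch_size):
        batches.append({
            "plan": plan,                    # e.g. a one-cent daily data burst
            "iccids": iccids[i:i + batch_size],
        })
    return batches

# 1,200 sensors come off the line; onboarding is three API calls, not 1,200 forms.
batches = build_bulk_provision_batch([f"8944{n:015d}" for n in range(1200)])
print(len(batches), len(batches[0]["iccids"]))  # 3 500
```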
5G’s low-latency slices can guarantee control links and real-time video. Telcos can charge per-flight fees and sell positional-data APIs and edge compute for AI navigation—similar to an “air-traffic-control-as-a-service” model.
Exposing anonymized, aggregated network metrics (crowd density, mobility patterns) lets cities, retailers and advertisers buy insights without deploying sensors—generating high-margin data-as-a-service income.
Exposing spectrum-sharing controls lets stadiums, events or pop-up enterprises request extra capacity for hours or days, paying based on MHz-hour usage—similar to AWS Spot Instances but for radio.
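The Spot-Instance analogy suggests a simple clearing rule: grant MHz-hours to the highest bidders until capacity runs out. A toy sketch (this is an illustrative auction, not a real spectrum-sharing mechanism such as CBRS/SAS):

```python
def allocate_spectrum(bids, available_mhz_hours):
    """Greedy highest-bid-first allocation of MHz-hours.

    Each bid is (name, mhz_hours_wanted, price_per_mhz_hour);
    returns a list of (name, mhz_hours_granted, spend).
    """
    grants, remaining = [], available_mhz_hours
    for name, mhz_hours, price in sorted(bids, key=lambda b: -b[2]):
        grant = min(mhz_hours, remaining)
        if grant > 0:
            grants.append((name, grant, round(grant * price, 2)))
            remaining -= grant
    return grants

bids = [("stadium", 40, 1.20), ("popup-retail", 30, 0.80), ("festival", 50, 1.00)]
grants = allocate_spectrum(bids, 100)
print(grants)
```

The lowest bidder gets only the leftover capacity, which is exactly the Spot-market dynamic the paragraph describes.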
In her Jan 2024 newsletter, DR warns that tech alone isn’t enough: operators must own the developer relationship—self-serve portals, usage-based billing, free credits—otherwise hyperscalers or aggregators will “sit in the middle and eat your margin.” Platforms like Totogi’s Whoosh! let carriers expose Twilio-compatible APIs themselves, keeping both brand and gross profit.
Lester Thomas explains Vodafone’s “new rules”: every product team must publish TM Forum–compliant Open APIs and wrap services in machine-readable contracts. That lets partners consume slices, identity and billing as code, speeding channel deals and “opening whole new revenue pools” while cutting time-to-cash.
Mallik Rao argues that shifting the network onto AWS regions unlocks pay-per-feature commercial models—QoS minutes, slice hours, edge compute cycles—sold through the same cloud marketplace devs already use. Cloud native also means launching these offers “in weeks, not years.”
DR’s April 2025 newsletter shows Totogi restored service for 20 million subs in 18 days, then used usage-based Charging-as-a-Service to recover ≈$100 million in lost revenue and cut TCO by 76%. It proves that modern, API-exposed charging is both a crisis fix and a monetization engine.
DR’s Jan 2025 blog shows Plan AI clusters real usage data and spins out micro-offers—weekday data boosts, night-owl packs—automatically adjusting price points. Early pilots lifted ARPU by 10% in six weeks with zero marketing head-count.
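Plan AI's internals aren't public, but the underlying idea, clustering subscribers by when they use data and attaching an offer to each segment, can be illustrated with a stdlib-only sketch (segment thresholds and offer names are invented):

```python
def segment_subscriber(hourly_mb):
    """Tag a subscriber by where usage concentrates across a 24-hour profile (MB per hour)."""
    day = sum(hourly_mb[8:18])                          # working hours
    night = sum(hourly_mb[0:6]) + sum(hourly_mb[22:24]) # late night
    total = sum(hourly_mb) or 1
    if night / total > 0.5:
        return "night-owl-pack"       # e.g. discounted 22:00-06:00 data
    if day / total > 0.6:
        return "weekday-data-boost"   # e.g. extra daytime gigabytes
    return "standard"

night_heavy = [50] * 6 + [5] * 16 + [50] * 2
print(segment_subscriber(night_heavy))  # night-owl-pack
```

A real system would cluster rather than hard-code thresholds, then price-test each micro-offer, but the segmentation-to-offer pipeline is the same shape.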
TM Forum Open Digital Architecture (ODA), APIs, and BSS Interoperability
TM Forum’s Open Digital Architecture (ODA) is a modern reference framework that replaces traditional monolithic BSS/OSS systems with a modular, component-based architecture. ODA enables telcos to build systems from interoperable components connected via standardized Open APIs, making integration, upgrades, and innovation faster and more affordable. This modularity is crucial for telcos aiming to become more agile, reduce vendor lock-in, and accelerate digital transformation in an era defined by cloud and AI advancements.
TM Forum Open APIs provide standardized interfaces between BSS and OSS components, allowing telcos to “mix and match” best-of-breed solutions without being locked into a single vendor’s ecosystem. This accelerates integration, shortens innovation cycles, and enables telcos to swap components with minimal disruption. Telstra, for example, has leveraged these APIs to simplify integration and reuse, making technology adoption more seamless and enabling a truly composable architecture.
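TM Forum Open APIs share a predictable REST URL pattern, which is what makes them "plug-and-play". A small sketch that composes resource URLs (the API codes shown are well known, but path and version details should be checked against the TM Forum catalog):

```python
# Well-known TM Forum Open API codes; verify exact paths/versions against the TM Forum catalog.
TMF_APIS = {
    "productCatalogManagement": "tmf-api/productCatalogManagement/v4",  # TMF620
    "productOrdering": "tmf-api/productOrderingManagement/v4",          # TMF622
    "customerManagement": "tmf-api/customerManagement/v4",              # TMF629
}

def tmf_resource_url(base, api, resource, resource_id=None):
    """Compose a TM Forum Open API resource URL in the standard pattern."""
    url = f"{base}/{TMF_APIS[api]}/{resource}"
    return f"{url}/{resource_id}" if resource_id else url

url = tmf_resource_url("https://api.operator.example", "customerManagement", "customer", "42")
print(url)
```

Because every compliant vendor exposes the same resource shapes at the same kind of path, swapping one vendor's customer-management component for another's leaves this client code untouched.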
Adopting ODA and Open APIs brings multiple benefits: it dramatically lowers integration costs, accelerates time-to-market for new services, reduces vendor lock-in, and increases architectural flexibility. Operators can build composable architectures, use best-of-breed solutions, and foster innovation internally and with partners. This approach also future-proofs the business by enabling easier adoption of new technologies like AI and cloud-native services.
Vendor lock-in limits a telco’s ability to innovate, negotiate pricing, and rapidly adapt to market changes. Legacy BSS vendors often resist supporting open interfaces to maintain their hold over customers. ODA, with its standardized APIs and modular design, shifts power back to operators, enabling them to replace or upgrade individual components without a massive overhaul or costly migration, thus breaking the cycle of dependency on legacy vendors.
Telstra has aligned its architecture strategy with TM Forum’s ODA, focusing on composable architecture, autonomous networks, and data/AI. The company actively integrates TM Forum Open APIs, simplifying technology integration, enabling rapid reuse, and collaborating closely with industry partners. This approach allows Telstra to respond faster to business needs and participate actively in industry-wide digital transformation.
One major challenge is the semantic differences in data models and structures embedded within legacy vendor systems, which can hinder true interoperability even when Open APIs are present. Vendors may only partially implement APIs, leading to integration friction and making it difficult to swap out components. Organizational resistance, procurement inertia, and the complexity of migrating from legacy stacks are also significant hurdles.
Legacy vendors like Amdocs have little incentive to open up their systems, as supporting Open APIs threatens their traditional business models by making it easier for customers to switch to competitors. While they may claim to support open standards, in practice, their adoption is often more about ticking boxes for RFPs rather than enabling true interoperability. This reluctance slows down industry-wide transformation.
Composable architecture, a core tenet of ODA, allows telcos to assemble their BSS/OSS environments from modular, interoperable building blocks. This dramatically improves agility, as new functionality can be added or replaced independently, reducing integration risk and cost. It also supports rapid innovation by enabling telcos to experiment with new vendors or technologies without overhauling their entire stack.
ODA provides a standardized, industry-wide blueprint that simplifies and accelerates the integration of new digital services, cloud-native applications, and AI capabilities. By moving away from siloed, monolithic systems, telcos can respond more quickly to market opportunities, launch new products faster, and scale more efficiently—essential traits in today’s fast-moving telecom sector.
ODA fundamentally changes telco procurement, making compliance with open standards and APIs a central criterion. This reduces dependence on proprietary vendor solutions and opens the door to more innovative, cost-effective, and best-of-breed products. Procurement teams must evolve, focusing less on traditional vendor relationships and more on interoperability and flexibility.
Yes, by mandating open interfaces and modular architecture, ODA makes it possible for telcos to swap out vendors or components with minimal disruption, reducing the risk of being trapped by proprietary systems. This opens up competition, drives innovation, and helps telcos avoid being overcharged for legacy technology.
Open APIs provide plug-and-play integration points, allowing telcos to rapidly adopt new services, partners, and technologies. The reduction in integration complexity and cost means that new products or channels can be launched much faster, with less risk and lower resource requirements, helping telcos stay competitive.
Successful ODA adoption requires a shift in both technology and organizational mindset. Telcos must develop new skills in composable architectures, data modeling, and API-driven integration, while procurement teams must prioritize interoperability and flexibility. Collaboration across IT, business, and procurement departments is essential to drive real change.
ODA encourages operators, vendors, and partners to collaborate on building and integrating new services via standardized APIs and components. This ecosystem approach lowers barriers to entry, promotes competition, and accelerates the pace of innovation, benefiting both telcos and their customers.
ODA is designed with cloud-native principles in mind. By defining modular, API-driven components, ODA supports the deployment of BSS/OSS functions in scalable, resilient, and cloud-optimized environments. This alignment allows telcos to fully leverage the benefits of public cloud, such as elasticity, automation, and global reach.
Open APIs make it possible for telcos to select the best solution for each functional area, integrating components from different vendors seamlessly. This “best-of-breed” approach contrasts with single-vendor stacks, allowing telcos to optimize their technology landscape for innovation, cost, and performance.
Without adopting ODA and Open APIs, telcos risk falling behind competitors who can innovate and scale faster. Legacy integration costs and vendor lock-in will continue to eat into margins, slow down transformation, and stifle the ability to launch new services or adapt to market changes.
Composable architectures, as promoted by ODA, allow telcos to isolate failures, upgrade components independently, and recover from issues more quickly. This modularity means a problem in one area doesn’t cascade across the entire stack, improving the overall resilience and reliability of telco operations.
BSS Magic doesn’t ask operators to rip out their incumbent stack; it surrounds it with an AI-generated integration layer. The platform ingests the incumbent system’s Swagger files, table layouts, or even PDF specifications, then applies a TM Forum–aligned ontology to create a canonical data model. Once that harmonized layer exists, BSS Magic auto-generates the ODA-compatible Open API façades as well as the glue code to orchestrate them. The result is an instant “ODA-ready” posture—without the multi-year replacement programs that legacy vendors usually propose. Operators gain the modularity promised by ODA while protecting sunk cost.
ODA assumes every component speaks the same semantic language, yet legacy BSSs capture subscribers, products, and usage in wildly different schemas. BSS Magic uses Gen-AI to map those disparate schemas onto a single telco ontology. Once the AI completes the mapping, the platform produces code that performs the transformations in real time, ensuring every downstream component—whether an ODA microservice or a legacy rating engine—shares a common meaning. Caroline Chappell called this “step one” because integration logic is worthless if the underlying data is still misaligned. With the ontology as the “Rosetta Stone,” operators can finally swap modules without rewriting interfaces each time.
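As a concrete illustration of ontology-based mapping (field names and canonical terms here are invented, not BSS Magic's actual model), translating two vendors' records into one canonical form looks like this:

```python
# Invented example: two legacy BSSs describe the same subscriber with different schemas.
ONTOLOGY_MAP = {
    "vendor_a": {"MSISDN": "party.phoneNumber", "CUST_NM": "party.name", "PLAN_CD": "product.code"},
    "vendor_b": {"msisdn_e164": "party.phoneNumber", "full_name": "party.name", "offer_id": "product.code"},
}

def to_canonical(vendor, record):
    """Translate a vendor-specific record into the shared canonical (ontology) form."""
    mapping = ONTOLOGY_MAP[vendor]
    return {mapping[k]: v for k, v in record.items() if k in mapping}

a = to_canonical("vendor_a", {"MSISDN": "+15551234567", "CUST_NM": "Ada", "PLAN_CD": "GOLD"})
b = to_canonical("vendor_b", {"msisdn_e164": "+15551234567", "full_name": "Ada", "offer_id": "GOLD"})
print(a == b)  # True: both vendors now share one semantic model
```

The hard part, which the Gen-AI step automates, is discovering the `ONTOLOGY_MAP` entries from schemas and documentation; once the map exists, the runtime transformation is mechanical.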
Traditional enterprise service buses and ETL suites rely on hand-crafted mappings, brittle XSLTs, and human middleware teams. BSS Magic replaces that manual effort with an LLM that “reads” both source and target definitions, identifies semantic overlap, and then creates the integration code automatically—complete with unit tests and version control hooks. Because the mappings live in an ODA-aligned ontology, they survive vendor upgrades or component swaps. Operators move from months of integration backlog to overnight code generation, slashing cost and accelerating ODA adoption.
A-to-B migrations normally drag on for 14–36 months because teams first reverse-engineer the source schema, then hand-map it to the target. BSS Magic shortcuts that by ingesting DDL dumps, XML payload samples, or API specs from the legacy system, instantly aligning them to its ontology. It does the same for the target stack. Once both ends are “anchored,” the platform auto-generates extract/transform/load pipelines as well as reconciliation scripts. Marc Breslow reports customers cutting a 14-month project to 4 months—meaning operators can escape lock-in, re-platform to cloud, and still hit the fiscal-year budget.
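The extract/transform/load plus reconciliation pattern can be sketched in a few lines (schemas are invented; a real pipeline adds batching, retries, and audit logging):

```python
def extract(legacy_rows):
    """Stand-in for reading from the legacy system (already a list of dicts here)."""
    return list(legacy_rows)

def transform(row):
    """Map a legacy row to the target schema; in practice this uses the generated ontology mapping."""
    return {"subscriberId": row["SUB_ID"], "status": row["STAT"].lower()}

def load(rows, target):
    target.extend(rows)
    return len(rows)

def reconcile(source_rows, target_rows):
    """Post-migration check: every source subscriber must appear in the target, and vice versa."""
    src = {r["SUB_ID"] for r in source_rows}
    dst = {r["subscriberId"] for r in target_rows}
    return src == dst

legacy = [{"SUB_ID": "1001", "STAT": "ACTIVE"}, {"SUB_ID": "1002", "STAT": "SUSPENDED"}]
target: list = []
load([transform(r) for r in extract(legacy)], target)
print(reconcile(legacy, target))  # True
```

Auto-generating `transform` and `reconcile` from both anchored schemas is exactly the step that collapses months of hand-mapping.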
Yes. Many large vendors “check the box” by publishing a subset of TM Forum APIs or omitting optional fields. BSS Magic acts as a compliance shim: it calls the vendor’s variant, then translates, enriches, or virtualizes missing attributes so downstream microservices see a fully spec-compliant interaction. In effect, the platform normalizes non-standard APIs into standard ones, letting operators enjoy ODA’s plug-and-play promise even when a vendor drags its feet. Over time, the operator can retire that vendor with minimal re-coding because external consumers already speak the canonical form.
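A compliance shim is, at heart, an adapter: call the vendor's partial API, then fill or rename fields until the response matches the spec. A toy sketch with invented field names (real TMF payloads are far richer):

```python
# Fields the spec requires, with defaults used to virtualize anything the vendor omits (invented).
SPEC_REQUIRED = {
    "id": None,
    "status": "unknown",
    "validFor": {"startDateTime": None, "endDateTime": None},
}

def normalize(vendor_response):
    """Lift a partial vendor payload to a fully spec-shaped one, keeping vendor-supplied values."""
    out = {field: vendor_response.get(field, default) for field, default in SPEC_REQUIRED.items()}
    if out["status"] == "unknown" and "state" in vendor_response:
        out["status"] = vendor_response["state"]   # vendor used a non-standard field name
    return out

partial = {"id": "ord-7", "state": "completed"}
normalized = normalize(partial)
print(normalized)
```

Downstream consumers code against the spec shape only, so the day the vendor is replaced, nothing outside the shim changes.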
Beyond static code generation, BSS Magic deploys task-specific AI agents—e.g., “Schema Detective,” “API Synthesizer,” and “Test Harness Builder.” Each agent invokes specialized models, executes a bounded workflow, then hands results to the next agent in the chain. The orchestration is itself policy-driven, allowing continuous learning as more integrations are processed. That agentic loop means every migration or interface becomes easier, steadily enriching the ontology and shrinking future effort—hallmarks of an AI-first, ODA-aligned BSS strategy.
Each Open API spec generated by BSS Magic is stored in Git alongside auto-generated stubs, mocks, and contract tests. Whenever a spec changes—because a vendor upgraded, or an operator added a new attribute—the platform’s CI/CD hooks trigger regression tests and schema-diff reports. If a change could break downstream microservices, the tool suggests backward-compatible transformations or flags a required refactor. This DevOps discipline turns what was once a chaotic, undocumented integration landscape into a governed API lifecycle—critical for scaling ODA across dozens of agile squads.
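At its simplest, a schema-diff report compares the field sets of two spec versions (real tooling diffs full OpenAPI documents, but the principle is the same):

```python
def schema_diff(old_fields, new_fields):
    """Report fields added and removed between two versions of an API schema."""
    old, new = set(old_fields), set(new_fields)
    return {"added": sorted(new - old), "removed": sorted(old - new)}

v1 = ["id", "status", "msisdn"]
v2 = ["id", "status", "msisdn", "simSwapDate"]
diff = schema_diff(v1, v2)
print(diff)  # {'added': ['simSwapDate'], 'removed': []}
# A non-empty "removed" list is the signal that downstream consumers may need a refactor.
```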
The ontology is seeded with TM Forum SID and Open API vocabularies, but operators can extend it. When proprietary or country-specific interfaces emerge, the LLM compares new concepts to the ontology, proposing matches or creating novel nodes. Those extensions are versioned, reusable, and can later be contributed back to the community. Thus, BSS Magic is “standard-first but not standard-only,” ensuring that real-world edge cases—roaming-settlement quirks, wholesale MVNE feeds, national ID fields—don’t derail an otherwise clean ODA rollout.
Change requests (CRs) often consume 50–70% of a telco’s IT budget, stalling innovation. By translating plain-English SOWs into executable code, BSS Magic collapses the CR cycle from weeks to hours. The AI parses the business request, identifies impacted APIs or data entities, generates the patch, and ships a pull request for human review. Operators report double-digit reductions in run-cost and triple-digit improvements in release cadence, realizing ODA’s agility goals without hiring an army of integration developers.
Every code artifact goes through a built-in quality pipeline: linting, static analysis, unit tests, contract tests, and performance benchmarks. The AI agents not only generate business logic but also auto-create the associated test suites. A human reviewer approves the merge, and BSS Magic tracks telemetry post-deploy to flag anomalies. Over time, the feedback loop tunes model prompts, elevating accuracy. This robust SDLC means operators can trust “AI-written” doesn’t mean “un-audited,” satisfying regulatory and corporate-governance demands in mission-critical BSS contexts.
ODA introduces a horizontal block whose sole purpose is to abstract integration complexity. BSS Magic embodies that block: its ontology is the semantic layer; its AI-generated adapters are the runtime interfaces; and its DevOps pipelines are the control plane. By centralizing those services, operators avoid scattering point-to-point mappings throughout the estate—maintaining a single source of truth for data, contracts, and transformations, exactly as the ODA white papers prescribe.
Enhancing Customer Experience & Compliance in telecom
With network quality and coverage becoming increasingly similar across providers, customers now switch carriers primarily for price or experience. As Danielle Royston discusses, customers expect personalized, proactive, and seamless interactions across all touchpoints. To deliver exceptional CX, telcos must leverage AI, data analytics, and digital self-service tools to anticipate needs, resolve issues, and create memorable interactions.
AI enables telcos to analyze vast amounts of subscriber data and generate hyper-personalized offers and interactions in real-time. Tools like Totogi’s Charging-as-a-Service use AI to tailor plans for each customer, improving satisfaction and reducing churn. AI-driven personalization also extends to proactive support, product recommendations, and seamless omnichannel engagement.
Digital self-service portals empower customers to manage their accounts, resolve issues, and explore new offerings without human intervention. Effective self-service relies on intuitive design, robust backend integration, and AI-powered assistance to ensure queries are resolved swiftly and accurately.
By leveraging advanced analytics on network, usage, and behavioral data, telcos can anticipate service issues, predict churn, and identify upsell opportunities. Predictive analytics enables proactive engagement, allowing operators to reach out with targeted solutions before problems arise or needs are explicitly stated.
Legacy BSS/OSS often hinder agility, slow down service launches, and complicate customer journeys with fragmented data and processes. Modernizing these systems—preferably with cloud-native, API-first solutions—enables unified customer views, faster service delivery, and more flexible personalization.
AI-powered virtual agents and automation reduce wait times, handle routine queries, and escalate complex issues to human agents with full context. This frees up staff for high-value interactions, enhances 24/7 support availability, and ensures consistent service quality—key drivers of customer satisfaction.
Telcos must establish clear AI ethics guidelines, regularly audit AI decisions for bias, and provide transparency about automated actions. They should also offer opt-outs or human alternatives for critical interactions. As discussed in the McKinsey episode, responsible AI use is a C-suite concern, blending compliance with innovation to build lasting customer trust.
AI models analyze usage patterns, transaction histories, and network behaviors to flag suspicious activity in real-time. When deployed thoughtfully, these safeguards operate in the background, alerting customers transparently and minimizing disruption.
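One classic background safeguard flags usage that deviates sharply from a subscriber's own baseline. A stdlib-only z-score sketch (the threshold and data are illustrative; production systems use much richer models than a single statistic):

```python
import statistics

def is_anomalous(baseline_mb, today_mb, threshold=3.0):
    """Flag today's usage if it sits more than `threshold` std-devs above the subscriber's baseline."""
    mean = statistics.mean(baseline_mb)
    stdev = statistics.pstdev(baseline_mb) or 1.0   # avoid divide-by-zero on flat baselines
    return (today_mb - mean) / stdev > threshold

baseline = [100, 110, 95, 105, 98, 102]  # a subscriber's typical daily MB
print(is_anomalous(baseline, 2000))  # True: possible fraud or compromised SIM
print(is_anomalous(baseline, 110))   # False: within normal variation
```

Because the model scores against each subscriber's own history, alerts stay rare and explainable, which is what keeps the safeguard "in the background" from the customer's perspective.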
Cloud platforms provide scalability, rapid deployment, and easy integration with new digital channels and AI tools. This enables telcos to launch innovative services quickly, personalize offers dynamically, and ensure high availability—all crucial for modern, competitive customer experiences.
Automated compliance tools monitor policy changes, update workflows, and enforce controls at scale. This reduces manual errors, ensures up-to-date practices, and enables telcos to adapt quickly to new requirements—critical for both regulatory reporting and maintaining customer trust.
By optimizing AI models, leveraging cloud efficiency, and digitizing customer interactions, telcos reduce their carbon footprint without sacrificing experience. Sustainability and superior CX are compatible when operators rethink process design, invest in green infrastructure, and educate customers on digital options.
AI-powered charging systems provide immediate feedback on usage and spending, preventing bill shock and enabling dynamic offers. Customers appreciate transparency and control, and telcos can use this data to recommend personalized upgrades or adjust plans on the fly.