
How AI-powered devices will drive the shift to uplink-heavy networks

  • AI wearables and autonomous machines continuously upload sensed environment data, straining uplink coverage, latency, and reliability. Networks must rebalance to deliver consistent UL performance.
  • That uplink shift potentially unlocks new revenues through assured-uplink slicing and network APIs. Service providers that upgrade early can capture this growth.

Vice President, Head of Advanced Technology US


Here's a number that should make every network architect lose sleep: 1:8.
That's the download-to-upload ratio generated by modern smart glasses alone. For every byte these devices pull from the network, they push eight bytes up, driven by continuous streams of camera, audio, and sensor data flowing to cloud AI for real-time processing.

For decades, telecom networks were engineered around a simple assumption: users consume content. They stream, they scroll, and they download. The infrastructure that followed was downlink-heavy, optimized for pushing video and apps to passive endpoints. Most networks cater for 10:1 downlink-to-uplink traffic volumes. It worked.

Until now.

AI-enabled wearables, autonomous devices, augmented reality (AR), and other interactive applications will likely require more uplink capacity and better uplink coverage, gradually shifting the 10:1 downlink-to-uplink ratio toward less downlink-heavy network traffic.

And while this shift will not happen overnight, it is a fundamental trend the telecom industry should pay attention to.

Section 1: Consumer devices driving the uplink surge

Global smart glasses adoption is accelerating rapidly. While annual shipment forecasts vary widely, market consensus indicates strong underlying momentum, with the overall market projected to grow from USD 1.93 billion to USD 8.26 billion by 2030 at a 27 percent CAGR.

Based on projections averaged over the available market forecasts most aligned with observed 2025 sales, the cumulative global installed base is estimated to grow from approximately five million devices in 2025 to around 180 million by 2030, implying an average growth rate of over 100 percent per year.
At this scale, smart glasses transition from a niche accessory into a mass-market device category with material implications for mobile network traffic patterns.

Unlike smartphones, which are used intermittently and mostly consume content over the downlink, smart glasses stream camera and microphone feeds for AI processing continuously or on demand. Consider these common example interactions:

  • Visual AI queries ("What am I looking at?"): The glasses capture and upload a series of high-resolution images (~1 Mbps uplink) or video snippets (~1-5 Mbps uplink) and receive a short audio response (~0.5 Mbps downlink), yielding a 1:8 download-to-upload ratio and requiring a sub-200 ms round trip for conversational responsiveness. Note that while today's audio inference models have fairly short response times, video-model inference is still on the order of 1 s.
  • Live translation (for example, between English and Spanish speakers): While fully on-device processing is emerging, most commercial implementations today are cloud-based or hybrid in order to access more languages and higher-complexity models. The data rates are no different from non-translated audio conversations: a continuous ~0.3 Mbps audio stream in the uplink and a translated ~0.3 Mbps audio stream in the downlink (roughly symmetric 1:1 downlink:uplink). The difference from traditional human audio conversations is that the uplink requires bounded latency on the order of 150 ms (aligned with ITU-T Recommendation G.114), because audio large language models (LLMs) are not trained on imperfect channels: micro-outages would disrupt the translation.
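As a quick sanity check, the ratios quoted in the two interactions above can be reproduced directly from the per-stream bitrates in the text. This is a minimal sketch; the bitrate inputs are the illustrative averages stated above, not measurements.

```python
# Back-of-envelope check of the downlink:uplink ratios quoted above,
# using the per-interaction bitrates from the text (assumed averages).

def dl_ul_ratio(downlink_mbps: float, uplink_mbps: float) -> float:
    """Return the downlink-to-uplink ratio as a single number."""
    return downlink_mbps / uplink_mbps

# Visual AI query: ~4 Mbps average uplink video snippet vs ~0.5 Mbps audio reply.
visual_query = dl_ul_ratio(downlink_mbps=0.5, uplink_mbps=4.0)

# Live translation: roughly symmetric ~0.3 Mbps audio in each direction.
translation = dl_ul_ratio(downlink_mbps=0.3, uplink_mbps=0.3)

print(f"Visual AI query DL:UL = 1:{1 / visual_query:.0f}")  # 1:8
print(f"Live translation DL:UL = {translation:.0f}:1")      # 1:1
```

The point of the arithmetic is simply that a mid-range uplink video snippet against a short audio reply lands almost exactly on the 1:8 figure in the headline.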

The network impact is thus fundamental: real-time visual understanding through large cloud AI models or live translation requires robust uplink capacity and coverage, low and bounded latency, and reliable per-application quality of service (QoS).

AR/AI headsets: The cloud rendering challenge

In the realm of extended reality (XR), traditional virtual reality (VR) is downlink-heavy as it is dominated by VR-application downloads and/or video delivery to the headset.

Cloud-based AR with real-time scene understanding, however, flips this equation.

While current 3GPP XR KPIs largely assume a downlink-dominated cloud-rendering model, future AR systems will increasingly depend on continuous upstream transmission of environmental and spatial sensor data, shifting the bottleneck from media delivery to bidirectional perception-driven traffic.

Silicon Valley’s billion-dollar bets on the third core device

Silicon Valley is dreaming. Silicon Valley is planning. Silicon Valley is executing.

The plan: Hundreds of millions of screenless, pocket-sized AI companions by the end of the decade, likely even earlier. Built-in cameras and environmental sensors. Continuous context learning through persistent data upload. Always-on highly “intelligent” AI attached.

These devices represent what some call the "third core device", alongside laptops and phones. They have, however, a fundamentally different network behavior. Unlike intermittent smartphone usage, AI companions operate continuously. They generate sustained uplink traffic as raw sensory data flows to cloud AI, with processed responses returning to the device.

Personal agents are replacing conventional phone-only applications by seamlessly integrating across devices such as phones, laptops, glasses, cars, and home systems.

Driven by these emerging consumer applications, networks face a growing number of brief, latency-bounded exchanges; and, as potential value-adds, a growing need to expose additional network capabilities, such as positioning, sensing, trust, and security, as data sources for the AI systems themselves.

To consumers, the network is therefore no longer just a delivery pipe but rather becoming part of an intelligent fabric.

Section 2: Enterprise applications demanding massive uplink capacity

Across industries, AI-enabled systems follow a common pattern:

  1. Data is generated by sensors, machines, cameras, or vehicles.
  2. AI interprets that data, and increasingly at the network edge.
  3. Actions are executed over reliable, low-latency connectivity.

This loop is fundamentally uplink-centric, even more so than in consumer applications.

Autonomous vehicles: The ultimate uplink use case

Autonomous taxis, also referred to as robotaxis, delivered millions of rides monthly in 2025. The autonomous vehicle (AV) market is projected to grow from USD 68.09 billion (2024) to USD 214.32 billion (2030) at a 19.9 percent CAGR, with 58 million AV unit sales forecast by decade's end.

The use cases for autonomous vehicles are manifold and fairly heterogeneous but can all be supported by one 5G network:

  • Training data collection: 1-4 TB per hour is generated but typically uploaded in batch via Wi-Fi or fiber when parked, that is, not transmitted over cellular during operation. There are no real-time requirements; and, from the data generated during driving, less than 30 percent is transferred for eventual training.
  • Autonomous driving: All safety-critical functions are processed locally. Telemetry generates an intermittent 1-10 Mbps uplink with no notable latency dependency, but it is absolutely business-critical and occurs nearly 100 percent of operational time.
  • Remote assistance: Live video streaming to human operators for edge-case intervention. This requires several Mbps in uplink, <100 ms video and control latencies, and extremely high reliability since it is business-critical; it is estimated to occur less than one percent of operational time.
  • V2X safety messaging and ‘vehicle-as-a-sensor’ (future use cases): While not deployed yet, we can also envisage a future where the vehicles interact with the nearby infrastructure or collect spatial data at scale. That will put additional requirements on the networks of the future, in uplink and downlink.
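To get a feel for what the list above implies per cell, one can weight each mode's uplink rate by its share of operational time. The sketch below does exactly that; the duty cycles and rates are assumptions taken as midpoints of the figures in the text, and the 20-vehicle cell is a hypothetical example, not a deployment figure.

```python
# Hedged estimate of expected per-vehicle cellular uplink, combining the
# telemetry and remote-assistance figures from the list above. All inputs
# are assumed midpoints from the text, not measurements.

def expected_uplink_mbps(states: list[tuple[float, float]]) -> float:
    """Duty-cycle-weighted average uplink rate.

    states: list of (fraction_of_operational_time, uplink_mbps) pairs.
    """
    return sum(frac * rate for frac, rate in states)

per_vehicle = expected_uplink_mbps([
    (0.99, 5.0),   # telemetry: 1-10 Mbps, nearly 100% of operating time
    (0.01, 8.0),   # remote assistance: several Mbps, <1% of operating time
])

# A hypothetical cell serving 20 robotaxis would then need sustained uplink of:
print(f"Per vehicle: ~{per_vehicle:.1f} Mbps; "
      f"20-vehicle cell: ~{20 * per_vehicle:.0f} Mbps")
```

Even with remote assistance active only rarely, the near-continuous telemetry alone puts a fleet-serving cell well into sustained double-digit-Mbps uplink territory.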

Humanoid robots and drones: The next industrial revolution

The humanoid robot market is forecast to grow sixfold to USD 38 billion by 2035, with more than 250,000 industrial humanoid robots expected by 2030. The near-term market is expected to grow from USD 1.55 billion (2024) to USD 4.04 billion (2030) at a 17.5 percent CAGR.

Requirements related to remote assistance, spatial intelligence, and other compute-offloading tasks typically amount to several Mbps in the uplink, with downlink control commands typically below 1 Mbps. End-to-end latency requirements are <100 ms for video and <20 ms for control.

Similarly, the commercial drone market is growing rapidly, from USD 30 billion (2024) to USD 54.6 billion by 2030 at a 10.6 percent CAGR. Networking requirements are as follows: autonomous flight needs about 100 kbps for command-and-control in the downlink, while live HD video streaming, if enabled during flight, yields several Mbps per camera in the uplink. A major U.S. retailer has completed more than 150,000 drone deliveries since 2021, now covering more than 100 store locations across five states.

Each autonomous unit is an uplink-intensive endpoint! Indeed, as AI systems aim to understand the physical world, the network itself becomes valuable data: positioning, timing, and radio-based sensing allow AI systems to infer rich environmental context, including spatial geometry, object motion, situational dynamics, and more.

In this paradigm, the network evolves from a communication substrate into a distributed sensing platform, augmenting onboard perception and enabling coordinated intelligence across autonomous vehicles, robots, and other autonomous agents.

These capabilities elevate the network from transport layer to information provider, further integrating it into AI workflows.

Section 3: What these use cases demand from networks

Some emerging AI applications invert the core assumption: uplink, not downlink, becomes the new bottleneck. Today's mobile broadband forgives latency because content is pre-buffered; streaming video and downloads tolerate delay without impact. Real-time and interactive AI applications impose fundamentally different requirements on the network.

Latency is no longer forgiving

Interactive real-time AI-driven applications increasingly require:

  • low end-to-end experienced response times
  • predictable low and bounded latencies rather than best-effort averages
  • local processing via cloud-edge to avoid long-haul round trips
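The three requirements above can be tied together with a simple budget calculation: for the sub-200 ms conversational target mentioned earlier, whatever the radio, transport, and codec legs consume is taken away from the AI model itself. The component values below are illustrative assumptions, not measurements.

```python
# Hedged sketch of an end-to-end latency budget for a conversational
# sub-200 ms target. Component values are illustrative assumptions.

BUDGET_MS = 200  # conversational round-trip target from the text

def remaining_for_inference(radio_rtt_ms: float, transport_rtt_ms: float,
                            encode_decode_ms: float) -> float:
    """Milliseconds left over for the AI model itself."""
    return BUDGET_MS - radio_rtt_ms - transport_rtt_ms - encode_decode_ms

# Distant cloud: ~20 ms radio + ~60 ms long-haul transport + ~30 ms codec.
far = remaining_for_inference(20, 60, 30)    # 90 ms left for inference

# Edge cloud: ~20 ms radio + ~5 ms nearby transport + ~30 ms codec.
near = remaining_for_inference(20, 5, 30)    # 145 ms left for inference

print(f"Inference budget: distant cloud {far:.0f} ms vs edge {near:.0f} ms")
```

Under these assumptions, moving processing to the cloud edge roughly doubles the time available for inference, which is why the third bullet matters as much as raw link speed.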

Reliability becomes mandatory

When AI-enabled systems interact with the physical world, network performance directly affects safety and reliability – even when low-level control loops remain on-board. Autonomous devices increasingly depend on the network for perception offloading, coordination with other agents, remote supervision, and more. These use cases therefore demand:

  • consistent QoS
  • observability at QoS-flow level
  • SLA-grade guarantees rather than statistical performance

Reliability expectations move from “good enough” to “consistent performance”!

Uplink capacity must be rebalanced

Traditional mobile networks are designed to cater for 90 percent downlink capacity; however, the uplink is inherently coverage-limited because smartphones transmit at only 200 mW versus 80-400 W for base stations. As discussed above, many emerging AI applications generate sustained or dominant uplink traffic, making uplink performance a primary network design constraint. The article Enhancing 5G uplink performance to enable differentiated services goes into more detail.
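The power asymmetry above is easier to appreciate in decibels. The conversion below uses only the transmit powers stated in the text and deliberately ignores antenna gains and other link-budget terms, so it is a rough illustration rather than a full link-budget calculation.

```python
import math

# Express the transmit-power asymmetry from the text in dB:
# a 200 mW handset versus an 80-400 W base station.

def dbm(power_watts: float) -> float:
    """Convert watts to dBm (decibels relative to 1 mW)."""
    return 10 * math.log10(power_watts * 1000)

ue = dbm(0.2)       # handset: ~23 dBm
bs_low = dbm(80)    # base station, low end: ~49 dBm
bs_high = dbm(400)  # base station, high end: ~56 dBm

print(f"UE: {ue:.0f} dBm, base station: {bs_low:.0f}-{bs_high:.0f} dBm")
print(f"Uplink transmit-power deficit: {bs_low - ue:.0f}-{bs_high - ue:.0f} dB")
```

A 26-33 dB transmit-power deficit is why the uplink, not the downlink, defines the coverage edge, and why the feature, spectrum, and antenna levers listed below target the uplink specifically.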

  • Radio Access Network (RAN) software features: Interference rejection, uplink carrier aggregation, decoupled uplink/downlink, uplink power-boosting features, as well as many AI-in-RAN L1/L2 features further improve achievable throughput and consistency.
  • Spectrum strategy: Low-band and mid-band frequency divided duplex (FDD) are critical for uplink coverage and capacity. Time division duplex (TDD) mid-band improves UL peak capacity near sites but remains limited for cell-edge and deep-indoor uplink due to DL-heavy slot ratios and propagation constraints.
  • Radio/antenna evolution: Enhancements such as uplink multiple input, multiple output (MIMO), advanced receivers, as well as evolution from 4T4R to 4T8R or 32T32R can significantly improve the uplink link budget, with operators reporting up to 50–100 percent cell-edge improvement and multi-fold total uplink capacity gains.
  • Site densification: Reducing distance to the base station remains essential where uplink coverage gaps persist, particularly for indoor users. Furthermore, with approximately 80 percent of traffic consumed indoors, outdoor-to-indoor uplink performance increasingly defines user experience, making uplink coverage the dominant system bottleneck.

Section 4: Why this matters and why infrastructure must transform

From content delivery to intelligence networks

Networks optimized for streaming video to consumers serve AI-native devices, which generate continuous sensor data for cloud processing, only sub-optimally. The paradigm shift runs deeper than capacity upgrades.

Content delivery networks assure a dominant downlink flow. In addition to today's real-time audio and video calls, intelligence networks must support bidirectional AI data exchanges in which raw inputs flow up, processed insights flow down, and everything happens at latencies that feel instantaneous.

This is one of the most significant architectural challenges of the mobile broadband era.

The portfolio transformation: Software first, sites second

Uplink optimization is increasingly feature-defined: features such as uplink coordinated multi-point (CoMP), carrier aggregation, MIMO enhancements, and dynamic waveform switching unlock capacity from existing spectrum. Where software hits limits, targeted site improvements (advanced FDD radios, UL-boosting antennas, undeployed bands) compound the gains. Low-latency handover technologies like L1/L2 triggered mobility keep uplink-intensive sessions stable during movement.

The orchestration transformation: AI-enabled automation across control loops

Achieving adaptive network behavior requires coordination across multiple control layers, each operating at distinct timescales.

Real-time radio resource management, such as scheduling, power control, and beamforming, executes at sub-10 ms granularity and remains embedded in the RAN. The service management and orchestration layer, by contrast, hosts rApps that optimize network performance on timescales of one second or longer.

For AI workloads that are uplink-heavy and exhibit bursty patterns, rApps add value by correlating signals the RAN cannot see on its own: application-layer demand forecasts, cross-domain performance data, and contextual enrichment from core and transport, among others. This visibility enables closed-loop adjustments in the form of cell individual offsets for load balancing, energy-feature scheduling that preserves uplink headroom, and more.

The result is a management layer that anticipates demand shifts rather than merely reacting to congestion, complementing the embedded RAN intelligence that handles per-TTI (Transmission Time Interval) resource decisions.
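The control loop described above can be sketched in a few lines: a non-real-time rApp watches per-cell uplink utilization and nudges cell individual offsets (CIO) so that uplink-heavy traffic drifts toward less-loaded neighbors. All class names, thresholds, and methods here are illustrative assumptions, not a real SMO or rApp API.

```python
from dataclasses import dataclass

# Minimal sketch of a hypothetical rApp closed loop operating on the
# one-second-or-longer timescale described above. Names and thresholds
# are illustrative, not a real interface.

@dataclass
class CellState:
    cell_id: str
    ul_utilization: float  # 0.0-1.0, averaged over the rApp timescale
    cio_db: float          # current cell individual offset toward neighbor

UL_HIGH = 0.8   # congestion trigger threshold (assumption)
STEP_DB = 1.0   # offset adjustment per loop iteration (assumption)
CIO_MAX = 6.0   # keep offsets within a sane range (assumption)

def rebalance(cell: CellState, neighbor: CellState) -> None:
    """One iteration of the non-real-time control loop."""
    if cell.ul_utilization > UL_HIGH and neighbor.ul_utilization < UL_HIGH:
        # Make the neighbor look more attractive in handover decisions.
        cell.cio_db = min(cell.cio_db + STEP_DB, CIO_MAX)
    elif cell.ul_utilization < UL_HIGH:
        # Relax the bias once congestion clears.
        cell.cio_db = max(cell.cio_db - STEP_DB, 0.0)

a = CellState("cell-A", ul_utilization=0.9, cio_db=0.0)
b = CellState("cell-B", ul_utilization=0.4, cio_db=0.0)
rebalance(a, b)
print(a.cio_db)  # 1.0 after one iteration
```

The key design point is the separation of timescales: the rApp only biases handover decisions slowly and reversibly, while per-TTI scheduling stays embedded in the RAN.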

The monetization transformation: Slicing and APIs

Ericsson believes the mobile industry needs to align on a select few performance levels: industry-aligned specifications of the performance that applications can expect from networks, described in terms of throughput and latency, which can be realized through slicing.

Network slicing enables service providers to offer performance levels targeting UL-intensive applications, each with its own SLA and pricing. For instance, AV fleets can be assured telemetry upload. For GenAI-native services specifically, the ability to guarantee consistent uplink throughput and bounded latency during inference bursts becomes a premium capability that users may be willing to pay for.

Programmable APIs take this further by enabling AI applications to interact dynamically with the network. APIs can, for example, be exposed directly to AI agents via the model context protocol (MCP), allowing LLM-based systems to orchestrate network functions as part of their reasoning loop.
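The agent-facing pattern described above can be sketched as a tool that an LLM-based system invokes before an uplink-heavy burst. The endpoint shape, payload fields, and tool description below are hypothetical illustrations, not a real operator API or the actual MCP wire format.

```python
import json

# Hedged sketch: a hypothetical QoS-on-demand request an AI agent could
# submit, exposed to the agent as a "tool". Field names are illustrative.

def request_uplink_boost(device_id: str, uplink_mbps: float,
                         max_latency_ms: int, duration_s: int) -> dict:
    """Build a hypothetical assured-uplink request for a device
    ahead of a sensor-upload or inference burst."""
    return {
        "device": device_id,
        "qos": {
            "guaranteed_uplink_mbps": uplink_mbps,
            "max_latency_ms": max_latency_ms,
        },
        "duration_s": duration_s,
    }

# What an MCP-style tool registration could look like: a name and a
# description the agent reasons over when deciding to call the network.
tool_spec = {
    "name": "request_uplink_boost",
    "description": "Reserve assured uplink throughput and bounded latency "
                   "for a device before an uplink-heavy burst.",
}

payload = request_uplink_boost("glasses-001", uplink_mbps=5.0,
                               max_latency_ms=150, duration_s=60)
print(json.dumps(payload, indent=2))
```

The design choice worth noting is that the network capability is described declaratively (throughput, latency, duration), which is what lets an agent fold it into its reasoning loop rather than hard-coding network calls.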

As a result, three potential monetization opportunities emerge:

  • episodic premiums for reliability during high-demand moments (stadiums and live events)
  • continuous differentiated tiers with assured performance levels
  • API-driven revenue that scales with developer adoption as AI agents become autonomous network clients.

Our call-for-action: Prepare for an uplink-first world!

The numbers tell the story. AI glasses are generating 1:8 download-to-upload ratios. Autonomous vehicles and, likely in the future, humanoid robots and drones will also require notable uplink capacity.

The traffic pattern that shaped network architecture for three decades, that is, downlink-dominant and consumption-oriented, is now changing towards requiring more uplink capacity and better uplink coverage. This represents the largest opportunity since mobile broadband!

And it belongs to operators that build for uplink-intensive, latency-critical, reliability-mandatory applications that will benefit from differentiated connectivity with appropriate performance levels – capabilities available today through 5G Standalone network features.

The action is clear:

  • Establish explicit uplink KPIs and baseline performance for AI glasses, autonomous systems, and other sensor-driven devices. Focus ought to be on cell edge, and where applicable, on indoor environments.
  • Audit and actively leverage already-available uplink-capable spectrum assets, with particular focus on maximizing the value of low- and mid-band FDD holdings through deployment of undeployed bands, wider carriers, and more advanced radios and antenna systems.
  • Accelerate adoption of uplink-optimizing network features, across both software and hardware roadmaps, including advanced FDD radios, uplink-enhancing antenna systems, and AI-driven RAN features that improve uplink coverage, capacity and reliability.
  • Introduce and monetize differentiated connectivity, monetize critical connectivity moments, and introduce new API offerings to capture, for example, the emerging AI agent market.

It is time to recognize that the next wave of revenue comes from intelligence that requires the uplink. Do not postpone network upgrades until it is too late.

Join us in our partner-engagement lab D-15 in Ericsson Silicon Valley to jointly invent the future!


The Ericsson Blog

