Internet Architecture Board


Measuring Network Quality for End-Users, 2021


An Internet Architecture Board virtual workshop

The Internet in 2021 is quite different from what it was 10 years ago. Today, it is a crucial part of everyone’s daily life. People use the Internet for their social life, for their daily jobs, for routine shopping, and for keeping up with major events. An increasing number of people can access a Gigabit connection, which would have been hard to imagine a decade ago. And, thanks to improvements in security, people trust the Internet both for planning their finances and for everyday payments.

At the same time, some aspects of the end-user experience have not improved as much. Many users have typical connection latencies that remain at decade-old levels. Despite significant reliability improvements in data-center environments, end users still often see interruptions in service. Despite algorithmic advances in the field of control theory, the queuing delay in last-mile equipment often exceeds the accumulated transit delay. Transport improvements, such as QUIC, Multipath TCP, and TCP Fast Open, are still not fully supported in some networks. Likewise, various advances in the security and privacy of user data, such as encrypted DNS to the local resolver, are not widely supported.
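The claim that last-mile queuing delay can dwarf transit delay is easy to make concrete with a toy calculation. The function name and the sample values below are illustrative assumptions, not measurements from the workshop:

```python
import statistics

def queuing_delay_ms(idle_rtts_ms, loaded_rtts_ms):
    """Estimate last-mile queuing delay ("bufferbloat") as the rise in
    median round-trip time when the access link is saturated.
    Medians are used so a few outlier samples do not skew the estimate."""
    return statistics.median(loaded_rtts_ms) - statistics.median(idle_rtts_ms)

# A path with a 15 ms idle RTT that balloons to 250 ms under load is
# spending roughly 235 ms sitting in last-mile queues -- an order of
# magnitude more than its transit delay.
print(queuing_delay_ms([14, 15, 16], [240, 250, 260]))  # -> 235
```

Any real measurement would of course need to generate the load and collect the RTT samples; this sketch only shows the arithmetic behind the comparison.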

We believe that one of the major factors behind this lack of progress is the popular perception that throughput is often the sole measure of the quality of Internet connectivity. With such a narrow focus, people don’t consider questions such as:

  • What is the latency under typical working conditions?
  • How reliable is the connectivity across longer time periods?
  • Does the network allow the use of a broad range of protocols?
  • What services can be run by clients of the network?
  • What kind of IPv4, NAT or IPv6 connectivity is offered, and are there firewalls?
  • What security mechanisms are available for local services, such as DNS?
  • To what degree are the privacy, confidentiality, integrity and authenticity of user communications guarded?

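As an illustration of the first question, one way to express latency under working conditions is to invert the loaded round-trip time into "round trips per minute", a responsiveness score discussed in some of the position papers below (e.g. "Responsiveness under Working Conditions"). This is a minimal sketch assuming pre-collected RTT samples, not a prescribed methodology:

```python
import statistics

def responsiveness_rpm(loaded_rtt_seconds):
    """Convert RTT samples (in seconds) taken while the link is under
    load into round trips per minute. Higher is better; the median
    resists outlier samples."""
    if not loaded_rtt_seconds:
        raise ValueError("need at least one RTT sample")
    return 60.0 / statistics.median(loaded_rtt_seconds)

# An idle-like 20 ms RTT under load scores ~3000 RPM, while a
# bufferbloated 400 ms RTT scores only ~150 RPM.
print(round(responsiveness_rpm([0.02, 0.02, 0.02])))  # -> 3000
print(round(responsiveness_rpm([0.40, 0.40, 0.40])))  # -> 150
```

A single number like this is deliberately coarse; the workshop questions below ask how such metrics should be collected, interpreted, and communicated.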
Improving these aspects of network quality will likely depend on measurement, and on exposing metrics to all involved parties, including end users, in a meaningful way. Measuring and exposing the right metrics will allow service providers and network operators to focus on the aspects that most affect users’ experience, and at the same time empower users to choose the Internet service that will give them the best experience.

The IAB is holding this workshop to convene interested researchers, network operators, and Internet technologists to share their experiences and to collaborate on the steps needed to define properties and metrics with the goal of improving Internet access for all users.

The workshop will discuss the following questions:

  1. What are the fundamental properties of a network that contribute to good user experience?
  2. What metrics quantify these properties, and how can such metrics be collected in a practical way?
  3. What are the best practices for interpreting those metrics and incorporating them into a decision-making process?
  4. What are the best ways to communicate these properties to service providers and network operators?
  5. How can these metrics be displayed to users in a meaningful way?

We realize that the answers to these questions will vary depending on the participants’ different experiences. For example, a commercial video-streaming platform may prioritize higher throughput and rely on latency-hiding techniques, while a massively multiplayer online game may prioritize lower jitter and invest in techniques for graceful degradation of the user experience when network capacity is reduced. At the same time, academic researchers may be looking at properties and metrics that haven’t been adopted by industry at all. Likewise, participants may endorse different methodologies for interpreting the metrics and for making decisions. We are actively looking to identify such methodologies and to capture the respective best practices.

While this workshop does not focus on the solution space, we welcome submissions that dive into particular technologies to the extent that they help set the context for the discussion. Comparing the merits of specific solutions, however, is outside the workshop’s scope.

Interested participants are invited to submit position papers on the workshop questions. Paper size is not limited, but brevity is encouraged. Interested participants who have published relevant academic papers may submit these as a position paper, optionally with a short abstract. The workshop itself will be a virtual meeting over several sessions, with focused discussion based on the position paper topics received.


  • Submissions Due: Saturday 14th August 2021, midnight AOE (Anywhere On Earth) (extended from Monday 2nd August 2021)
  • Invitations Issued by: Monday 30th August 2021 (extended from Monday 16th August 2021)
  • Workshop Dates: This will be a virtual workshop, spread over three days:
    • 1400-1800 UTC, Tuesday 14th September 2021
    • 1400-1800 UTC, Wednesday 15th September 2021
    • 1400-1800 UTC, Thursday 16th September 2021

Workshop co-chairs: Wes Hardaker, Evgeny Khorov, Omer Shapira

The Program Committee members:

Jari Arkko, Olivier Bonaventure, Vint Cerf, Stuart Cheshire, Sam Crowford, Nick Feamster, Jim Gettys, Toke Hoiland-Jorgensen, Geoff Huston, Cullen Jennings, Katarzyna Kosek-Szott, Mirja Kuehlewind, Jason Livingood, Matt Mathis, Randall Meyer, Kathleen Nichols, Christoph Paasch, Tommy Pauly, Greg White, Keith Winstein.

Send Submissions to:

Position papers from academia, industry, the open source community, and others that focus on measurements, experiences, observations, and advice for the future are welcome. Papers that reflect experience based on deployed services are especially welcome. The organizers understand that specific actions taken by operators are unlikely to be discussed in detail, so papers discussing general categories of actions and issues, without naming specific technologies, products, or other players in the ecosystem, are expected. Papers should not focus on specific protocol solutions.

The workshop will be by invitation only. Those wishing to attend should submit a position paper to the address above; it may take the form of an Internet-Draft.

All inputs submitted and considered relevant will be published on the workshop website. The organizers will decide whom to invite based on the submissions received. Sessions will be organized according to content, and not every accepted submission or invited attendee will have an opportunity to present, as the intent is to foster discussion rather than simply a sequence of presentations.

Position papers from those not planning to attend the virtual sessions themselves are also encouraged. A workshop report will be published afterwards.


Day 1: Tuesday (Slides) (Video)

Introduction 1
14:00 Chairs’ Intro
14:10 Stuart Cheshire. The Internet is a Shared Network
14:17 Jana Iyengar. The Internet Exists In Its Use
14:24 Yaakov (J) Stein. The Futility of QoS
14:31 Discussion
15:00 Keynote by Vint Cerf
15:30 Pedro Casas. 10 Years of Internet-QoE Measurements. Video, Cloud, Conferencing, Web and Apps. What do we need from the Network Side?
15:37 Lucas Pardue, Sreeni Tellakula. Lower layer performance not indicative of upper layer success
15:42 Discussion
16:00 Break
Introduction 2
16:10 Ahmed Aldabbagh. Regulatory perspective on measuring network quality for end users
16:17 Michael Welzl. A Case for Long-Term Statistics
16:24 Joachim Fabini. Objective and subjective network quality
16:31 Discussion
Metrics 1
17:00 Matt Mathis. Preliminary Longitudinal Study of Internet Responsiveness
17:07 Brandon Schlinker. Internet’s performance from Facebook’s edge
17:14 Discussion
18:00 End of Day 1

Day 2: Wednesday (Slides) (Video)

Metrics 2
14:00 Chairs’ Intro
14:10 Jonathan Foulkes. Metrics helpful in assessing Internet Quality
14:17 Vijay Sivaraman, Sharat Madanapalli, Himal Kumar. Measuring Network Experience Meaningfully, Accurately, and Scalably
14:24 Dave Reed, Levi Perigo. Measuring ISP Performance in Broadband America: a Study of Latency Under Load
14:31 Discussion
Metrics 3
15:00 Kyle MacMillian, Nick Feamster. Beyond Speed Test: Measuring Latency Under Load Across Different Speed Tiers
15:07 Gregory Mirsky, Xiao Min, Gyan Mishra, Liuyan Han. Error Performance Measurement in Packet-Switched Networks
15:14 Gino Dion. Focusing on latency, not throughput, to provide better internet experience and network quality
15:21 Praveen Balasubramanian. Transport Layer Statistics for Network Quality
15:28 Discussion
16:00 Break
Cross-Layer 1
16:10 Jari Arkko, Mirja Kuehlewind. Observability is needed to improve network quality
16:17 Robin Marx, Joris Herbots. Merge Those Metrics: Towards Holistic (Protocol) Logging
16:24 Rajat Ghai. Measuring & Improving QoE on the Xfinity Wi-Fi Network
16:31 Discussion
Cross-Layer 2
17:00 Koen De Schepper, Olivier Tilmans, Gino Dion. Challenges and opportunities of hardware support for Low Queuing Latency without Packet Loss
17:07 Olivier Bonaventure, Francois Michel. Packet delivery time as a tie-breaker for assessing Wi-Fi access points
17:14 Ken Kerpez, Jinous Shafiei, John Cioffi, Pete Chow, Djamel Bousaber. State of Wi-Fi Reporting
17:21 Mikhail Liubogoshchev. Cross-layer Cooperation for Better Network Service
17:28 Discussion
18:00 End of Day 2

Day 3: Thursday (Slides) (Video)

Synthesis 1
14:00 Chairs’ Intro
14:10 Sandor Laki, Szilveszter Nadas, Balazs Varga, Luis M. Contreras. Incentive-Based Traffic Management and QoS Measurements
14:17 Satadal Segupta, Hyojoon Kim, Jennifer Rexford. Fine-Grained RTT Monitoring Inside the Network
14:24 Al Morton. Dream-Pipe or Pipe-Dream: What Do Users Want (and how can we assure it)?
14:31 Discussion
Synthesis 2
15:00 Kalevi Kilkki, Benjamin Finley. In Search of Lost QoS
15:07 Neil Davies, Peter Thompson. Measuring Network Impact on Application Outcomes using Quality Attenuation
15:14 Mingrui Zhang, Vidhi Goel, Lisong Xu. User-Perceived Latency to measure CCAs
15:21 Discussion
16:00 Break
Synthesis 3
16:10 Christoph Paasch, Randall Meyer, Stuart Cheshire, Omer Shapira. Responsiveness under Working Conditions
16:17 Bob Briscoe, Greg White, Vidhi Goel and Koen De Schepper. A single common metric to characterize varying packet delay
16:24 Christoph Paasch, Kristen McIntyre, Randall Meyer, Stuart Cheshire, Omer Shapira. An end-user approach to the Internet Score
16:31 Discussion
17:00 Final Remarks
18:00 End of Day 3