From YouTube: IETF112-MAPRG-20211109-1430
Description
MAPRG meeting session at IETF112
2021/11/09 1430
https://datatracker.ietf.org/meeting/112/proceedings/
A
All right, looks like we've got about 60 people signed in and it's about time to start, so I'm ready to go. Good morning, good afternoon, and good evening, everyone. This is the Measurement and Analysis for Protocols Research Group (MAPRG) meeting, held in concert with IETF 112.
A
The IRTF follows the IETF intellectual property rights policy. If you're going to talk about anything here for which your company or organization has applied for or been granted a patent, you're required to disclose that in a timely fashion; see the details in the RFCs and documents linked here, if that's appropriate for you.
A
The Note Well also covers privacy and the code of conduct. This meeting is being recorded and is public; the information you present here will be publicly available, and the privacy policies are linked there. We also expect you to abide by the code of conduct: be respectful to other people.
A
Harassment is not tolerated; if you find instances of this, there's contact information for the ombudsteam there. Please be courteous in your participation in our meeting.
B
Yeah, let me actually add a few words, because I think this is really important for our community. Pointing people at this code of conduct and the anti-harassment policy is a serious point, not just something we skip through. I think this is also very important for us in the IRTF because, especially in this group, we have a lot of newcomers, and so we should be very welcoming and open.
B
That doesn't mean you cannot criticize anything, but if you criticize something or provide feedback, provide it in a constructive way; be friendly and help people get to know the IETF. Consider this for this session, and I don't think we had any problems in the past, but also consider it for all the other sessions you're participating in and for all activities around the IETF and the IRTF.
B
This applies to the chat as well, not only when you speak up on the mailing list or in the session. And please do speak up if you see behavior that is not appropriate. You don't have to speak in public and, of course, it's always a subjective judgment, but it does help to approach those people and tell them that you personally had the feeling that something was not okay, so we can all improve and make this a better environment.
A
Thanks, Mirja. The goal of the IRTF is to support and present research work. It is not the standards organization that the IETF is, even though we are co-located and use some of the same procedures; there's more information about that at the link provided here. So, administrivia: the charter for MAPRG, which has been running for more than five years now, is available at the link there. Please subscribe to the mailing list.
A
On the mailing list you'll hear about upcoming MAPRG things, but also related topics: we regularly get advertisements for calls for participation or contributions in other workshops, symposia, and conferences about measurement of Internet things, and you'll see today that we sometimes host those in the MAPRG meeting as well.
A
Today's slides are at the link there. You can find them in the agenda; search for "maprg".
A
The Jabber room is, by the way, linked to the same chat you see in Meetecho. A small announcement: the IAB workshop on Analyzing IETF Data is coming up. Mirja is one of the organizers, and it runs from November 29th to December 2nd. The call and the selection process have already occurred, but we're letting you know it's there, for instance to give you an idea of work that will be presented there.
A
That work is peripheral and related, but not in MAPRG. For instance, Stephen McQuistin and his co-authors did a piece of work on characterizing the IETF through the lens of RFC deployment. You can imagine it has a lot to do with things you might be interested in, especially if you're working on protocols and standardization, but it's not about measurement of the protocol itself, so we're not hosting it here. On a similar note, SIGCOMM 2021 happened in August of this year.
A
There's
also
some
work
that
you
might
be
interested
in
that
again
wouldn't
be
in
map
rgb,
because
it's
not
about
you,
know
two
had
a
nice
piece
of
work.
There
called
insights
for
from
operating
on
an
ip
exchange
provider
that
is
about
running
a
global
network
of
fiber
optic
links
in
parallel
with
internet
use,
for
instance,
for
global
roaming
for
mobile.
B
Yeah, maybe just one addition: authors of those documents, feel free to use the MAPRG mailing list to announce this work and make people aware of it.
A
Okay, so the agenda for today. A number of these pieces of work are from IMC, and it's a subset that samples across the kinds of things that MAPRG usually does. First up, we'll have Giovane Moura talk about tsuNAME, a problem with the DNS system, and the work they carried out to adapt the protocol to not have this problem; an actual engineering effort there.
A
Next
up,
we'll
have
nicholas
kuhn,
bringing
some
new
work
to
us
that
and
some
early
work
about
vpn
performance
over
satellite
communications
and
then,
and
each
of
these
will
have
10
to
15
minutes
and
let's
go
of
time
for
questions
between
then
we'll
switch
to
iot
or
iot
ls
from
taha
paracha
and
and
that's
a
piece
of
work
from
imc,
obviously
about
how
the
protocols
are
actually
being
used
in
operation,
in
this
case
with
iot
and
then
we'll
round
out
the
meeting
with
kyle
mcmillan,
bringing
us
their
work
on
measuring
the
performance
network,
utilization
of
popular
video
conferencing
application
and
what
I
can
tell
you
I'm
happy
about
this
meeting
today,
because
we've
just
got
four
things
and
if
I
stop
talking
we'll
have
we
should
have
approximately
five
minutes
of
comments
for
each
of
them.
A
So let's get underway. Mirja, can you stop sharing that slide? I guess I should ask: are there any questions or comments at this point? I don't see anyone in the queue, so we can bring up the first presentation. Giovane.
D
All right, so thanks, Dave; thanks, Mirja. Good morning, evening, or afternoon, depending on where you are. I'm going to present work we did together with folks at InternetNZ and USC; it's a paper we presented last week at IMC, so if you want the full paper, here's the link.
D
We actually identified some issues with the current RFCs with regard to DNS, and we wrote a new draft; the link is also here. We're looking for feedback; I sent the draft to the DNSOP list last Monday, so if you are interested, please have a look and provide some feedback on the list.
D
So let's get started with tsuNAME. We all know that the DNS is one of the core services on the Internet, and you know that because when it breaks, people do notice. There was the famous Mirai botnet attack against Dyn, a major DNS provider that provided DNS services for Netflix and so forth, and when Dyn went partially down in the U.S., it made the front page of the New York Times, as you can see here.
D
A
picture
of
the
zones
in
the
west
were
like
partially
affected
on
this
picture
here.
So
it's
a
big
deal
and,
and
we
talk
about
dns,
there
are
two
major
types
of
servers.
There
is,
if
you
have
a
client
in
this
figure
here,
the
client
wants
to
go.
Let's
say
to
a
domain
like
wikipedia.org.
D
The
client
sends
a
dns
query
to
its
configured
with
this
local
resolver,
which
is
configured
with,
and
there
is
over
those
sort
of
the
heavy
lifting.
The
resolver
is
just
like
trying
to
find
where
this
information
is
it's
located
in
terms
of
dns
to
map
the
you
know
the
domain
name
to
ip
address,
and
this
information
is
fetched
automatically
from
these
authoritative
servers.
As
you
can
see
here,
and
once
this
information
is
fetched,
the
resolver
responds
to
the
client.
D
The
client
can
go
happily
to
wikipedia
and
surname
effects
specifically
traffic
to
authoritative
servers
is
the
resolvers
that
would
kind
of
overwhelm
authoritative
servers,
and
this
all
happened
when
we
were
working
together
in
a
paper
for
imc,
2019
artillery
and
there
is
2020
actually-
and
this
is
traffic
that
arrives
at
the
authoritative
servers
of
new
zealand
and
the
folks
there
were
getting.
D
You
know
an
average
of
750
million
daily
queries,
so
all
the
data
in
securities
that
come
to
them
and
but
one
day
to
the
other,
they
saw
this
50
traffic
surge
in
the
traffic
and,
if
you're
an
operator,
you
know
you
don't
like
such
surprises
and
that
got
them
concerned.
Obviously,
and
when
they
start
to
dig
into
the
data
they
saw.
The
two
domains
in
their
zone
basically
had
no
carries
before
very
little
daily.
D
Currys
and
out
of
the
suddenly
they
start
to
get
like
more
than
100
million
queries
each
and
they
wonder
if
that
was
a
deny
of
serf's
attack
and
if
that
were
why
it
was
actually
targeted.
Domain
names
were
not
popular
at
all.
That
does
not
make
any
sense,
but
it
turned
out
that
there
was
a
misconfiguration
in
this
particular
two
domains.
D
That
was
not
a
major
issue
for
them,
but
it
could
be
have
been
so,
as
I
said,
and
cycle
dependencies
is
an
error.
It's
a
configuration
error
and
one
domain
points
to
let's
say
category
points
to
nzc.dog,
then
ask.nz,
but
dog.z
would
point
back
to
ns.com,
and
then
we
create
a
look
and
1536
describe
it.
This
paper
papa's
2004
they
covered
at
the
more
details.
D
We
also
provide
a
tool
for
dns
authority
server
operators
to
get
the
rid
of
these
loops
in
their
zones,
because
many
times
you're
not
aware
that
it
exists
in
their
zones.
So
it's
if
it's.
If
we
provide
a
tool,
that's
open
source
too
cycle
hunter.
They
can
download
and
scan
their
zones
and
also
identify
what's
missing
in
current
rfcs,
and
we
help
like
them.
D
But the real threat in the case of tsuNAME is its weaponization. Two domain names in .nz that previously had basically no traffic caused a fifty percent surge in total traffic, and the threat is that an adversary could hold many domains, configure many, many of them with these loops, and then trigger queries to recursive resolvers from a botnet; that could potentially bring down DNS providers. So we got very concerned about that. In practice, the root causes of those loops are two-fold.
D
So that's one of the cases, but the other case is that the clients themselves would start looping: they would send queries non-stop, and that would cause resolvers to loop as well. The ultimate effect is on the targets here: the authoritative servers would see the surge in traffic.
D
The
solutions
will
cover
later,
and
you
know
people
know
that
loops
in
dns
are
no
problem,
but
it
isn't
that
solved
already,
but
they
turned
out
that
rfc
1034
was
like
the
the
first
dns
rfc,
it's
very
vague
about
loops.
They
say
it
resolves
just
about
the
amount
of
work
to
avoid
infinite
loops
and
but
not,
but
that
would
only
protect
resolvers
from
overwhelming
for
the
for
the
first
part
of
the
loop
here,
the
infinite
one.
D
It only forbids resolvers from looping indefinitely and overwhelming authoritative name servers; it does not protect against amplification from clients, where clients send queries non-stop to resolvers and the resolvers in turn loop for every query that arrives. RFC 1035 recommends setting counters, again to prevent resolvers from looping, but that likewise provides no protection if clients or forwarders are themselves looping.
D
So
the
solution
we
actually
provide
in
this
paper
is
that
this
is
not
really
covering
any
servers
at
the
moment
which
the
resolvers
should
actually
do.
Negative
caching
should
detect
those
cycles
and
put
in
their
cache,
so
every
new
client
query
can
be
like
responded
from
cache,
and
now
it
does
not
make
it
to
the
authoritative
servers
and
that
minimizes
the
both
causes
of
looping
here
and
that's
where
our
draft
comes
from
here
on
the
dns
hub,
and
we
also
in
the
paper
not
only
investigate
what
happened
to
new
zealand.
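The negative-caching behavior described here can be sketched roughly as follows. This is an illustrative Python sketch, not the draft's normative algorithm; the cache structure, the TTL value, and the stubbed resolution step are all assumptions made for the example.

```python
import time

class CyclicDependencyError(Exception):
    """Raised when resolution runs into a cyclic NS dependency."""

NEG_TTL = 3600  # assumed TTL for cached cycle results, not taken from the draft

class Resolver:
    def __init__(self, cyclic_zones):
        self.cyclic_zones = cyclic_zones  # stand-in for real iterative lookup
        self.cycle_cache = {}             # zone -> cache-entry expiry time
        self.auth_queries = 0             # queries sent toward authoritatives

    def _iterative_resolve(self, zone):
        self.auth_queries += 1            # every attempt costs authoritative traffic
        if zone in self.cyclic_zones:
            raise CyclicDependencyError(zone)
        return "192.0.2.1"                # placeholder answer

    def resolve(self, zone):
        expiry = self.cycle_cache.get(zone)
        if expiry and expiry > time.time():
            return "SERVFAIL"             # answered from the negative cache
        try:
            return self._iterative_resolve(zone)
        except CyclicDependencyError:
            # Negative-cache the detected cycle, so later client queries for
            # this zone are answered locally and never reach the authoritatives.
            self.cycle_cache[zone] = time.time() + NEG_TTL
            return "SERVFAIL"
```

The effect is visible in the counter: a hundred client queries for a cyclic zone cost a single round of authoritative queries per TTL, instead of one per client query, which removes the amplification that looping clients would otherwise cause.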
D
We also carried out a bunch of controlled experiments in the wild, using RIPE Atlas and other vantage points. In the most basic experiment, we configured 10,000 RIPE Atlas probes to send only one query per probe to their local resolvers. We really wanted to determine whether a single query per probe would be enough to start the looping; it's kind of a lower bound. One Atlas probe may be configured with multiple resolvers, but what we collected is the traffic arriving at the authoritative servers, and that is the figure you see here. On the top figure you see a spike; the y-axis is the number of queries that arrive. In the first spike there, after two o'clock, when the normal traffic arrives at the resolvers, we see more than 150,000 queries.
D
And
what
happened
if
you
look
at
the
graph
below
is
that
a
portion
of
the
resolvers
you
see
like
on
the
y-axis
here
and
on
the
graph
below
and
the
number
of
resolvers
doing
that
in
this
particular
experiment,
47-7
574
actually
looked
and
they
came
from
34
autonomous
systems,
including
google
and
cisco,
and
the
paper
have
way
more
complex
scenarios
and
it
gets
worse.
If
you
make
longer
cycles,
it
can
make
three
fold
cycles,
and
so
it
gets
worse
and
we
have
different
vantage
points.
So
it's
a
widespread.
D
So
that's
the
first
figure
into
this
newspaper,
but
the
second
thing
we're
interested
is
not
only
understand
the
phenomena
but
also
prevent
it.
So
we
wrote
this
tool
cycle
hunter,
which
you
can
download
here.
Here's
the
link
it
just
gets
your
zone
file
gets
all
the
ns
records
you
have
in
it.
That's
where
the
authority
div
servers
are
defined,
try
to
resolve
them.
If
they
fail,
they
try
to
find
it
they're
cyclical
or
not,
and
then
you
have
an
output
and
what
we
did.
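The detection step just described can be illustrated with a small graph walk. This is a hypothetical sketch of the idea, not CycleHunter's actual code; the `ns_records` mapping stands in for the zone-file parsing and resolution steps.

```python
def find_ns_cycles(ns_records):
    """Detect cyclic NS dependencies.

    ns_records maps a zone to the set of zones hosting its name servers.
    A zone is cyclically dependent if following those hosting edges can
    lead back to the zone itself.
    """
    cycles = set()
    for start, deps in ns_records.items():
        seen, frontier = set(), set(deps)
        while frontier:
            zone = frontier.pop()
            if zone == start:       # found a path back to where we started
                cycles.add(start)
                break
            if zone in seen:
                continue
            seen.add(zone)
            frontier |= ns_records.get(zone, set())
    return cycles
```

Roughly speaking, a cycle only matters when the name servers involved cannot be resolved from elsewhere (for instance via glue), which is why the tool first tries to resolve each NS record and only then checks the failing ones for cyclicity.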
D
We ran CycleHunter, and by the way, thanks to all the folks who contributed to the tool after the disclosure. We ran it against multiple zones, 184 million domains, and we found that only around 1,500 domain names were affected. We believe it's human error, because if you have this sort of thing your domain is not resolvable, and if you're making legitimate use of your domain, you're not interested in that.
D
But a malicious actor could act differently and try to weaponize this. So that's the prevention part. Then we came to the fixing part: we carried out responsible disclosure, starting in December last year and ending in May this year with the public disclosure.
D
We talked first to Google, and then to a bunch of private groups where we had contacts, including DNS-OARC; OARC 34 was the biggest group we disclosed to. Then folks like Google and others, such as Cisco, fixed it. We actually confirmed that Google fixed its Public DNS, because we ran measurements before they fixed it and repeated the same measurement using RIPE Atlas afterwards. You see here that before, in this particular scenario, which is also in the paper, we would get more than four million queries at our authoritative servers.
D
But
after
that
it
will
get
like
fewer
than
two
hundred
000.
So
we
can
really
confirm
that
the
google
fix
and
we
see
that
they
fix
that
by
deploying
negative
caching,
too,
and
after
we
came
forward
with
the
post
with
the
disclosure
and
at
the
work
35,
I
think,
was
like
five
or
more
months
and
a
half
after
we
started
the
process
of
disclosing
this.
Some
operators
came
forward
to
us
and
they
said
hey.
Your
folks
are
not
the
only
ones
we've
seen.
D
We
have
seen
that
we
saw
that
ourselves
do
and
one
european-based
cctod.
They
were
kind
enough
to
share
this
graph
with
us,
and
you
see
here
when
the
graph
around
17
hours
starts
to
grow.
It's
when
two
domains
in
their
zone
was
were
also
misconfigured
and
their
total
traffic
increases
like
10
times
because
of
two
domain
names.
So
each
color
here
shows
a
different,
authoritative
name
server
for
that
eu
based
cctld
operator.
D
So that's a big deal, and you see a sharp drop later on, around 11 a.m. the day after, when they fixed it manually. So we're not the only ones seeing this; they also came forward during the OARC presentations. That's the third contribution. The conclusion is that NS loops are a known problem in DNS: they have been talked about since the original DNS specification, and we showed that we must address them now.
D
Current
standards,
current
rfcs,
do
not
cover
them
and
that's
why
you
propose
this
new
draft
and
what
to
do?
If
your
operator,
you
can
run
cycle
hunter.
If
you
run
your
automated
servers,
try
to
find
those
loops
in
amazon
and,
if
you're
a
developer
of
resolvers
you
should
we
recommend
they
would
just
do
what's
in
it
with
what's
in
the
draft
like
negative
caching
uploads,
and
with
that,
I
think
it
opens
questions.
A
We have two people in the queue and we've got a couple of minutes for their questions or comments. So first, let's take Jonathan Hoyland.
F
Hi, Jonathan Hoyland, Cloudflare. I'm interested: have you seen GRoot, by the Microsoft Research people? They also do cycle hunting in DNS, and I was wondering if your tool is different or solves the same problem.
A
Okay, Danielle, you're at the top of the queue, but I'm not hearing you. Do you want to put your question in the chat, and then we'll relay it?
G
I was just hoping you could say a little bit more about the looping; not the resolvers looping, but what sort of clients were looping?
D
So
our
vantage
point
was
like
we,
we
used
rap
atlas
and
they
created
whatever
the
resolvers
they
had,
but
it
would
also
collect
traffic
and
authoritative
servers,
but
there's
a
lot
of
things
that
come
in
between
both
of
them.
What
we
could
find,
specifically
it's
on
the
paper
like
two
versions
of
the
an
old
version
of
powerdns
from
2014,
was
looping.
An
old
version
of
microsoft.
Dns
server
from
2008
was
looping.
D
We
know
that
also
that
the
clients
that
google,
which
was
the
major
responsible
for
the
traffic
and
all
the
experiments
we
did-
google
pump
dns
did
not
look
so
the
only
we
talked
that
we
worked
with
the
operators
on
that.
So
the
only
reason
that
could
have
been
this
could
be
happening
is
the
client
from
google
downstream.
They
would
before
the
fix.
They
would
be
sending
queries
one
after
the
order
and
every
query
would
trigger.
Let's
say:
10
new
currents
in
our
authoritative
name
servers,
but
I
don't
know
which
clients
were
looping.
A
I think, let's assume that it does; I think there's a little bit of audio trouble there, and we're at time. Thanks so much, Giovane, for bringing that work to us, super cool. I especially like the CycleHunter tool, so people can see if they have the problem. All right, so up next we have Nicolas. Nicolas, can you put yourself in the queue to share your slides? There you go, all right.
A
There
you
go,
take
it
away,
nicholas
you've
got
10
minutes,
okay,.
H
This study came about because, with the pandemic, we are all using VPNs, and we have measurements showing that when you use a VPN solution over satcom access, things can be very bad. So we wanted to do some wide-ranging experimental work on trying to find the best VPN solution for different circumstances, and we have an arXiv paper showing lots of results.
H
So,
as
I
said,
we
have
all
been
working
from
home
recently.
Also,
enterprises
networks
need
to
interconnect
their
for
their
different
components,
and
this
comes
with
increased
security
needs
and
vpns
are
usually
deployed
in
these
schemes.
So
I
guess
lots
of
us
know
what
vpns
are,
but
the
devil
is
in
the
details
and-
and
there
are
a
lot
of
different
solutions
on
here,
so
we
have
been
testing
wire
guard.
That
is
an
ip6
solution,
but
also
comparing
it
with
open
vpn.
H
On
top
of
udp
and
tcp
dave
told
me
that
everyone
is
not
always
aware
of
what
we
do
in
satcom
system,
but
we
are
not
the
only
ones
we
usually
deploy
tcp
proxy.
So
basically,
we
have
two
components:
intercepting
the
triple
check
messages
from
tcp
to
have,
in
the
end
three
different
tcp
connections.
We
do
not
do
that
because
we
like
to
break
the
in
to
end
principle.
We
do
that
because
it
has
a
lot
of
interest
and
it
is
to
answer
to
the
specificities
of
the
satellite
links.
H
In short, we have issues with connection initialization, so we can have specifically tuned initial windows in the TCP PEP. Sometimes the end-to-end buffers are not adequate for the very high bandwidth-delay product that we have on satellite systems, so we can have custom buffers. Splitting the reliability into three subsystems also helps a lot: loss recovery is split across three segments rather than running end to end over a very large RTT. And last but not least, when you know your system and its current condition, you can very quickly increase the rate at which you transmit data, so very custom, homemade congestion controls are possible in these components.
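The split-connection idea can be sketched as a tiny TCP relay. This is a toy illustration, not a real PEP: real PEPs intercept the handshake transparently and add the initial-window, buffer, and congestion-control tuning described above, and the buffer size here is an arbitrary example.

```python
import socket
import threading

def pump(src, dst):
    """Copy bytes one way until EOF, then close the other side."""
    try:
        while (chunk := src.recv(65536)):
            dst.sendall(chunk)
    except OSError:
        pass
    finally:
        try:
            dst.close()
        except OSError:
            pass

def pep_relay(srv, upstream_addr, rcvbuf=4 << 20):
    """Toy split-TCP relay: terminate each client's TCP connection locally
    and open a separate TCP connection toward the satellite side, so loss
    recovery and congestion control run independently on each segment."""
    while True:
        client, _ = srv.accept()
        up = socket.create_connection(upstream_addr)
        # Oversize the buffer to better match a large bandwidth-delay product.
        up.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, rcvbuf)
        threading.Thread(target=pump, args=(client, up), daemon=True).start()
        threading.Thread(target=pump, args=(up, client), daemon=True).start()
```

A transparent PEP would additionally spoof the server's address when accepting the client's SYN, which is the "intercepting the three-way handshake" part; a plain relay like this one has to be addressed explicitly.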
H
That being said, if we look at, let's say, an enterprise network using a satellite network, we have the server and the PC end to end, and in between we may have a PoP, a satellite gateway, a terminal, and an access box.
H
We have tested this over a lot of configurations. To emulate satcom systems, which are either GEO-based (geosynchronous) or LEO: usually on GEO systems you have no losses. That being said, sometimes when you rent a satcom access you may end up having losses; you don't know whether they are transmission losses or congestion losses, but you can measure lots of losses on the system. We then tested CUBIC, CUBIC without HyStart, and BBRv2.
H
We
have
tested
initial
conditional
windows
and
so
three
different
vpn
solutions,
and
basically
the
results
are
for
the
case
where
we
don't
have
any
losses.
So,
basically,
here
I
show
the
transfer
time
of
30
megabytes.
H
And
I
will
just
point
to
different
results.
Basically,
we
have
measured
that
when
we
use
vpn
tcp
and
that-
and
there
is
a
pep
within
the
tunnel-
we
have
very
bad
performances.
We
guess
this
is
because
we
have
some
sort
of
a
tcp
in
tcp
issue
and
that
end
up
with
even
bbr
v2
exhibit
exhibiting
a
downline
time
of
up
to
40
seconds
in
this
configuration
when
we
use
just
for
the
pep
in
position
b
for
the
green
circle.
H
When we have losses, things go as we could have expected: CUBIC is very sensitive to losses on the network, while BBRv2 is not. BBRv2 ended up providing very good performance in terms of download time, and using it helps a lot in these cases; but sometimes we cannot use BBRv2 and have to stick with CUBIC.
H
So
what
we
have
measured
in
our
short
technical
report
that
you
think
we
are
god
is
very
interesting
and
basically,
we
have
more
or
less
the
same
performances
with
qb
and
bbrv2
when
we
don't
have
losses
when
we
have
losses
in
the
systems
and
that
you
don't
manage
actually
activating
bbrv2
helps
a
lot
when
you
cannot
turned
on
vbrv2.
H
Sometimes, when the losses are on the local network, the PEP can help much more, because the recovery process is then not occurring over the large-delay path. So basically we have to admit that, in the case we considered here, the PEP cannot really help with the loss recovery process, but that is what we sometimes face in deployments, where we actually cannot guarantee a loss-free satcom system.
H
We have recently published an arXiv paper and sent a link to the list. On top of what I have shown here, in the paper we also cover the conclusions we reached when we introduced competing traffic, when BBRv2 is not alone, to see whether there are any fairness issues; in a nutshell, we don't see many fairness issues with BBRv2 in our experiments. So thank you all for listening, and thank you very much to MAPRG for giving me the opportunity to present these results.
A
Hi Nicolas, this is Dave. I wanted to relay a couple of basic questions that maybe I should have interrupted you for. Can you go back to your slide with the results?
E
Yep, great. Yeah, thank you so much for sharing this; definitely interesting to see your results. One thing I'd love to see more work on:
E
I've
been
working
on
a
project
at
apple
to
use,
multi-hop
quick
proxies
for
our
service
called
private
relay,
which
is
kind
of
like
a
vpn,
but
it's
using
these
proxies.
So
I
think
it'd
be
interesting
to
try
to
see
you
know
rather
than
having
to
choose
between
okay,
I'm
doing
a
tcp
solution
here
with
a
pap
or
udp
solution
with
a
pep.
If
we
have
an
ecosystem
of
quick
proxies,
could
we
maybe
get
the
best
of
both
worlds
across
the
no
loss
and
lost
scenarios.
H
To
be
honest,
we
are
very
interested
in
deploying
mask-like
proxies
in
this
kind
of
systems
and
because
we
didn't
have
running
code
for
that,
we
used
what
was
available
and
we
think
that
sometimes,
when
we
use
the
open,
vpn
udp,
somehow,
even
if
it
doesn't
include
a
question
control
that
mask
would
proxy
will
have,
we
can
still
have
some
trend
of
the
results
we
could
observe,
but
then
indeed
we
are
very
interested
in
going
further.
I
think
it's
very
important
for
this
kind
of
systems
to
look
at
mask
proxies
and
what
is
possible.
J
And
now
I
should
be
able
to
see
my
screen.
Okay,.
J
So, as many of us know, Internet of Things devices such as smart TVs and voice assistants are very popular and are expected to become ubiquitous. The invasive nature of these devices raises significant privacy implications; for example, privacy issues have been found in the past in baby monitors, voice assistants, and even biomedical devices.
J
One of the ways to improve IoT privacy and security is to protect the network communications of IoT devices, and our prior research shows that IoT manufacturers mainly rely on TLS to do so. Now, TLS is the de facto web security protocol that provides confidentiality, authenticity, and data integrity to network connections. (Nicolas, I just want to briefly say that your mic is on, in case you forgot.) Okay, but TLS has to be used properly.
J
Hence
the
main
objective
behind
our
work
was
to
study
how
effectively
iot
devices
use
tls.
However,
there
were
a
couple
of
challenges
that
we
needed
to
circumvent.
First,
for
example,
iot
devices
provide
limited
ways
to
trigger
traffic
in
order
to
interact
with
the
devices
one
needs
one
typically
needs
to
plug
them
with
a
power
supply
and
manually
interact
their
functionality
with
their
functionality,
and
second,
there
are
limited
vantage
points
that
shed
light
on
tls
on
on
a
traffic
that
is
tls
traffic
that
is
specific
to
iot
devices.
J
Here's
how
we
mitigate
these
challenges.
First,
we
build
on
the
prior
insight.
We
build
our
insight
by
prior
works
that
devices
generate
significant
network
traffic
when
they
are
powered
on.
We
automate
device
reboots
with
the
help
of
of
smart
plugs,
and
hence
we
have
a
way
to
generate
traffic
whenever
we
need
for
our
experiment
and
second,
we
complement
these
controlled
experiments
with
uncontrolled
ones
that
span
a
long
period
of
time
and
enable
us
to
study
longitudinal
trends.
J
We also recruited more than 30 participants to interact with these devices in the lab through an IRB-approved user study. As you may notice, our lab is designed like a studio apartment, and our participants were told that they could use any of the devices as they pleased. With this setup we obtained two years of longitudinal data, from 2018 to 2020. I'll now share some of the key results, and we'll then talk about one result in more detail.
J
The
second
question
that
we
asked
was
if
these
devices
properly
validate
tls
certificates,
we
found
that
11
devices
were
vulnerable
to
tls
interception
attacks.
Our
manual
analysis
revealed
that
many
of
these
devices
were
sending
potentially
sensitive
data.
We
have
reported
this
vulnerability
to
all
these
vulnerabilities
to
all
device
manufacturers.
J
We
also
found
that
devices
do
not
appear
to
update
their
tls
root
stores
and
I'll
talk
about
this
result
in
more
detail
in
a
bit
and
finally,
we
also
asked
if
these
devices
share
tls
libraries
with
other
clients
using
a
technique
named
tls
fingerprinting,
we
were
able
to
show
that
devices
and
applications
from
the
same
vendor
likely
share
tls
libraries
in
the
paper.
We
talk
about
more
details
about
the
positive
and
negative
implications
of
this
finding.
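The fingerprinting technique referred to here is commonly implemented in the style of JA3, which hashes the ordered fields of the TLS ClientHello: two clients built on the same TLS library tend to offer the same fields in the same order and therefore produce the same hash. Below is a minimal sketch of that idea; JA3 is a widely used community technique, the paper's exact method may differ, and the numeric field values in the test are made up.

```python
import hashlib

def ja3_style_fingerprint(version, ciphers, extensions, curves, point_formats):
    """JA3-style fingerprint: hash the ordered ClientHello parameters so that
    clients sharing a TLS library cluster under the same value."""
    fields = [
        str(version),
        "-".join(map(str, ciphers)),
        "-".join(map(str, extensions)),
        "-".join(map(str, curves)),
        "-".join(map(str, point_formats)),
    ]
    return hashlib.md5(",".join(fields).encode()).hexdigest()
```

Matching hashes across a vendor's device and its mobile app are then evidence (not proof) that the two share a TLS library, since the offered cipher and extension lists come from library defaults.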
J
I'll now talk about TLS root stores in more detail. To understand what these root stores are, let's consider an example TLS handshake, where our device initiates a connection with a web server. There are many steps in the handshake; the one to remember for this talk is that at some point during the handshake, the server sends a certificate chain. The certificate chain contains the certificate authority, which in this case is DigiCert.
J
These root certificate authorities have the power to compromise the authenticity of all connections that a device makes; hence, it is essential to know which root certificates are trusted by IoT devices. Other platforms, such as web browsers and mobile operating systems, publish the list of root certificates that they trust and regularly update it over time. For example, here a root certificate is being removed from the Mozilla root store for administrative reasons.
J
But
here
the
removal
of
this
root
store
is
due
to
negligence
and
misbehavior.
On
behalf
of
the
certificate
authority,
inspection
of
root
stores
and
iot
devices
is
difficult
due
to
their
black
box
nature.
I
will
now
share
our
technique.
I'll
now
show
how
to
use
completely
black
box
measurements
to
try
to
infer
root
stores
of
iot
devices.
J
These two different reasons can lead to the transmission of different TLS alerts. If an IoT device behaves this way, we can reliably infer the presence of a given root certificate in its store, and to explore all the root certificates a device has, we can simply repeat this methodology with the names of all publicly known root certificates.
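The inference step can be summarized in a few lines. This is an illustrative sketch of the idea as presented, with assumed alert handling (the paper describes the actual methodology): present the device with a chain that claims issuance by a candidate root; a device that does not know the root should abort with an `unknown_ca` alert, while a device that trusts it fails later, for a different reason, and sends a different alert.

```python
# TLS alert codes (RFC 8446); the interpretation below is the assumption
# this sketch is built on, not a statement of the paper's exact mapping.
ALERT_UNKNOWN_CA = 48        # device does not have the candidate root
ALERT_BAD_CERTIFICATE = 42   # validation failed at a later step: root trusted

def probe_root_store(candidate_roots, observe_alert):
    """Infer which candidate roots a black-box device trusts.

    observe_alert(root_name) is assumed to run one TLS handshake presenting
    a chain that claims issuance by that root, and to return the device's
    alert code, or None if the device sent no alert at all.
    """
    trusted, untrusted, inconclusive = set(), set(), set()
    for root in candidate_roots:
        alert = observe_alert(root)
        if alert == ALERT_UNKNOWN_CA:
            untrusted.add(root)
        elif alert == ALERT_BAD_CERTIFICATE:
            trusted.add(root)
        else:
            inconclusive.add(root)  # e.g. devices that never send alerts
    return trusted, untrusted, inconclusive
```

As comes up in the Q&A later, not every device sends alerts (TLS does not mandate them), which is why an inconclusive bucket is needed.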
J
In order to explore more of the security implications of trusting deprecated root certificates, we manually analyzed the certificates that were still trusted by these devices. To our surprise, we found that all these devices trusted at least one certificate that has not just been deprecated, but also explicitly distrusted by either Firefox or Chrome.
J
To
conclude,
our
study
reveals
a
mix
of
both
good
and
bad
news
about
tls
usage
and
iot
devices.
Unlike
the
perception
in
the
community,
we
do
not
believe
tls
usage
in
iot
devices
is
completely
broken.
Rather,
we
think
it
suffers
from
some
of
the
same
issues
that
existed
in
other
platforms
when
they
had
just
started
their
reliance
on
tls.
J
Please
feel
free
to
read
our
paper
for
more
details
and
I'm
happy
to
share
that.
We
also
have
released
the
tls
handshake
data
and
some
software
from
our
research
thanks
and
I'm
ready
to
take
questions
now.
A
Thanks
a
lot
taha
right
on
time,
perfect
what
up?
First,
I
want
to
refer
you
to
people
like
that.
You're
doing
this
work
and
there's
some
questions
for
you
in
the
chat.
I'm
gonna
refer
you
to
their
dkgs
a
couple
of
things
and
was
also
referred
to
your
paper,
and
then
I
see
we
have
hannes.
Actually,
benjamin,
I
guess
is
ahead
of
him.
We'll
have
benjamin
go
and
and
we're
gonna
have
to
switch
pretty
quick
to
kyle's.
So,
let's
make
it
quick
go
ahead.
Benjamin.
K
Hi. So you said this worked for eight of the devices; what happened with the other 16?
J
So what happened with those devices was that they would either not send us a TLS alert at all, or they would send the same TLS alert in both cases. It means that we need to find some other channel to study that difference, because the certificate validation would be different in the two cases, but the alerts are not necessarily sent. I should also mention that the TLS protocol does not mandate the sending of these alerts.
I
Yeah, sure, thanks for doing the work. I was wondering what type of devices those were. Did you analyze whether those were high-end IoT devices, like with A-class processors, or more of these low-end devices, which obviously have different...
J
A
Cool, thanks so much, Taha, for bringing that to us. Follow up in the chat with them if you would like. Bringing Kyle's slides up now, and we'll finish up with our last presentation, where Kyle will be sharing with us performance measurements of video conferencing apps. Go ahead, Kyle, you've got at least 10 minutes.
F
Great, thanks, Dave. Yeah, so my name is Kyle. I'm a second-year PhD student at the University of Chicago, and today I'll be presenting our IMC paper, "Measuring the Performance and Network Utilization of Popular Video Conferencing Applications." I want to start off by giving a little bit of motivation for this work, which actually came from a question that our local city officials here in Chicago asked us towards the beginning of the pandemic, namely: what is the baseline level of internet performance needed to support common video conferencing applications for remote learning?
F
So in this talk, I want to briefly highlight some of the core findings from each of the sections of our paper. To start: in this work, we study three of the most popular VCAs in use right now: Zoom, Google Meet, and Microsoft Teams.
F
Now, for context, home internet users have access to speeds of up to one gigabit per second, and many subscribe to plans in the tens and hundreds of megabits per second.
F
Additionally, the Federal Communications Commission, known as the FCC, defines broadband internet as 25 megabits down and 3 megabits up. Now, while there's still some debate over this definition, the actual utilization rates that we found for these apps, which many consider to be pretty big bandwidth hogs, are actually relatively low. But while utilization rates may be low, insufficient bandwidth is not the only cause of poor performance.
F
Among our results, we find that VCAs can take quite a long time to recover from interruptions. In this figure, we measure the time to recovery, which we define as the time between when the interruption ends and when the median utilization returns to the median utilization from before the interruption.
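The time-to-recovery metric just defined can be computed directly from a utilization time series. A minimal sketch, assuming 1 Hz samples and a 5-sample sliding median window; both are illustrative choices, not the paper's exact parameters:

```python
# Sketch of the time-to-recovery metric defined above: seconds from the end of
# an interruption until the windowed median utilization first returns to the
# pre-interruption median. Sampling rate and window size are illustrative.
from statistics import median

def time_to_recovery(samples, interruption_start, interruption_end, window=5):
    """samples: list of (t_seconds, mbps) pairs sampled once per second."""
    # Median utilization before the interruption began.
    pre = median(m for t, m in samples if t < interruption_start)
    # Samples after the interruption ended.
    after = [(t, m) for t, m in samples if t >= interruption_end]
    # Slide a window forward until its median reaches the pre level.
    for i in range(len(after) - window + 1):
        win = after[i:i + window]
        if median(m for _, m in win) >= pre:
            return win[0][0] - interruption_end
    return None  # never recovered within the trace

# Hypothetical trace: 3 Mbps baseline, 10 s interruption, slow ramp back up.
trace = ([(t, 3.0) for t in range(10)]
         + [(t, 0.5) for t in range(10, 20)]
         + [(t, 1.0) for t in range(20, 25)]
         + [(t, 3.0) for t in range(25, 30)])
print(time_to_recovery(trace, 10, 20))  # 3
```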
F
The levels on the x-axis indicate the available uplink capacity during the 30-second interruption, and while Zoom and Meet are usually faster than Teams to recover from drops to 0.75 and 1 megabit per second, all three VCAs can take over 25 seconds to recover from the most severe drops. These long recovery times may lead to poor performance, especially on networks that are prone to these types of interruptions.
F
But interruptions aren't the only cause of dynamic network conditions. So in this next part, we study how the VCAs respond to competing traffic, and this is really important because there is often more than one user on a single network, and these other users could be hosting their own video conferencing calls or using streaming services like YouTube and Netflix.
F
Both V1 and C1 are connected to the same network and will share the same bottleneck link. Again, we're setting the capacity available to V1 and C1 at the router. In each experiment, we start by reducing the capacity available to V1 and C1 before we start either of the applications; then we initiate a call on V1, wait 30 seconds, and then start the competing application on C1.
F
Now, it might not surprise you to learn that applications aren't always fair with other types of applications, but we actually found that the VCAs are not always fair with themselves. So in this figure, we plot the upstream utilization over time for two different Teams calls using the same three-megabit-per-second uplink.
F
The gray region indicates the time period in which the two calls compete. The purple trend line is the utilization for the incumbent call running on V1, whereas the black trend line is the utilization for the competing call running on C1, and throughout the duration of the experiment, we find that the incumbent Teams call has a significantly higher average utilization than the competing call.
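One standard way to summarize the incumbent-versus-competitor gap described here is Jain's fairness index. This is my illustration of a common fairness measure, not a metric the talk itself reports, and the Mbps figures are made up:

```python
# Sketch: summarize how two concurrent calls share the uplink with Jain's
# fairness index. Illustrative only; not a metric from the paper.
def jain_index(throughputs):
    """1.0 means a perfectly even split; 1/n means one flow takes it all."""
    n = len(throughputs)
    total = sum(throughputs)
    return (total * total) / (n * sum(x * x for x in throughputs))

# Hypothetical average utilizations for the incumbent and competing call:
incumbent_mbps, competitor_mbps = 2.0, 1.0
print(jain_index([incumbent_mbps, competitor_mbps]))  # 0.9
```

An index near 1.0 would indicate the two calls converged to similar shares; values well below 1.0 quantify the first-mover advantage the experiment observed.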
F
This lack of coordination between the calls obviously has implications for fairness, and could mean that one competing VCA call uses more of the available bandwidth just because it started first. Now, up until this point, I've only discussed utilization and performance in two-person calls, but, as we all know, there are many different ways that people use VCAs.
F
So in this next part, we explore how varying the number of participants, and how people view the call, affects utilization. We use the following setup for the experiments, in which all participants join the same VCA call. We consider two viewing modes, gallery and speaker mode. In gallery mode, you see everyone's video on the screen at once.
F
In this figure, we show user A's uplink bitrate, or uplink utilization, when all other participants have user A's video enlarged. In most VCAs, this is when you pin a specific participant's video to the screen. We show how the uplink utilization changes as we increase the total number of participants in the call.
F
What we see is that, as the number of participants increases, the utilization for Meet and Zoom remains relatively constant, but when using Teams, user A's uplink bitrate tends to increase with the number of participants. And this is a really realistic scenario: you can imagine it occurring during remote learning, when all of the students have pinned their teacher's video to their screens.
F
So to recap: in our study of VCAs, we have several interesting findings that can inform policymakers who are designing internet provisioning legislation, namely that VCAs have relatively low utilization, that they can take quite a long time to recover from certain interruptions, that VCAs may not compete fairly even among themselves, and, finally, that how other people view the call can affect your utilization.
F
A
L
Hi, thank you, and thanks for the interesting presentation; it was fascinating, so good to see. Just two quick questions. When you did this, did you consider, or did you look at, any differences between using a native app versus a browser client? And also, did you look at any differences across device types, to see if that made any material impact on this?
F
So for that part, we look at how utilization changes as you adjust the network settings, but also how certain QoE metrics change under different network settings, so stuff like freezes and the quantization parameter. And then, as for device type, we did most of our experiments on a Dell laptop that was running Ubuntu, but we also replicated some of the experiments on a Mac computer. But in case you were asking about mobile devices or something: we were only looking at laptops.
L
A
Thanks a lot. Ali, why don't you go, and then we'll wrap it up.
M
Yep, right, thanks for the talk. My question was: when you were doing the comparison of the video bitrate between one Teams client versus another Teams client, or another competing client, were you double-checking that you were actually capturing a similar video? Because, you know, a talking head versus a moving body or a moving head makes a difference in terms of video bitrate, right? So I just want to make sure that you guys were, you know, confirming to pre...
M
F
No, that's a really good point, and we did. So we had a pre-recorded talking-head video that we used for all the experiments. Okay.
A
Yep, great, thanks so much for joining us, Kyle, and thanks to the other contributors; those were great talks. You can go find their papers, and another recording of the presentations for the three of them, at IMC. Thanks to Mirja for helping put together the agenda again. I hope you all have a good week, and thanks, lastly, to Theresa Enghardt for taking the minutes for us, and we will see you at the next meeting. Bye-bye.