From YouTube: IETF115-IEPG-20221106-1000
Description
IEPG meeting session at IETF115
2022/11/06 1000
https://datatracker.ietf.org/meeting/115/proceedings/
A
So hello, everyone, welcome to IEPG. Let me stop my laptop from making noise, because otherwise you'll get feedback. So, hi everyone, once again welcome. This is the IEPG meeting at IETF 115, and apparently I found a slide deck with themes and stuff, so I got a little carried away with blinky lights and pictures, and pictures of palm trees.
A
We do have somebody here from WIDE who is going to be taking pictures and filming the IETF meeting and the IEPG meeting, unless anybody objects. Does anybody strongly object to pictures?
A
Great, I did not think so. Chris Morrow is the other IEPG sort-of-chair-type person, but I don't think he's online, so I'm going to be running all of the slides, etc. This is the first Meetecho session of this sort of meeting, so hopefully everything works fine. If we have any technical problems, you know, just remember it's the first one. This is the agenda. This time we have a lot of discussions on extension header (EH) measurement and things like that.
A
I have split things up somewhat, so that we have some EH stuff, and then a different discussion, and then some more EH stuff, and then DSCP, and then DoT. Does anybody have any agenda bashing, or anything else that they strongly care about?
Nope. Alrighty. I believe that the deep dive into EH is actually split into two presentations. I don't know who's actually presenting the first one. Is that... oh, okay, I did not see you in the back there. So I will bring up your slide deck.
D
So, starting over: hi, I'm Nalini... oh, I'm better now, all right, okay. So, I'm Nalini Elkins, and we are doing some diagnostics and troubleshooting for extension headers, and I will show you some interim results. We have quite a few people working with us, mostly from non-profits here in the United States and India, as well as NITK Surathkal, one of our fine Indian technical universities. Okay. Next, please. So, this has been a controversy for quite a long time.
D
I'm going to go through these first slides pretty quickly. There have been quite a few people saying extension headers don't work. But, you know, it's kind of interesting, because our own personal experience with testing one of our headers, the Performance and Diagnostic Metrics (PDM) extension header, our anecdotal experience, was that going across the internet, it did work. And so we were like, my goodness, I wonder what's going on. So next, please!
D
This is our extension header that we want to use for embedding in each packet, to get real-time performance and diagnostic data. So, let's go past that. Next piece: our proposal for encrypting this extension header was accepted into the IAB workshop on managing encrypted networks. This is, of course, a very big problem: what are we going to do as we do more and more encryption of networks? And our proposed solution is our Performance and Diagnostic Metrics extension header.
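The PDM option the speaker describes is specified in RFC 8250 as an IPv6 Destination Options extension header. As a rough sketch of what gets embedded in each packet, the following builds the 16-byte Destination Options header carrying a PDM option; the helper name and default field values are illustrative, not anything from the talk.

```python
import struct

def pdm_destination_options(psn_this, psn_last=0, scale_dtlr=0, scale_dtls=0,
                            delta_last_recv=0, delta_last_sent=0, next_header=6):
    """Build an IPv6 Destination Options header carrying a PDM option (RFC 8250).

    Layout: Next Header (1) | Hdr Ext Len (1) | PDM TLV (12) | PadN (2) = 16 bytes.
    """
    pdm = struct.pack("!BBBBHHHH",
                      0x0F, 10,              # Option Type = PDM, Option Length = 10
                      scale_dtlr, scale_dtls,
                      psn_this, psn_last,
                      delta_last_recv, delta_last_sent)
    padn = b"\x01\x00"                       # PadN option padding to an 8-octet multiple
    options = pdm + padn
    hdr_ext_len = (2 + len(options)) // 8 - 1  # length in 8-octet units, not counting the first
    return struct.pack("!BB", next_header, hdr_ext_len) + options

hdr = pdm_destination_options(psn_this=1)
assert len(hdr) == 16 and len(hdr) % 8 == 0
```

Splicing this into a live packet (as the modified FreeBSD kernel does) also requires adjusting the preceding header's Next Header field and the payload length, which is not shown here.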
D
We had modified the kernel, the FreeBSD kernel, to go ahead and send tests, and you can see that we sent to locations throughout the world, passing through quite a few transit networks, internet exchange points and so on, and it all worked. Next, please. And so, thank you guys so much for all your help. Next, please. And you can see here... I'll go through this quickly: this is a large FTP, and you can see it all worked.
D
This would happen to be Toronto to Mumbai. Next, please. And you can see that not only do we have our PDM extension header, our destination options, but we also have fragment headers, all being transmitted successfully. Next, please. So, bottom line: we have traces of everything; everything is available to look at. So that brings us now up to the current time. Next, please!
D
So then we're like, all right, let's see what's really going on; see if we can figure out: do EHs really work, and if they don't, then why don't they work? So this is what our deep dive efforts are all about, and we have two drafts in v6ops, and they're troubleshooting techniques. We want to see: where is it blocked? Is it at the source? Is it at the destination?
D
Is it in the transit networks? Of course, depending on where things are blocked, the problem is easier or harder to fix. And then the other thing: is it intentional? Because, interestingly enough, when we talked to some people (by that I mean some router vendors, some cloud providers and some CDN providers), they were actually quite surprised to see that extension headers that we had isolated to their domain of control were actually being blocked. They're like, well...
D
Yep, next, yep. And so we have a side meeting scheduled where we'll have quite a bit of time to talk about our standalone testing. In this particular talk I'm going to focus on content delivery networks, and the reason for that is that many, many high-volume sites use CDNs. And so, when we did things like the top 1000 Alexa sites, we could see that many of them were actually resolving to CDNs, as one would expect. So next, please.
D
So, if we think about some topologies: what we have been testing, where we saw things work, was the very first scenario, which is very simple. You have a client which is not behind a CDN, obviously, or behind any kind of cloud provider, just standalone, going out onto the internet, and the same for the server. That's a very simple scenario. Of course, not that many situations fall into that arena; well, actually quite a few do, but many don't. Many high-volume sites have something like this: you have your client...
D
Wherever they are, going out onto the internet, they'll actually end up resolving to a CDN cache server, the closest one that's near to you, and then, somehow, you're going through the CDN network and then end up at your original server, which we will now call the origin server. And once you get into the CDN network there may be very complex topologies, but this is the simplest scenario within a CDN.
D
Then, of course, if you're on a cloud provider, that also becomes quite interesting. We will talk about cloud providers next time; today we'll talk about CDNs. Next, please. So this is the very simple scenario; let's go to the next one. So now we have that origin server, our original server, which we have enabled to send extension headers with every packet. You will recall that when we had it not behind the CDN, everything was working perfectly.
D
So we just got free trial licenses, or trial efforts, with three different CDN providers, and basically what you have to do is, first thing, give the CDN authority to resolve your domain. So before, for example, if we had myehserver.com resolving to my original address of 2001::1, that's the standalone address, it will now resolve to the nearest CDN cache server. Fine. So I've given them DNS authority, and then let's see what happens. Okay, so next piece.
D
So this is kind of what happens: you're no longer going to your origin server directly; you're being mediated. Okay. Next, please. So the first thing is: what exactly did that CDN resolve to? And of course we assumed it was the IPv6 address of our original server.
D
Our original server has both a v4 and a v6 address, and so we said, absolutely, that's what we're thinking it's going to do. And so we took a trace on both sides, because, remember, I control both sides, the client and the server. In my case the client is also EH-enabled, so the client also sends an extension header with every packet. Next, please. So, huzzah...
D
I'm only seeing it half... I mean, one good thing: the packet isn't being dropped. I am seeing traffic, but I'm only seeing EH one way. So I'm like, well, goodness, I'm really glad that I have access to the server on the other side, and I can take a packet trace on the other side. Next, please!
D
So, let's go take a look at the server side. Next, please. And my first shock is: what have I done wrong? I am only seeing IPv4 packets. Okay, I must have done something wrong. And so, okay, next, please. So then I go poke inside the packet, and I'm like, no, no, I did not do something wrong, because in the HTTP I can see that it is forwarded from IPv6.
D
Also, I can see it is inside the CDN network. So I'm like, oh my heavens, wait, wait, what just happened? Next, please. This took me actually several calls to some of the people on the team, saying: I must be going mad, something has happened. Okay, so then I actually started looking at the documentation; my last resort, of course, is to read their documentation. And what they said is: well, if you have a v4 and a v6 address on your origin server, we will prefer v4.
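That v4-preferred behavior is easy to spot from the resolver side. Here is a small diagnostic sketch (plain Python stdlib, not anything from the talk) that groups a hostname's resolved addresses by family; a dual-stacked origin fronted by such a CDN would show up with only IPv4 answers:

```python
import socket
import ipaddress

def address_families(host):
    """Resolve host and report which address families the answers fall into.

    Returns a dict like {"ipv4": [...], "ipv6": [...]} so you can spot a CDN
    that only hands back A records for a dual-stacked origin.
    """
    families = {"ipv4": set(), "ipv6": set()}
    for *_, sockaddr in socket.getaddrinfo(host, None):
        addr = ipaddress.ip_address(sockaddr[0])
        key = "ipv6" if addr.version == 6 else "ipv4"
        families[key].add(str(addr))
    return {k: sorted(v) for k, v in families.items()}

# Literal addresses resolve locally, so this also works without network access
# (2001::1 is the standalone address used as the example in the talk):
print(address_families("2001::1"))
```

Running it against the CDN-delegated name versus the origin's standalone address makes the "v4 preferred" substitution visible without taking a packet trace.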
D
Unless, actually, you go to a non-free, a paid service. I was doing the free service, and they want you to pay for the service, and then "we will change you to be able to flip and make IPv6 preferred". And of course we are working for several non-profits and do not have huge amounts of money, and so we said, all right, let's see if we can raise some money... I'm just kidding. So, next... I shall tell you when to laugh; yeah, I'm just kidding. So... but the next one, the next CDN provider...
D
That was even worse, because in that conversation I actually first read the documentation (I shocked myself), and I could see that they did not have v6 to the origin server. So I had a conversation with their tech support, and of course the tech support was like: no, no, little lady, you are wrong, we do have support. And I said: okay, I shall send you some packet traces. And then they were gracious enough to respond back and say: okay, we read our own documentation...
D
They are very, very cooperative, and they are working very closely with us, and we've already found one bug in their network, which they are fixing and will roll out through their network. Now, the rest of it was actually that we ourselves had a problem: in some of these CDN providers there are so many different options for which kind of service to test that we spent like three weeks after I picked the wrong service. But anyway, so now we are actually behind the server, and they indicate that v6 to origin is supported, but they do not believe that EH to origin is supported, and I am hoping that we will be able to work with them to support EH, because we now have a VM image that we can quickly give out, to be able to test EH. Next, please. So, more breaking news...
D
F
With the mask on? I think I'm just gonna keep it on... whichever you like... right. So, are you ready for more extension headers? I have a slightly different approach to that of Nalini, which is rooted in wide-scale traversal measurements. Next slide, please. Oh yeah, and I'm at the University of Aberdeen, where I specialize in Internet measurement, and I'm currently completing a PhD on that topic. Right.
F
So, what I've tried to do with my extension header measurements is focus on a wide set of diverse paths. So today I will be talking about what I've seen in edge networks, and I tried both sides of the edge. I've tried to test to the server edge, and for that I chose DNS servers and web servers in the internet, and I also tried to look at the consumer edge, and for the consumer edge I made use of existing measurement infrastructure provided by RIPE. Next slide,
F
please. Right, so I'll talk about that first; I'll talk about the RIPE Atlas experiments first. So, what RIPE Atlas is: it is a distributed measurement platform. It has about 5500 IPv6-enabled probes, and by probes I mean single-board computers. I guess it is not strictly consumer edge, because many of these computers will be in academic networks or in data centers, but a lot of them are, say, on mobile networks and in people's houses; like, I have a RIPE Atlas probe at home.
F
So, from all of these probes I've sent packets. RIPE Atlas allows you to send packets with either destination options or hop-by-hop options, and so I have sent packets to two locations, but I've tried a range of different sizes as well, and I've also tried two different transport protocols. RIPE Atlas allows you to do TCP, UDP and ICMP, but I chose to focus on TCP and UDP. Next, please. So, I mentioned that RIPE Atlas is globally distributed. This is a really cool image.
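The knobs the speaker mentions (target, transport, size) map onto the RIPE Atlas measurement API. The sketch below only assembles a measurement definition of the shape the public v2 API accepts; field names follow that API but should be checked against the Atlas documentation, and the extension-header options themselves are configured through additional fields not shown here.

```python
import json

def atlas_traceroute_definition(target, protocol="UDP", size=48):
    """Assemble a RIPE Atlas (API v2) measurement definition for an IPv6
    traceroute, the kind of probe-side test described in the talk.

    'size' is the payload size, which the talk varies to find the drop
    threshold; 'protocol' is "UDP" or "TCP".
    """
    return {
        "definitions": [{
            "type": "traceroute",
            "af": 6,                    # IPv6
            "target": target,
            "protocol": protocol,
            "size": size,
            "description": "EH traversal test",
        }],
        "probes": [{
            "type": "area",
            "value": "WW",              # worldwide probe selection
            "requested": 100,
        }],
        "is_oneoff": True,
    }

spec = json.dumps(atlas_traceroute_definition("example.net", "TCP", 56))
```

In practice this JSON body would be POSTed to the Atlas measurement endpoint with an API key; here it is just constructed to show the parameters being swept.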
F
If you change the transport and you use TCP, this drops to 68%, and then, if you use hop-by-hop options, it drops a lot more, and you get 11% for UDP and nine percent for TCP. But the interesting thing to see here is that the difference between transport protocols is constant across both of these types of measurement. Next, please!
F
So, next, what I tried to do is work out where the packets get dropped. So I've done a hop analysis and an AS analysis, and this slide is about drops by the first hop on the path. Next, please. So, by first hop on the path I actually mean the local router, the actual gateway of the RIPE Atlas probe in question.
F
So, what you can see here is a difference between hop-by-hop options and destination options. For hop-by-hop options, no wonder so very few make it to the destination: the first hop on the path just blocks 55% of them, and that is irrespective of protocol. It just seems to be like a blanket ban on hop-by-hop options.
F
For destination options, the story is slightly different. Not so many destination options sent over UDP get dropped at the first hop, but quite a few sent over TCP do, and this helps contribute to the difference between protocols that you saw on the previous slide. Next, please. Okay, there's a lot happening in this slide, but this is the per-AS traversal results.
F
Basically, it shows you, at each AS on the path, how many of these packets survive. So you send 100% of packets with different extension headers, and for destination options you can see that at the first AS only 95% of them survive. Then there is the boundary between the first and the second AS; that is to say, where I couldn't determine whether the drop was made in the first or the second.
F
What you can see here is that the local AS is responsible for most of the drops, and this is to be expected, because the local AS will see most of the packets, after all. But I did not quite expect so many drops in the local AS, especially not for hop-by-hop options; as you can see, you get up to 75 percent of packets with extension headers just being dropped.
F
There I include more ASes in the table for hop-by-hop options, because that's not the entire story. So, for example, you lose 70 percent of packets in the first AS, but then you lose another 10% at the AS boundary, then you lose another five percent in the second AS, and then you lose another three percent at the next AS boundary, and so on.
F
So, basically, the drops happen in multiple ASes and not just the local one. Next, please. So then I decided to work out what would actually happen if the packets did traverse that first AS, where they mostly get blocked, and the way I did this is...
F
Well, doing the same test, but in reverse. So, essentially, I tested both directions, because most of the RIPE Atlas probes have public IPv6 addresses. So all I have to do is a traceroute from the previous destination back to the RIPE Atlas probe, and because I have a control process, I also send control...
F
I also do a control test, so I can exclude things like asymmetric routing or ECMP from this measurement, and I can work out if the packet would make it back to the original AS. So, next slide, please.
F
So, in the case of destination options, it actually turns out packets do make it back to the original AS. So, essentially, if you would exclude drops that happen in that first AS, you could bump the traversal up, on this set of paths, to 96%, and that's great. For hop-by-hop options, however, you don't get that many packets making it back to the first AS, because they seem to be getting dropped in transit, or in other parts of the path.
F
Now, this is the fun slide. I mentioned I sent packets with different sizes, and I tried to work out whether there's a difference between TCP and UDP with respect to how big a packet you can send, and I found a difference. You can see that TCP sees the biggest drop in traversal at 48 bytes, and UDP sees the biggest drop in traversal at 56 bytes. So they are shifted by eight bytes, presumably because the size of the transport header is different.
F
The TCP header is bigger than the UDP header. But I think the key takeaway from this slide is that if you send an extension header of 40 bytes in length or less, then you have the highest chance of it getting across, at least across edge networks.
F
How are we for time? Okay, okay, cool. So then I can speak a little bit about the server edge, and I think this is fun, because it ties in with what Nalini just presented. So, the target... so, this is an entirely different experiment, right? It's not a traversal experiment; it's more of a functional experiment.
F
So: can a packet with an extension header function just as well as a packet without that extension header? Essentially, I'm sending the packet to the server and seeing if I get a response back. The packet in this case was a DNS query, and I chose DNS here for two reasons: one, you can send queries over UDP and TCP, so you can test both protocols; but then, also, these DNS servers are essentially the authoritative name servers for the domains in the Alexa top 1 million.
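One reason DNS works so well as a dual-transport probe is that the query itself is the same bytes either way; per RFC 1035, TCP just prepends a two-octet length field. A minimal sketch of that (the transaction ID and query name are arbitrary, not from the talk):

```python
import struct

def dns_query(name, qtype=1, txid=0x1234):
    """Build a minimal DNS query packet: 12-byte header plus one question (RFC 1035)."""
    header = struct.pack("!HHHHHH", txid, 0x0100, 1, 0, 0, 0)  # flags: RD=1; QDCOUNT=1
    qname = b"".join(bytes([len(label)]) + label.encode()
                     for label in name.split(".")) + b"\x00"
    return header + qname + struct.pack("!HH", qtype, 1)       # QTYPE, QCLASS=IN

udp_payload = dns_query("example.com", qtype=28)               # qtype 28 = AAAA
tcp_payload = struct.pack("!H", len(udp_payload)) + udp_payload  # TCP adds a length prefix
```

In the experiment this payload would ride inside an IPv6 packet that also carries the extension header under test; here only the DNS portion is shown.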
F
So, if you look at the list of web servers for those same domains, you'll see that most of it is just Cloudflare, and at least the DNS list is a bit more diverse. Still, you still get like 15% of all of these destinations being Cloudflare, but it's a lot less than if you were to choose web servers.
F
So that's why my targets were DNS servers; in total there were like twenty thousand of them. So I tested UDP, TCP, destination options and hop-by-hop options, and I also varied a few things in the header, to see whether or not it affects traversal. So, for example, you can test with the valid 8-byte PadN option, but you can also test, like, an unknown option, or an invalid length, and see what happens. Next slide, please.
F
Okay, so we have some numbers here. Remember that this is a functional test, so the 53 percent you see for, say, destination options just means that 53% of the servers at the other end responded to our query. And remember that the data set is made out of a lot of servers in CDNs, and we saw what happens in CDNs in the previous presentation, so this shouldn't surprise you.
F
We also see a very small difference between TCP and UDP; it's not as large a difference as you could see in the edge data set. And I've tested these 20,000 paths from 12 locations for destination options, and from three locations for hop-by-hop options; only three locations, because I actually struggled to find providers that supported them. So I struggled to find vantage points from which to test, because of the lack of support for hop-by-hop options in cloud providers. Next, please. Right, so I mentioned CDNs.
F
Well, here are the top ASes that essentially do not pass destination options, and by "do not pass" I mean either they drop them, or they simply do something in the background and the packet just never gets a response. And, as you can see, we have the usual suspects: Cloudflare, Amazon,
F
Microsoft. But if you did fix the CDNs, you could get your traversal... not traversal, sorry, your response rate up from 50% to almost 90 percent. So that's something to keep in mind. Next slide, please. And finally, I think I mentioned I tried to vary the different types of fields.
F
Okay, so this is what the header with an 8-byte option looks like. This is a PadN option; it looks the same for destination options and hop-by-hop options. And I tried different things in different fields, to see whether or not this would help traversal, or if it makes packets get dropped differently.
F
What I did find that did make a difference is: if you advertise an invalid extension header length, or an invalid option length, that immediately leads to almost a 100% drop rate. And I guess this is to be expected, but those fields are the fields that I think routers check, and the rest of them don't make too much of a difference. Next slide, please. So, in conclusion:
F
At least destination options currently travel quite far along a path, in both of the data sets that we've seen, and we've also seen that there are still some types of networks that drop them. Whereas for hop-by-hop options, a very diverse set of networks on the path drops this guy: edge networks, as in consumer networks, and CDNs; even transit networks drop packets with hop-by-hop options. But you still have a few of these paths that support them, and that's a positive. And finally, we've seen a difference in transport protocol,
F
and I have some theories as to that, so please speak to me after the presentation if you're interested. Next slide, please. Yes, question time. Excellent.
F
Can I get... can I get Nalini up here as well, I think, just in case; I don't want to answer the hard questions by myself.
B
John Border. I have a clarification question: when you were doing the traversal from the edge, is the first AS the same all the time? What I'm really wondering is: does a given AS always drop it, or does it sometimes drop it?
G
[unclear] meeting so far; thanks for the agenda. Two questions. One really quick clarification: when you showed the first-AS and subsequent-AS drops, in that drop you include the first-hop drops, right? (Yes, yes, yes, yes, I do.) Okay, so did you look at how much better the result would be if you exclude the first-hop drops? Because in my measurements I feel like a lot of that stuff is dropped by cheap CPEs, which just have no idea what to do with all those strange fields. Yeah.
F
So I'll answer your question in two ways. If we go back one slide, and then the previous one to this one: so, I guess, the first hop on the path drops, say for hop-by-hop options, 55%, and then in the first AS you have 75%. So the difference there is 20%; you can calculate it that way.
F
But what I did, too, is I had a look at these CPEs, at what this first hop on the path does, and I tried to cross-reference those machines that also do things like modify the MSS field. So I tried to work out if they're middleboxes, and I tried to work out if that influences the traversal rate, and it does.
G
Here, yeah. And a second quick question: so, because you look at the average, you're saying like 20% dropped between the first and second AS, for example, right? So, am I right that in most cases you see some ASes dropping a hundred percent and some of them zero, and it's not really likely to see a drop at, like, some random number, right? So it's more like all or nothing? (Yes, absolutely.)
H
By the discretion of the chair, may I? Yeah, okay. Hi. I couldn't join the queue because of some Meetecho weirdness, but rather than everybody having to ask you afterward: can you tell us your theory on the discrepancy between TCP and UDP?
F
Okay, so I did touch on this in the previous question. One of them is that you have loads of devices on the path that will modify bits of the transport protocol. I specifically looked at boxes that modify the MSS option sent for TCP, in order to help with path MTU discovery, and I noticed that that makes a difference. So that's one of the reasons, and the second one would be...
F
My theory is that it's, of course, firewalls, which look more at TCP than they look at UDP, and I plan on testing that somehow. I have yet to concoct my amazing firewall test, but I will, and then I will present the results. Thank you.
I
Thank you so much. So, yeah: I'm Thomas from Swisscom, and today I'd like to show you some challenges we have with the network telemetry data mesh integration; specifically... yep, sorry, specifically on YANG push. Next slide, please. And before starting, I just want to give you some insights into how we are using network telemetry metrics. So here, in this example with network anomaly detection for a layer 3 VPN, we actually have two VPNs, blue and orange, and one of the two VPNs, the blue one, is redundant, while the orange one is not redundant. And we are looking from different perspectives in the network: on one hand, we are monitoring the BGP updates and withdrawals.
I
We have withdrawals on both the orange and the blue one, while from the forwarding perspective orange is now dropping completely and blue is unchanged, because it's redundant, and the interface state changes we have for blue and for orange. While, when we move on to the second event... yep, not yet, previous slide. So, the second event, the red line in the middle: you see, basically, the overall concern score now is going up for blue as well, and the reason is that we are no longer forwarding for blue.
I
We have some BGP withdrawals in the network and also some interface state changes. So you see, with this example, we have to look from different perspectives in the network to get an overall picture of how the changes in the network are actually affecting the forwarding of the packets. Next slide, please. So, from a network operator perspective, what we are aiming for in network telemetry is that we have, at the end, an automated data processing pipeline, which starts with network telemetry, where we are collecting data from the network.
I
We consolidate the data in a so-called data mesh, which I will explain in more detail later, and on top of it we have network analytics, where we can gain insights on those metrics. And from a semantics point of view: the IETF defines the semantics for the operational metrics, while the analytical metrics are generated with the analytics capabilities at the network operator. We achieve this goal by actually forwarding the metrics from the network unchanged.
I
We are learning the semantics from the network, and thanks to the semantics we can also now validate the correctness of the messages, and we want to control the semantics. So we want to make sure that, when semantics are changing, we keep the backward compatibility under control, so we can actually move to new revisions of the semantics. Next slide, please. So, state of the union. I'm paraphrasing here, but today, and I will explain why, we have a bit of a mess,
I
would
say
in
terms
of
how
we
get
all
the
the
metrics
from
the
network
and
the
data
mesh
is
basically
the
Next
Generation
big
data
architecture,
which
is
quite
already
advanced
and
next
slide.
Please
so
some
introduction
on
the
on
the
big
data
architecture
here,
so
it
evolved
over
time.
It
started
with
propriety
data,
warehouses
event
to
centralization
into
Big
Data
Lakes.
I
We added, with Kafka, real-time streaming capabilities, and now we are basically ending at data mesh. And from a network engineer's perspective, data mesh is much like how we are managing our networks today: it's distributed, we are dividing it into different domains. So at the end we have many different teams managing their part of the data, and of course, in order to exchange data properly, same as with networks, we need standardized interfaces, and in data mesh they're called bounded contexts. And the data mesh architecture actually defines that, within an enterprise,
I
the operational metrics should be standardized with a federated computational governance. But since here, in network telemetry, we are actually collecting the metrics from networks, the IETF takes the responsibility to standardize the semantics there. So next slide, please.
I
Looking now at the YANG push side, and looking at the different, let's say, angles on YANG push (transport, encoding, subscription, metadata, versioning and the YANG models), and comparing what we have today on the network operator side with what we are developing at the IETF, what we achieved there:
I
we can see, on the transport side, pretty much different, non-standardized YANG push transports; at the IETF we have the HTTPS-notif and UDP-notif drafts, which are close to the IESG, getting into an RFC. On the encoding side, again, different encodings at the operator: we have JSON widely; if you go into binary encodings, we see protobuf in most cases, in various variants; CBOR itself is not yet implemented. While at the IETF side...
I
We have JSON, XML and CBOR already in RFCs. On the subscription side, same again: on the operator side it's non-standard periodical subscription, widely adopted, on-change seldomly, while at the IETF we have two RFCs describing the subscription side very well. On the metadata side, we see also that basically it's within the JSON message itself, so it's very hard to find out what part of the JSON message is actually the YANG model itself and what part is actually the metadata. And now, at the IETF,
I
we are starting, with these drafts here, to describe the metadata more properly, so that we have a semantic reference for the message which we are transporting. Then, on the versioning side, I would say there it gets really... on the operational side we have nothing, while at the IETF, with NETMOD YANG module versioning, we are working on semantic versioning and on backward compatibility. Regarding the YANG modules themselves, at the IETF we have many RFCs, and looking at what's currently being implemented, I would say that the coverage is very sparse.
I
So that's the situation which we have, and that's why I mentioned before: we are actually going from, let's say, a messy situation towards a more organized situation, but we still have some tasks in front of us. Next slide, please. So, where we are heading to: at the end, in a nutshell, basically, we have the network and we have the data mesh. We are pushing configuration through an API, through NETCONF, to the network, and through YANG push we're getting the operational metrics back.
I
So it's basically, in the future, a simple cycle between network and data mesh. And we want to bring these two worlds closer together, because today big data doesn't know much about networks, doesn't know much about semantics.
I
We are now collaborating with different operators, network analytics providers and universities, and kicking off a project here, in a side meeting on Monday afternoon, if you have interest in joining us, so that we can ease this integration between the two worlds. Next slide, please.
I
So, in a nutshell: data mesh is a big data architecture; it relies on bounded contexts, so we can forward semantics. So, basically, YANG push is a message protocol, Apache Kafka is a message protocol, and what we are trying to do is actually to make sure that this integration between the two worlds is much easier. So we are bringing the YANG semantics into the schema registry, and we are extending YANG push so that we have the semantic reference.
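The metadata problem the speaker describes, that subscription metadata and YANG-modelled payload live in one JSON blob, can be seen in an RFC 8641 push-update notification. The message shape below follows the RFC 8641 examples; the interface counter path and values are illustrative, not from the talk.

```python
import json

# A push-update notification, shaped per the RFC 8641 JSON examples.
push_update = json.loads("""
{
  "ietf-restconf:notification": {
    "eventTime": "2022-11-06T10:00:00Z",
    "ietf-yang-push:push-update": {
      "id": 1011,
      "datastore-contents": {
        "ietf-interfaces:interfaces": {
          "interface": [{"name": "eth0", "statistics": {"in-octets": "42"}}]
        }
      }
    }
  }
}
""")

def split_metadata(msg):
    """Separate the transport/subscription metadata from the YANG-modelled payload,
    the distinction the talk says is hard to make inside one JSON message."""
    notif = msg["ietf-restconf:notification"]
    update = notif["ietf-yang-push:push-update"]
    metadata = {"eventTime": notif["eventTime"], "subscription-id": update["id"]}
    return metadata, update["datastore-contents"]

meta, payload = split_metadata(push_update)
```

The receiver has to know the envelope structure to do this split; the drafts the speaker mentions aim to make that semantic reference explicit instead of implicit.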
I
So, at the end, it's about automated onboarding of data: a network engineer can simply configure a new XPath, and a minute or two later he sees the metrics in the database. Or, put another way: different domains, different products can actually exchange data in a data mesh, and that makes integration much easier.
E
[unclear], Warren. Hi, my name is Geoff Huston, and this is work with Joao Damas from APNIC. So we're back to the wonderful world of IPv6 extension headers again. You thought you might have escaped it, but you were wrong. Next slide.
E
This
kind
of
came
up
in
2016
and
for
me
I
must
admit
it
was
the
right
hand
column,
because
this
work
General
cover
Fernando
gond
and
a
couple
of
others
whose
names
I
have
forgotten,
but
you
can
look
up
7872
as
quickly
as
I.
Can.
What
it
really
said
to
me
is
V6
can't
fragment
with
that
kind
of
loss
rate.
You
cannot
fragment
packets
through
the
internet
and
make
them
work,
which
is
for
the
DNS
and
V6
a
Death
Note.
You
just
can't
make
big
packets
work.
E
If I read the documentation in that RFC correctly, they were using PadN, and they were padding the size of the header up to a total of eight bytes, and we'll see why that's important a bit later. But this was also client to server, not server to client.
E
Once again, that's going to be important: it was two 512-byte fragments, so it was a single-point test, and we'll see that that becomes important. Next slide. So we got curious about this a couple of years ago and decided that we would push this a little harder, using a mechanism that was fundamentally different to Atlas. We actually used a couple of Linode boxes and created a very standard back end.
E
So here's, you know, nginx running a web server, yada yada, but in front of that we actually put a v6 NAT, and the way it worked is the front end accepted incoming packets and changed the addresses, so that the return from the web server got back to the front-end box, which just sent them inward; but it also created a binding, and a flag of a transform to apply to the outbound packet. The outbound packet got back to the front-end server, and we then diddled and played with that packet. How do we do that?
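The front-end binding logic just described — accept an inbound flow, rewrite addresses toward the back end, and remember which manipulation to apply to the matching outbound packet — can be sketched like this. All names here are hypothetical illustrations; the real front end operated on raw IPv6 packets, not strings.

```python
# Bindings are keyed by the flow's (client address, client port); the stored
# entry says where the back end is and which transform to apply on the way out.
bindings: dict = {}

def inbound(client, backend: str, transform: str) -> str:
    """Record a binding for this flow and forward it to the back end."""
    bindings[client] = {"backend": backend, "transform": transform}
    return backend  # the packet is re-addressed toward the back end

def outbound(client) -> str:
    """Look up the transform to apply to the reply before it leaves."""
    return bindings[client]["transform"]
```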
E
Well, the thing is running standard pcap packet filters, so it's picking up every packet, and we actually send raw IP. It's a totally synthetic packet that we send out the other end. It looks like what the back end did, but we've added either fragmentation or extension headers. Now, we wanted to test a whole bunch of things without doing a whole bunch of experiments, and because we're using an ad-based measurement technique, we have around 26–27 million raw sample points a day, so we could conduct a whole bunch of experiments simultaneously. The way we did that was: every single time we found a new TCP session, we randomly flipped a coin and selected a test. It was either fragmentation to a certain packet size, or adding a hop-by-hop (HBH) header of a certain size, or adding a destination options header of a certain size.
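The per-session randomization can be sketched like this: one test drawn at random for each new TCP session. The exact test matrix below is illustrative — the talk mentions roughly 54 concurrent tests, and these sizes are assumptions based on the ones named in the presentation.

```python
import random

# Illustrative test matrix: fragmentation at initial-fragment sizes stepping
# by 8 octets around the 1280 point, plus HBH and Destination Options headers
# of various sizes.
TESTS = (
    [("frag", size) for size in range(1200, 1424, 8)]
    + [("hbh", size) for size in (8, 16, 32, 64, 128)]
    + [("dst", size) for size in (8, 16, 32, 64, 128)]
)

def pick_test(rng: random.Random):
    """Flip the coin once per new TCP session."""
    return rng.choice(TESTS)
```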
E
So simultaneously we were doing around 54 tests across those experiments, all at once, which meant a large amount of data was being collected. These ads are not normally testing servers — unless you're running Apple's Private Relay. I'm actually testing, statistically, mobile phones, but a whole bunch of other things as well: you know, what's behind a broadband network, possibly smart televisions, who bloody knows — I don't — but we're testing end devices.
E
There was an experience in using PadN which was disastrous, so we're using the 0x1E option type code, which is evidently reserved by IANA for this kind of experimentation, and we're doing basically a progressive size of, you know, 8, 16, 32, 64 for each destination header. And the fragmentation: not two-by-512 — we're actually moving around that 1280 point, doing it in eight-byte hops: initial fragments of 1200, 1208, 1216, up to 1416 octets of initial fragment size. So all that was being tested at once. Next slide.
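For reference, a Destination Options header padded out to a given size can be built as follows, per the RFC 8200 layout: a Next Header byte, a length byte in units of 8 octets (not counting the first 8), and a PadN option — type 1, length N−2, zero-filled — covering the rest. The experiment itself used the IANA experimental option type 0x1E rather than PadN; this sketch just shows the framing.

```python
def build_dest_opts(next_header: int, total_len: int) -> bytes:
    """Build an IPv6 Destination Options header of total_len octets,
    filled with a single zero-padded PadN option (RFC 8200 layout)."""
    if total_len % 8 or total_len < 8:
        raise ValueError("extension headers are multiples of 8 octets")
    opt_space = total_len - 2          # room left after NextHdr + HdrExtLen
    padn = bytes([1, opt_space - 2]) + bytes(opt_space - 2)  # type, len, zeros
    return bytes([next_header, total_len // 8 - 1]) + padn
```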
E
So this is the first result, and I've normalized the vertical axis across all these slides to give you a feel, but I must admit there are certain points where the eyebrows raise.
E
There is no known reason that I can think of why the drop rate is higher once you get to about 1360 octets in the initial fragment size, but it is; and there is no reason that I can think of why 1416 is higher than the others. No reason I can think of. This is everybody. Next slide. But the internet isn't everybody — the internet's bloody weird here.
E
Is this part of Europe? Now — okay, a little further west — yeah, somewhere — or east, sorry — somewhere that is Europe. If I look at Europe, oddly enough, the larger fragments actually have a lower drop rate than the smaller initial fragments, so over in Europe the number goes down slightly, not up. In North America it goes down but stays steady. Bizarre.
E
These were packets which in theory were going to make it — I'm not pushing beyond what was the initial MSS value — and in Asia you again see this peak at the big packets. India, which has a massive v6 rollout, with Reliance Jio the major contributor, shows that behavior where the larger initial fragments have a dramatically increased drop rate compared to the others, because all these vertical sizes are at the same rate for small fragments.
E
India is doing magnificently — much, much better than, you know, Europe, North America, South America — except for big initial fragments. And China: abandon all hope; but for the bigger packets, abandon even more hope than you ever had, and they wouldn't have had any hope anyway. So it's all just stuffed. So why does it vary like that? Next slide.
E
Is it the fact that there's a header, or the fact of fragmentation? So we included a test that says: here is a packet with a fragment header, but the fragment header says the entire packet's here — an atomic fragment. And again, the world average is about half of the fragment drop rate. Cool. Next slide. Whoa — United States, curious. You notice this really solid weekday/weekend packet drop rate in the US: during the weekdays, the fragmentation drop rate is slightly higher than at weekends.
E
Next slide. I have no bloody clue — you might, but I don't. Okay, so that's frags. Next slide. Let's move on to destination extension headers, the ones that are meant to pass through the network and only go to the end box. Now, most of you run a variant of Linux, no matter what — you know, Microsoft or whatever they used to do, Windows is basically dead, there are very few out there — and most of you use code that has this particular piece of code in it, and it says what it means.
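The kernel check being alluded to can be rendered, much simplified, like this: when the receiver walks the options TLVs, a padding option whose bytes are not all zero gets the packet discarded. This Python sketch is an illustration of the behavior described, not the actual Linux code.

```python
def walk_options(buf: bytes):
    """Walk the TLV options area of an HBH/Destination Options header,
    rejecting packets whose Pad1/PadN padding is not zero-filled."""
    opts, i = [], 0
    while i < len(buf):
        opt_type = buf[i]
        if opt_type == 0:                  # Pad1: a single zero octet
            i += 1
            continue
        length = buf[i + 1]
        data = buf[i + 2:i + 2 + length]
        if opt_type == 1 and any(data):    # PadN padding must be all-zero
            raise ValueError("non-zero PadN: drop packet")
        opts.append((opt_type, data))
        i += 2 + length
    return opts
```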
E
Now, we didn't read that. So for the first few months we were running this experiment, we were running with PadN of all ones, in sizes of 8, 16, 32, 64, 128. Well, we were getting bad results — we were getting bad results because we were padding with all ones, and once we'd made it through the network, the host just dropped it anyway. Then — I think it was after the last IETF meeting — someone kindly pointed out that piece of code. Thank you. We then found the following results, which are kind of interesting.
E
128 bytes is more evil than a smaller destination extension header, which tends to suggest something closer to the network than the host; and, interestingly, 64 bytes is more evil than the lower sizes. 8 and 16 are largely the same, and 32 bytes is slightly different. Averages get confusing — next slide — because in Greece it's really, really low.
E
We kind of assume that everyone runs the same vendor equipment, with much the same configuration, because you guys travel all around the world and offer all of your ISP clients the same config, and so we almost assume that the world is painted the same. And when you find these kinds of differences, I'm like: it's not that you're running a different version of Android in your handsets — no, the networks are being treated differently. And don't forget, this is an ad-based experiment: we're following where the eyeballs are. Next slide.
E
What you actually find is that landline-based systems, which are more reliant on the CPE to get the destination header through, seem to fare much worse than mobile systems. You also notice with Comcast that 128 bytes has near the same drop rate as everything else, whereas in T-Mobile it pushes up ever so slightly. So there is a difference between mobile and fixed. Next slide.
E
Okay, that's as much clue as I can offer. Let's move on to the last one — time is running out — hop-by-hop. Same problem: we were doing all-ones PadN, so everything was getting dropped. Oh, I will outsmart this: we'll go to 0x1E — everything gets dropped, absolutely zip change. That's almost 100% for every size. Next slide. Everybody everywhere drops everything where there's v6. The bits that are white — there's not enough v6 to test — but everywhere else it's bright red. Now, one of two reasons. One:
E
it's them. Two: it's me. Because if something is dropping one inch away from the server where we're emitting that packet, obviously the world looks red, because the drop happens, you know, just there. Next page — next slide. With one exception — the one that proves the packets are making it out of the server — because there's one provider in Egypt, Etisalat, that doesn't have a 100% drop rate. Does not have a 100% drop rate.
E
For some reason, the 16-byte destination header in Egypt is evil, but everything else is less evil — it's still 80% evil, but not a hundred percent. But what that does prove is the packets made it out of the data center — and in this case the data center is in Frankfurt — so they made it through Germany, got across the Mediterranean, yay, and died in Egypt.
E
T-Mobile's 464XLAT has a much lower drop rate than, say, Comcast, which is fixed, using — as far as I understand — DS-Lite, and someone can correct me if I'm wrong there. But it's a different kind of transition strategy, which appears to have a bearing on the way these v6 packets with extension headers are being treated. So: whatever ISP equipment you're using — less so, because in some ways the world runs the same code.
E
Even Huawei — the world runs the same code. What CPE you're using will be different, because CPE is crap in so many unusual and inventive ways, and everyone is crap differently to everyone else, because you guys are engineers, not creative artists. So when you're asked to be creative — write a new CPE — you get it wrong, in different ways, all the time, and that's what we're seeing: CPE is never consistent. What mobile platform you're using —
E
not what handset, but what they're using at the other end — makes a difference. And private relays and proxies — with Apple's move and Google's move in this area, increasingly sucking everyone into Google's or Apple's private relay — have a real big effect on these kinds of measurements through to the client, because you're now seeing the interaction between the front end that obscures the path from client to public interface, and then from public interface to where we are. Or there's a different reason — you know, invent your own. Next slide.
E
So what this means: if you think you can rely on EH getting through, as a code developer, you are wrong. It might work or it might not, and the context of where it's being used has a much greater bearing than the particular EH option you're actually using. Right, so "it can't be relied upon" is the real message, and that includes fragmentation.
E
G
[question off-microphone]
E
I am in the middle of a TCP session, right, and I've actually got the same client and same server, where I'm not going through the front end in any case — I've already set up an experiment context. This is not the first time that server has talked to that client in v6. The conversation that set the experiment up is not being measured; that separate TCP conversation is not being, if you will, diddled and deliberately fragmented. So I know that normal packets in v6 — including a TLS exchange with big packets — are actually making it through.
J
Hey Geoff, really a lot of great data — thanks for doing this. What I'm taking away is there's a lot of unexplained situations and a lot of unanswered questions. What do you see as the next steps for maybe pursuing some of those answers?
E
Well, we're getting to the point where the deployed mass of v6 has its own bleak implementation stasis, and the amount of crap CPE out there is now an irredeemable problem. The number of operators being driven by contractors and what's in the box — and "I have no idea what I did yesterday, let alone what I did two years ago" — says it ain't going to get fixed. We have to live with what we do. What does QUIC do?
E
No packet bigger than — I think it was 1340 at one point, and these days it's 1200. So QUIC is actually going: I don't care about fixing this; 1200 works. And I can really understand the pragmatism that's gone into that thinking. Instead of trying to fix, you know, the elephant, simply go through a path that gives you a much higher assurance of success, and forget about the rest of the problems. Thanks.
B
F
But yeah, one more question. So, very interesting data. From how many servers did you do your testing, and to how many clients? Sorry if I missed it and it was in the slides.
E
The number of servers is currently six points around the world — they're all Linode, so they're all running inside Akamai. Number of clients: the Google ad system is truly prolific. It tries like crazy to deliver 28 million new endpoints every single day. I don't retest the same people; I just test whoever Google throws at me, and Google are bloody good at giving me, at this point, just under 30 million new clients every single day.
F
Yes, hi again. I'm going to switch context for a bit, and now we're going to talk about the magical world of Differentiated Services code points.
F
So, a bit of context for this: I gave this presentation in TSVWG at IETF 113, and then it got picked up by Bob Hinden, who suggested that you lot might be very interested in these measurements. So here I am presenting it.
F
It's the old slide deck — it's okay, we can go through it — though it does have a bit of an animation glitch.
F
Fine, cool, right. So, some background about Differentiated Services code points. Well, they exist. They live in the IP header. A Differentiated Services code point is essentially a value defined in a six-bit-wide field; it can encode a value between 0 and 63, and this value is looked at by routers on the path, who can then provide quality of service within a DiffServ domain.
F
Based on that particular — sorry, I think you skipped one slide ahead — yeah, more context here, right. So routers on the path essentially look at this field, they can modify this field, and they can provide quality-of-service treatment according to this value. Now we have a DiffServ field, but what did we have before 1998?
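In concrete terms, the six-bit DSCP occupies the top bits of the old ToS octet (IPv4) or Traffic Class octet (IPv6), and the pre-1998 three-bit Precedence field was the top three bits of that same octet:

```python
def dscp(tos: int) -> int:
    """The DiffServ code point: top six bits of the ToS/Traffic Class octet."""
    return tos >> 2

def precedence(tos: int) -> int:
    """The pre-1998 IP Precedence field: top three bits of the same octet."""
    return tos >> 5
```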
F
What we used to have is a ToS byte, and in the ToS byte you had three bits — the first three bits, called the Precedence field — and that was used in order to do the same thing. This is important. Next slide, please. Well, what happened in 1998 is that RFC 2474 came along and said: right, guys, we're no longer doing IP Precedence, we're now doing DiffServ.
F
If you implement DiffServ, you will be looking at this six-bit-wide field, and you won't be looking at the previous three-bit field. And in order to provide backwards compatibility, it essentially defined some new DSCPs that are called Class Selector DSCPs, and they keep compatibility in the following way: essentially, it's all of the eight values that would have been encoded in the IP Precedence field — you take all of them and you stick three zeros at the end, and voilà.
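That construction — each three-bit precedence value with three zero bits appended — gives exactly the eight Class Selector code points:

```python
def class_selector(precedence: int) -> int:
    """Map an IP Precedence value (0-7) to its Class Selector DSCP
    by appending three zero bits (RFC 2474 backwards compatibility)."""
    if not 0 <= precedence <= 7:
        raise ValueError("precedence is a three-bit field")
    return precedence << 3
```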
F
They are arranged in an 8x8 grid, and I'm going to show you on the grid which values have been assigned. Next. So, I mentioned RFC 2474 defined all of these Class Selector code points, and they are assigned. Next. Then in 1999, one year later, you get all of the Assured Forwarding code points, which also include drop precedence, and they are assigned. Next. Then along comes EF, I think in 2001, and Voice-Admit ten years later, roughly in 2010; and then another ten years later you get the latest allocation, Lower Effort, which is code point 1. Next.
F
Basically, if you take the binary representation of the code points I've just shown in red on the slide deck, you will see that they end in one-one, and that's why they are on those two columns, and these are the reserved ones. So here's what the grid looks like. Next slide, please. Now, out of all of these allocated code points, which ones are used? That is what I'm about to tell you. And how do I know this? Well —
F
I know this because I've been doing measurements with Differentiated Services code points ever since 2015, and I've looked at many different data sets. Next, please. So, for example, by examining web server replies, you can see that they use the very popular code points AF11 and AF21; some of them use CS3 and even EF.
F
Next. By doing traceroutes within mobile networks, you normally see that the mobile networks like to remark all of the incoming code points to a single code point — normally it's best effort, but you also see AF11, AF12 and AF13 being popular choices that are used within mobile networks. Next, please. We've also looked at passive data traces collected by CAIDA — very large pcap files — and in there we saw that, for example, ICMP traffic often uses CS6, value 48.
F
This is all measurement data. I've put some slides in the appendix for you to have a look at, but I'm also linking to the conference paper and the journal paper that I published at the time with this. Next, please. So I'm going to say next, next again to avoid the bug, and one more — yeah, some of the colors disappear, right. The measurement data also shows a different problem.
F
I've mentioned the Precedence field, and, well, it turns out that there are still many routers in the internet that actually use that field, and this is mostly seen in the form of a pathology that we've called ToS precedence bleaching. What happens here is that routers on the path essentially take the DiffServ field and clear out only the first three most significant bits of it, and this is how you end up —
F
So if you take a code point, say 46, and you think of it in binary, and you slash the first three bits and make them zero, then what you end up with is code point 6. If you do the same for, say, a popular code point like AF11, then it ToS-precedence-bleaches down to code point 2. So this is how you end up with a lot of these very small code points, between zero and seven, in the core of the internet.
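The bleaching operation is just clearing the top three bits of the DSCP, which reproduces the examples in the talk:

```python
def tos_precedence_bleach(dscp: int) -> int:
    """Model a router that rewrites only the old Precedence bits to zero,
    i.e. keeps just the low three bits of the six-bit DSCP."""
    return dscp & 0b000111
```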
F
This is supported by a lot of data. We've done traces between mobile networks, traces in the core of the internet from many different vantage points, we've looked at packet-trace analysis — I think I have some slides to delve a bit deeper into these — but this shows up time and time again, in all the different kinds of networks we tested. Next, please. Right, so this problem of —
F
ToS precedence bleaching is actually a huge problem for DSCP code point assignments, because, if you think about it, when you want to assign a new code point — say you want to assign code point 17 — you wouldn't really want to do that, because if you apply ToS precedence bleaching to code point 17, then what you get is Lower Effort. So that could be a huge issue because of priority inversion.
F
But the problem is twofold, because it also means you can't really assign these small code points: because you have so many popular code points that bleach down to, say, code point 2, you end up with a lot of traffic aggregated onto code point 2. So you don't really want to assign that code point, because there's already so much traffic that carries it in the internet.
F
Next, please — and the bug — and next again, right. So, looking at these very small code points: you have zero, which is best effort; you have one, which is already allocated; two, as I've said, is kind of polluted, because all of these popular code points bleach down to it, so it can't really be assigned or used; then you have three, which is reserved. Next. Then you have code point 4, which has a slightly different problem, in that it is being set by SSH traffic everywhere.
F
I mean — no, not all SSH traffic, but the majority of it will use code point 4, also for historical reasons which precede DiffServ. Next. You have code point 5, which is up for grabs; code point 6 has the same problem as code point 2, because it's in the same column as EF, right; and then you have code point 7, which is again not for use. So the situation looks quite bleak, really, right.
F
Next, please. Now, I've mentioned the data that underpins this. Basically, we first started seeing this in 2015. When we do traceroutes within mobile networks, we see that most mobile networks like to remark, but we also see that, outbound, at the peering between the mobile network and the general internet, you see ToS precedence bleaching happening. We revalidated this from the core of the internet, where we did traceroutes to several web servers, and there you quite clearly see the different end-to-end traversal for the different code points.
F
Smaller code points will always traverse better because of that pathology, and we found that it happens on up to 20% of the paths we tested, and in quite a lot of the routers that we tested. Next, please. Right, then we see this again in data that I did not collect this time — this is data provided by CAIDA. CAIDA used to — they don't do it anymore — but they used to provide very large pcap files that have millions, no, actually billions of packets.
F
As you can see there, they made them available to researchers, and basically these are traces of traffic flowing at an internet exchange point.
F
Okay, I don't know if this is too small to read, but this is what we found for the traffic flowing at that particular internet exchange. You see all of the small code points are there. Next. So you see the traffic marked with DSCP 2 accounts for up to 19% of traffic in this particular data set, and this persists regardless of how you split the traffic, or what year the data was collected in, or even across IPv4 and IPv6.
F
You see this pathology, and DSCP 2 is so prevalent because, of course, it is what results from ToS bleaching of AF11, AF21 and all of these popular code points at the edge, right.
F
If you do a test from the edge, you can still see that ToS bleaching happens on up to 10% of paths, and this, of course, results in the different traversal rates for high code points versus the low, bottom-seven code points. Next slide, please. Right.
F
So, basically, the non-queue-building (NQB) traffic draft originally proposed two DSCPs, 45 and 5, essentially because of this problem that I described. But then it worked out that only 45 was allocated in the end, because people didn't want to risk making the DSCPs in that same column unusable. Yeah. So we don't really know where to go from here — next slide, please — because essentially we don't know that much about who does ToS precedence bleaching, because it might not just be old, misconfigured routers that do it by mistake.
F
It might just be operators that actually have policies like these. So I have a survey. If you are interested — if you are an operator and you use DSCPs in your network, and even if you don't, because this survey will ask you questions about DSCPs, extension headers, maybe even MTU —
F
please complete this, because I would be very interested to understand how people use DSCPs within their networks, and see where we go from there, because we might end up making a draft that makes actual recommendations on how to assign code points, rather than just the informational considerations draft that we have now. Right — and that concludes my presentation.
A
So I'm actually in the queue, and I happen to be first. So, I mean, a lot of network devices still have a very small, like, limited number of queues — like eight queues per interface.
A
But they have a number of queues they can buffer into — yep, yep — and a lot of operators obviously want to be able to prioritize stuff like routing updates, and so, sort of, if you're going to put those into some sort of — yes — buffer queue, you're going to have to use at least some of those, and I suspect that likely means that you're going to have a hard time always with this. And that's me, Tom.
C
Hi, Tom Hill from BT — typically, generically, I suppose. The problem is that if you're accepting any form of tag like this on a packet from outside of your domain — your AS number — and you're routing them, it's informing your routers how to deal with that traffic, and operationally, unless you agree with the third party over in another AS number, you really don't want someone to tell you how to do that without your knowledge. And it can sometimes have regulatory impact — net neutrality, for example. That isn't to say that everyone's doing this perfectly, in wiping everything off at the edges.
F
Yeah, but then why overwrite only the first three bits? I mean, I'm all for treating unknown DSCPs that you don't trust by remarking them all, say, to zero — bleach them; that's allowed by the RFCs, should I say. But why do ToS precedence bleaching in that case, when you can just do that?
C
It's probably a combination of a lack of knowledge and a lack of — well, nothing is broken if everything is working; no one has time to spend on this. Okay, but I find it very interesting that you've done the work and the research, and I'd like to keep my sight on this in the future. So thank you. Thank you.
K
[unclear] — just a couple of things. First of all, a lot of people get confused — you know, as I was calling out to Warren — about the difference between queuing and buffering, because most of the hardware does that. Similar to what Tom said, yeah: at the network boundary you tend to stamp on everything.
K
You know, I suspect that the reason why only so many bits are touched is because that's all that's programmed in the hardware, because in most cases, when you've got a hardware-based, you know, ASIC or forwarding chip or whatever, it only looks at those bits.
K
People don't configure that — most people just leave it in the default configuration — and the second you turn on any sort of QoS — I'm using a really old example, but like the Cisco 6500 that a lot of people used: the second you typed the command "mls qos" into it, it would just stomp on everything. So I suspect it's a combination of several of those things, which is why you're seeing the behavior you're observing. Super interesting, though.
L
[question off-microphone]
F
Okay, so — you can find the details. The short answer is no: no difference between TCP and UDP in this case, and you can find the breakdown of the measurements in the journal paper; that is the second one, the middle link.
E
— DNS over HTTPS/2, and then it's become DNS over HTTP/3, almost by collective action, by simply saying: well, it's HTTP, and it all gets swept in. So you may have seen it when you've looked in your platforms, like Android: at this point there is a part of the config screen that says "add a secure DNS resolver by name", and what that will actually do is create a TLS association, authenticate the name of that resolver, and then thereafter use DNS over TLS to query it.
E
The other place where you might see this is where you actually don't have a big ability to configure it yourself, and the best example I can find is in Firefox, where — I think it's about two, maybe three years ago — they decided to adopt what they called a Trusted Recursive Resolver program, where they would take queries made by a client's use of Firefox and, instead of passing them down to the platform libraries to do a conventional resolution, would actually do the resolution inside Firefox, going to one of what they called a small set of trusted recursive resolvers, and perform the whole transaction using DNS over HTTPS — so using DoH. Next slide. So there was some evidence that this was around and available, but, you know, the number of folk who twiddle with the knobs on their platform is probably fewer than the number of people in this room, because, you know, you're going to break something, and then you're going to brick your device, and then, you know, you're going to feel guilty or something. So in some ways, creating these facilities was never, ever going to change the needle.
E
It
was
always
going
to
be
tiny
unless
a
bit
like
Firefox
it
took
the
decision
out
of
your
hands.
It
just
did
it
where
you
weren't
consulted.
If
you
happen
to
be
in
the
US
and
you
used
Firefox
your
your
your
DNS
resolver
started
to
use
Doh
and
you
had
no
control
next
slide
so
hijacked.
Yes,
you
can
call
it
that
if
you
want
Jared,
so
the
issue
is
thank
you.
The
issue
is:
how
successful
are
these
measures
you
know?
After
all,
this
standardization?
Does
anyone
actually
use
it
now?
E
E
E
Next,
except
now
there
was
one
provider
cloudflare
who
was
desperate
to
use
1.1.1.1
and
one
rir
AP
Nick
that
had
it
and
and
so
we
we
came
to
a
deal.
We
would
let
them
use
1.1.1.1,
but
for
research
purposes
we
would
get
to
see
a
certain
amount
of
the
traffic.
The
query
traffic
that
hits
1.1.1.1
now
I,
don't
know
if
you're
using
1.1.1.1
that
it
was
you
I
have
no
idea.
I,
don't
know
who's
query,
but
I
do
see
the
protocol
you
use
to
make
that
query.
E
E
Back one — back two. I was just going to prove that it's four percent. Yeah, that's the one — how did that not get up there? I don't know. Yeah, right. Google, currently, of the resolvers that people use, operates at around a 16% market share: one in seven users believes what Google tells them, because that's the first recursive resolver that offers back an answer that they believe. So Google has a dominant share of the open recursive market area of the globe.
E
But Cloudflare is certainly number two, and currently its share is around just under four percent of all users worldwide. OpenDNS still has residual use, as does Quad9. But, you know, Cloudflare is not nothing — it's not as massive as Google, but it's certainly significant. Next. So now we bring back the next point I want to make: that Cloudflare is a trusted recursive resolver, and so this data is loaded with Firefox data.
E
The yellower or greener, the larger; the redder, the lower. Now, in some ways this might be a map of Firefox use — I don't know, I've never really looked hard — but, you know: America, so-so; Russia, a little more; Morocco — is it, somewhere over there in northern Africa — reasonable; and wow, Thailand, through the roof, I guess. Next slide.
E
So I think this is all about Firefox and trusted recursive resolvers, personally — hard to really tell for sure — but in some ways it's not users doing things, and not even ISPs doing things. It's just the browser going: I know better than you; let me go and take your DNS traffic and shunt it over HTTP and deliver it to Cloudflare. Yay. Next slide.
E
So, you know, if you actually have a look at the Firefox pages on DNS over HTTPS — you know, that's the bit in Firefox that says we're going to use DoH, not DoT, for this particular program. Next. So DoT, which is not used as much — where is DoT against Cloudflare used? Almost nowhere, except a little bit in Nepal, and — next slide — blow me down: Laos. I have no idea.
E
My suspicion is that this is something the ISP is doing, and nothing to do with, you know, what is happening in Laos. But I can't tell you that, because I don't know the IP addresses of the folk doing the queries, so I can't map them to network providers, and I can't tell you which network provider has turned this on, because that data is quite properly occluded from me.
E
I
don't
even
want
to
see
that
data,
but
what
I
can
say
is
that
what
looks
to
be
some
kind
of
ISP
thing
is
turning
on
Dot
in
Laos
to
cloudflare
more
than
in
other
countries,
and
the
amount
at
least
as
cloudflare
sees
it
in
Laos
of
DNS
over
UDP
is
actually
in
the
decline
as
a
result.
How
much
is
actually
do
well
a
lot
less
in
less
no
other
countries
like
this.
It's
just
Laos
next.
E
So
what
can
I
say
about
this
users?
Don't
twiddle
with
knobs?
They
really
really
don't,
and
maybe
that's
a
good
thing.
I
don't
know.
E
I
don't
run
help
desks
if
I
did
don't
ever
Twitter
with
the
knobs
would
be
my
advice
to
use
this
I
think
this
is
a
lot
to
do
with
Firefox
and
possibly
bits
of
chrome,
I
really
don't
know,
but
certainly
use
levels
are
growing
and
whether
it's
completely
Firefox
or
some
other
way
of
which
this
behavior
is
happening-
I,
don't
know,
but
maybe
other
apps
dot
is
really
low,
except
in
Laos.
E
G
You know, Geoff, I think the problem with DoT is it's much easier to filter by an ISP, and I know about at least one ISP which basically blocks DoT; but obviously it's much harder for them to block HTTPS. So maybe that's one of the reasons we've seen less DoT.
A
Awesome. Well, thank you, everyone — that brings us to the end of today's IEPG meeting. Don't forget, there will be another one of these next time we meet, so if people have anything they would like to present, please let me know. Also, another shout-out for the Technology Deep Dives, which is tomorrow, starting at 8 AM — which, yes, I realize is terrifyingly early for most people, but, well, maybe jet lag will help you wake up earlier. It was the only time that was available.