From YouTube: IETF108-ANRW-20200731-1300
Description
ANRW meeting session at IETF108
2020/07/31 1300
https://datatracker.ietf.org/meeting/108/proceedings/
A: Okay, hello, everybody. This is the fourth and last session of the Applied Networking Research Workshop 2020. I'm Mirja, and in the other video window you see my co-chair, Roland.
A: I will lead you through this session; that's Roland now waving at you. For those who haven't been in one of the three previous sessions, we will quickly go through some of the intro slides. So, next slide.
A: Roland is driving the slides. Just very quickly, a thank you to everybody who makes these things happen. Next slide.
A: This is mainly for you to find in the proceedings; the proceedings are in the IETF datatracker, and if you look at the slides you will also be able to click on the link to join the Slack channel. We do have a Slack channel for this workshop on the SIGCOMM workspace, because this is an ACM SIGCOMM-sponsored workshop, so feel free to use the chat there and also talk to the authors of the papers in the chat.
A: The full program and the papers are also on the workshop web page, with the full PDFs of the papers, and you have free access to the ACM Digital Library for those papers. As with all IETF sessions, this session will be recorded as well, and the recordings will be put on YouTube afterwards, so please be aware of that when you join the mic queue and turn on your video. Next slide.
A: Yeah, that's my cue. For the question-and-answer part of the session, after each talk we will use the mic queue tool provided by Meetecho. You can press the button you see in the picture, the second button from the right underneath your name, a microphone with a little hand sign, to join the queue, and then the chairs will enable your audio so you can ask a question.
A: Please use the queueing tool if you have any questions. There are more instructions about Meetecho on the IETF web page, so if you need further information, please go and have a look there. Next.
A: Yeah, and that's where we are. As I said, this is our fourth and last session; it's about monitoring and logging. It's two really nice papers, and I'm really excited to have these talks here. Since there are only two of them, each talk has 15 minutes, and we start with Robin Marx. Now I need to find the right window... there it is. Actually, I don't need to read this, because I know Robin quite well.
A: He has been to a couple of IETF meetings, at least remotely for some of them. He is at Hasselt University in Belgium, where he is a PhD student, and he's looking into HTTP as well as QUIC, in HTTP/3 and HTTP/2, basically, and he is now talking about, I think, one of his favorite topics: logging for these kinds of transports.
C: Welcome back. Apologies, folks, something went pear-shaped with the Meetecho system. We have 69 people back in the room; let's give everybody one or two minutes to join again, and of course we will make up for the time at the end of the session.
B: Welcome to this presentation on debugging QUIC and HTTP/3 with qlog and qvis. My name is Robin Marx, and I am a very large Lord of the Rings fan. I'm telling you this because recently I had an epiphany, and that is that the QUIC and HTTP/3 specs are really quite bulky. Even if you look at just the core six documents, their total page count is already well over that of Tolkien's book The Hobbit, and to make matters worse, while The Hobbit is generally considered a children's book...
B: This worked well for us, and we wanted to make it available for other implementations as well, but that's where we hit a bit of a roadblock, because we noticed that we were going to need a common input format to get data into our tools. We first tried this with packet captures, which works, but they miss a lot of internal implementation state: things like the congestion window, round-trip time measurements, or why exactly a packet was dropped.
B: These are all things you do want to know when debugging. They are also available inside what you could call the ad-hoc logs, so the command-line logs or the console logs. The problem with these, however, is that they are often unstructured, and they are also different across implementations.
B: We did write a few parsers for different ones, but we quickly found that that was no fun at all. So what we did instead is propose: how about everyone simply logs in the exact same format? And if we make this a machine-readable, structured format, then we can use it as input for our tools as well. This is what we called the qlog format, and what we proposed to use as the basis for it was JSON, because we think it's the best of both worlds.
B: Every event simply has a timestamp, a category, and some event metadata associated with it. We thought it was a good idea, but let me be honest: we didn't actually think anybody was going to implement this just to make use of our tools, and I must say I am so very happy I was proven wrong, because now, two years later, it turns out about two-thirds of all QUIC implementations actually output qlog, with several more having plans to support the format over time.
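As an aside for readers: the event layout described here (a timestamp, a category, and some event metadata) can be sketched in a few lines of Python. The field names below are illustrative of the idea, not the exact qlog schema.

```python
import json
import time

def qlog_event(category: str, event: str, data: dict) -> dict:
    """Build a single qlog-style event: timestamp, category, type, metadata."""
    return {
        "time": round(time.time() * 1000, 3),  # milliseconds
        "category": category,
        "event": event,
        "data": data,
    }

# A trace is then just a list of such events, wrapped in some header metadata.
trace = {
    "qlog_version": "draft",  # illustrative version label
    "events": [
        qlog_event("transport", "packet_sent", {"packet_size": 1252}),
        qlog_event("recovery", "metrics_updated", {"congestion_window": 12000}),
    ],
}

print(json.dumps(trace, indent=2))
```

Because it is plain JSON, any language's standard library can emit or parse it, which is much of the appeal Robin describes.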
B: Facebook has even deployed this into production, and they report logging several billion qlog events every day. Given this, we thought the moment was right to try and figure out why exactly this has turned out to be an unexpected success, so we performed an expert survey with, eventually, 28 participants, all of them either active QUIC implementers or academic QUIC researchers.
B: The first tool is our sequence diagram, which plots a connection trace on a vertical timeline, where each qlog event is one of the squares and each packet sent or received is one of the directional arrows, and you can click on them to get more information on what exactly was contained in these packets. The internal endpoint state is visualized on the sides. As you can see, this is already quite interesting, but one of the innovations that I think we added is the ability to load the same trace from a different vantage point.
B: This setup has helped us a lot to debug QUIC's complex encrypted handshake, which is quite easy to get wrong in an implementation and can easily lead to deadlocks if there are problems with the recovery and retransmission logic. Our colleagues from UCLouvain have used this concept as well in their implementation of the multipath QUIC extension, and they're also listed as co-authors on our paper.
B: They have extended qlog to add per-path information to the events, and they also implemented, for example, a custom sequence diagram tool which visualizes the individual paths as differently colored arrows. So, for example, the first path is the cyan combined with the red, and the second path is the green combined with the purple.
B: What we would expect the implementation to do is to fall back to the initial path to complete the file transfer, which clearly does not happen here. This is quite easy to see when compared to the second trace, in which we have the exact same situation, but here, after a while, we see that the implementation indeed falls back to the initial path to complete the file transfer. Simply using different colors for these arrows per path allowed them to very easily debug a lot of different multipath issues using this type of tool.
B: We also show flow control information here and, of course, because we're using qlog, we can also show the actual congestion window and bytes in flight, as well as, on the bottom here, the round-trip time measurements accompanying those. This also allows us to quickly observe some weird anomalies in this trace. For example, here at the beginning there was a period where we were sending nothing at all, even though we had plenty of congestion window allowance.
B: This is because of the flow control, here in pink, which only updated really late, causing a delay in the send rate. Indeed, this tool has been used extensively by others to implement and verify several new congestion controllers. For example, Cloudflare has a very nice blog post on how they added CUBIC and HyStart support to their quiche implementation.
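The stall described here (no bytes in flight despite plenty of congestion window, because the flow-control limit lagged) is the kind of condition that becomes easy to flag mechanically once the metrics are in a structured log. A minimal sketch, with illustrative field names rather than the exact qlog schema:

```python
def find_flow_control_stalls(samples: list[dict]) -> list[float]:
    """Return timestamps where the sender was idle despite cwnd headroom.

    Each sample is assumed to carry bytes_in_flight, congestion_window,
    the peer's flow-control limit, and total bytes sent at that instant
    (names are illustrative, not the exact qlog fields).
    """
    stalls = []
    for s in samples:
        cwnd_headroom = s["congestion_window"] - s["bytes_in_flight"]
        fc_headroom = s["flow_control_limit"] - s["bytes_sent_total"]
        # Idle sender with congestion window left, but blocked by flow control.
        if s["bytes_in_flight"] == 0 and cwnd_headroom > 0 and fc_headroom <= 0:
            stalls.append(s["time"])
    return stalls

samples = [
    {"time": 0.0, "bytes_in_flight": 0, "congestion_window": 12000,
     "flow_control_limit": 0, "bytes_sent_total": 0},
    {"time": 0.1, "bytes_in_flight": 6000, "congestion_window": 12000,
     "flow_control_limit": 60000, "bytes_sent_total": 6000},
]

print(find_flow_control_stalls(samples))  # only the t=0.0 sample is a stall
```

A visual tool like qvis surfaces the same condition at a glance, but the structured format also permits this kind of automated check at scale.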
B: Again, our colleagues from UCLouvain have taken this in a slightly different direction, splitting out these different congestion control metrics per path. Here, each column is a different path: on the left we have the initial handshake path, and then two more paths, each with different round-trip times and bandwidth, that are being used to transfer a single file. Splitting these out in this way allows them to find problems with their multipath logic quite efficiently.
B: The third tool that we have is the multiplexing diagram. This is needed because QUIC uses just a single connection, but it still wants to share bandwidth between different independent streams. What we do here is assign each stream an individual color and then plot the different stream frames, or, let's say, packets, on the timeline. If we get this kind of top-level visualization, it means the files have probably been sent sequentially, one after the other.
B: This is quite different from this next trace, where we see much more smudgy colors show up, or maybe a rainbow, and this is because here the server was using more of a round-robin multiplexing approach, switching between the streams for each packet.
B: However, it's weird that, if we zoom out here, we do see some areas in which this server seems to be using a sequential schedule as well, and this can be explained by looking at the bottom part here, because the black areas indicate that this data has been retransmitted after being declared lost. So we see that, for retransmitted data, apparently the server does switch to a more sequential scheme.
B: However, we do have some areas here at the start that are sequential but do not correspond to retransmissions, and this is exactly the kind of bug that you tend to find with this kind of visualization. This turned out to be due to weird interactions between internal buffering logic and flow control limits causing this strange behavior.
B: So, qlog defines a lot of different events and their fields, but most of them are actually optional. A good example is seen in the packet-drop event on the right there, where every field is actually optional, indicated by the question mark in the schema. This means it's very easy to leave out some of the events you don't want to log, or indeed to add new ones.
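The optional-field style mentioned here (a "?" marking each field in the schema) can be mimicked in Python with a `TypedDict` where every key is optional. The field names below are illustrative, not the exact qlog schema:

```python
from typing import TypedDict

class PacketDropped(TypedDict, total=False):
    # With total=False, every field is optional, mirroring the "?" markers
    # in the schema shown in the talk. Field names are illustrative.
    packet_type: str
    packet_size: int
    trigger: str  # e.g. a hint as to why the packet was dropped

# Both of these are valid events: loggers may omit any field they
# do not want (or are unable) to record.
minimal: PacketDropped = {}
detailed: PacketDropped = {
    "packet_type": "1RTT",
    "packet_size": 1252,
    "trigger": "key_unavailable",
}

print(len(minimal), len(detailed))
```

Tools consuming such events then have to treat every field as possibly absent, which is the flexibility/robustness trade-off the optional schema buys.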
B: They can do this without me, without having to wait for me to update qlog or even qvis, because the tools were made with this flexibility in mind, and they will simply show new events, new packet types and new frame types in the tools as they are, so people have been able to iterate on this really quickly. Another aspect of this, with JSON, is that it can be used for other use cases as well, or to make things easier. For example, Facebook doesn't log full qlog files; they log each event separately.
B: For example, if you want to know whether the spin bit implementation is working, we can simply look at whether there are spin-related events in the qlog. There's a simplified example on this slide from mvfst as well, where they test some congestion control state changes: in the tests they trigger an application-limited condition and then also check that it actually went away.
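A qlog-based test of that shape is essentially a filter plus an assertion over the event stream. A minimal sketch, with hypothetical event names rather than the exact qlog or mvfst schema:

```python
def events_of_type(trace: list[dict], event: str) -> list[dict]:
    """Return all events in a qlog-style trace matching a given event type."""
    return [e for e in trace if e.get("event") == event]

# Hypothetical trace fragment; names are illustrative only.
trace = [
    {"event": "congestion_state_updated", "data": {"new": "application_limited"}},
    {"event": "packet_sent", "data": {"packet_size": 1252}},
    {"event": "congestion_state_updated", "data": {"new": "congestion_avoidance"}},
]

# Test-style check: the app-limited state was entered and later left again.
states = [e["data"]["new"] for e in events_of_type(trace, "congestion_state_updated")]
assert "application_limited" in states
assert states[-1] != "application_limited"
print("state transitions:", states)
```

The point is that the test inspects the implementation's own structured log rather than poking at internal variables, so it keeps working across refactors.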
B: It also makes it easy to tinker with new protocols. People have been using this for DNS over QUIC as well, and we ourselves, as we've seen, have used this to do TCP, TLS and HTTP/2 debugging with the same tools, where we get most of our data from the pcaps and then use eBPF kernel probes, together with some HTTP/2 application logs, to get a full qlog with internal state as well.
B: So this is what people liked about the qlog format, or at least some of the things, but there are also some things they don't like, and the main pushback that we've always gotten is that people are afraid that it's too slow and too large because we use JSON. You might think: well, is that really a problem? Because if we're debugging our implementation locally, this shouldn't really matter in practice all that much, right?
B: But then it's time to consider the fuller QUIC deployment timeline, because debugging the implementation is only the first step, which we've kind of done by now. Now we're moving into actually deploying these implementations, where we need to debug that deployment first and, later, of course, fine-tune it for performance. For those cases, if you want to do analysis on that, then of course, yes, you will need to scale up to run with thousands or even millions of connections.
B: You could think: well, maybe for this use case we might instead just go back to those packet captures, because they contain most of the data. The problem with QUIC is that it's almost fully encrypted. With TCP, things like your sequence number or flow control limits are in the public TCP header; that is no longer the case with QUIC.
B: Another option that has been discussed would be to add a couple of bits to the public QUIC header to help with some things.
B: So the idea was: how about we use qlog for this use case as well? And then you can start to understand: yes, if we want to do that, then we indeed need to make sure it scales. Many people say that, in that case, we want to use a binary format instead of JSON, to keep the file size down and also to make it faster to, for example, serialize. I've always kind of pushed back on that, because I feel that binary formats are typically much less flexible.
B: They typically use an up-front defined schema, which doesn't easily allow you to add or remove events at will. Another counterargument is that Facebook has actually deployed qlog in this JSON format, and they're logging billions of events per day. The caveat there is that those are mostly server-side events. They've also said that it can indeed become quite difficult to upload client-side logs, because they can be quite large.
B: What they also indicated, though, is that they currently try to upload these without compressing them first, and this makes a big difference. To give you an idea of what we're talking about here, these are some numbers for a download of a 500-megabyte file. We can see that the pcap indeed is very large, but to be fair...
B: However, this is before compression. If we start adding compression, I think it's quite a different story, where the CBOR and the binary format actually end up very similar in size. Still, and this is what I like about the IETF and hate about it as well, people kept on giving pushback, really wanting that binary option for qlog, and so our current approach is to allow that too. What we're doing is decoupling the qlog events and the schema from the actual serialization format.
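The intuition behind this argument is that textual JSON is highly redundant (the same keys repeat in every event), so a general-purpose compressor recovers much of the size advantage a binary format would give. A sketch with a synthetic event stream, using illustrative field names:

```python
import json
import zlib

# Hypothetical stream of qlog-style events; field names are illustrative.
events = [
    {"time": i * 0.1,
     "event": "packet_sent",
     "data": {"packet_number": i, "packet_size": 1252}}
    for i in range(10_000)
]

raw = json.dumps(events).encode()
compressed = zlib.compress(raw, level=6)

print(f"raw JSON:   {len(raw):>9} bytes")
print(f"compressed: {len(compressed):>9} bytes "
      f"({len(compressed) / len(raw):.1%} of original)")
```

The exact ratio depends on the event mix, but repeated keys compress extremely well, which is why compressed JSON and a compressed binary encoding can land in the same ballpark.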
B: So the people that really need this for a production setup can indeed move to that kind of format, while still keeping the same qlog definitions as you use in JSON. This should be in the next qlog version, which is expected to drop either this week or next. Of course, this will need some more evaluation over time to see if it actually works, but we're hopeful it will resolve most of the issues people had. So, in conclusion, I think we can say that the tools have been a huge success.
B: They've turned out to be very useful, not just for us in research, but also for actual implementation debugging. The qlog format has potential, but we still need to tweak a couple of things there to make it usable in the wider deployment sense, though we have good hopes that this will work. There are still some future challenges, of course, that I'm hoping the IETF can help with. It's still unclear...
B: ...whether qlog can also solve the network operator use case, because operators typically won't have access to the qlogs directly. Maybe we can share some of those endpoint qlogs, but then you need infrastructure for that as well. Interestingly, there are other people that have independently come up with a very similar...
B: ...proposal, I think, which is being discussed in the next session, in IPPM after this, if you're interested. Another very important part of all that is that we should start having some privacy and security guidelines on how to anonymize qlogs. I'm happy to say that, for example, Christian Huitema has been contributing in that respect as well. Finally, I think this has shown quite a bit of potential, not just for QUIC and HTTP/3, but maybe also for other protocols, which is a discussion I'm very eager to have with you all.
A: Okay, thank you very much. My fan is just running crazy, so if you hear some background noise, I'm sorry for that. Other than that, I think we're ready for questions. We're a little bit behind time, unsurprisingly, but I think we can go five minutes over. So let's have some questions.
F: Oh, hi. Very nice work, thanks for doing this, and I'm glad you have such good uptake. It's also interesting that it seems to confirm the end-to-end argument, that you can do these things at the end systems, and it's important input for discussions on how to modify the protocol. As an operator, I should probably be arguing... oh, sorry, I see, no, sorry, it's the first time I'm using this.
F: I should probably be arguing in favor of things that allow us to do debugging from the midpoints, but, honestly, we sometimes do debug performance issues, and generally at some point you have to go to the endpoints, and it's super good to know that even with QUIC we'll have possibilities to do that. So it makes me very happy.
F: One question I had, and I'm sure it's mentioned in the paper: can we expect to backport these things to all the transport protocols that are implemented in kernels, and not just QUIC?
B: Yeah, can you hear me? Yes? Yeah, okay. So yeah, in the paper we indeed discuss that we have been making a proof of concept for that for TCP, using the eBPF kernel probes to get the congestion window and round-trip time estimates, things like that. It's not super far along yet, but the proof of concept seems to indicate it will work, so that's definitely something we're interested in.
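To make the TCP idea concrete: once a kernel probe (eBPF, or even `ss -ti` output) yields congestion window and smoothed-RTT samples, each sample can be rewritten as a qlog-style metrics event. This is only a sketch of that conversion step, with assumed units and illustrative field names, not the authors' actual proof of concept:

```python
def tcp_sample_to_qlog(ts_ms: float, snd_cwnd: int, srtt_us: int,
                       mss: int = 1448) -> dict:
    """Convert one kernel TCP metrics sample into a qlog-style event.

    Assumptions (not taken from the paper): the probe supplies the
    congestion window in segments and the smoothed RTT in microseconds,
    as the Linux kernel tends to expose them.
    """
    return {
        "time": ts_ms,
        "event": "metrics_updated",
        "data": {
            "congestion_window": snd_cwnd * mss,  # bytes, as qlog-style tools expect
            "smoothed_rtt": srtt_us / 1000.0,     # milliseconds
        },
    }

event = tcp_sample_to_qlog(12.5, snd_cwnd=10, srtt_us=25_000)
print(event)
```

With TCP internals mapped into the same event shape, the existing qvis congestion-window and RTT views can render them unchanged.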
A: Yeah, so thank you, Robin, and see you at some other IETF meetings, hopefully bringing back your work.
G: Oh, hang on. I feel like I'm a newbie, even though it's been a whole week. Thank you for the presentation, Robin. I know the questions, and I also definitely had them: this has been a super useful contribution to the community in general. I think it's shown its value, as Robin already pointed out, but I think it's important to note that this has been kind of fundamental, having been able to have access to traces and this sort of stuff through Robin's work.
A: Okay, I think now we actually have to move on, because we don't have time anyway. So thank you; we're going to the next talk. The talk is given by Ioana, and she's a postdoctoral fellow at the SimulaMet research center in Oslo.
A: Prior to that, she did her PhD at the University of Oslo on measuring the adoption of IPv6, so also a very interesting topic for this community. In general, she's focusing her research on measurements of reliability, resilience and security on the internet, and with that we will start our very last talk.
H: At the same time, we use two recent geolocation methods, IPmap and HLOC, that employ active measurements from vantage points to determine the location of IP addresses. RIPE's IPmap relies mainly on RTT measurements from RIPE Atlas probes. HLOC also uses the RIPE Atlas platform, but prior to this it extracts geo hints from the IPs' DNS names to select the set of probes that it is going to use to perform the measurements.
H: We noted that most of the IPs that are covered by the three geolocation datasets are also mapped to the same country. However, we find that some IPs have partial or complete disagreement between the country-level IP geo mappings, and we are further interested in understanding why these disagreements occur.
H: We further want to understand how the IP geolocation disagreements impact the country-level end-to-end path geo mappings. We find that about half of our collected IP paths have matching geo mappings; we mark these paths with purple on the plot. For the remaining paths, we find that the geo mapping disagreement is mostly due to IP-to-country mapping disagreements that appear along the path.
H: Due to our measurement setup, a high percentage of the paths start and end in Norway. In the case of IPv4 paths, we find no evidence of path tromboning in any of the considered end-to-end geo mappings. In the case of IPv6 paths, we find one such case when considering the MaxMind geo mappings of the end-to-end path. Using our looking-glass approach, we actually geolocate the IPs that cause this apparent tromboning in Norway.
H: So in this case the false positive is caused by an inaccurate IP-to-country mapping in the MaxMind dataset. To analyze path detours, we consider IP paths that start and end in the same region; for such paths, we expect the path geo mappings to indicate that the path remains in the same region.
H: In the delegation geo mappings, the path starts in China, traverses the US and then reaches Norway. Using the MaxMind dataset, we see that the path does not jump directly from the US to Norway, but first traverses France. The IP2Location geo mappings place the path in China, the US, Canada and then Norway.
A: Okay, thank you very much. This was our last talk. We are already two minutes over time, but given the problems we had at the beginning, we can still take two minutes for questions, and there was already somebody in the queue earlier. So if that person wants to rejoin... perfect, we go with the first question.
J: Okay, hi, my name is Carlos. I work for LACNIC, one of the regional internet registries. I wanted to make a point about the meaning of the delegation files, which is something that comes up again and again during IETF presentations, and I wanted to clarify the semantics of the country code field in the delegation files. The country code field doesn't mean geographic location; it means legal location. I mean, when you see US or UY or NL or whatever...
A: Thank you, that's actually good information. Maybe, since I'm also chairing that group, you can send an email to the MAPRG mailing list, which is a research group looking into measurements. As you're saying, this comes up over and over.
H: So, can I comment here? Sure, yes? Yeah, so thank you for clarifying this, but also in the paper...
H: I do acknowledge this. I acknowledge the fact that we expect the country code in the delegation files to bring some error, and we take this into account just to try to understand it, because we thought it would help us understand the root cause of the errors.
J: I will take Mirja's recommendation; I will send an email. One of the things that sometimes gets us in hot water is that people say the delegation files are wrong because the country code doesn't match, but the thing is, the country code does match: it matches the contract signed between the registry and the organization, because it matches some form of legal document. Yeah, thank you very much. Do I have to do something to... no.
A: Let me ask a quick question in the meantime, because I did some similar work on geolocation, trying to figure out the accuracy of geolocation at a more fine-grained scale. So, did you look at anything below country level?
H: The overall idea here was... we were basically after country-level geolocation, but this very simple idea that was sketched, I think it would work even lower, like at city level, probably. This is just a sketch of an idea that we have, and I'm currently working on improving it. So I think it would work, right? Okay.
A: So I don't see anything from Sanjay in the chat, and he's also not in the queue. I guess he still has audio problems, so maybe Sanjay can just go on Slack or send an email. Yeah, thank you for your presentation.
A: We kick you out of the video here, yeah. And that was the last talk of our workshop. Thank you, not only Ioana, but everybody who presented and submitted papers. And we have a bunch of other thanks, because this was an unplanned and unusual situation. We had a lot of help and support from the Secretariat and from Colin, of course, the IRTF chair, and a big part of the work was done by the program committee. We showed the names of all the members in the very first session.
A: So if you want to look them up and send us a thank-you email, you can do it right away. Again, thanks to the sponsors as well; even for a virtual meeting we need some support for the financial costs. Of course, we also have the ACM and SIGCOMM supporting us with the publications and also with money to some extent. And lastly, we want to also thank Meetecho, because, you know, even though we had this little crash today...
A: ...they did a great job to actually extend the functionality for us, to make the video run and these kinds of things, so they've been very busy the last couple of weeks, and especially this week, so big thanks there, because I think it worked out really well in the end. And I would also like to thank my co-chair, because it was actually a lot of fun, even though this wasn't planned this way. I think I enjoyed it quite a lot, and that's the end of it. Roland, you have the last word.
C: Yes, I would like to thank my co-chair here as well. It was great to work together and I would actually like to do it again. And, oh, we have Lars, who wants to say something, so shall we let Lars in? Yeah, we should go on then.
I: From the steering committee, I want to thank you guys for doing an excellent job as program committee chairs. This was really good, and now you're on the hook to find us two new chairs that will do an even better job next year. We plan on having another one of these, probably next summer, by default, unless something changes.
L: ...superpowers, I'm afraid. So I just want to echo what Lars has said. Thank you; this has been an excellent workshop, and you've done a great job as chairs. Thank you to all the speakers, and to everyone for submitting papers. There's some really nice work here, and I very much look forward to the workshop next year. Thank you, everybody.