From YouTube: IETF100-QUIC-20171114-1330
Description
QUIC meeting session at IETF100
2017/11/14 1330
https://datatracker.ietf.org/meeting/100/proceedings/
A
This is the Note Well statement, whether this is the first time you're seeing it or you've seen it a hundred different weeks in a hundred different cities in the world. It's important because it sets out the intellectual property terms under which we participate here, so please do make sure you're familiar with it. You can find it by typing "IETF Note Well" into your favorite internet search engine. Just as important is our participation here: we expect it to be professional and productive. If you have any problem with something that you feel is harassment, or something like that, please feel free to talk to your chairs, Lars or myself, or, if that's not comfortable for you, we have a team of special people you can talk to: the on-spud team? Sorry, the Ombuds team. I make no promises about pronouncing that word.
The blue sheets I'll send around in just a second. We have a scribe: thank you very much, Brian Trammell, you are a king amongst men. Jabber relays: can anyone volunteer to relay what's happening in the room to Jabber and vice versa? Anyone? I see Patrick McManus thinking about it. Anyone, please? Thank you very much. Agenda bash: I'll do upcoming meetings in a second. On our agenda today, we're going to have a hackathon report from EKR.
Martin is going to talk about QUIC invariants. We're going to talk about a message that we had on the mailing list a little while ago about the working group's status and scope and our schedule. Then Martin will come back to give us an editors' update, and then, finally, we'll have a light and frothy discussion about the spin bit and the evaluation design team, from Ted.
This is our homepage; I'll make it bigger. Can you see that in the back? Maybe. Okay. We have an upcoming interim; it's in Melbourne, Australia in late January. You register online from the arrangements page, here we go. Oh yeah, I can't see it down there, thank you. We have, I think, around 20 to 25 people registered right now. I wanted to bring this up because we're closing registration early because of the holidays: it closes on the 8th of December.
So if you haven't decided, decide soon if you're coming. I'll also mention, as we have before, that hotels around there may be getting more expensive because it's during a major event. So if you haven't booked a hotel, you should get to that as soon as you possibly can. And just to give some people some visibility:
We do anticipate meeting in London at the next IETF, IETF 101, and we've been starting to talk about an interim meeting after that. Right now we're penciling in probably late May or early June, in Europe somewhere. That's still TBD, but that's our current thinking about the meeting schedule. Any questions? No one is coming up for a question. Okay, everyone?
B
Yeah, so a pile of us got together, some physically, some virtually, to try to interop with draft-07. In classic form we already have a draft-08, which has changed quite a bit, but we did -07. This diagram shows what sort of worked and what didn't: things that are in green are good, things that are not in green are not as good, and this H/D/C code tells you kind of how much actually works. The good news is a lot of green.
The bad news is that H, D, and C don't actually cover much of the specification. But still, it's a fair amount of progress, and I think we're showing again our rhythm of moving, working with each other, and getting that stuff working. The thing I would say is, what these letters mean specifically:
It's kind of obvious; I mean, it's obvious if you handshake successfully, but even that's not complete. And, you know, is it one data channel or ten data channels? Big files or small files? No one has the answer; there are questions. Wow.
And, you know, what does "close" mean? So I think, as we're going into Melbourne, I would suggest it would be good, first, to try to recruit some volunteers who'd be willing to actually write down, in detail, what these things are, so that we can actually know what success and failure mean. The second gap is a sort of lack of clear continuous integration and continuous testing, and also, you know, interop testing. So there's a lot of this:
"Can you make your server work so I can test against it? Where's your endpoint? Oh, your server seems to be down. I sent you a client initial and nobody answers; is that because you're down, or is it eating my client initial?" So we're slowly evolving. A few of us had a conversation over this meeting about trying to put together an actual framework that people could fit into, and, you know, test their own stuff and other people's stuff on. Maybe they can run it on their own machines.
Maybe they could run it in Docker, maybe run it in the cloud; it's not quite clear. Anybody who's interested in that effort and would like to volunteer some effort, please do. I imagine otherwise what will happen is we'll have something not very good put together and you'll kind of be stuck with it, or not. So anybody who'd like to spend some time on that, it would be appreciated.
As I said, why green here: this is pretty good news. Yeah, everybody's really got the handshake nailed, and most have the data thing working, so that's actually pretty good.
A
A lot of green, which is nice. Thank you. Okay, let's move on, then. Martin is going to talk for a little while about this concept we have been calling invariants, and I won't describe what it is, because I don't want to steal any of his slides from him. If I can get it onto the screen.
C
Yeah, I assume you all know who I am. Next, right. So we've made an attempt, in the draft at the moment, to document the things that are QUIC. And I say QUIC in the generic sense, not QUIC in the sense of QUIC version 1, that one we're defining that has HTTP running on it; I mean QUIC in the sense of what is, as much as possible, immutable over time.
We do really poorly at those sorts of things generally, and we find that they need a great deal of attention and effort in order to actually work over longer periods of time. But the goal here is to work out the things that won't change in future versions, the things that we're going to commit to. And so part of the point of this presentation is to simply see if the room agrees, particularly on the guiding principles that I've outlined here. And this is pretty simple. It is:
C
Can you negotiate the version of QUIC that you want to talk with the other side, and, in defense of that, can we make sure that middleboxes don't interfere with that? To the extent that middleboxes need to do their jobs, the primary function that we've identified for them is routing the packets to the right place. As long as the middlebox can continue to route the packets to the right place, we are happy. If we do not provide something that's invariant, that will mean that the middlebox will need to know about the specific version of QUIC that we're talking. We don't want that to happen, because that would mean we wouldn't be able to deploy a new version of QUIC, given that the endpoints and the middleboxes don't necessarily have alignment in terms of upgrade schedules and things like that.
Currently, the idea is that anything that doesn't contribute towards these goals is subject to change. We could decide that we do want to randomize the initial packet number, or, sorry, in a given version we could decide to do that, and we could all start using that version, and therefore you would never see a packet number of 0 or 1 at the start of a connection. We could decide to do any number of things, as long as they fit within this framework. So:
C
There is a hope that certain people will put out reasonably sized experiments, at reasonable scale, with the versions of QUIC that we're going to be defining prior to publication of RFCs, and if they're going to do that, they need to have at least some degree of certainty about these things. Does anyone want to comment on these things? I think this is probably a good point to pause and say: have I missed something major? Do I have this wrong? And Brian is going to say that I've got everything wrong. No?
E
Ian Swett, Google. Yeah, this seems excellent, and thank you for writing this down. There's also a little bit behind it: particularly, in reality, things are going to ossify-ish on the handshake packets in some way, and I don't know if there's anything we can do to prevent that. But the short-form packets, I think we can do something with; the long-form packets, I think we're going to end up with some issues.
C
So one of the things I wanted to do was get to the point where we agree on the principle, and then, based on that, we can do some things to defend against the sorts of ossification that we have. You had some ideas you floated around; those are reasonable ideas, and we can work within this framework on that, I hope, fingers crossed. Yeah.
C
B or C? So multipath is an interesting one. We haven't really talked a whole lot about what multipath might do to the wire image, but my expectation, at least, is that multipath is part of B to the extent that it needs to be part of B, and whether that means that what we need to do is talk about multipath in order to resolve this question, I'm not certain.
D
Brian Trammell. I think that Lars's clarification is not a clarification; I think it's a modification. I think that routing of QUIC packets independent of version means routing wherever the packets go, whether that's at a data center boundary or at some random point in the network. I think trying to say we only care about data center boundaries is an excellent way to break things, and, you know, the closer you get to the edge of the network, obviously, the more problematic this becomes.
C
I don't think we need to necessarily rathole on this particular choice of word. In fact, that was all I wanted. All I wanted to say with this is that we need to make sure the packets, when they're sent, end up in the same place. Well, maybe not even the same place: the intended destination, without any sort of possibility of ambiguity or version-dependent behavior. Now, there may be version-dependent behavior that's added on top of it, but we wouldn't be able to rely on that. That's all. Hi.
M
Colin Perkins. So what you just said I absolutely agree with, yeah. I think Lars's clarification did not clarify, because if we are only considering routing to servers that happen to be, you know, in a data center or somewhere, then I think we have a problem. Yes, I think there is something here we need to either rule in scope or explicitly rule out of scope, but we should have the discussion about that. Yeah.
E
I mean, I think that would have to be true. The things you're talking about here are all things that would not be part of the crypto anyway. So, you know, that might make it easier to figure out: if it's covered by the crypto, then it's not an invariant, so don't even think about it, actually.
C
No, absolutely not! The point is that it should be possible for someone to mark a packet, using the bits that are version-independent, in a way that is consistent across versions, such that it ends up in the intended destination. So if you're talking about using connection IDs, that's absolutely part of this, and the connection ID is actually part of the invariants that are on the next slide. Can someone take me to that, so that we can start talking about concrete things and not in the abstract? So.
P
I'm going to respond to that very briefly, quickly. Sure. The point here is: if you use connection IDs for version one, then you can continue using connection IDs for versions two, three, four, five, six. It's not to say that you can't use IP addresses and port numbers; it's not to say that you must use them. It's independent of that. That's what version independence is, right.
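The version-independent routing property being described in this exchange can be sketched as follows. This is purely illustrative: the fixed offset and the 8-byte connection ID length are made-up assumptions for the sketch, not the actual QUIC header layout, and `route` is a hypothetical load-balancer function.

```python
# Illustrative sketch only: a load balancer that routes on a connection ID
# found at a fixed, version-independent position. The 8-byte ID at offset 1
# is a made-up layout for illustration, not the real QUIC header format.
CID_OFFSET, CID_LEN = 1, 8

def route(packet: bytes, servers: list) -> int:
    """Pick a backend index from the connection ID, never parsing the version."""
    cid = packet[CID_OFFSET:CID_OFFSET + CID_LEN]
    return hash(cid) % len(servers)  # same CID -> same backend, for any version
```

The point of the sketch is that two packets carrying the same connection ID route identically even if every other byte, including whatever encodes the version, differs.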
K
Yet the connection ID, I don't think, is required by either A or B, yet you have it, as I saw on your next slide. So I'm wondering whether another guiding principle, at the risk of opening up this rathole, is: what is it that we're going to allow, say, the NAT boxes or the middleboxes to see as an invariant and guarantee will never change? And connection ID would fall in that.
B
Okay, yeah, I'm not sure it is. You're looking at me funny, Mark; I'm just looking. I'm not sure it isn't just angels on the head of a pin. So there's, right, as you say, routing in the data center, and there's routing by people who you do not have a relationship with. For the purpose of routing in the data center, the connection ID can move around between versions, because you only serve certain versions.
You know which versions you serve, and you know how to find a connection ID by looking at the version number, for purposes of routing it. For a middlebox, like a NAT box on the client's network, that is not the case. So I think it probably does need to be invariant. It may be inconvenient to do a conditional lookup, but it's not impossible. But the thing that is important to recognize is:
there are things which are required for cooperating people to be able to interact with each other, and things which are required for non-cooperating people to talk to each other. And so, I mean, A is only about cooperating people, and B, apparently, you think is about non-cooperating people as well.
B
I mean, it's always within scope in these discussions, when someone proposes version 3, to say "that would be incredibly inconvenient for me, please don't do it." But that's different from version negotiation, which is basically a practical matter: once we agree on a version and deployment grows, it gets to be much more of a pain for everybody to change the version, right? You're going to be in a crossed-fingers position like that, right?
A
So, just to give folks some context: we asked Martin to give this presentation because we think it's important to start the conversation about documenting our invariants. We're not going to finish that conversation today, and it's great to hear some engagement on it, but let's get through Martin's slides and figure out what our next steps are as a group, rather than trying to nail down exactly the principles and exactly what the invariants are today. Yeah.
C
I'm gonna try to knock these out a little bit quicker now. Okay, next. This is what we currently have in the document: we have a section that says these are the things that won't change between versions, and I believe this to be a fairly accurate translation of the principles that were described on the previous slide.
E
There are fairly simple techniques you can use, so I definitely think, with things like greasing, we can make sure the other code points are still usable. But, to Brian's point, I think we're gonna be stuck with one through five unless someone comes up with something stupidly clever that I can't think of, and I'm not even sure it's worth trying to fix that problem anyway. But it's not a big deal.
B
I mean, I guess I'm actually surprised to hear people think that this is difficult. We just make the field all 0 and we stuff the actual value inside the encryption envelope, like the proposal that was created a couple of weeks ago. So unless... no, that's not a terrible idea.
P
There's a pretty big gap between those two things, and I think it's important to recognize that this is simply saying: across QUIC versions, we will not change these things. There might be more things that we decide not to change in the future, because they're inconvenient to any number of people, and that's fine, right? This is the minimum set. So.
M
Multiplexing QUIC and RTP is not, to me, the right way of looking at this. I think the problem that I have been trying to raise with this is: how do we run QUIC in a peer-to-peer manner? That means we then need to multiplex STUN, and if we can also multiplex RTP and things like that, that's a bonus. But the key thing is STUN, so we can run peer-to-peer. Okay.
C
Right, and so STUN may be easy, as was pointed out. But yes, that is a much better formulation than the one I had here. The question that was raised was multiplexing, and I think that's a much better way of approaching it, which is saying: if we want to use QUIC for things like peer-to-peer, which has a number of different existing solutions in place, how do we make sure that QUIC fits in with the things that already exist there without causing too much disruption for those?
C
This is not really talking about using QUIC for RTP, putting RTP on the inside, as has been explored in a draft. This is really talking about the work that Bernard did, and Peter, I think it was, in AVTCORE. And one of the fundamental problems that we have here is that the multiplexing schemes that are in use today rely on having unique values for the first octet. QUIC assumes that it has use of all of the values of the first octet, and so we have something of a tension there.
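The first-octet multiplexing scheme being referred to is the one catalogued for the WebRTC protocol mix, where each protocol claims a range of first-byte values. A minimal sketch of that classifier, for illustration (the ranges here are the ones documented in RFC 7983; QUIC has no reserved range, which is exactly the tension being described):

```python
# Sketch of first-octet demultiplexing as used in the peer-to-peer stack:
# each protocol is recognized by the range its first byte falls into
# (ranges per RFC 7983). QUIC packets can start with any value, so they
# do not fit cleanly into any one bucket.

def classify_first_octet(b: int) -> str:
    """Classify a packet by the value of its first byte."""
    if 0 <= b <= 3:
        return "STUN"
    if 16 <= b <= 19:
        return "ZRTP"
    if 20 <= b <= 63:
        return "DTLS"
    if 64 <= b <= 79:
        return "TURN channel"
    if 128 <= b <= 191:
        return "RTP/RTCP"
    return "unknown"  # QUIC can land anywhere, including the ranges above
```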
Next. So, option number one: we make the observation that all of these protocols use crypto in one form or another; all these packets are authenticated in some way. If a packet fails authentication for protocol one, then try it with protocol two; if it fails authentication with protocol two, try it with protocol three. This is possible; it shows that it is possible to multiplex authenticated protocols of any form, using any format, using any values in the first octet. It sucks. Next.
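A rough sketch of this trial-authentication idea, with placeholder HMAC-based verifiers standing in for each protocol's real packet authentication (the verifier construction and protocol names are assumptions for illustration, not anyone's actual design):

```python
# Hypothetical sketch of "option one": attempt each protocol's packet
# authentication in turn and keep the first one that verifies. Real QUIC,
# SRTP, etc. use AEAD, not this toy HMAC trailer; the shape of the loop
# is the point.
import hmac
import hashlib

def make_verifier(key: bytes):
    """Build a toy verifier: packet = body || 32-byte HMAC-SHA256 tag."""
    def verify(packet: bytes) -> bool:
        if len(packet) <= 32:
            return False
        body, tag = packet[:-32], packet[-32:]
        expected = hmac.new(key, body, hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)
    return verify

def demux_by_trial_auth(packet: bytes, verifiers: dict):
    """Return the name of the first protocol whose check passes, else None."""
    for name, verify in verifiers.items():
        if verify(packet):
            return name
    return None  # authenticates under none of them: drop it
```

As the speaker says, this works regardless of what the first octet contains, at the cost of running every protocol's crypto on every ambiguous packet.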
The draft that Bernard wrote actually suggests that we change our invariants and commit to setting the first two bits of the first octet to one, and that would conveniently place QUIC in the bucket that's currently unoccupied, the final quarter of the space. Then we could very easily identify QUIC as distinct from SRTP and TURN channels and DTLS and these other protocols that people run altogether. Apparently people have some questions about that.
If we do that, we commit to that now: we make it an invariant, and we make a promise that we're not going to use any of the other values of that first octet. And so this is why I think we need to discuss this, because this was one of the options that that draft put forward, and I think it would be responsible of us to actually engage with this particular suggestion and make a decision about it.
M
All right. Colin Perkins. So that's one option, yes. The other options are to pick some subset of these code points we wish to avoid. We could decide that we only wish to avoid collisions with STUN, for example, or with RTP, or with whichever subset of these we feel like. So it doesn't necessarily have to be all of it, right?
K
What often worries me with that is that we have eight bits in the first byte, and I think we are already also talking about the spin bit, and there are other uses for bits in this byte. So unless we had a 32-bit field, I don't think a byte is enough for the future in order to allow for each of these flags. So one option is to extend the first byte to two bytes; maybe that would be better. But otherwise I see a problem with allocating two bits from those. And agreed.
C
This is one of the major drawbacks of this: we don't have that many bits, and we have been talking about using them in various ways. EKR's suggestion is actually an interesting one, because it means that we don't have that problem. But we've been talking about using these bits in various ways, and giving away a very large portion of this space means that we have a lot less flexibility in how we approach it. That's the downside of this sort of approach. Spencer?
C
We had this discussion on the AVTCORE list, I think, recently. So, the multiplexing scheme that's been used in peer-to-peer is generously a happy accident more than anything else. It just so happened that the protocols tended not to collide, and some of the later ones that came along were actually engineered to fit within that scheme, specifically the ones I talked about: TURN channels, and RTCP made some specific design decisions to keep on using this thing. But, ultimately, originally this was a happy accident.
And I don't think that it represents a good architectural basis for these sorts of things, to simply say, well, we continue to fit into increasingly narrow spaces within this first octet. I don't think that's a sustainable approach, particularly since we're already finding that the space is running down pretty hard.
Yeah, there are some tiny little slices left, but it basically makes the rest of the space unusable, right? It's a huge slab, and we don't like it because there's not enough space for us, and I'm sure that anyone using this scheme would not like it either, because there are just not that many slots left.
So the other thing is, in the peer-to-peer case, you can spend a byte: take just one value, and then burn that byte, scrub it, throw it away, and then the rest of the packet is QUIC. That's another option, fairly similar to the last one. It uses less of the space and gives us control over the entire octet; the people in peer-to-peer land spend one extra octet per QUIC packet. Next slide, please!
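A minimal sketch of this "spend one octet" option. The shim value 0x40 below is an arbitrary placeholder chosen for illustration, not a code point anyone has proposed:

```python
# Sketch of the shim-byte option: peer-to-peer users prefix every QUIC
# packet with one fixed octet taken from unused multiplexing space, then
# strip it before handing the remainder to QUIC. SHIM = 0x40 is a
# hypothetical value for illustration only.
from typing import Optional

SHIM = 0x40  # hypothetical code point meaning "a QUIC packet follows"

def add_shim(quic_packet: bytes) -> bytes:
    """Prefix the fixed shim octet so the demuxer can recognize QUIC."""
    return bytes([SHIM]) + quic_packet

def strip_shim(wire_packet: bytes) -> Optional[bytes]:
    """Return the QUIC packet if the shim matches, else None (not QUIC)."""
    if wire_packet and wire_packet[0] == SHIM:
        return wire_packet[1:]
    return None
```

The trade-off described in the talk is visible here: QUIC keeps full control of its own first octet, at the cost of one extra byte per packet for peer-to-peer deployments.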
This is a more convoluted one that was in the draft, and this is another option. So the observation here is that long packets are only really used during the handshake, and during the handshake you're not going to be doing SRTP, necessarily, depending on what you have. Well, a number of observations: QUIC provides the same sorts of services as DTLS; it has a TLS handshake; it can be used for generating keys for SRTP. So you could define something like QUIC-SRTP rather than DTLS-SRTP.
With QUIC-SRTP, the need for DTLS is greatly diminished, and you end up with something like this, where you say that long packets are only used during the handshake. The long packets are the ones that are in that latter half; they never appear at the same time as RTP, so you can distinguish them temporally, as opposed to by number. And the short packets collide with STUN.
But you don't see that many STUN packets, and they're pretty easy to identify based on the magic cookie that's in the front of them, or the message-integrity thing, or the fingerprint thing. There are like four different ways that you can identify a STUN packet as being uniquely STUN, right? And it's only the first four values, right? We could avoid those first four, if it came down to that, as well: we just add five to our short header types, and problem solved.
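The cheapest of those STUN checks, the leading zero bits plus the magic cookie, can be sketched like this (the framing follows RFC 5389; this is an illustration of one of the "four different ways", not the multiplexing rule the group adopted):

```python
# Sketch of identifying a STUN packet: per RFC 5389, the top two bits of
# the first byte are 00 and bytes 4..8 carry the fixed magic cookie
# 0x2112A442. The other identification methods mentioned (MESSAGE-INTEGRITY,
# FINGERPRINT) require deeper parsing and are not shown here.
STUN_MAGIC_COOKIE = b"\x21\x12\xa4\x42"

def looks_like_stun(packet: bytes) -> bool:
    """Cheap check: leading 00 bits and the RFC 5389 magic cookie."""
    return (
        len(packet) >= 8
        and packet[0] & 0xC0 == 0            # top two bits of a STUN message are 00
        and packet[4:8] == STUN_MAGIC_COOKIE  # fixed cookie at a fixed offset
    )
```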
C
Problem solved, maybe. The other thing: maybe you could do something that's substantially similar to the sort of QUIC that we're defining today. But a new version of QUIC can do just about anything, and we're not making any commitments about the type fields in the first octet, so you can define type fields that are friendly to the peer-to-peer traffic that you actually want to exchange. And that's the last one I have. I think there might be another slide, but I don't know.
C
Before we get back into the discussion on the multiplexing thing: the point of having the multiplexing discussion is to make the people who want to do this satisfied that they have enough options available to them, and that the invariants that we have haven't backed them into a corner. Or we conclude that our invariants aren't necessarily generous enough, and we need to make some additional promises so that those use cases are properly handled.
E
I was gonna say, I think this is a great example of our invariants. From a personal perspective, I am happy enough saying we can do four for now, and do five if we need to. But I think the example of five is an excellent example of why we can totally change everything with a version, as long as it fits within the invariants, and it certainly gives us enough freedom to solve problems, including, but not limited to, this.
If, for some reason, we came up with some foo protocol in the future that just couldn't change its allocations, and we needed to interoperate with it, I think the version bump gives us a heck of a lot of flexibility. So I think this is an excellent example, and, yeah, let's do something of the nature of four and five, from a personal perspective. Yeah.
P
So I think we have that thing. This is great; thank you for putting this together, and I think these are the right invariants. I want to talk about the multiplexing option very briefly, though. I'm going back to what Colin pointed out earlier, in the context of something else, which is that you do not, as it is, have an exhaustive list of all the different conflicts we can have, even with the list that you have generated there.
That is not, in practice, the set of conflicts that we will see on the wire. There are protocols that we haven't listed here, for example RTMFP. There are plenty of others that we will conflict with on byte one, and we're going to have to deal with it when that happens, because the list is not going to be exhaustive, and because you're going to look at every protocol and go:
P
It may be a more scalable approach to look at STUN, or anything else, and go: is this something we want in version one, or is this something that we want to make sure we avoid in version one? In this case, for instance, we can perhaps agree that STUN is something we want to interoperate with, and RTP something we will not interoperate with.
If you want to share space with RTP, then define something, and that something is either RTP inside of a shim, or QUIC inside of some variation, or whatever it is; we have a shim, and we define it later. Because I think what we need to do right now is simply define the ones that we know we want to coexist with. We can't exhaustively coexist with everything else. Yeah.
C
So I think the reason this is particularly important is that we have people actively looking at this particular set of protocols and its use with QUIC, and I think that we'd be doing them a disservice by simply saying, sort it out for yourselves. So I want to make sure that we deal with this properly, but...
P
I don't intend to talk about it at length, but my opinion would be that we work on making invariant the things that we know we need to interoperate with on byte one, and the ones which we are not sure about, we leave to the future. So.
M
Yeah, there's always going to be something that conflicts, and I very much think that the essential thing here is to enable peer-to-peer use of QUIC, because I think we're gonna want to do that. And since I really don't want to reinvent STUN, I think that means we have to avoid the STUN packets and specify very clearly: this is how you multiplex QUIC and STUN, and if you're building a middlebox that looks at QUIC, make sure you understand these STUN packet types as well and let them through. I have a solution for that.
U
Okay, Magnus Westerlund. So I want to explain the TURN channels a bit, because, in standardized usage, it's enough to run everything from one single local base, including TURN: you go from the base down to the TURN server and set up the channels, and then you have everything else going directly towards the peers, for the different options, through the other address pair candidates. That's why, with ICE and TURN, you could actually run the TURN tunneling, so to say, of the protocol across another source port from your local host.
S
It was just that you could, essentially, if you were able to move the version field, or connection ID, or something else, combine that with the shim approach. Essentially, you know, if you had the space to guarantee that an octet would have a given value. And the version field looks like the best candidate for that, because you presumably don't require more than 16 bits of versions, you know, if you had major and minor. But if you can't move it, it's pointless.
K
Roni Even. My major concern, like I said before, is with having just one byte, and how do we use it. So let's not take too many options from this byte to use as flags, because we already use three bits for the type, and I don't know how much we want to keep there in order to allow for that.
So that's one limitation. And then, if we use three or four bits, if you use four, for example, we still have just four bits left, and there might be other usages that would like something in the first byte in order to work, or something in that area. So either that, or we have to have a shim, which is okay, because basically it's one more byte that you can use. So I'm also okay with the solution about using just one value for STUN, and again, we need to verify that.
C
I was going to point out here that it seems likely at this point that we will produce a standalone document that describes what QUIC is in the invariant sense. It will be section 5.8 and not a lot more, at this point, so I imagine a very short document. The idea being that we can then publish that document and commit to it, and anyone who wants to can then, for instance, describe the wire image of QUIC.
A
Thank you, Martin. I just want to make sure people understand the context here: we're saying we want to write down the things that don't change between versions of QUIC, the things that are truly invariant. That will drive discussions like what we grease, and what's encrypted and not encrypted, as well as discussions about what we expose to middleboxes and what guarantees they have that are version-independent. That's kind of the very high-order bit here, and it's going to drive part of the next discussion we have, about the scope of our work as well. So, thank you for that; I think that does it. Well, is it controversial that we're taking this approach? Does anybody have concerns about this overall idea of documenting invariants? We had a little bit of a side discussion about one particular example that is driven by the invariants, but I don't want to lose the high-order concern here. And Spencer's going to the mic.
T
Spencer Dawkins, this time, I think, as responsible AD. I wanted to thank Martin for doing this work, and I know it's not easy. I just had the observation that if you document the invariants in their own document, then you can open other documents for revision without having to have the argument about whether you're going to change the invariants too. It's easier to keep a document closed than a section, it seems to me. Yeah.
B
I think it's a good idea to document the invariants. I think that the exact invariants the table suggests may not be right, but the principles from which the invariants derive, and then the invariants themselves, need to still be kind of open issues. So I guess I'm hoping that this document will start with the principles, and we can debate them, and then we can debate what derives from those principles. Okay, great.
A
So we're going to talk about how to get such a document started, and I would expect that by the next interim we'll have something meatier to discuss, where we can really dig into the exact invariants and the principles behind them. So look forward to that, I guess. And that leads into another discussion; I very much wanted to have that discussion before we had this one.
A
That's a pretty aggressive schedule, but it's pretty clear we're not going to meet it right now. So, as chairs, Lars and I have been talking, and we've been talking to the editors and other folks in the working group, about how we can make sure that we complete this work successfully, and the mechanism that we were proposing, that we adopt to do that.
A
It's a couple of things. One is that for version one of QUIC, we focus only on our first use case, which is HTTP; and the second is that we adjust our milestones and we focus on meeting those milestones. Patrick McManus very rightfully pointed out that the way we expressed that was not ideal. Thank you for doing that, Patrick; a lot of eyes looked at that before yours, and no one saw what you saw, but you're absolutely correct. This is, you know...
A
That doesn't mean people can't work on things in parallel, but as a working group, our focus will be on shipping that v1 for the foreseeable future. I think the proposal is that November of 2018 is when we're going to try and put a pin in it and ship v1. This was discussed on the list, and I think, with Patrick's repositioning...
A
Q
A
H
H
To
enable,
for
example,
Pizza
pair
it's
it's,
the
invariance
and
HTTP
is
version
1
and,
and
the
question
is
whether
the
demultiplexing
was
something
like
stun
is
part
of
that,
and
my
personal
feeling
after
this
discussion
in
the
list
is
that
everybody,
so
nobody
has
said
as
a
stupid
idea.
We
shouldn't
do
it
right,
so
it
seems
like
this
is
something
implicitly
seems
to
be
consensus
for,
but
but
it
if
we
do
it,
it
needs
to
be
part
of
me
one
because
it
will
be
part
of
the
invariants.
Yes,.
A
And the distinction is that it's not that we can ignore the rest of the world and only focus on HTTP, and perhaps I misspoke there. It's that we're not going to fully flesh out every capability of QUIC in v1; it's that we're going to make sure that we cover what we need to cover in the invariants, so that in the future we have a good chance of being able to do whatever we need to do without changing those invariants.
T
The work that's going on with people experimenting with running their protocols over QUIC can go on in parallel and provide input on whether or not those are the correct invariants as you go. So I think if you drop the "for HTTP only" and say "using these invariants and delivering HTTP", I'm totally on board. But if you say "for HTTP only", I believe that will be a stick to rule things out that the invariants would have kept alive. So I hope that's considered a friendly amendment, yeah.
H
Thank you for that. So, one thing we didn't explicitly say, although I think it might have been in the email: the reason why we're sort of saying that we want to focus the work around QUIC for HTTP for version...
H
...one is that the editors and the chairs believe that, well, we've had discussions that took quite a bit of time, especially at the interims, that moved us further and further from that goal, and we have had to push back six months, which has caused some grumbling by implementers; and going much beyond that...
H
...except that, you know, we cannot fully, architecturally, do multipath and other things while at the same time we're still trying to do v1, right. So the idea is that we prioritize HTTP, because it's the most well-understood use case for QUIC at the moment, and we make sure, as soon as we can, that we're greasing the hell out of everything else, so that further extensions become easy and remain possible.
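The "greasing" idea mentioned here can be sketched in a few lines: alongside the values you really support, advertise randomly chosen reserved values, so middleboxes can never ossify on the exact set in use. This is only an illustrative sketch; the 0x?a?a?a?a pattern follows QUIC's reserved-version convention, but the function names and the shuffling policy are assumptions, not anything specified in the meeting.

```python
import random

def grease_version():
    """Pick a reserved version of the form 0x?a?a?a?a (each byte ends in 0xa)."""
    n = random.getrandbits(16)  # four random high nibbles
    return (0x0A0A0A0A |
            ((n & 0xF000) << 16) | ((n & 0x0F00) << 12) |
            ((n & 0x00F0) << 8) | ((n & 0x000F) << 4))

def offered_versions(real_versions):
    """Advertise the real versions plus one grease value, in random order."""
    versions = list(real_versions) + [grease_version()]
    random.shuffle(versions)
    return versions

print([hex(v) for v in offered_versions([0x00000001])])
```

A peer that tolerates the unknown value simply ignores it; a middlebox that hard-codes "the" version list breaks immediately, which is the point of greasing.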
A
Sorry, just one thing on that. I think it's important to keep in mind that the invariants are intentionally quite... sorry, not broad, they're quite limited. So much of what's happening is inside the crypto and is outside of the invariants. So there are many things we can do without violating the invariants, and I think it's important to appreciate that.
K
I listened to what Lars just said, and I think that the issue, I don't think, is the invariants. The first issue is that this doesn't address the needs of HTTP over QUIC; but there's nothing quantified, there's no document that says what the needs of HTTP over QUIC are. There are assumptions about what the needs of HTTP over QUIC are. So I think, together with the first bullet...
K
...the one that says "which are realistically achievable in this time frame": that is the point, because a lot of the things that are being discussed are also relevant to HTTP over QUIC; multipath, for instance, has some relevance to that. You can say it's not realistic, okay; but to just say it doesn't address the needs, unless...
K
No, the trick is why it's not realistic, not that it's not required. I mean, again: is there a document that says what is required for HTTP over QUIC? I didn't see such a document, and nothing says what the requirement is, what you want to do there. So I think that we, at least I think that we, have to see what is realistic to have in version one, and work from that.
E
Yes, but that is an interesting point about multipath. For example, we've decided that doing HTTP header compression of some sort is a requirement of some sort for HTTP over QUIC; but, I mean, we could do HTTP/1.1 over QUIC and, like, ship that, and that would be very easy, and it would totally work. So, I mean, I think that's an excellent example.
E
I don't know if we need to write down what the requirements are, but I do think that there is an implicit set of requirements that the group, or some people in the group, have, and at some point we're going to have to decide where the boundary is there. But, I don't know, you have a point; I think there's an implicit assumption.
E
This is awesome. In terms of exact timeframe, from my editor-hat perspective, I feel like if we don't have something that we think is pretty darn implementable by London, then we're in bad shape, like really bad shape. Because, I don't know about everyone else, but it's going to take a nonzero amount of time to finish implementing whatever people haven't implemented in QUIC, and then it's going to take, even if you have a pretty strong infrastructure, probably another four weeks to ramp that up to be global.
E
Maybe it's two if you're Facebook and you're super, super fast. And actually getting a large-scale deployment, and then fixing all the bugs that you find, and so on and so forth: we need to give people like five or six months, I would say. So, I don't know if everyone else agrees with this, but yeah, I say London, and that should be a really hard date; and at that point, if we can't get something done, we should start cutting things pretty brutally.
X
Patrick McManus. So, you know, I think the original sort of problem may be a one-time-only problem here. The charter was written, and it said a couple of almost implicitly conflicting things: it gave a fairly short timeline to submit the original document, and it also said, you know, everything in the document we're adopting does not have consensus and has to be revalidated. And I just think those things were incompatible.
X
The good news is, you know, as we move along, there's less and less in the document that doesn't have consensus, right; we're working through those issues, and I think that's sort of what we've seen going on. I think that's good news for us going forward, if we just work at a slightly quicker cadence. I don't think we're going in circles, which would really, really concern me. I just think this is a matter of the original layout being more aspirational than realistic.
P
Once work is started on an implementation, you can't pause and wait for things to change, so it's important to allow implementers to go forward and finish implementing and deploying it as well. I think Ian's exactly right that we need to allow a while for folks to finish implementing and deploying; I think four weeks is very optimistic. People need months to deploy and try it and see what happens. There's another bit of urgency here that I want to raise, and this is the ossification that's happening already.
P
This is why I think the invariants discussion is very timely and important, and it's important to have it now. If we not only can agree on the invariants, but can also move on the invariants quickly and agree on them as quickly as we possibly can, meaning not just that we agree on the fact that these are invariants, but that we also agree on where those invariants appear in the packet, this will allow us to implement and deploy these things faster and prevent ossification of anything.
H
I mean, they need to be done at least by the time we want to ship v1, but they don't have to wait for that, and they shouldn't. So I think the invariants are probably the most important thing that we can spend time on after this meeting, or during this meeting. Praveen. Praveen.
P
So I had two comments here. Overall, I agree with this direction. One of the things I wanted to say was that HTTP over QUIC is very general, so it might be useful to, you know, maybe define an API, a v1 API, which would limit the amount of functionality we would ship in v1; it would be pretty useful to define what that is. The other thing I would say is, instead of, you know...
P
Yes, we are saying December 2018, but it would be nice to work backwards from that and have actual milestones and deadlines on the mailing list. For example, invariants: we lock them down by, you know, date X, and if people have use cases that they want to try out, and want to give feedback to expand the set of invariants, then they should do that by a certain date; and if you're after that, then, you know, hard luck. So I would like us to put in more deadlines, versus just December 2018.
D
Brian Trammell. Just going to throw a little bit of oil on that fire, not very much. I agree very much with Ian's point that we need to get moving. I would say that if we go into Melbourne and it's not just cleaning up a little bit, then we have a problem, right. If we have a discussion that starts like this one in Melbourne, then we have a problem.
D
I think that we need to, with respect to really emphasizing these invariants... I also agree that sort of the shape of where we want them to be right now is probably correct. I'm not sure how to document the split between the invariants of the contract that we'd like to make and the contract that is going to be imposed on us, right: the externalities. I think those externalities need to be documented somewhere; I don't think that we need to document them in the QUIC documents.
D
We need to know where they are up front, but we don't want to basically put them in the document and say we already concede this point, and I'm not really sure how to do that. Maybe one of them goes in the invariants document and one goes into the manageability document; I'm not sure where to put that split, but that's something I think we need to pay attention to as we drive forward. I agree.
B
So I think descoping is good. I mean, yes, you can add more manpower, descope, maybe both; and moving to more rational dates is good, since obviously March is now beyond science fiction and fantasy. I guess the question, what I want to say, is: you can write whatever milestones you want, but what makes those dates work is people actually working on things. And so, when I hear people get up and say we should be working faster...
B
...what they're really saying is, "I intend to work harder." And, you know, I work pretty hard; like, maybe I can work harder. But the problem is not that the milestones aren't aggressive enough; the problem is that people are not working fast enough.
B
So what you should be saying is, "I intend to work harder, I intend to compromise more"; I mean, "I intend to do things that make things move faster," right. I'm not pointing at anybody here; I'm sure everybody's busy, with life and babies and all that, for some people, maybe all. But, you know, own up to something difficult.
A
I share your skepticism: just because we set the date doesn't mean we're going to meet it. And it makes me wonder if we need to reconsider our schedule for meetings. Holding more interim meetings is something we've talked about in the past, and we decided that it was too much of a commitment for a lot of people. Do we need to revisit that decision if we want to keep an aggressive schedule?
B
I'd think about it more, but I guess my initial reaction would be: I don't think more interim meetings is going to help. The problem isn't the interims; they work, no worse than in any other working group. I think what will help, perhaps, is, and I hasten to point out this is not the way the IETF plays the game, but, you know, when there is some subset of people who are the major cohort having trouble coming to consensus on things...
B
...those people should be meeting themselves and trying to resolve them, and I've seen that work in other places. Obviously that wouldn't be vetted in the room; the working group still decides all those things. But if what happens is that, you know, Martin Thomson and I can't agree on something, and we wait until Melbourne to fight it out, and everybody else has to sit through watching us fight it out, that's not helpful.
B
I think we're already seeing that; we're just all struggling to get the work done at all. The people in the hackathons were very willing to come to some provisional compromise on something just to get it to work at all, and then, after that was done, people were like, "wow, you know, that's bad," or whatever, and that helped make progress too.
I
Mike Bishop, Akamai. So, on the question of more interims in person, I will say: please, God, no. However, I will observe that there's this wonderful thing called the Internet, and a lot of our interop testing and hackathon work we could do virtually, by way of Slack, or by scheduling a conference call or a WebEx or whatever. And if we need a more frequent meeting of a part of the working group, or the entire working group, or an interop test: go for it. Maybe...
H
...what's happened between the last IETF and this one is that people have managed to be more active on a Slack channel in between, right. People basically have public servers up, and other people say, "hey, I sent your server a ClientHello; can you tell me what happened on your side?" on the Slack channel. So that actually works; but having a formal sort of joint time where everybody's on Slack would, I think, be very... well, I was going to say something else. Okay.
A
So I think we've heard from a bunch of folks on that, both here and on the list. Lars and I will go and chat with Spencer, and we'll figure out how best to reflect what we understand the path forward is going to be. I'm not sure what form that will take yet, but we'll figure it out. We're going to do a slight agenda modification: we had the editors' update up next from Martin, but we're going to shift that till tomorrow, so that Ted has time to talk about the spin bit evaluation design team. Ted.
T
Okay, I can see that there are at least eight people who recognize the Konami code. Thank you. Spin bit round-trip-time evaluation team report, take one. Threat model: geolocation. As you remember from the discussion at our last interim meeting, one of the potential threat models that we discussed was that somebody would be able to take this round-trip-time information and use it to determine the geolocation of one of the endpoints.
T
The design team considered that pretty carefully, and it has consensus that the use of the spin bit is not a significant new source of geolocation information. The reasons are up here, but they boil down to this one: there's a lot of this information already out there, and therefore this is probably something commercially available to you, if you really cared, comparable to the data that you get from...
T
So the question to the design team was: well, what about things that aren't just min RTT, where what you're trying to do is see the pattern of round-trip times throughout the course of a flow? And DKG and Christian Huitema pointed out during the interim meeting that there's actually a problem here, and that is: whenever you don't have consistent exchanges between the peers, that is, whenever there's sparse traffic, you actually get the wrong data, because you get the application latency instead of the round-trip time.
T
So here's an example, in beautiful ASCII art translated into non-ASCII art, where you have a QUIC initiator talking to a QUIC responder. It sends across a piece of work; you get an acknowledgment of the piece of work. Theory states that at this point the QUIC initiator has, mentally, changed the spin bit from up to down; but because we don't ACK ACKs, there's nothing on the wire to show you that that mental change has occurred. Instead...
T
...the next thing you see is, in fact, another packet from the QUIC responder to the QUIC initiator, when the work is complete. That gets acknowledged, and that's when your spin bit changes; which means all the time between the request acknowledgement and the work being complete is not visible on the wire as having taken place, and lands in the spin-bit period. Spencer, I'm going to learn from the mistakes of the people who came in front of me and ask to defer all questions to the end; I'm going with that.
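The sparse-traffic problem described above can be sketched in a few lines: the spin bit only flips when a packet actually flows, so an on-path observer timing the interval between spin edges measures application "think time" plus the round trip, not the round trip alone. This is an illustrative toy, with hypothetical names and made-up timestamps, not code from the meeting or from any spin bit implementation.

```python
def simulate(events):
    """events: (time_ms, direction, spin) packets as seen by an observer.
    Return the edge-to-edge intervals the observer would report as RTT."""
    edges, last_spin, last_edge = [], None, None
    for t, _direction, spin in events:
        if spin != last_spin:            # spin value flipped: an "edge"
            if last_edge is not None:
                edges.append(t - last_edge)
            last_edge, last_spin = t, spin
    return edges

# Dense traffic, true RTT 50 ms: edges land ~50 ms apart.
dense = [(0, ">", 0), (25, "<", 0), (50, ">", 1), (75, "<", 1), (100, ">", 0)]
print(simulate(dense))    # -> [50, 50]

# Sparse traffic: the responder acknowledges at t=25 but only sends the
# "work complete" packet at t=450, still carrying the old spin value;
# the initiator's reply at t=475 finally flips the bit on the wire.
sparse = [(0, ">", 0), (25, "<", 0), (450, "<", 0), (475, ">", 1)]
print(simulate(sparse))   # -> [475], application latency included
```

The second trace shows exactly the failure mode from the ASCII-art example: the observer sees one 475 ms "round trip" where the path RTT was really about 50 ms.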
T
Okay, next slide, please. So there are a bunch of sparse-traffic examples. Logging, for example: if you ran syslog over this, it would work like this. Some kinds of compute-intensive applications. DNS queries to a caching resolver. Unidirectional streams, in the case of, say, RTP being sent over this (I realize that's not v1): you would actually be measuring your ACK coalescence, in sort of our TCP-like ways. And all of these would then show up with the previous pattern that we just discussed.
T
The logging example itself may in fact be insensitive to round-trip time, because of the way it works; but in a lot of these other cases, the application response time can dominate the RTT. Note, by the way, that this is all about using this to measure round-trip-time latency, and all of these applications remain sensitive to loss, even if they're not latency-sensitive. Next slide.
T
Additional processing: I will note that there are people on the design team who didn't really think that this was additional, because they're measurement guys, and from their perspective this is the cost of doing business; and you can see nods from other people who believe it's the cost of doing business. But I point out that it has to occur. You can reject samples that appear to have these characteristics pretty easily.
T
You can just throw them out on simple heuristics: "that didn't look like a tight exchange, so I'm just not going to count this", if what you're trying to do is figure out something like the mean round-trip time on a particular path and you're not actually looking at specific flows. So there are definitely cases where a very, very light amount of work can get you the additional information. You can also, instead of analyzing the flow characteristics, characterize...
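The "simple heuristics" mentioned here might look like the following hedged sketch: keep a spin-edge interval only when the period carried enough packets in both directions to plausibly be a tight exchange, and drop sudden outliers that look like application think time. The thresholds, field names, and smoothing factor are illustrative assumptions, not anything the design team specified.

```python
def filter_rtt_samples(samples, min_pkts_each_way=3, max_jump=3.0):
    """samples: list of (interval_ms, pkts_fwd, pkts_rev) per spin period.
    Keep intervals from busy periods; reject sparse periods and sudden
    jumps far above the running baseline."""
    kept, baseline = [], None
    for interval, fwd, rev in samples:
        if fwd < min_pkts_each_way or rev < min_pkts_each_way:
            continue                       # sparse period: likely app latency
        if baseline is not None and interval > max_jump * baseline:
            continue                       # sudden outlier: reject the sample
        # exponentially weighted baseline of accepted samples
        baseline = interval if baseline is None else 0.9 * baseline + 0.1 * interval
        kept.append(interval)
    return kept

samples = [(52, 5, 5), (49, 6, 4), (460, 1, 1), (51, 5, 5), (300, 6, 6)]
print(filter_rtt_samples(samples))   # -> [52, 49, 51]
```

The 460 ms sample is dropped for being sparse, and the 300 ms sample for jumping far above the baseline, which matches the point that a light amount of work is enough to clean up the signal when you only want path-level statistics.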
T
...that is, in fact, straight-out leaking that information, and nobody on the design team really proposed doing that in any specific way, but I throw it out there just because it's an obvious thing that could happen. Next slide, please. So: is this worth it? Next slide, please. Maybe. The design team did not come to consensus on this point. We would have loved to have come to consensus, but after a bit of back and forth, I cannot claim, as its reader of consensus, that I found one.
T
We did seem to believe that the geolocation threat was negligible, and no other threats were identified; but clearly this does provide some information to an observer, friendly or adversarial. We don't really know yet what the adversary would want it for, but they're clever, and I have to restrict myself from characterizing them further. But it is clear that network managers would want rough estimation of the contribution of a leg of a path to the latency, implicit measurement of congestion, and potentially some passive troubleshooting.
T
Next steps: we recommend closing the design team. It didn't come to consensus on this, and if you hand us other questions, we kind of suspect the same thing might happen. We think that the right thing at this point is to take the question to the working group; full working-group discussion of this is more appropriate than more discussion in the design team. We also believe that there was some output from the hackathon; I'll summarize a little bit of that. That might help us carry this forward: if there is other data that can come to the working group, so that it's not just based on this, that would be wonderful. And we also probably should consider up-leveling the discussion, to reconsider methods for delay and loss detection that aren't this. Last slide: this is the data that the group kind of used in this discussion.
T
Please do review it before we start the working group discussion, so that we don't have to go back over it; and if you have other links you want to share with the group, those are obviously going to be great; we'd appreciate that. I will note that we asked the design team if we could make the discussion open to the working group post facto, and just share the discussion, but there was unfortunately an objection; and since I did not make openness an early condition, I unfortunately will not be able to share it.
D
Yeah, it's back together, right? Yeah, okay; hear me out here. So I'm going to start with this one, on slide eight. We have a student who has implemented the latency spin bit in a fork of minq. It does what it says on the tin: when you have full demand, like you have full traffic in at least one direction, you get a very, very, very good estimate of what the sender's estimate of the RTT would be, right; like, comparing the internal RTT estimate to what the spin bit signals.
D
It works great. With respect to the issue that you identified with the ASCII art (you can go back to that if you want to): this is actually a relatively well-known problem in passive TCP measurement. There was actually, and I forgot to bring this to the attention of the DT, a paper in ACM CCR from about a year ago...
D
Yes, and I also disagree with this; that "maybe" is incorrect. I think it's worth it. Can we go to five? Yeah. So I think the concern about this leading to traffic analysis is a little bit overblown, because, again, this is a very well-known problem in passive round-trip-time measurement, and everybody uses extremely simple heuristics, and nobody says, "oh well, I don't know what the RTT is..."
D
"...therefore I have to actually figure out what type of traffic this is." I think that saying that giving the path a latency signal that is very easily, indeed must be, rejected with very simple heuristics is a slippery slope toward other, nastier things: I just think that's total BS. Yeah, okay; since I'm only up here to disagree, I will let other people speak. Thanks.
Y
May I have it... So I'm not sure if it's a valid point to say: we have a couple of use cases, which are maybe even the important use cases we look at at the moment, and in those use cases it doesn't work, and so we don't need it. Because this is a general-purpose transport protocol, there are use cases where it will work; and if you know that your traffic characteristics will not lead to useful values, you can...
Y
But we will only figure out that we need this when the deployment of QUIC is so high that we can't do any measurements on competing TCP traffic anymore; and then it's so widely deployed that we don't have it, and then it might be too late, and then it might be really hard to get it in. I think it might be much easier to just put the bit in; and if nobody uses it and nobody needs it, we can take it out in a later version.
K
I'm glad to hear Martin and Brian talk about the disagreement about that, because I was surprised reading it, because, I mean, this is similar to TCP. Basically you have the same thing: you want to know the RTT in the middle, between the upstream and the downstream, and it's done in TCP with all the issues you mentioned: the ACK can be delayed on one side, the messages may not be continuous, the ACK will come later; it's really the same.
K
The only difference here is that, instead of the ACK, you get the spin bit that tells you that this is basically the return message. So today it's working and it's being used; and again, I think it's important that it should be implemented as a mandatory feature in the first release, in order to be able to analyze what's happening there.
T
I think if we do take the next step seriously and up-level the discussion, then other mechanisms would come into play; and it's definitely true that one of the issues that many people on the design team were concerned about is capturing loss on the path, and this is very poor for that. Okay, so, okay.
W
Daniel Kahn Gillmor. So I think one of the things about the discussion that was presented here (thank you for summarizing it): it doesn't mention that people may be forced by the networks, if this exposure is present, to basically emulate whatever the standard expected pattern is. Certainly, if we're talking about shipping v1 of QUIC as HTTP...
W
...only, there are certain traffic flows that will be expected; and if the networks started measuring things on the basis of a spin bit like this, that will be what QUIC looks like. Which means, if you want to do anything other than HTTP in the future, then you may be forced, by some networks that have decided "this is the way that we measure traffic", to emulate HTTP, or the traffic patterns of HTTP. And I think that's a mistake, if we put ourselves in that situation. So...
O
We've been discussing this issue basically since the beginning of this working group; we had this discussion in Tokyo, in Paris, in Prague, and we always had the question come up: what do we really gain by that? There are people who claim that this will give us enormous benefits on the network, that they need this to make it work; but I haven't seen any numbers. I asked for the numbers a couple of times, other people have asked for those numbers, but I've never seen anything. So those people who really want it...
L
Thomas, Nokia. To the question whether this is available: I think it is, although it's a bit weak as a signal; we would like to have more, from a measurement perspective, but, you know, a weak measurement is better than nothing. Then, to the privacy problem: having read Brian's draft document a couple of times, I'm quite convinced that it is not a problem. If you work with the CDN and the redirect routing and get the MaxMind database, you have very, very precise information about where the endpoint is; so, effectively, this wouldn't provide anything additional.
L
The signal doesn't change anything in the state machines on the two endpoints, but, you know, if I can convince the endpoints to cooperate, this is good; as I said, it is better than nothing. And to the falsification problem, which I think was being mentioned: I don't see this, because it's used in a different way; it's used passively. So if you can clarify this better...
T
I'd like to pick up on something, just to reinforce it for the group: the way this measurement works when you're using TCP is that you're taking information which is required for the state machines on the endpoints and examining it as it passes through the network in order to get this data. Falsifying the data going through the network requires falsifying the data you're sending to the state machine of the other endpoint. Because this is data which is consumed only by the path, altering this data at the endpoints...
P
Jana Iyengar, Google. I was actually hoping that we would have some strong sense of... I want to echo what Martin said earlier about the measurements. It's not so much that we need to show numbers; exact, controlled numbers are difficult to produce until you actually have this deployed, or until you have it in use.
P
Understood. Okay, thank you. So the one thing I want to point out, and I don't know if it's coming out in the report (it does come out), is that the network can in fact measure round-trip time right now with QUIC packets. It can't do it continuously throughout the connection, but it can be done on the first packets; it can be done on the handshake packets. And that's a pretty critical bit of information. That may not be all of it, but that's certainly a fair bit of information that's available as it is in the network.
T
And the design team pointed that out in rejecting geolocation as new information, because you can clearly get round-trip time from the handshake packets, and, using a series of those, you could look for min RTT for a related set of IP addresses. So we definitely considered that, and agreed. So, in the...
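The handshake-RTT point above holds for any on-path box that can match a client's first flight to the server's reply: the gap between the two gives the round trip from the observation point to the server, with no spin bit needed. A hedged sketch follows; the packet tuples and function name are hypothetical, standing in for whatever a real capture pipeline would produce.

```python
def handshake_rtt(packets):
    """packets: (timestamp_s, direction, kind) in capture order.
    Estimate the observation-point-to-server round trip from the gap
    between the client's Initial and the first server response."""
    t_initial = None
    for ts, direction, kind in packets:
        if direction == "client->server" and kind == "initial":
            t_initial = ts
        elif direction == "server->client" and t_initial is not None:
            return ts - t_initial        # round trip to the server and back
    return None

trace = [(0.000, "client->server", "initial"),
         (0.048, "server->client", "handshake"),
         (0.050, "client->server", "handshake")]
print(handshake_rtt(trace))   # -> 0.048
```

This is why the design team treated geolocation-by-RTT as information an observer can already obtain: one handshake per connection is enough for a min-RTT estimate across a set of addresses.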
P
So my concern remains that if we don't think that there's a very strong, demonstrated need to have this bit exposed, then it's just a bit exposed that doesn't get used. "It's something that might be useful" is, to me, not strong enough; if it is in fact necessary and useful, that may be strong enough, but we need to show that. Okay.
A
I'm going to close the queues at this point. We've got ten minutes left and we have six people in the queue, so please be brief.
AA
First, an answer again: I think if we look at the links, there you will see quite a lot of demonstrations where it's actually very useful, and very critical, for a lot of operations; especially troubleshooting, and inter-domain troubleshooting, where you want to see differences in latency on different legs or different parts of the path. So, say I'm measuring here, and I want to see if the trouble is in my domain or if it's in some domain outside; for that you would need to measure latency.
AA
You would, and you will need to measure it in the middle to be able to discern where the problem is. So that's a very important use case that many operators need this for, and there are, I think, two or three documents describing how it's being used by different operators. And I have another question: if version one is focusing on HTTP, then many of these sort of corner cases might not apply, so we might need to rethink that as well, because those are RTP and sparse traffic.
K
They first need to verify where in the network the problem happened: if it's in their domain or in another domain. So that's why they take the RTT at different points, at the ingress and the egress, and figure out what the RTTs are, to figure out where the delay is. Is it encountered, maybe, in one of their devices? Then they can figure out that there's something wrong with the device: the queue management in the device, the buffering, or whatever. And that's how it's being used.
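The per-point RTT comparison described here reduces to a simple difference between consecutive observation points. The sketch below assumes each point measures the RTT from itself to the server and back; the point labels are hypothetical.

```python
# Sketch of RTT-based delay localization across observation points.
# Assumes each point measures the round trip from itself to the server;
# subtracting consecutive RTTs attributes delay to the segment between
# the two points. Labels and values are illustrative.

def localize_delay(rtt_at_points):
    """rtt_at_points: [(label, rtt)] ordered from near-client to near-server.
    Returns the round-trip delay attributable to each segment between
    consecutive observation points."""
    return [(f"{a}->{b}", ra - rb)
            for (a, ra), (b, rb) in zip(rtt_at_points, rtt_at_points[1:])]
```

If the ingress sees 50 ms to the server and the egress sees 30 ms, the 20 ms difference sits between the two points, i.e. inside the operator's own domain.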
K
I mean, you have to understand that this is mostly important for video traffic, and video doesn't just go from the server to the endpoint automatically, as if there were nothing in the middle. There are things in the middle that have to verify that it will flow, and will flow optimized. And I think, if you talk about latency, that is one of the achievable things that you want to achieve here. So this is about latency: basically you want to have the best latency running.
AC
I'm a little bit disappointed that the design team didn't come to a conclusion, because we're back to square zero. In Prague we discussed that there are potentially some threat surfaces, and I'm glad that the result is that there are none. It is needed for service providers, and it is in the charter that we need to provide at least the same level of manageability that, you know... [From the floor:] No, no, that is not in the charter.
F
Yep. So last week on our network we had to troubleshoot a drop-off of traffic. Actually, what we had was QUIC traffic which dropped from 100% to 30%, and in fact, to troubleshoot, we had to rely on the fallback to TCP, because there was nothing available in the QUIC headers to troubleshoot with. So the question then is: is the fallback to TCP part of the invariants of QUIC?
M
Yeah, I'm tall, but I'm not that tall. So, I'm very sensitive to the concerns of the operators here, and I think it's clear that they have real issues to solve. I think the point which Ted made earlier, and I think Lars made perhaps over dinner last night, is that since you can put the stuff that drives the protocol state inside the encrypted parts of the packets, anything we put in the unencrypted part that the operators can see could be faked and could have no real relation to the real data.
T
Sorry, it wasn't necessarily the fine-grained RTT but, for example, understanding the contribution of latency for particular legs along the path, if you have the IPPM techniques that were in the draft for which Christian made the original reference. Because with IPPM you're tightly matching the clocks of the various things and then saying: hey, I saw this one, and moving the bit up and down as I went through this.
T
You
actually
get
very,
very
strong
information
from
that,
but
at
a
cost
and
the
cost
is
that
you
must
now
synchronize
the
clocks
of
these
routers,
and
you
must
be
active
on
two
points
in
in
the
in
the
path
adding
the
moving
the
bit
up
and
taking
it
back
down
as
you
ingress
and
egress
your
network,
but
certainly,
if
you're
willing
to
have
that
kind
of
measurement
architecture
in
your
network,
you
can
measure
your
own
network
quite
successfully.
Without
this,
you
can
measure
on
that
we
using
MPLS
and
etc
as
well
right
so.
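The alternate-marking idea referred to here (flip a marking bit up and down in blocks, and observe at two points in the path) can be sketched for its loss-measurement side, which needs only per-block packet counts. This is a toy model of the IPPM technique, not its wire format, and the bit streams are illustrative.

```python
# Toy model of alternate-marking loss measurement: the sender flips a
# marking bit in blocks of packets; two observation points count packets
# per colour block, and a count mismatch localizes loss to the path
# segment between them. Delay measurement additionally needs the
# synchronized clocks discussed above.

def color_blocks(marks):
    """Split a chronological stream of marking bits into (color, count) runs."""
    blocks = []
    for m in marks:
        if blocks and blocks[-1][0] == m:
            blocks[-1][1] += 1
        else:
            blocks.append([m, 1])
    return [tuple(b) for b in blocks]

def block_loss(upstream_marks, downstream_marks):
    """Packets lost between the two observation points, per colour block."""
    return [u[1] - d[1] for u, d in
            zip(color_blocks(upstream_marks), color_blocks(downstream_marks))]
```

For example, an upstream point seeing blocks of 3/3/3 packets while the downstream point sees 2/3/2 attributes one lost packet to each of the first and third blocks.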
T
This
the
the
engineering
trade-off
here
from
the
point
of
view
of
the
operators,
isn't,
is
this
the
only
possible
way
for
me
to
get
this,
but
is
this
a
way
that
I
can
get
it
at
scale
conveniently
for
the
traffic
that
is
going
to
be
crossing?
My
network
and
I
think
the
the
question
of
what
type
of
traffic
is
actually
important
there
and,
although
I'm
certainly
willing
to
bow
to
the
greater
knowledge
that
you
and
Brian
have
about
how
much
additional
effort
this
looks
like
to
measurement
folks.
I.
H
Tangentially, loss detection was mentioned, and I'm wondering, from the operator community: will you be happy with delay, or is loss the next thing, and will that also need to be part of the invariants or not? We don't necessarily have to answer that.
H
That
was
so
did
the
opinions
are
heard
was
either
because
we're
doing
greasing
we're
gonna
have
bits
that
we
can
yeah
use
for
this
later.
Yeah
I
think
mayor
specifically
made
the
point
that
she
would
like
to
have
this
be
part
of
version
one,
because
the
traffic
is
going
to
be
so
large
that
by
the
time
you
have
the
problem,
you
don't
have
TCP
anymore.
So.
D
So
Brian
and
then
I
think
we
really
do
have
to
break
so.
I
I
actually
came
up
here
to
say
something
very
much
like
that.
I
tend
I
tend
to
agree
with
Miriah
that
if
it's
just
a
grease
signal
that
turns
into
a
non
grease
signal,
then
it's
not
clear
if
it's
useful,
but
I
will
point
out
that
the
running
RTT
of
the
spin
bit
as
long
as
the
handshake
RTT
is
still
available.
D
The
running
RTP
of
the
spin
bit
happens
after
you
have
a
cryptographic
connection
establish
so
you
it's
it's
well
outside
the
the
invariance
rate,
so
you
still
have
the
problem
with
it
possibly
being
greased
and
then
not
useful,
but
you
could
actually,
if
you
really
wanted
to
you
to
put
it
just
on
short
packets,
for
example,
I
mean
there
are,
so
it
I
think
that
this
is
something
that,
if
we
absolutely
have
to,
we
can
defer
it
as
long
as
we
know
that
it's
a
thing
that
we're
deferring.
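The "running RTT of the spin bit" works because an observer watching one direction of a connection sees the bit change value roughly once per round trip, so the time between consecutive transitions is an RTT sample. A rough sketch, with a hypothetical (timestamp, spin) input format:

```python
# Sketch of an on-path spin-bit observer for one direction of a
# connection: each change of the spin bit marks roughly one round trip,
# so edge-to-edge intervals are RTT samples. Timestamps in ms are
# illustrative; real observers must also handle reordering and loss.

def spin_edges_to_rtt(packets):
    """packets: chronological (timestamp, spin_bit) pairs from one
    direction. Returns edge-to-edge intervals as RTT samples."""
    samples, last_value, last_edge = [], None, None
    for ts, spin in packets:
        if last_value is not None and spin != last_value:
            if last_edge is not None:
                samples.append(ts - last_edge)
            last_edge = ts
        last_value = spin
    return samples
```

Note this needs a steady packet flow to produce samples, which is one reason sparse traffic was raised as a corner case earlier in the discussion.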
A
This
is
not
by
any
means
the
last
time
we're
gonna
talk
about
this,
we're
out
of
time.
H
For these questions: the question about timing is a firm one from the chairs' perspective, right. We need to know whether this discussion blocks version one or not, so: opinions on that, please send them to the list. Do it right now, while you're waiting to go to the social, on the bus, or in the room tonight. And if you're all very good and keep to the schedule tomorrow, we do have a few minutes at the end of tomorrow's session to maybe have a summary of some people's thoughts on this. Thank you.