From YouTube: IETF108-QUIC-20200729-1100
Description
QUIC meeting session at IETF108
2020/07/29 1100
https://datatracker.ietf.org/meeting/108/proceedings/
A: Right, so this is our agenda. Brian Trammell and Robin Marx were going to scribe for us.
C: Hello, apologies. Yes, my DHCP decided to go down.
A: All right, so Brian is here and Robin is not yet, so I guess Brian, you can start minuting, and we're using the CodiMD. How do you pronounce this thing?
B: And please put a link into the various Jabber bits.
A: And if people want to help Brian out, you know, by fixing names or spelling or whatever, please go ahead. Yes, Brian's also gonna present, so Robin was gonna minute for him at that time. If Robin isn't here, if somebody else could do that, that would be great.
A: Okay, that's all we care about, and if there are problems, let us know so that we stop the presentation and let you figure out what's not working. Okay, so I guess with that we can start. So welcome to the QUIC working group meeting at IETF 108. Everybody is at home, or at least not in one place. We haven't met in a while.

We cancelled 107, and we haven't had any interims because we were busy chugging through issues. But since it looks like we're going to be in this virtual form for a little while longer, we thought it was important, now that the working group last call on the base drafts has concluded, to sort of have a meeting to figure out where we are and where we're going to go from here. You'll see our agenda; it's the same as always. You know, if you're here, you know how to join Meetecho.
A: There was a hackathon last week, all week long; QUIC was one of the topics. We also had a virtual interop day before that, and an interop is going on pretty constantly anyway, so I'm going to say a little bit about that. Blue sheets, I think, are being automatically generated, if I remember correctly, so we don't have to do that. We've selected our scribes. You are familiar with the Note Well; in case you're not, I'm gonna click on the link and see if it comes up right.
A: You are. So basically, if you're participating in the IETF — and participating means, you know, speaking or sending email or interacting on GitHub — you are obliged to follow a set of rules, and that set of rules includes IPR-related things but also the code of conduct. So please adhere to that when you're participating. Agenda bashing: let me quickly run through what we have planned, and then we can see if we want to bash it. The usual QUIC hackathon and interop report.
A: I think we're just gonna basically do the discussion based on GitHub, and then we're gonna talk about other adopted working group items and their status and any sort of issues with those — which would be QUIC load balancers and the ops draft, the manageability and applicability drafts — and we're gonna try a little bit of planning.
A: Based on how much engagement we saw on the list and on GitHub, this is roughly the order in which we think we're going to get to them, but we'll see how much time we have, and if something can, you know, get by with less time, maybe we bump them up or not. Let's see — any agenda bashing that people want to do?
A: Do we want to go with the one issue that we have slides on first?
G: Yeah, that'll be okay. Okay, go ahead. So yeah, basically just to recap: we had an issue previously about disabling active migration — that's been in for a while — and there are a couple of categories of reasons why you might want to do that. One is purely technical: you can't receive incoming packets, so you can't guarantee the routing of incoming packets that have a different source, for a couple of different reasons that are on the slide but that we're not going to go into unless people need to.
G: It's not going to perform as well when there's likely a node closer to you. And then there are also contract issues, where you have all these hyperlocal servers, and part of the way you got those is by contracting with the ISP: we're going to put a server in your network to serve your customers; that's all it's there for. So, next slide, which is actually already on screen.
G: Sure. We also have this feature of server preferred address, which you can use to improve scaling on your load balancer by only doing handshakes through it and then offloading the rest of the traffic to an address that goes directly to the node that you're talking to. That also helps you deal with anycast if you do have a migration, and it also has this little side effect that we added where a client could do a handshake on one address family and switch to a different one.
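For reference, the server preferred address mechanism being discussed is carried in a transport parameter during the handshake. Below is a minimal Python sketch of the fields it carries — a simplified model for discussion, not a wire-format encoder, and the field names here are illustrative rather than taken verbatim from the draft.

```python
# Simplified model of the QUIC transport draft's preferred_address
# transport parameter. Field names are illustrative; this is a
# discussion aid, not a wire-format encoder.
from dataclasses import dataclass

@dataclass
class PreferredAddress:
    ipv4: tuple                    # (address, port) the server prefers for IPv4
    ipv6: tuple                    # (address, port) the server prefers for IPv6
    connection_id: bytes           # CID the client uses after migrating
    stateless_reset_token: bytes   # 16-byte token tied to that CID

# A client that handshakes over IPv4 may migrate to the IPv6 entry,
# which is the cross-address-family side effect mentioned above.
pa = PreferredAddress(
    ipv4=("192.0.2.1", 443),
    ipv6=("2001:db8::1", 443),
    connection_id=b"\x0a\x0b\x0c\x0d",
    stateless_reset_token=b"\x00" * 16,
)
```

The key design point in the discussion is that the parameter carries one address per family plus a fresh connection ID, so the post-migration traffic is not linkable to the handshake path.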
G: I'm seeing a strange look, but the client might want to make that move.
G: Okay — if you're at the beginning of the download, you might want to kill that old connection and switch to a new path that is hopefully going to give you better performance from your new network attachment. If the download's not resumable and you're almost at the end, you might want to just finish it off. Going on to the fourth slide.
G: So, going back to the technical issues bucket: we had an issue fairly recently, and I think in the last consensus call we concluded that, from a technical standpoint, if the server is able to handle the client migrating to its server preferred address, then clearly the server is able to distinguish those incoming connections, and therefore the technical issues probably don't apply for the server-preferred-address case of disabling active migration. So we made the decision — we had a consensus call — to scope disable active migration only to the handshake address, and after you migrate to the preferred address, you're fine.
G: So that leaves us with the contract issues bucket. And so now we have an issue, raised as part of last call, that by scoping it only to the handshake address, you have this class of servers that is technically capable of responding to that connection but, for contract or other reasons, doesn't want to let you migrate to another network.
G: There are different ways you can communicate that: the PR sticks it in the SPA transport parameter; other ways you could conceive of would be writing that into non-zero-or-one values of disable active migration; or we could do any of these as an extension. The other option that is also on the table — last slide, I believe — is that the server just ignores the client's probes; then the probe will fail and the client will not migrate. If the client has actually lost that primary connection, then the connection will just time out. So I think the paths forward that are both reasonable are: we pursue something like the PR, which gets us the behavior we actually want, or the behavior that applies for this other scenario for disable active migration; or we say we have consensus on the handshake-only scoping and it's the server's job to ignore those incoming probes.
D: All right. So my preference is for the first one, that we actually do something technical to indicate it, and the reason is that we did have a reason for actually putting this parameter there in the first place. And for the case of those servers that Mike identified as the third kind, the contract kind, it's the same case for them as for any server that uses disable active migration to begin with.
D: No, I'm definitely for doing something, and it looks like, on the PR, nobody is opposed philosophically to solving the problem; it just needs to be a nice, simple, clean solution.
B: Okay, thank you. Next is Eric.
H: I think, when we previously discussed this, we said that this was something that people might want to put in an extension, and that's fine. Just as a personal opinion, I don't have a huge resistance to kind of closing the rest of the gap and putting it in the SPA, which is the PR that Mike has, and I think we've worked out most of the kinks in getting that appropriately scoped to different addresses, etc., etc.
H: So I think that's a fine thing to do, and if that would make enough people happy, then sure. We previously had a very similar discussion and all came to the consensus that it was better as an extension. I don't know if it's exactly the same as the other cases, where it was claimed that there would be active harm if people had packets coming in from other addresses. In this case, especially if it's a policy thing and you just say, "hey, I don't want this to do it," that's very similar to something that all devices or clients that are attempting to migrate are going to have to deal with anyway: that the answer you got back for this IP address is only applicable to one interface, and when you probe over the other one, you get nothing.
H
That's
always
going
to
be
something
that
people
have
to
deal
with,
and
so
I
don't
think
there's
any
problem
in
simply
just
not
accepting
packets
from
other
from
other
sources
that
are
outside
of
your
contractual
domain,
which
is
nice
in
the
sense
that
if
you
have
address
changes
within
that
contractual
domain,
if
you're
say
you're
an
embedded
cache
within
an
isp
somewhere
if
they
need
to
switch
local
addresses
on
that
isp
or
if
something
happens,
and
the
routing
changes
so
hugely
such
that
the
ip
changes,
as
well
as
the
port.
I: Great. So I think I can't disagree with what Eric just said: while there's no technical need to make a change here, I still think that it makes some sense to have the same features for the original address and also for the server preferred address.
I
So
the
only
question
regarding
the
current
state
of
the
pull
request
is
that
it
requires
or
recommends
the
endpoint
the
client
to
use
the
same
source
address
as
it
used
for
the
handshakers
and
I'm
a
bit
concerned
about
that
new
requirement,
because
it
changes
the
assumption
on
how
the
client
can
be
implemented.
I
Previously,
we
could
allow
a
client
a
client
to
be
implemented
simply
by
allowing
the
simply
by
writing
the
operating
system
to
choose
the
source
address,
whereas
if
we
have
a
specification
that
recommends
a
client
to
use
a
specific
address,
then
the
requirement
on
the
api
surface
changes.
Sequence
conflict,
so
I
concerned
about
that
change,
but
other
than
that
I
think,
having
the
bit
in
the
transport
parameter
or
having
a
flag
in
the
transport
for
this
fight.
B: Thank you. Next is David Schinazi.
J: Morning — can you hear me? Yes. I think that at this point, if I understand this issue correctly, this is a minor change, and this happens enough, like, during and after the handshake, that it can totally be negotiated with an extension.
G: We can wordsmith around that. It's not intended to require you to keep exactly the same address and port on your same network; we can figure out the wording there. The other piece, about what we previously agreed: what we had said was that we weren't aware of any concrete scenarios that required this, and that if we discovered some later, we could have an extension.
D: Actually, Kazuho's proposal — well, basically, the concern, and the same thing that Christian actually voiced, is that it could complicate some implementations a bit to require exactly the same source address. And so I think it's perfectly reasonable not to include the capitalized requirement in the text, and it's even okay to let the OS choose the address, as long as, basically, you don't deliberately change it and migrate later. Because, for all practical purposes, you are much more likely to be migrating soon after the handshake, and you'll likely be sticking with the same network attachment you had during the handshake. So yeah, it's a very pragmatic and much more implementation-friendly approach to just say "migrate once and don't migrate again."
L: During this conversation, we kind of discovered two things about this particular kind of feature that I did not anticipate. One is: if a server really wants to not wait for a timeout and doesn't have this feature, they are slightly more incentivized to want to mint a stateless reset oracle.
L: I mean, the text obviously says you must not do that, but as a server vendor you're choosing between potentially having a client time out, which is a pretty painful experience, and minting an oracle that you may or may not, in certain circumstances, have a strong disincentive to mint. That was one observation, because my first response was: why don't you just reset it and move on? And the answer is no, you must not do that. The other is —
L
It
provides
a
slight
incentive
to
not
give
out
any
connection
ids,
which
is
actually,
I
think,
the
easiest
solution
as
a
server
vendor
to
this
problem
is
you
just
don't
give
up
any
more
connection,
ids
and
then
the
rest
of
the
text?
L
Basically
says
like
the
client
must
not
intentionally
migrate
after
moving
to
the
spa,
if
you
don't
give
out
any
connection
ids
and
this
this
like
solves
itself,
I
don't
know
if
we'd
like
the
fact
that
it
solves
itself
in
that
way,
but,
like
that's
the
easy
solution
to
this
problem
from
like
my
deployment
perspective,
so
that's
why
I
feel
like
I
don't
have
a
strong
opinion
from
employment
perspective.
This
is
okay.
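The "just don't give out more connection IDs" workaround described here works because a client needs an unused connection ID to migrate without linkability. A minimal sketch, assuming a simplified client model (class and method names are illustrative, not from any implementation):

```python
# Sketch of how withholding NEW_CONNECTION_ID frames gates migration:
# a client with no spare connection ID cannot intentionally migrate,
# so from the server's point of view the issue "solves itself".
# Simplified model; names are illustrative.

class ClientConnection:
    def __init__(self, active_cid):
        self.active_cid = active_cid
        self.spare_cids = []  # populated by the server's NEW_CONNECTION_ID frames

    def on_new_connection_id(self, cid):
        self.spare_cids.append(cid)

    def can_migrate(self):
        # Migrating to a new path requires a fresh CID, so the old and
        # new paths cannot be linked by an on-path observer.
        return len(self.spare_cids) > 0

conn = ClientConnection(active_cid=b"\x01\x02")
assert not conn.can_migrate()        # server issued no spare CIDs yet
conn.on_new_connection_id(b"\x03\x04")
assert conn.can_migrate()
```

As noted in the discussion, the trade-off is that withholding spare CIDs also undermines the privacy benefit that rotating connection IDs is meant to provide.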
M: Try now — we'll have audio now, guys? Seems great. So, what Eric said is actually quite relevant, but at a high level I just want to bring it back: I feel like we are rehashing the discussion that's happened on the issue, and we keep going around in the same circles. Because, fundamentally, we think this problem is basically solvable in the way that it's proposed in Mike's PR, but we are really talking here about how much we want to take on right now, at this late stage in the game. Given that the fix is in the PR, more or less — it's basically there; as Mike said, we may have to wordsmith a little bit, but we're basically there — at a high level there is fundamental alignment in the design here.
M
The
whole
idea
of
migration
should
be
attached
to
addresses
not
to
end
points,
and
we
had
earlier
the
the
current
text
in
the
in
the
draft
basically
attaches
it
to
the
end
point
with
the
change
that
mike
is
proposed
on
his
pr.
I
think
this
makes
it
per
address.
It
makes
it
we
have
the
connection
level
one
for
the
original
address
and
the
spa
has
its
own
disabled
migration
bit.
M
That
aligns
it,
I
think,
with
addresses,
which
is
where
I
think
it
should
be,
and
as
you're
hearing
from
implementers
as
well
there's
some
trepidation,
perhaps
hesitation,
but
there's
not
a
real,
strong
pushback
against
this.
Because
again
we
are
going
back
in
the
same
circles
as
we
have
on
the
issue
we
did
previously.
We
just
did
a
consensus,
call
we're
going
back
again.
My
suggestion
will
be
at
this
point
because
this
issue
keeps
popping
up.
M
We
can
take
this
solution.
We
can
go
ahead
with
this.
Not
a
lot
of
people
have
implemented
spa
or
have
tested
spa
yet
so
it
seems
like
it's
one,
small
piece
that
can
actually
move
on.
I
can
see
eric
jumping
into
the
queue
there
to
say.
Yes,
we
have
implemented
and
tested
this,
but
I
I'm
gonna
say
that
that's
my
opinion
that
we
should
probably
take
this
in
and
move
forward
instead
of
rehashing
this
again
and
again,.
K: There is one that says "migrate or not" — as in, don't migrate — and the other one that says "please migrate to this server address," and so the two of them create a contradiction. Because what does it mean when you say: hey, don't migrate, but migrate to this server address? And right now we say, okay —
K: What that really means is that if you have "don't migrate" and a transport address, it means "please migrate only to this transport address" — and then you can migrate to anything you want. And I think that's the ambiguity in the spec that Mike is working around. That's really what it is, because we have two contradicting TPs. And it's 4am here, sorry.
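The interpretation described here — "don't migrate" plus a preferred address meaning "migrate only to that address" — can be written out as a small decision function. This is a hypothetical sketch of the semantics under discussion, not spec text, and the parameter names are illustrative:

```python
# Hypothetical sketch of the combined semantics of the
# disable_active_migration and preferred_address transport parameters,
# as interpreted in the discussion. Not spec text.

def migration_allowed(disable_active_migration,
                      has_preferred_address,
                      target_is_preferred_address):
    if not disable_active_migration:
        return True   # no restriction: the client may migrate freely
    if has_preferred_address:
        # "don't migrate" + SPA means: only the move to the server
        # preferred address is permitted.
        return target_is_preferred_address
    return False      # migration fully disabled

assert migration_allowed(False, False, False)      # unrestricted
assert not migration_allowed(True, False, False)   # fully disabled
assert migration_allowed(True, True, True)         # SPA move only
assert not migration_allowed(True, True, False)    # other targets barred
```

Writing it out this way makes the contradiction visible: the two parameters only compose cleanly once you decide which one wins for the preferred-address path.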
H: Yeah, I actually would agree strongly with what Christian said. We scoped this to just the initial address as a way of kind of wriggling around the fundamental distinction there. I think if you took the PR and made it such that we specified it per address, not per endpoint — like Jana brought up on the issue, and again just now, which is a fundamentally good thing that we probably should do either way — you could scope it to just that and say: hey, we recognize that you've said "please go migrate over here," and that's going to come with potentially an interface change, if that's required by, you know, the fact that you have a VPN that claims the address that I gave you and asked you to use.

I think if you scope it down to that and just say, "hey, we covered this one thing; there are two other places where you need a similar switch and we didn't yet provide it," we can always provide additional switches in an extension, if we really decide we don't have the stomach for it. But it is essentially filling a gap that has caused a lot of confusion, and if we filled that gap by simply saying "once you get there, you can't migrate any more," that seems eminently reasonable, and I suspect we could all be happy with that.
M: Just clarifying what Eric said: I think Eric is suggesting that we allow for potentially one migration per additional address — in this case limiting it to one, because there is one additional address in the SPA. That's what I'm hearing; I want to confirm that. So the proposal is that if you move to the SPA, you might have a change of source address, but after that we don't allow for any more migrations. Yeah — maybe that's necessary, but I'm just confirming that's what you're saying, Eric.
H: We've said that this scopes to only the one you started on; you are going to have to move, and that may come with address changes. I think we avoid touching all of that. If you want to restrict that further, make it an extension — go crazy with lots of logic there — but effectively take that thing that currently applies to only one of the two or three places that it should, and give granular control over where it actually applies.
M: May I respond to this? Of course, yeah. So, Eric: in the worst case, what happens if you don't allow for this migration — if you don't specifically allow for it — is the same thing that would happen now, right? The packets come to the server, and the server may or may not deal with them. If it comes from a different source address, it may or may not get to the server; it may or may not get through the server infrastructure.
H: Last thought there: yes, although you could argue that, from a policy perspective, none of what we're talking about right now is needed, because you could always just not accept the packets. If you're contractually obligated that packets coming from outside your ISP aren't allowed in, then don't allow them in.
J: Is this you telling me to go next? Yes — no worries. So I've been following the discussion here, and I want to apologize if I misunderstood anything, because it is stupid early right here and my coffee hasn't kicked in yet. But this conversation sounds like a brainstorm to me; it sounds like people in front of a whiteboard trying to figure something out, and that's fine, but this is no longer the time to be doing this. We have a feature; we have some ideas.
B: Okay, thank you, David. And to that end, I noticed we're 34 minutes into the meeting so far, and I think we have an hour and 40 minutes, and we don't have anybody else in the queue, which is great. Lars, how do you feel about doing a quick hum? Do you think that's helpful, and shall we try?
A: Yeah — so I hate hums, but we can try if you think there's value.
B: I'm gonna type some options into — sorry, not Slack — into Jabber, which will get mirrored to Slack, and we'll have two questions, or two things we'll hum on. One is "we need to address this before shipping" — and I apologize for my typing noises, sorry folks — and option two, or hum two, is gonna be "it's okay to leave this as is and consider an extension after shipping." Those sound like good hums. Any discussion on the hums?
B: Okay, thank you for repeating those, Martin. So I will now start the hum for the first question, which is: hum if you believe we need to address this before shipping. And this is done in the little graph-looking blue tab, next to the red circle tab and the dark blue people tab, at the top left of the screen in Meetecho.
B
Are
we
ready
starting
now?
Oh,
my
role
does
not
allow
me
to
hum
oh
right.
My
role
doesn't
love
me.
Okay,
fair
enough
I'll,
just
sit
here
then.
So
again,
the
hum
that
we're
currently
coming
for
is.
We
need
to
address
this
before
shipping.
B: Four, three, two, one, and done. Piano — so, a medium hum for "we need to address this before shipping." Thank you, Jana. The next hum we'll do is: it's okay to leave this as is and consider an extension after shipping. Starting now — please go ahead and hum again; hum if you believe it's okay to leave this as is and consider an extension after shipping.
B: A minute at least — make it a standard time, you're saying? That makes sense. Two, one. Fortissimo — loudest. Okay, so certainly there was more support for just calling this an extension.
D: Anyone? Very good. I felt like I had to say something, because I opened the issue, right? Well, it seems that, conceptually, we live with a hole. We know that there is a hole that is not hard to fill, and it's basically — again, looking at the design — we did introduce a parameter for some reason for the pre-SPA migration case.
D: It looks like there is a case — and it's not a niche case; it's actually a very common situation on the internet — that would benefit from this treatment in exactly the same way as the original handshake address, SPA or not, and we're just leaving it as a hole. And people commented that our current design seems incomplete, because it should be attaching semantics to addresses, and not leave it in the current state.
B: Igor, would you strongly object if we close this without making a change?

D: I don't know. I mean, I would be disappointed, but obviously everything can be worked through, and yes, you could always work around it by just not giving out connection IDs — which I actually even stated in the issue as one alternative to solve it — and then the working group says that actually we don't want people to do that; we do want people to give out connection IDs, for privacy reasons. So it would be disappointing.
D: I mean, "strongly object" — I'm not sure what that means in this case. Things are not going to get absolutely, horrendously broken without it.
B: Normally, you know, one of the things we could say at this point would be: well, let's spin up a design team, or let some people go off and come up with a proposal and then come back later in the process. Part of the problem is that, as a couple of people were saying, we're at a point in the process where we're very close to the end, and so we don't have a lot of time, unless we want to push the schedule out, which we can certainly do. But I think we need to get agreement in the group that it's important enough to do that.
D: Right, absolutely. I mean, there is a proposal already on the table; it's not like we need one. There is Mike's PR, and it looks like there is a proposal that nobody objected to technically, that we arrived at just now: don't require — don't have a MUST with capitals about the same address — just don't migrate later. So I'm not sure what new design group is needed. It's mostly: change the text now to exactly that, or don't touch it.
B
Okay,
it
seems
great
thanks,
igor
I'm,
not
sure
how
much
more
information
we
can
get
from
discussion.
So
I'm
going
to
cut
the
cue,
with
the
only
exception
being,
if
someone's
mind
has
been
changed
since
they
hummed
please
get
in
the
queue.
Now,
thanks
go
ahead.
Jonah.
M: Thanks. I just wanted to respond to Igor, to say that, I think, to some extent what we are going through here is not that there is a strong technical disagreement, but that there's no appetite for taking on something that requires people to think things through. People are finding hesitation; people are finding that there are potential issues here; it's not clear. And that lack of clarity is a good reason to not go ahead with something at this late stage, especially because we can find other ways around this.

At a high level, I would simply like to say — I'll repeat what David Schinazi pointed out earlier — that we are at a stage in the game where we want to experiment with these things. We're going to learn things; we're going to have gaps going forward, and we learn that through deployment. Even if some of us can see some gaps very clearly right now, not everybody agrees there are gaps. There will be, and it might be just something we have to accept.
M
I
think
that
we'll
have
to
live
with
some
gaps
until
we
are
able
to
get
some
more
deployment.
Experience.
A: Okay — so, go ahead; Mark, go first. So, folks saw what I wrote down in the box: basically, the feeling in the room — the little virtual hum — seems to have been that we're going to close this with no action. So I'm going to record that and mark the proposal as ready, and we're gonna run a consensus call that will have that issue with the resolution in it, and people can then object or not. But, agreeing with Mark, for this meeting I think we've exhausted the discussion on this. Thanks, guys. The editors have informed me that this was actually the last issue that they had that they wanted to discuss here. There are obviously more issues, if you look at our project board, that are still sort of floating around: we have a bunch in triage, as you can tell, and we have a whole lot of editorial stuff.
A
We
have
some
design
issues
that
still
require
work,
and
then
we
have
some
that
have
consensus
emerging
that
we're
going
to
do
the
call
on
very
soon.
While
I
have
this
up,
I
will
point
out
that
we
we
seem
to
see
an
increased
rate
of
new
issues
being
created,
which
is
not
what
we
need
at
this
point
of
the
process.
So
people
will
really
really
need
to
think
hard.
A
Whether
something
is
is
justified
to
be
opened
at
this
time,
and-
and
you
know,
and
if
that
doesn't
help
we
might,
you
know,
need
to
have
some
very
late
stage
process
where
maybe
we
want
to
you
know
have
some
more
formal
limit
to
this,
but
if
we
want
to
ship
soon
we
should
we
should
really.
You
know,
stop
mucking
with
the
protocol
and
and
ship
it,
and
the
caveat
is
always
that
you
want
to
have
bugs
in
a
protocol.
We
don't
have
problems,
but
it's
getting
awfully
late
to
have
this.
O: Thanks. I also do not want to be perceived as advocating for this issue; I advocate for closing this issue with no action. But I thought that this would be a good time to do that, and so I figured that maybe we could use the allotted time to discuss it and close it with no action.
B: So if we flip this to editorial, then our convention is that the editors close it at their pleasure: they'll add some text and talk to you about it, or they can decide that they don't need to add text. So, are you happy with that?
M: Mind proposing — yeah.

B: Right now, all we're doing is proposing that we flip this issue to editorial, and then you can talk to the editors and see if you can come to some accommodation on some editorial text.

M: Sounds good, thank you.
A: Let me flip back to the agenda. One thing that we actually missed, like we sometimes do, is the hackathon and interop report. I don't really know how the hackathon went, because I'm in the European time zone and most of the interop last week seems to have happened during American time zones, but the interop matrix, if I can pull it up, looks pretty full, which is good.
A
There's
some
load
load
load,
what
the
hell.
Why
is
there
a
form
responses
anyway?
And
that
looks
yeah
so
so
there's
it
looks
pretty
green,
there's,
there's
some
columns
and
there's
a
bunch
of
white,
but
that
is
mostly
because
we
have
some
new
stacks
that
that
haven't,
you
know,
participated
or
only
serve
our
client,
but
it's
it's
looking
generally
pretty
okay,
but
my
one
observation
would
be
that
ecn
does
not
seem
to
be
very
widely
supported,
which
is
a
pity,
especially
just
reporting.
A
This
is
also
if,
if
somebody
has
interns
or
students
available
right,
if,
if,
if
for
the
larger
stacks,
like
google,
facebook
microsoft,
what's
what
have
you
if
people
want
to
send
them
prs
that
that
implement
at
least
reporting
ec,
and
I
think
that
would
be
valuable
contribution,
since
the
hope
is
that
dcn
will
actually
increase
quality
of
experience
over
quick
further
than
what
you
can
do
with
tcp,
because
it
might
reduce
losses.
A
Otherwise,
I
don't
really
I
mean
we're
going
to
continue
doing
this.
I'm
pretty
sure
we're
going
to
have
another
interrupt
event
when
the
next
drafts
drop,
but
this
sheet
actually
is
still
pretty
interesting,
but
it,
but
what's
more,
become
more
and
more
interesting
in
recent
months.
A
So
your
participation
in
the
interop
is
great
evenly
great
is
to
create
a
docker
image
for
your
server
and
client
and
the
necessary
plumbing
to
hook
it
up
into
this
runner,
which
is
actually
pretty
simple.
If
you
need
help
with
that,
I'm
sure
that
martin
and
others
who
have
done
it
are
willing
to
help
you
and
if
you
do
it
once
and
automate
the
build
you
always
participating
in
this
and
every
day
you
get
new
results
running
against
everybody
else.
So
that's
pretty
great.
A: All right — there are no questions. You can dequeue, Martin, and I will try and show your slides.
Q: We've had some actually fairly significant changes over the past couple of months — thanks, Christian, for your PR; you can see what that says down there. I'm going to read the slide to you. Next slide.
Q: So here's kind of the status of getting feedback on this draft. Martin Thomson and Christian have actually been providing a lot of good input — thank you for that, gentlemen. I've made an attempt at outreach to the cloud providers that are actually running these layer-4 load balancers that we need to interact with; we haven't been super successful.
Q
The
one
person
we've
had
really
quality
contact
with
has
been
reluctant
to
think
this
is
even
a
problem
and
then
has
gradually
said
that
maybe
they'll
do
the
plain
text
algorithm,
which
is
not
a
great
response.
I'm
considering
taking
this
to
the
opposite
areas
for
early
review
and
right
now
there
are
three
open
issues.
I
think
we've
addressed
everything.
That
is
that
I
was
able
to
sort
of
judge
technically
quite
quickly
that
we
just
closed
them
rather
quickly.
Q: Okay, so one is whether or not we should work on the config rotation bits. Just very briefly: in the first byte of one of these connection IDs, we have two bits, just so people can rotate keys incrementally over a server pool. The reason they're two bits is — there are other code points needed for other things — but essentially, you know, you have configuration zero and configuration one, with different keys, and some of your servers are on one and some are on the other, so that the load balancer can easily distinguish which configuration you use while that rollover happens. Now there's a proposal to make it three bits. Next slide.
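For context, the config rotation bits sit in the most significant bits of the connection ID's first octet, so a load balancer can tell which of up to four key configurations produced a CID during a rollover. A rough sketch follows; the exact bit positions are illustrative here, and the QUIC-LB draft is the authoritative encoding.

```python
# Sketch of QUIC-LB config rotation bits: the top bits of a CID's first
# octet identify which load-balancer configuration (keys, algorithm)
# encoded it, so configs can roll over incrementally across a server
# pool. Bit positions here are illustrative, not normative.

def config_rotation(cid: bytes) -> int:
    """Return the 2-bit config rotation value from a CID's first octet."""
    if not cid:
        raise ValueError("empty connection ID")
    return cid[0] >> 6  # two most significant bits: values 0-3

def first_octet(config_id: int, low_bits: int) -> int:
    """Build a first octet from a 2-bit config ID and six low bits."""
    if not 0 <= config_id <= 3:
        raise ValueError("two rotation bits only allow configs 0-3")
    return (config_id << 6) | (low_bits & 0x3F)

cid = bytes([first_octet(2, 0x15), 0xDE, 0xAD, 0xBE, 0xEF])
assert config_rotation(cid) == 2
```

The three-bit proposal mentioned above would widen that field to eight code points, which is exactly the trade-off discussed later: more distinguishable configs per load balancer, but more information visible to everyone on the path.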
Q
And
it's
to
solve
this
problem
that
we've
added
to
the
security
configuration,
so
you
can
imagine
mega
cloud
corp
that
has
lots
and
lots
of
customers
and
boy,
and
they
say
boy,
this
quick
lb
thing.
Q
We
really
believe
in
privacy
and
want
to
use
block
block
cie
encryption
and
all
that
stuff,
but
boys,
real
pain,
to
like
manage
all
these
configs
and
get
them
out
to
all
services,
have
one
giant
config
for
our
whole
company
and
we'll
email,
the
keys
or
something
to
our
customers,
or
you
know
securely
email,
the
keys
to
the
customers.
Well,
obviously,
there's
a
problem
here
that
an
attacker
simply
becomes
a
customer
like
the
podcorp
gets
the
config,
and
now
they
can
read
the
server
mappings
of
all
the
the
the
the
cid.
Q
A bad thing. So one thing is, we added to the security considerations just the basic principle that you should make the configs as narrowly scoped as possible: you know, different physical load balancers should probably have different configs, especially if they're serving different customers.
Q
If you can separate the configs by things like IP address and so on, that's great. This eventually reduces to, kind of, the hard version of this problem: you have two mistrustful customers on a single load balancer; they have the same IP address and they're switched on SNI or something like that, which is of course not available on non-handshake packets. And so you have a few options here.
Q
It's on the next slide. So one thing you can do is just kind of accept that at this irreducible point there's a total leakage of the server ID mapping: these two mistrustful customers can simply read each other's server IDs, because they share the same config. The other option is to just create more code points in the configuration bits, so that you can have two long-lived groups of servers with completely different configs, and the single load balancer can distinguish them. So we have sort of a trade-off, because the problem with that is that then everyone on the path can see what the configuration bits are and can identify that a connection is destined for customer A or customer B. So we have a choice between a global leak of a small amount of information, or a local leak of all the server mapping information; that is, either whatever customer happens to be co-located on that IP address can see all the server mappings, or everyone on the path sees those bits.
O
So, as I understand it, this second issue is only an issue with ECH, right? Because if there's no ECH and you just look at the SNI, you know who the customer is, right?
Q
I think it's a relatively minor compromise, but with ECH and other things coming down the pike, that could be more of an issue.
O
Sure, sure. I'm just trying to think about: is there any way to solve this with public key cryptography?
Q
That would be a massive departure from the current design, if we were to.
Q
I would say that probably a better, or another, way to do this is with trial decryption, where you hand different keys to the two customers and the load balancer has to decrypt it twice, and it has a rich enough server ID space that it can sort that out with a low error rate.
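A rough sketch of that trial-decryption idea follows. The tenant names and the toy XOR "cipher" here are invented purely for illustration; a real deployment would use the draft's block-cipher CID encryption. The load balancer tries each tenant's key and keeps the result whose decoded server ID is actually allocated:

```python
from typing import Optional

def toy_decrypt(cid: bytes, key: bytes) -> int:
    # Stand-in for real CID decryption: XOR with the key, then read the
    # first two bytes of the result as the server ID.
    plain = bytes(a ^ b for a, b in zip(cid, key))
    return int.from_bytes(plain[:2], "big")

def trial_decrypt(cid: bytes, keys: dict, valid_ids: dict) -> Optional[tuple]:
    # Try every tenant's key; a sparse server ID space keeps the chance
    # that a wrong key decrypts to a valid ID low (the "low error rate"
    # mentioned in the discussion).
    for tenant, key in keys.items():
        sid = toy_decrypt(cid, key)
        if sid in valid_ids[tenant]:
            return tenant, sid
    return None
```

The cost is visible here too: the load balancer does one decryption per co-located tenant instead of one total, which is the memory/CPU concern raised next.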
O
Right, but maybe the load balancers don't have the memory to do that, I mean, because, I'll say, if you were to use public key. Exactly. Now, I'm supposing that, I guess, if we used public key, right, the load balancers could just cache the answer, right?
L
Okay, yeah, I'm back, sorry. I was going to comment on ekr's comment about, like, whether you could cache it, and I think the answer is yes, you can definitely cache it, but it mitigates one of the main advantages of using cryptography in this particular use case, which is that you can have, you know, a load balancer that's truly stateless and only uses local crypto, which in some cases could actually be a lot faster than a hash-table
L
lookup at the various sizes. And the other comment was going to be: I'm not really sure the third config rotation bit is buying you enough extra space to meaningfully mitigate the issues you laid out. That's my intuition; it certainly wouldn't for us, I don't think.
Q
Yeah, I mean, we are a little bit limited, in that the rest of that byte is reserved for the CID; we really only have the CID length for self-encoding, so we only have three bits, unless of course we went and just opened up another octet for it, which would be a real shame. All right. Well, hopefully we get some, you know, I'm sure a lot of people are seeing this for the first time; hopefully people can think about it and take it to the list. Next slide.
Q
Please. Okay, this is the other issue: QUIC transport says connection IDs should be unguessable, and it's pretty unambiguous about that. So, as many of you know, there are four algorithms in this draft. Two of them involve some sort of encryption.
Q
One of them is not even really trying, and just has a fixed set of bytes that encode the server ID in the clear, and the other one tries to obfuscate it, for people who might be super reluctant to employ encryption for some reason. So this is a bit of a problem. But, next slide: any time we talk about these sorts of privacy issues, I think we should always start with this, which is that there are real technical limits on what we can achieve with this algorithm. When there's one client that's passed through your server pool, they are linkable (next slide), and when you have infinity clients, really almost any moderately sensible scheme will make you unlinkable. Next slide.
Q
Okay, so one part of this question is: what do we want to do with these non-encrypted connection ID encodings, or server mapping encodings? As I said, I've engaged with two cloud providers so far. With one it's still been pretty surface-level interaction; the second one has been super reluctant to really do anything complicated here.
Q
So there's a willingness-to-deploy question on the more sophisticated algorithms, at least on the load balancer side, although, I guess, you know, I represent a load balancer vendor and we'll do the advanced stuff.
Q
You know, one possible way out of this is to give the client more information. There's a basic incentives mismatch here, which is that, you know, the server is choosing what it's doing with the encoding, but it's the client that bears the consequences of that decision. So maybe, instead of trying to over-prescribe what servers do, just give the client more information: tell them, hey, this is what I'm doing, and the client can decide whether or not they want to migrate, or use migration more precisely.
Q
We could use disable_active_migration for that, which would mean that the whole point of QUIC-LB, which is to kind of allow multiple CIDs, really only saves you in that rebinding case, which is a pretty neutral case. Another option is to add a transport parameter that was a little more explicit. I mean, I don't want to bikeshed over the design of that transport parameter, but we could add one that says, look, something like: this is the algorithm you're using; you look it up in the draft, with the security properties that are there, and decide if you want to migrate or not. So, you know, it could be two bits or whatever. But yeah, so, next slide.
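The transport parameter idea floated here might look something like the sketch below on the client side. The codepoint value and the algorithm numbering are invented purely for illustration; nothing like this is defined in the draft:

```python
# Hypothetical transport parameter: the server declares which CID
# encoding algorithm it uses, and the client applies its own policy.
CID_ALGO_PARAM = 0x1F2C          # invented codepoint, for illustration only

ALGORITHMS = {                   # invented numbering, for illustration only
    0: "plaintext",
    1: "obfuscated",
    2: "stream-cipher",
    3: "block-cipher",
}

def client_should_migrate(transport_params: dict) -> bool:
    """Migrate only when the server's CID encoding hides the server mapping."""
    raw = transport_params.get(CID_ALGO_PARAM)
    if raw is None:
        return True              # server said nothing; client policy applies
    return ALGORITHMS.get(raw[0]) in ("stream-cipher", "block-cipher")
```

This captures the incentive fix being proposed: the server states what it does, and the client that bears the linkability consequences makes the migration decision.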
Q
So these issues are kind of related: do we want to retain these unencrypted methods, and if so, what kind of safeguards should we put around them regarding notification of the client? I'd like to open the floor for comments and questions.
H
I'll make one quick comment, which is just that, rather than being offered a set of "oh, here's the things that I'm doing, you can choose what behavior you'd like", it'd be nice if we could try to remove some of the behaviors that you might not like, and make it such that, rather than saying "here's a potential compromise you could make", we could say the only options that we allow are ones that aren't huge compromises.
H
But then again, I suspect that isn't a new thought, and so there's likely a reason why some of those compromises exist. But I would have a preference towards not forcing people to choose between functionality and something like linkability, or their privacy, because we've seen that a lot of the time it's possible to have both. And if you present a choice between those two things, not everybody is going to make the same decision, and sometimes that's going to compromise people in either of those areas.
Q
Thank you, yeah. I should also say that I personally don't have any strong opinions about this issue one way or the other, which is another reason I'm really interested in getting input from the community. Thanks, Eric.
Q
What I'm seeing in the Jabber is a lot of resistance to the obfuscated algorithm.
Q
A related issue here that Kazu is bringing up is that there's a little disagreement on this: one possible threat model is that an attacker chooses to direct a lot of traffic to a specific server when they have the server ID mapping exposed, and there's some disagreement about whether that's a problem or not. If, like, you want to take down one of my 100 servers, you know, go knock yourself out. So that's always an interesting discussion. But okay, what I'm hearing is that nobody wants to speak up for the obfuscated CID algorithm.
Q
All right, I'm going to take that to the list, but I'm hearing very little excitement, which is great, because that was horrible code to write. Oh, by the way, I've actually opened up some code, and there's a link to it in the quic-lb channel on the Slack, if you want to poke at it and see what's involved. All right, if there...
A
Brian or Mirja; I guess Brian is presenting next. Brian, have you found somebody to take over minuting for you?
E
Go ahead. Yeah, so, hi, I'm Brian, for those of you who don't know me. Here are two drafts that have actually been around for quite some time: the applicability and manageability drafts, the ops drafts, which have not seen a whole lot of traffic until just recently, because we were sort of waiting for the core drafts, the base drafts, to be done. The base drafts are now, as you can see from the fact that we had no discussion on any issues whatsoever, the base drafts are done.
E
Yeah, so we just published revision -07. There are a couple of content changes here: we've got more discussion about using ALPN, with pointers into the HTTP draft on that; in the manageability draft we did a pass to make sure we were aligned with the transport and invariants drafts, because there had been some changes to the wire image since the last time we did that pass, and that is now done; and we added a section on UDP policing. Next slide. We have a few remaining open issues.
E
There are three of these in applicability that have proposed PRs; we've got reviews done. The questions that we'd like to ask in the five minutes of discussion that we have here are: does anyone have an objection to merging these, or do we need more text? Each of these issues is linked together with the PR that fixes it. You can also go and look at github.com/quicwg/ops-drafts/issues to have a look at these. Next.
N
Yes, so I just wanted to highlight the point that we're creating interdependencies between drafts by citing the load balancers draft. I'm comfortable with that, but I wanted to make sure that everyone else was too, because it seems to me like these drafts are now much closer to being ready than the load balancer draft is.
E
So, if I'm not completely forgetting, this particular reference is an informative reference. Wait, is it a normative reference?
E
Yeah, the intention is that it's informative, because we did actually think about this, with exactly this particular issue in mind: do we want to block publication of this on the publication of the load balancer draft? And the intention is not to block publication of this draft on load balancers, so.
E
Right, my understanding is that the current process is, if we make it an informative reference, they'll just leave it as an informative reference to the ID; they won't hold for the RFC. Correct, yeah. Okay, so, yeah, the intention is informative. If what is currently in the draft is a normative reference, that's a bug and we should fix it in a PR.
E
So, next, on to manageability. We have one open issue here without a proposed PR. There is some kind of hand-wavy text in the draft that's been there for about a year that says you can get the SNI information out of the QUIC handshake, and we don't really talk about how. There is an issue to clarify that, to make it a little bit more clear. We now have, after essentially trying to find people to write this text,
E
we now have Marcus Ihlar signed up to do this, so he will put together a PR at some point soon. Next.
E
Right, so the question, the wider question, is: do we want to document the wire image as of the time of publication of the base drafts? I mean, I think we could just put one line of text in here that says, you know, by the way, the SNI is there now, with a reference to why that may be going away soon. I think that's probably fine.
E
Yep, cool, yeah. Let's put in the notes to take a note to add that comment to issue 75 when it gets fixed.
J
It's David. All right, can you hear me? Yep, cool. I just wanted to say, I've spent a lot of time over the past few months discussing SNI and QUIC with middlebox vendors, because there are a lot of equipment vendors that today parse the SNI in TLS and wanted to ship the same feature for QUIC.
J
So the first point is that all these equipment vendors have kind of ported their code from a TLS parser to a QUIC parser, but they haven't actually thought through the implications. For example, in TLS, once you've picked out the SNI, the five-tuple will stay the same, and if the client wants to create another connection or something, it will create a separate TCP flow with a separate TLS handshake, where you can again parse the SNI. In QUIC, that's not true at all.
J
Even if you parse the SNI: our stack, for example, if we detect a black hole, will spuriously migrate our client port, in order to, like, check if maybe that five-tuple is getting black-holed by something along the path; and sometimes, you know, with the IETF spec, you can also change your connection IDs mid-connection, and all this. So that means that if you're relying on the SNI, sure, you can expect to see the SNI from this client here, but you can't tie it to the other packets.
J
So what's the point of extracting the SNI if you can't actually act on it? And one further point that I think is really important to make is that, because of this, these boxes were deployed without the QUIC team at Google knowing. They deployed with whatever the Google QUIC client hello format was at the time, and that client hello format was the same from
J
Google QUIC version 1 to Google QUIC version 43, but then we changed it in response to the IETF header format changes, and on some of these networks we are forced, against my personal objections, but we are forced, to stay on that old version. So this has effectively already ossified QUIC. We're pushing very hard to change that, and we're making it clear to folks that once we ship IETF QUIC, that'll be the version that they need to update to, and then we won't,
J
we will, like, stop supporting Google QUIC, because we don't want to support it forever. But I'm really, really, really concerned about having text that says "here's how you extract the SNI", because at that point we've just ossified the QUIC Initial client hello format, for all versions of QUIC, pretty much, and I don't think that's what we ever wanted to do with QUIC. So: things to keep in mind. But if there's any text explaining how to do this, please have, like, at least a paragraph explaining why this won't keep working, and that that's okay.
E
So I, in general, share your sense of nihilism. Actually, what I hear from this, and I'm going to take advantage of the fact that it's like 5:17 in the morning there, what I hear is that you're volunteering to do a pass on these documents and make sure that we are being appropriately non-recommendative, and you'll also review the PR for this issue when it comes in. The intention here is to be,
E
you know, blunt and clear about what the wire image is and what it isn't, and I suspect that what Marcus comes up with will meet those requirements. But please do have a review of the PR that is associated with this issue, when it comes out.
E
We want to resolve those. There are issues 85, 95 and 100, which are basically just notes to the editors to go and do a full editing pass to check normative language, references, and alignment with v1 before we send this out to working group last call. To some extent,
E
this presentation was meant to, you know, threaten you with an impending working group last call, so that you would go and read the documents, because I know that this has fallen off of people's radars for a while, and I would like to bring it back onto people's radars, because we'd like to get this out the door by the next meeting, or even earlier. I guess we'll talk about whether we're doing a virtual interim before "not Bangkok" in a subsequent part of this meeting. But yeah.
N
All right, you can go on to the next slide. That's the first byte of the QUIC packet, by the way. So, we fixed a bit in the packet, and that makes me uncomfortable.
N
It seems that there are certain cases where we might want to use that bit for something, and so the question I asked myself is: what would it take to change the bit? So, next, please.
N
So all you really need to do is get permission from the person receiving the packet, so that the bit is safe to change. The only real reason that we use this bit in some cases is for those people who have Google QUIC: the bit is useful in distinguishing IETF QUIC from Google QUIC relatively cheaply.
N
It also helps us distinguish QUIC from all of those other protocols, so that we really only have to look at the first octet, because people deploying those protocols are reliant on looking at the first octet to determine which protocol is in use. But for the vast majority of deployments, the value of that bit doesn't really matter; it's just another bit, and they don't really use it.
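That first-octet demultiplexing looks roughly like this. The QUIC check relies on the fixed bit (0x40) always being set, and the other ranges follow the RFC 7983 port-sharing scheme; treat this as an illustrative sketch, not a complete classifier:

```python
def classify(first_octet: int) -> str:
    # QUIC's "fixed bit": every QUIC packet sets 0x40, and none of the
    # RFC 7983 ranges below have that bit set, so there is no overlap.
    if first_octet & 0x40:
        return "quic"
    if 128 <= first_octet <= 191:    # RTP / RTCP
        return "rtp"
    if 20 <= first_octet <= 63:      # DTLS
        return "dtls"
    if first_octet <= 3:             # STUN
        return "stun"
    return "unknown"
```

This is exactly why clearing the bit needs the receiver's permission: a demultiplexer like this would misclassify QUIC packets whose fixed bit is zero.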
N
But just being able to set the bit doesn't really matter; it's not that useful to anyone if you can't use it for something. So the question is: what would it take to make the bit useful for some purpose in the future? So, next, please.
N
So one option is to use it without encryption. You can set it to any value when you don't have any particular use for it, or you can set the value based on some desired semantic; maybe you set the bit to zero every time you send something evil. And then, once you know that the other side has indicated an intention to use a particular semantic, you can actually act on the value.
N
The other option, next slide, is to try to put that bit under encryption, and that means that you have to negotiate its use, because of the way that we do our header protection, which limits its use to short header packets; we just extend the header protection over that bit once that's happened.
N
The problem with this is that both endpoints need to agree to do this, because both the sender and the receiver need to agree on the change to the way that this bit is processed; otherwise, with probability 50%, they don't talk to each other, which is not great. If you'd like to add 50% packet loss, then that's one way you can do it. So, next slide.
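The negotiation constraint described here is what the grease-the-QUIC-bit proposal resolves by making randomization opt-in per peer: a sender may only clear the fixed bit once the receiver has advertised the grease_quic_bit transport parameter. A minimal sketch, with the parameter handling simplified:

```python
import random

GREASE_QUIC_BIT = 0x2AB2   # grease_quic_bit codepoint from the greasing proposal

def first_byte(base: int, peer_params: set, rng=random) -> int:
    """Choose the first byte of an outgoing short-header packet.

    Without the peer's permission the fixed bit (0x40) must stay set,
    or the peer would discard roughly half our packets as non-QUIC.
    """
    if GREASE_QUIC_BIT in peer_params:
        # Peer promised to ignore the bit, so it can carry a coin flip.
        return (base & ~0x40) | (0x40 if rng.random() < 0.5 else 0)
    return base | 0x40
```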
N
So the draft defines the first use, the without-encryption one. In part this is because I figure that we have a very large number of bits that are already under encryption, and it would be sort of churlish to deny those people who want to do things without encryption an extra bit; but at the same time, it's mostly because I was too lazy to work through all the implications of the other one, and this one seems like you can get more greasing out of it.
N
There are already a few interoperable implementations of this one. I don't know why people latched on to this idea so enthusiastically, but there you have it. That's all I have.
O
Maybe, like, questioning the initial conditions here a little bit: do we still need this bit to distinguish from Google QUIC?
N
That is something that I have deliberately left unstated at this point. I believe that Google still uses that, and they don't have to ship the server half of this extension if they do; and if at some point they manage to wean themselves off the other stuff, then that will work too.
J
Hey, at Google we still support Google QUIC version 43 and version 46. Version 43 has what was the format of the first IETF drafts, and then we also support version 46, which had the second version of the invariants; and now everything after that uses the third version of the invariants, because they don't vary. So once we deprecate 43 and 46, we'll be good to go, because everything will be using the new invariants.
J
So we won't need this; but until then, we still need to use that bit to figure out if a packet is using the v43 encoding or later. But, as MT said, that's not a blocker. I still think this is a good idea. We wouldn't support it on our server until we deprecate that version, but yeah, definitely open to implementing it on the client.
O
Right, but I guess where I'm going with this is that, in the H3 case, as soon as Google changes, there'll be no reason not to, like, randomize it all the time, with and without encryption; and in the real-time case, depending on exactly what's deployed, it will either be, like, a permanent thing you have to have, or it will also be randomizable. And so I'm just trying to figure out: this seems like sort of a funny half measure that doesn't match the conditions that we'll actually find this in.
O
Or it may turn out that no sane person ever wants to multiplex QUIC with RTP, right? So I guess, yeah, I think my thesis is, like: if the gQUIC need for this is going away, then I suggest that for H3 we actually, like, require it to be randomized all the time, or allow it to be randomized all the time.
L
Yeah, I hopped in before, but thanks for having me; now it's a good time. Yeah, I think, I mean, the time frame for us getting rid of gQUIC is as soon as possible, but, you know, we're still looking at more than a year; I mean, we haven't actually shipped an IETF QUIC RFC at this point. But, that being said, you probably could shift the default posture from
L
the current posture of "never randomize" to "always randomize, with a way to turn it off". But that would be a fairly major change to the core design at this point. That would be, like, functionally, I think, fine with us, but I think it would have to be in the core draft if you're going to do that, because I do think there are going to be enough people affected, not just gQUIC, but also, like, the multiplexing issue; I mean, that was raised as a fairly major issue early on.
L
I would not want to revisit that and decide we're gonna, like, ditch multiplexing QUIC with other protocols completely.
O
To be clear, not ditching other protocols completely: I'm just saying that we're only really trying to do H3, and so if we solve the H3 problem, we know how to solve the RTP problem. I mean, like, what I'm saying is: the RTP problem is going to be a true invariant, right, whether we call it an invariant or not; like, if you're only watching RTP, you're going to need to do this.
A
So I'd suggest that we maybe try and make some more progress on this on the list. This was sort of the first presentation of this draft, and we have about 10 minutes, which lets us hopefully get through at least one more item from the time-permitting bucket. The next one is Martin Duke. Let me pull it up.
Q
Hello again, everyone. The TL;DR on this one: it's a new little draft, and what I'd really like to know is whether people are interested in developing this and deploying it, and therefore whether it's worth continuing to refine. So what this is about is basically taking the...
B
You're very weak; could it be that your microphone has changed?
A
No, it's not working. Let's move on; the next one is the ATSSS document. I think Spencer is going to talk to that one, and I'll try and pull it up.
S
Yeah, thank you. So this is just updating what's been happening since we submitted this draft. Next slide, please.
S
So, basically, this is the 3GPP Access Traffic Steering, Switching and Splitting service, and the point of this is that this is work in 3GPP that's moving to support more than TCP and Ethernet. So they're looking at using QUIC in Release 17, and they asked to be informed through a formal liaison about our work on QUIC, and we said yes. Next slide.
S
We have gotten feedback on the QUIC mailing list so far; thanks for this. We haven't had a lot of conversations responding there yet, and I think we're going to have more, but we are getting feedback, and we are appreciating that, and it is having an impact on what 3GPP is doing with their plans for Release 17.
S
So, the June meeting: this is where the current study draft in 3GPP is.
S
There are four proposed solutions that are accepted for further consideration: QUIC tunnelling, multipath QUIC tunnelling, QUIC proxy, and multipath QUIC proxy. And I'm just making sure that people understand there are race conditions between 3GPP and IETF work on this: with unreliable datagrams, with multipath QUIC, with support for IP traffic, and with MASQUE. It's worth mentioning that there's a conference call on this immediately after the QUIC meeting, so that work is happening; but I just wanted to put this in front of the working group and call people's attention to it.
M
John, am I audible? Yes? Okay, thank you. So, it's strange, and thank you for the presentation, Spencer, but it's strange that "the race conditions between 3GPP and IETF" suggests that 3GPP is doing work on this. Are they actually doing work on developing things here in QUIC to make...
S
...it work? No, absolutely not. The thing I didn't say on the second slide was that the participants on this draft want to do the work in the IETF, not in 3GPP, and the rest of the 3GPP participants audibly agree with that. That's basically...
S
Basically, it's the race conditions between 3GPP deadlines and IETF movement. And, I mean, that's not new; what's new is that we are coupled a lot more closely than we have been on most things in 3GPP for a while.
B
Okay, we've got four minutes left, so, David Schinazi, if you can keep it relatively brief. Thank you.
J
You, I think, double-clicked and kicked me out; nice. Very quickly:
J
I've seen more and more discussion about this 3GPP MP-QUIC thing, and I just wanted to say: the impression I'm getting is that a lot of engineering teams around the world are saying "yeah, 3GPP is doing MP-QUIC, and they have this deadline for this 3GPP standard that's happening", and every time I feel like I have to tell them: yeah, that's not going to happen, and MP-QUIC will be defined at the IETF, which is not doing that yet.
B
Thanks. Cutting the queue after Spencer, because we do want to try and get back to Martin, if only briefly.
S
Nope, I understand the confusion, and, like I said, I think that part of this is the, I want to say "race condition" again, perhaps misusing the term, the race condition between how quickly we're moving on multipath QUIC and how quickly people seem to need multipath QUIC. But, you know, which is fine: that just means we have something that matters a lot to the rest of the world.
B
Okay, very briefly: I think we've got officially two minutes, maybe a couple more. Martin, if you want to go through as quickly as you can.
A
With two minutes I would do planning. So, it's not clear whether Bangkok will happen physically or not; my money is on virtual. But also the question is: what do we do in between? We could certainly schedule a working group interim meeting at any time. A little bit of a challenge might be time zones, since we have people that are basically all around the globe; so we haven't really done virtual interims, because they might be cumbersome.
A
But my feeling would be that we don't do that unless we, like, hit a roadblock issue that really just needs actual discussion, so that we can't resolve it in GitHub. I would prefer to work desynchronized, work on GitHub and try to push the issues there, which has been working pretty well for the last couple of months. So, unless somebody really feels like we need an interim, my preference would be to just do a working group meeting at 109 and see what happens.
A
Hopefully, I guess, we will be able to do another working group last call on either the -30 versions or, if stuff is still pending, the versions afterwards. I would again sort of really caution people not to raise new issues without a really strong reason.
A
Also, if you want to help speed things along, you can, like, pick up an editorial issue where there's no PR and make one, right? I think the editors would be very happy to comment on PRs rather than doing them all themselves. So if you feel like you want to do that, please go ahead; I think the editors will like your help. And then, you know, once we are sort of closer to being able to IETF-last-call this thing,
A
I think we should have a discussion with the ADs, and also the broader community, about what we would do in terms of a recharter, or slicing and dicing follow-on work between this working group and potentially others. At the moment, stuff like WebTransport and MASQUE have been broken out, and that makes a lot of sense; but there's a bunch of extension work that's apparently on the horizon, if you look at what's being proposed, that would need a home somewhere. There are...
A
Certainly a lot of, you know, minor revisions to the base drafts that we might find as people implement and deploy them more; that needs a home, and there might be other things. So that's a discussion that I think we're going to have more around the Bangkok timeframe than before.
B
Yes, I think the only thing I'd add to all that is that, if we do have another working group last call, it probably isn't going to be as long as the last one we had; it'll be just a confirmation that the changes we made as a result of working group last call are well reflected in the drafts.
A
Yeah, and I think we're probably going to have a longer IETF last call, because we're dumping a whole bunch of longish documents on the community, so people will still have, obviously, the chance to read everything in detail then. But I don't think we're going to do another four-week working group last call, unless, like, fifty percent of the text changes; you know, at that point I'm stepping down.
A
That's it; we're two minutes over. Any final thoughts from anybody?
C
Hello. Hey, it's Lucas, if you can hear me. The current consensus calls were due to close before this meeting, but I have some local internet issues, so I'll be doing that afterwards. We had pushback on one of the issues, which we'll recycle back in, but I'll make this all clear on the mailing list. So, thank you, everyone, for the session today.