From YouTube: IETF104-QUIC-20190327-0900
Description
QUIC meeting session at IETF104
2019/03/27 0900
https://datatracker.ietf.org/meeting/104/proceedings/
That's optimistic: this is the QUIC working group. If you're not here for the QUIC working group, you're in the wrong room. Ekr, could you close that door over there, by any chance? Thank you. I'd like to see the quick service, so we're not one of those countries. This is the Note Well.
Well, these are the terms under which we all participate in the IETF, regarding things like intellectual property, as well as your behavior, and our behavior as chairs. That includes the code of conduct, the anti-harassment procedures, the processes that this working group works under, copyright, patents, all that fun stuff. If you are not intimately familiar with this, please take some time to put "IETF Note Well" into your favorite internet search engine and find out more.
So today we reserved a very large block of time to discuss open issues. In Tokyo we started a new thing: in the past, in both our interim and our IETF plenary meetings, we have had the editors lead discussion on clusters of issues that they thought were important to move forward on, and we flipped that in Tokyo to just do a hard slog through the issues list, one by one, and try to come to some sense of the room for each issue that we thought we could. That seems to have worked out pretty well; I think everybody was pretty happy with the progress we made in Tokyo. So although this is a slightly different kind of room, with a lot more people in it and a different setup, we thought we'd try that again. We're going to focus first on the transport, then the TLS, then the HTTP issues. We're going to reserve some time at the end to talk about planning future meetings, future interims, how we get to done, and we have an as-time-permits slot.
We'll work our way up here, perhaps. I think we're actually trying to get to both of the as-time-permits items, maybe in ten minutes or so, but depending on how the issues go. If we make the call at that time that we'd rather burn time on the issues, we'll do that, but the plan is to leave enough slack at the end that we can go through those. It's not hyper-important that we touch every issue. So let's do your presentation, David, and keep the time reasonable. The other thing we want to do is a quick run through the triage of the new and late-stage process; we'll do that right after this. So, any agenda bashing on all that? That seemed good, everybody?
I would like feedback from people who were maybe not participating in the design team on what David's going to summarize.
I'm nervous — David Schinazi from Google; the rest of the design team members are on the slide. Next slide, please. So, quickly: the problem is that in the QUIC short header there's a key phase bit, which conceptually is the lowest-order bit of the key epoch, so every epoch it just flips. The problem with that is, if one side does key updates too quickly — let's say it does two key updates —
— without the other side knowing, they can end up being out of sync, and that means that all the packets get dropped and the connection just dies. Next slide. We managed to agree on a lot of design principles, which was nice. We really want to avoid trial decryption in QUIC, so we decided we wanted an explicit signal for when we both agree that we are now at epoch one.
G
So
then
it
is
possible
to
go
to
a
pocke
to
the
third
point
has
been
litigated
since
so
initially
we
really
didn't
want
to
key
off
of
acknowledgments
or
retransmission.
Because
of
some
implementations,
it
was
tricky
that
one
were
not
a
hundred
percent
of
ketosis
on
so
more
news
coming
soon,
but
the
model
in
general
we
want
four
key
updates
in
quick.
Is
that
any
end
point
can
initiate
a
key
update
and
then
we
require
a
confirmation
for
the
next
one.
You can update early, but also force the other side to update their send keys — the reason being that, if the cryptographic bound for one of our ciphers is found to be weak or shorter, either of the two endpoints that knows this can bring the whole lifetime down by forcing a key update. And we want implementations of whatever we're doing to be very simple, and not required to keep old keys for every single epoch back. Next slide, please. So the current proposal is to use one of the unused reserved encrypted bits in the short header.
We haven't bikeshedded which one of the two bits yet — that'll be fun — but next slide, please. The idea — and we haven't bikeshedded the name of it either — is a "key ready" bit, which means you send it only once you've received something at this key level, and then you're not allowed to initiate a key update until you've received it. It kind of ensures this latching: okay, we're both here, great, now we can go there — and wait for it — okay, we're both here, then we can move on. Next slide.
Please. Here's an example of how it works. Let's say you've been communicating at epoch zero for a while, so you both have key phase zero for that epoch and key ready one, because you've both been receiving at this level — you're ready for a key update. Then A initiates a key update: they update their write keys for sending in epoch one, so they start sending with key phase one, and their key ready becomes zero, because they haven't received anything there; they're not latched up. When B receives this, they're forced to update their keys — the read keys — so they receive this and have to update their write keys too. Now they send key phase one as well, and they're allowed to send key ready one — and they have to, because now they have received this — and both sides are in agreement about key phase one. And then the moment A receives this —
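The latching behavior walked through above can be sketched as a tiny state machine. This is purely an illustrative model of the design-team proposal as presented — not any real QUIC implementation; the class and method names are invented for intuition.

```python
# Illustrative sketch of the "key phase bit + key ready bit" latching scheme.
# All names here are hypothetical; this models only the logic described in the talk.

class Endpoint:
    def __init__(self, name):
        self.name = name
        self.key_phase = 0   # lowest-order bit of the current send epoch
        self.key_ready = 1   # 1 once we've received a packet at the current phase

    def can_initiate_update(self):
        # The latch: no new key update until the peer has confirmed this phase.
        return self.key_ready == 1

    def initiate_update(self):
        assert self.can_initiate_update()
        self.key_phase ^= 1  # install new write keys, flip the phase bit
        self.key_ready = 0   # nothing received at the new phase yet

    def on_packet(self, peer_phase):
        if peer_phase != self.key_phase:
            # Peer moved first: we are forced to update our keys to match.
            self.key_phase ^= 1
            self.key_ready = 0
        # We have now received something at the current phase: latch.
        self.key_ready = 1


a, b = Endpoint("A"), Endpoint("B")
a.initiate_update()          # A flips to phase 1, key_ready drops to 0
b.on_packet(a.key_phase)     # B is forced up to phase 1 and latches
a.on_packet(b.key_phase)     # A sees phase 1 back and latches too
```

After this exchange both sides sit at key phase one with key ready one, so the next update is allowed — exactly the "okay, we're both here, now we can move on" latch from the example.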
You could end up in a case where older packets from A end up arriving after you've already latched the key phase forward twice, so those would get dropped and you'd get a performance degradation. There are three ways we get out of this. One is for B to kind of hold back on saying: okay, now you're allowed to do a key update.
From Google — an informational comment on that last point: we have enough data from packet reordering in QUIC that I would say the previous issue you just raised is, in practice, not really a problem. I think something like ninety percent of connections literally experience no packet reordering or dropped packets through the entire connection. So I would just say, if you wanted to go with this design, I would just not solve that problem.
And so we wanted to go through the current triage issues really quickly, just to see if we can make a little progress on those, and to make sure that everybody's familiar with this process. This is the view that we're using: it's the project board on the GitHub issues list — I think it's project board five — called "late stage processing". What we do is, when new issues get created that refer to these drafts, we put them in this triage bucket, and Lars and I go through them and figure out whether they are design issues or editorial issues — although the editors can claim obviously editorial issues. If they're a design issue, we try to get a sense of whether there is interest in the working group in adopting that issue — whether we think it's worth discussing, effectively. Right now we have twelve of these.
Do we actually want to go through these now and try to do that? All right — sorry, the triage view — I think we can probably do that on our own and use this time for actual issue processing; just a minute to familiarize people with what we're doing, so that we're on the same page. The editorial issues are here: right now we have 36 editorial issues against these drafts. We have 30 design issues, which is a few more than we had a couple of weeks ago.
I'd say these are the ones that we need to actively discuss. Once the editors — and indeed we — believe that there's consensus emerging on a proposal, ideally a pull request against it that we think reflects the emerging consensus, we add the "has-proposal" label. Then we do a consensus call on the list, and once we confirm that there's consensus, Lars and I will label it "has-consensus", and then the editors actually incorporate that consensus, usually by just merging the PR.
Martin Thomson: I think we're done with this one. The consensus that seems to be emerging is that we want the transport parameters to be mandatory. If the client doesn't send them, the server can't either, which turns out to be really awkward for a number of things, and so it's easier to make them mandatory for both of them. I don't think anyone has any interest in doing anything otherwise. For instance, if you don't send these things, you can't open any streams.
Just go through them — but why don't you summarize what this is and where we're at? Yeah, this is Martin Duke. I asked the chairs to time-box this, because I don't think we're anywhere near consensus about this at all, but the gist of the issue is that we've put a lot of effort into blocking ossification of QUIC, and yet the easiest, dumbest thing for a middlebox to do is just say: version is greater than one — drop.
This happened — this was an issue in TLS, where TLS 1.3 pretends to be 1.2 and has some other hidden field that says it's 1.3. I don't have a great solution by any means. We had a long discussion on the list; we talked about possible obfuscation methods and so on, but I think the working group should spend some time thinking about a strategy.
This is a published specification — that specification will be published, and everyone will have it. So no matter what we do in that first message that we send, it will be possible for someone on the network to identify it as QUIC version one, or QUIC version whatever version of QUIC they understand. It is true that we could smuggle something in there that they don't know about yet, that looks like an extension or whatever else, but they could just decide not to allow those things that they don't understand, and we're back in the soup.
Just an interjection — I've always kind of thought of this issue as QUIC version ossification; is that uncontroversial? Yeah. I think the way that we can make a little bit of progress here, in a time-boxed fashion — and I'm almost at the point of saying let's cut the queues — is, at the end of the queue, let's take the whole entire queue.
Brian Trammell, a recovering researcher: this is a really interesting research topic. I think there probably was an alternate design that we could have used at some point where we could have made some progress here, which is: deploy arbitrarily many versions of QUIC at the same time, to make this not worth it. That is not going to get me happy looks from the chairs at this point in the process. I think that the discussion in this issue should —
It is very easy to add to a table — you don't have to — yeah, you have to sign up, so you exercise different code paths with the different versions, right. I think this discussion — we've had it, I think, now twice on the list and once in this issue — needs to be captured in some document somewhere that comes out of the working group, maybe not one of the core documents.
Jana: going to Martin's comment earlier — congestion control is a research topic; we still do it, though. So just because it's a research topic doesn't mean that we shouldn't do anything at all. I proposed a couple of things on the list about this. I agree that this is not a problem we fully understand here either.
It will be worth doing even if it is simple obfuscation, because I can tell you from experience that vendors do not even go read the spec — if I can get them to go read the spec, that's, for me, an achievement unlocked, right? So, as it is, the bar is fairly low here. To be clear, what I believe we may need to do is just a basic level of anti-ossification — and the bar is there, though.
Just sending one number on the wire for a long period of time runs the risk of getting ossified, and we have talked about this in the past with Google's experience with this first byte. It's a real thing, and, for what it's worth, I still don't know what happens when Google changes that first byte from nine to whatever it's going to be next. So there's some real pain there that should be anticipated.
Worth researching, right — that's the question. So if somebody comes up with a proposal during the QUIC version one timeline that gives us new information, or lets us build a mechanism here, rather than saying "we don't know what we should do" — that's fine. I think basically the question is: are we at that stage now? And I don't think the answer is yes.
We just can't reliably win this, and so I'm inclined to maybe have a little ossification proposal, so that they have to do a little bit of work to ossify us — but ultimately we just ship QUIC. The best defense against this is that we're already using version numbers greater than one — they're draft versions — and we keep moving on those drafts and future versions, so we need to deploy them. Yeah: the best defense against this is deploying draft versions, and draft versions of QUIC v2, and keep moving.
Philipp Tiesel: I have another idea how to make this a little bit more complicated to ossify on. It's just having a lot of version aliases. These alias version numbers are not used by default and are not part of the standard, but can be sent by the server in the handshake, can be cached by the client, and can then be used for a later connection to that server. And so we can really spread across a huge space of version numbers; they get a timeout.
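The version-aliasing idea floated here can be sketched in a few lines. This is a hypothetical illustration of the proposal as spoken — not part of any QUIC draft; the names, the 32-bit alias space, and the timeout handling are all invented for the example.

```python
# Toy model of version aliasing: the server mints throwaway alias numbers for
# its real version; the client caches one (with a timeout) and uses it on its
# next connection, spreading traffic over a huge version-number space.
import secrets
import time

REAL_VERSION = 0x00000001  # stand-in for the standard QUIC version number

class Server:
    def __init__(self):
        self.aliases = {}  # alias number -> (real version, expiry timestamp)

    def mint_alias(self, lifetime=3600.0):
        alias = secrets.randbits(32)
        self.aliases[alias] = (REAL_VERSION, time.time() + lifetime)
        return alias

    def resolve(self, version):
        # Returns the real version, or None (which would trigger version
        # negotiation) for unknown or expired aliases.
        if version == REAL_VERSION:
            return REAL_VERSION
        real, expiry = self.aliases.get(version, (None, 0.0))
        return real if time.time() < expiry else None

class Client:
    def __init__(self):
        self.cached_alias = None

    def pick_version(self):
        # First contact uses the standard version; later connections can use
        # the cached alias learned from the server.
        return self.cached_alias if self.cached_alias else REAL_VERSION

server, client = Server(), Client()
first = server.resolve(client.pick_version())   # first connection: real version
client.cached_alias = server.mint_alias()       # alias learned in the handshake
second = server.resolve(client.pick_version())  # later connection: alias resolves
```

The point of the timeout is the one made in the talk: aliases are per-server, short-lived state, so a middlebox can never learn a stable set of "valid" version numbers to match on.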
Yeah, all right. I like the way that he sorted that, which is: we start, as usual — there are people who have not read the specification, and then people with the specification, and fixing the interface between the two sets is different. Clearly, if the threat model is only people with the specification, there's a bunch of things we can do — you know, XOR over the last —
— the last two bytes, that kind of stuff, right. If it's not all people who have read the specification, then all you can do is — you need side information of some kind, like in the DNS, like the suggestion that was just made, and so on. That's probably the first move, my sense, for what it's worth; the other things are much more complicated. My sense, for what it's worth, is that —
— the initial ossification does in fact happen, as Jana suggests, by people with the specification. The experience with TLS 1.3 was that plenty of the people we ran into problems with clearly had read the specification, at least in part — but they'd read it and, you know, they're minimal parsers, for instance. So, what we have to do, instead of trying to fight them on this —
— is state what your obligations are as, like, a device in this category. And we don't know that it will work, because we've only run one cycle of this, but my sense is that anything we can do — people are lazy, they just aren't forward-thinking — anything we do that is "all you have to do is read the spec", no matter what the spec says, they will eventually figure out. Actually, one already has, for another protocol; they're not willing to think about it, because the ecosystem effects are bigger.
It was like you had to drive the deployment numbers down, basically to a cost argument, and with ordinary anti-ossification you don't really get that far. So — I think, to answer Lars's question — I think we should not block deployment of QUIC, and if people want to do something, they should start with that threat model, say exactly what they think would be a hit, and then we can weigh against those. Okay.
Okay, so I heard, like, three different positions. There's the "it's a blocker" position, which I don't think anybody spoke for, so I think we can take that off the table. And then, basically, that leaves: close this issue without any text changes, other than maybe a note in the applicability statement that discusses this, but no normative changes.
The other option that I think I heard at least some people express is whether it's worthwhile to have a simple obfuscation — whatever that gets us — or multiple version numbers in use that all fall back to the QUIC v1 protocol. Christian is shaking his head and Jana's not, so I think that was the only other thing that I heard, and I never captured it well. Brian, why —
I was just pointing out that I just noticed Mirja has actually already filed an issue to bring this discussion into applicability — so yeah, the suggestion is basically: yes, I think that's where we are — close, and move to applicability, but —
I'd prefer to keep this open and continue the discussion. I find the proposal that Philipp made interesting; I haven't thought about it enough yet, and I would like to explore the idea further. I think that the continued interest in this shows that this is a problem, and we shouldn't just close it.
Well, what I don't want to do is discuss it at, like, the next meeting, when we have another proposal which is substantively no different from the ones we've already seen — so there's closing the issue and waiting for something new. What I would say is there's, like, a series of possible options, and any one of them — I think a number of us could design some example —
I don't want to bikeshed whether it's the first bytes or the last bytes or whatever we obfuscate with. So, I think, if someone wants to do this, come back with something that is a fundamentally new phenomenon, a new category, or a consensus model, rather than saying: oh no, now we have XOR over the last four places in the first four bytes. That's the reason.
But the core point here is that we don't know exactly what a good solution is — what the requirements really are, when we get down to it, for any solution — and we can proceed: those people who are interested can go off and do the research and come back to us. If that happens before we ship v1, then great; if it doesn't happen before we ship, then I would prefer to carry on.
A very quick point: first off, to the earlier point, we can't put the QUIC version in as a v2 feature, because the version is in the invariants — it's not something you can figure out later. But, just quickly, in terms of process: I personally think this is interesting, but I totally agree that we can table it until there are more concrete proposals. That said, in general, any time someone comes up with a PR that has a proposal in it —
Sorry, but I do have one comment. At Google, we are going to be in a position where we are going to be stuck supporting something that looks a lot like IETF QUIC — that is going to be a draft version — for at least twelve months after v1 ships, probably longer, to my sadness. And I guess the point is, we're probably going to have something like three or four different versions that all look kind of in the shape of IETF QUIC, of various drafts or other things, prior to v1 and concurrently with v1.
And so hopefully we can get our crap together as a working group and ship v2 before we deprecate all the previous draft versions, and that will mean that a substantial portion of the actual QUIC traffic on the internet will still be using other, non-v1 versions. I hope that will mitigate this — I mean, I think that's a good incentive to actually ship v2 in a reasonable timeframe — and this —
— might even generalize: a lot of the different implementations have claimed private version space, and maybe, when you ship your v1, you also still negotiate your private version alongside v1. That might help a little bit too. Okay, so there are three options here to mark up — does this make sense, are the options readable? Yep.
Everybody would put their hand up — fine. You two, figure out if you have anything like a shared interest, and don't form a design team, but maybe discuss this a little bit. But I still would like you to have an answer for now, right? And Mirja makes a fair point: even if we close this with no action beyond a caveat, we can always reopen it if somebody comes back. A shadow design team — only the IAB does that, I —
I think, yeah — that was convincing within the context of this conversation. I think it would give those folks who want to come up with a proposal good information, right, if we get a sense of where people are leaning in terms of what they want to see here. Aaron, are you nearing the mic? Unfortunately —
A suggestion: you may have a variant of option two that blocks v1 for some fixed period of time, to ensure that the design team produces something — and I don't want to put timers on issues. All right. I think, let's just get a sense of the room; then people can act accordingly. Hum if you believe we should close this with no action.
The most compelling thing I heard was from Ekr, in terms of: let's establish what threat model we can actually agree upon here. I think we made a little progress on that, but until we really nail that down, we're going to have a back-and-forth to no end. So let's focus on that first, before we talk about any specific mechanisms — but we'll try to establish consensus on the list and see how we go. All right. Okay, how —
Yes, let me remind myself. The spec forbids you to go from a zero-length connection ID to a non-zero-length connection ID, should you find that you have a migration scenario you weren't anticipating. It was not clear to me why that would be the case, and this issue is about possibly allowing that change. The first comment is not reflective of, I think, where I ended up.
But yes, I brought it, and I believe the idea here is: you have a zero-length connection ID field, and then something happens and you can get new connection IDs. That is, "I have no connection ID; I then provide my peer with new connection IDs that are non-zero-length, and then, if there's a migration, I can switch to them."
From Google: I think the one reason for arguing for allowing this is that there may be cases where, typically, the 5-tuple is sufficient, and then in other circumstances it's not sufficient, and you only know later on. I think the biggest argument for bi-directional zero-length connection IDs that I see is peer-to-peer, and that's the one case we're using it for at Google. — Okay, so you'd like to accept a proposal along these lines.
So if you go from one to none, you have packets potentially coming in that have bits in the front of them, but you might be able to recognize the old connection ID and pull those apart. There's a nonzero probability that those bits were genuinely the output of the AEAD as well — so yes, it's entirely possible, and the other way around too.
Mike Bishop: depending on how you look at it, we're effectively trial-decrypting everything anyway — we decrypt it, see if it worked, and if it didn't, we drop it. So if we go from zero to a connection ID, yes, there's a chance of collision; either way, we're going to attempt to decrypt it one time, and maybe attempt to decrypt it a second time.
Yes, that's possible, but you can recognize the connection ID coming up front. I'm not as concerned about going from zero to a connection ID that you've previously issued as I am about the idea that you could ever go back to zero after you've done that — that means you're trial-decrypting everything, if you see a connection ID you don't recognize.
So, thank you, Martin, for précising that — I will be disagreeing with you. And now, for my next trick, I will explain why: QUIC doesn't exist in a vacuum. Underneath us we have UDP, we have IP; there is information in those bits. So imagine a scenario where you can connection-migrate where, if you're actually running over IPv4, you have a connection ID, because you need that to land —
— here, say, where your client is sharing the port. But imagine you migrate to IPv6, and you have a bajillion IPv6 addresses: you can encode information there, and so you do not need trial decryption if you know how to get to the connection from information outside of QUIC. So I think maybe this use case is kind of crazy and not that useful, but it's a real one, and something that people might want to do, and I would think it would be silly to prevent them from doing that.
That means that the peer is somehow ready to process that — that's the problem. The issue is: okay, the peer has proposed, say, length four, and we decide that's not enough and want to use length eight, or some standard — or zero. That means that what the peer has to do is — how do you say that in English — take back connection IDs that it has already proposed, because it realized that they were wrong.
So in that case we would need a mechanism, similar to RETIRE_CONNECTION_ID, that says: oh, I proposed you these old connection IDs, but they're not good for me anymore; please remove them from your list. And I think that, if you want to do that, we'll have serious synchronization issues. So my vote is to not do anything.
Martin Thomson: there is maybe a scenario in what David describes that is valid. It requires migration onto a new path that supports having no connection IDs; you could then change to sending no connection IDs. But you don't control that situation — your peer controls the situation, and your peer decides.
It gets worse, because you have to deal with the fact that pairing up connection IDs with stateless reset tokens is not possible when you have zeros involved. The idea is that a zero-length connection ID is effectively any number of connection IDs, and you can't do stateless resets with a zero-length connection ID. This would require that you deal with the consequences of having zero-length connection IDs: do I send multiple NEW_CONNECTION_ID frames with different stateless reset tokens? Does that even make sense?
Are you definitely going to use it? The question is: do you have a use case for it? Because the theoretical flexibility you're talking about is fine, but you can always serve theoretical flexibility at any point in time with a transport parameter extension — that's what your extension mechanisms are for. I'm not going to keep adding more things that are unnecessary at this point in time, and that nobody is really going to use right now.
I think maybe a quick hum might help a little bit. I'm thinking of asking whether folks believe that, in the specific case of the server's preferred address, we should be loosening things here; then, whether we should be loosening things more generally; or whether things are just fine as they are. So, first hum: if you believe that in the specific case of the server's preferred address we should be loosening this requirement, please hum now.
So I think that's something we can take to the list. If people believe that, they can raise a new issue that contains enough information to justify it — since we're in this new late-stage process, please do so — but know that you're going to have to bring evidence of a concrete use case to the group, to convince people it makes sense.
The next one is issue number 2471: stateless reset lacks normative text. I actually have no position on this, except that, if you read it, there are some MUSTs and so on about how you build the stateless reset, but no rules whatsoever — no guidance whatsoever — on whether I'm supposed to send them, how often I'm supposed to send them, what I need to do if I get one, etc., etc.
I thought we were going to agree there for a second, Jana. Yeah — I guess, until I agree with them. So, a reset is something that's optional for the server, but I don't understand any reason why it would be optional for the client. You say it's the client that sends it — there's also the server's interest in having the flight of packets stop. So I guess I don't really understand: what's the argument for it not being required for the client to send this reset?
Martin Thomson: oh, I think that's fine — I think the word is "endpoint" rather than "client", because for some reason we made it symmetrical. But yes, I think, if you receive one of these things, it should be something along the lines of a MUST. If we didn't do that, I think that's just a mistake — an editorial mistake.
I
So
this
is
a
an
offshoot
of
the
discussion
that
we
had
about
what
packets,
what
frame
types
are
allowed
in
0rt,
tea,
packets,
sorry,
and
so
there
was
a
realization
in
the
discussion
that
we
didn't
really
have
any
strict
rules
about
what
it
is
that
could
go
in
0rc
tea,
packets
and
one
of
those
rules
was.
You
should
not
include
in
a
zero
RTT
packet,
a
reaction
to
a
one
RCT
packet
reason
being
that,
if
you
are
able
to
read
one
RCT
packets,
you
should
probably
be
sending
them
and
the
pull
requests.
Mike Bishop: the discussion on this got kind of scattered, because there is one PR that is conceptually right — it describes what clients need to do, but it's not very concrete about how a server detects that a client has violated it. The other one, which I wrote, specifies exactly how a server could detect it, but it doesn't have as strong a statement on what the client has to do, because the only restriction is what the server can detect.
There's a question about principles here again, and it's a similar one to what we've been having with respect to this key update thing that we didn't talk about earlier. The basic question is whether we operate on the basis that we enforce behavior, or whether we operate on the basis that we simply describe what behavior the client must abide by — and this is really the split between the two pull requests. I would say that I'm okay with adding some text about what the server can detect to my pull request, but I would rather not rely solely on enforcement rules in this one, because we just can't: if we have non-cooperating peers in this regard, then we don't really have a protocol; we just have adversaries slinging rocks at each other.
I mean, generally, I think it's fine for things to be enforceable, but there were calls for requirements that are — for instance, the point made earlier about cached IDs — and I don't know how to enforce that from the other side. So I guess what I don't want to do is jump through a lot of design hoops in order to make mechanisms whose only purpose is to allow the peer to enforce that you're behaving correctly.
L
We expect you to behave correctly, and, you know, if we find you're not — this is especially true, by the way, just to round out the philosophy, if you can detect peers behaving incorrectly not with one-hundred-percent reliability, but where they're almost certainly behaving incorrectly. I don't think the ideal outcome was going to happen otherwise, and either side can automatically pull them off.
F
Can anyone summarize this, please? Good. So: if you are using, for example, bidirectional zero-length connection IDs, and you're communicating between two peers such that this is the only connection between those two peers — i.e. WebRTC, peer-to-peer, yada yada — and one of those peers would like to migrate addresses intentionally, the current text says you cannot do that. I would like to allow that, because it seems useful for things like WebRTC. That's it.
L
So I absolutely believe that it's useful to be able to migrate addresses in WebRTC, but my assumption has always been that in WebRTC you would migrate with ICE.
L
And then you would just start sending — so WebRTC wouldn't need PATH_CHALLENGE, because that's how you do it; I'd say that's how you do migration with DTLS-SRTP today, for instance. So I think my perspective is that, when we get around to binding QUIC to WebRTC data channels, we're going to find that the IP addresses are largely irrelevant, and that path selection belongs at the layer below, rather than sitting at the same layer.
L
That's what we found when we did WebRTC over SCTP: basically, the IP addresses just don't really exist at that level. ICE gives...
L
...an abstraction that's basically like, "I'm sending to the other dude", who's identified in some unspecified way, and then ICE just takes care of the routing. So I think we just need, effectively, a path specification at that point that says: basically, ignore the IP addresses.
F
So this is a really simple issue — so I'm sure it's going to get extra bikeshedding — but the Version Negotiation packet is currently the only packet in the QUIC wire format that does not have the QUIC bit set as required of every other packet. My concern is that ossification is probably going to happen, and so if you don't specify this, people may have their Version Negotiation packets randomly dropped on some networks and be very curious as to why. And so I'm strawman-proposing...
F
I'm trying to accept reality: I'm willing to give up one bit on one packet that we almost never use, I guess, is the point. But I don't know — maybe people have strong objections to this, or maybe people think my concerns about ossification are ill-founded. But I've already talked a bit about this with others.
I
I think he is probably right. I would not want to mandate that endpoints only accept Version Negotiation packets that have this bit set, but I would certainly respect an encouragement to set it in the spec. I think that's about the appropriate level at which we can put this: it would be nice if it could be sort of greased a little, but I'm not really going to aspire to those heights. So — SHOULD set? SHOULD set to one is what I'm proposing, and MUST ignore on receive. No? Okay.
Mike Bishop: I will point out that the QUIC bit is in version 1, and the Version Negotiation packet is in the invariants. So I think really the question here is: do we want to put the QUIC bit in the invariants, or do we just want version 1 to say, well, when you're picking your random value up there, we encourage a random value that works with one?
L
Man, you don't have to — I mean, this is, you know... it's just so fraught. Given that VN now really has the semantics of "I'm sorry, you're screwed", loss of the VN is not something that's going to have much impact. The premise of greasing...
L
...these bits is, you know, that these bits move around over time, and that if middleboxes screw with these bits, then we'll notice and complain, because connections break — great. But the problem is, because VN already means you're hosed, if middleboxes block these it's just not going to be a noticeable kind of problem for people.
F
Frankly, I feel like, you know, I'm in line — EKR and John agree with each other, so I ought to agree with the middle. I will point out that at the point we decided to call this the QUIC bit, we gave up on calling it grease, and I think we can have the question posed to V1. You know, the question, when we close the invariants, was whether we want to put it in the invariants now or later.
I have an idea — sorry, Martin Thomson — I have an idea; it may be a bad one. There are a lot of settings in which this bit isn't useful. If we allow endpoints with awareness of this setting to signal the fact that they would like to set it to zero occasionally, that might work. Now, it would have to be mutual, because I believe that there are certain deployments that really rely on the value being one, but that would allow us to grease it now.
V
Jonathan Lennox: one of the purposes of the QUIC bit is for the demux with other protocols, notably STUN. If you're running over ICE, does it need to be one for an actual reason, as opposed to just for the sake of the rest of the network? I mean, certainly, if you're aware that you're doing demux, you probably need to set the QUIC bit.
I
Can I make a proposal? — Yeah, go for it. — The concrete proposal is the same one that I made before, which is that we say that when generating a Version Negotiation packet, endpoints SHOULD set this bit to one, because that will make it look like every single other version 1 QUIC packet; endpoints MUST ignore the value, as specified in the invariants document. And that's advice that would be constrained to the QUIC version 1 protocol specification; we can explore, you know, optimistic ideas for getting it back in extensions. That's my proposal. I'm hearing...
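The SHOULD-set / MUST-ignore split in this proposal can be sketched as follows. This is an illustrative sketch only, assuming the long-header form bit at 0x80 and the fixed ("QUIC") bit at 0x40; the function names are hypothetical and not taken from any implementation or the spec.

```python
import random

LONG_HEADER_FORM_BIT = 0x80  # always set on a Version Negotiation packet
QUIC_FIXED_BIT = 0x40        # the bit under discussion

def vn_first_byte() -> int:
    """Sender side of the proposal: pick random low bits, but SHOULD set
    the fixed bit so the packet looks like every other v1 QUIC packet."""
    return LONG_HEADER_FORM_BIT | QUIC_FIXED_BIT | random.getrandbits(6)

def vn_acceptable(first_byte: int) -> bool:
    """Receiver side: MUST ignore the fixed bit, so acceptance depends
    only on the long-header form bit (per the invariants)."""
    return bool(first_byte & LONG_HEADER_FORM_BIT)
```

Note that `vn_acceptable` deliberately does not test `QUIC_FIXED_BIT`: a receiver that rejected Version Negotiation packets with the bit clear would break against existing senders.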
F
Yes — but Google can ask. It seems like we've gone through all the issues that were gone over in Tokyo, for the most part. Does anyone else have issues to report that they think are open, that we haven't gone over in Tokyo and haven't gone over here? I ask.
B
It is 10:30, which means we have half an hour left, and we have to give time to the presentation session and we have the next-steps session. So maybe any...
A
When we talked about this new process, one of the things that Lars and I use to judge whether or not we should discuss these issues in triage is how much interest the working group has expressed in them. So if these issues are interesting to you, it's helpful for us to know that — you can express that by commenting on the list, or commenting on the issue, or even just going to the issue and reacting to it.
A
Exactly. An issue that has no engagement whatsoever from the rest of the working group tells us that there's not a lot of interest in discussing it. We'll confirm that before we close it, you know, with prejudice, but it helps us if you tell us which ones you're interested in discussing. So please take a look at the issues list from time to time and do that — and the easiest way to do that is to look at this triage list.
A
We're going to have it in Europe as well, right? So it'll be two days of interop and two days of meeting. The meeting, I think, will be much like we saw today, where we just go through the issues, slog through them, and try to make as much progress as possible. I think the question, from the editors' perspective, is: can we get a sense of when the next draft that you want to publish is?
B
That one — it's easy to do; I'm happy to do it. I will not forget to email you the list this time, but the date we actually picked — sorry about that, I only put it on the Slack. We can have a discussion on whether we want to do 18, 19, or 20, depending on when that draft drops and on ready implementations. But it's easy to do one; if people want to do one, we'll just do that. Right, and I...
L
Yeah, I want to support what was said on 20: let's get a 20 out basically as soon as the editors feel they've gotten in whatever changes they can within the work that's occurring. It would be excellent to avoid the "oh crap, go back to eighteen" situation even after we'd already implemented nineteen. So I'm like: the sooner the better. Okay.
B
That sounds like a plan. Right — we're going to have a meeting in Montreal. It's likely going to be two sessions again; it depends a little bit what we're going to do with those two sessions, basically on what happens between now and then. In an ideal world, I think Montreal could be a meeting where, instead of slogging through the issues, we actually have a discussion on the two things that people seem to want to do next, which is either media — or, like, unreliable stuff — over QUIC, and/or multipath.
B
That kind of depends a little bit on whether we will have the time — whether or not we're going to have more working group interims after London. It also depends on whether people, specifically implementers and deployers, want to make rapid progress on either unreliable traffic over QUIC and/or multipath, because I think rapid progress means not just meeting during the IETF week. It's a trade-off, and we need to come to some consensus on that. — Sure, and...
B
There's no plan yet, but I think the milestones say that we're going to send the drafts to the IESG in July. That is aspirational at this point, I would say. I still hope, though, that we can do it after the summer and then really enter a phase where we keep them open and, based on feedback from the deployments, decide that they're baked enough to put the IESG stamp on them.
B
So — one announcement now, for those of you who don't know about it.
B
There's a second academic workshop on QUIC, co-located with ACM SIGCOMM, in Beijing in August. As a side note, the first workshop's keynote video was finally posted by the ACM yesterday — they apparently have a problem understanding that people might give talks without also submitting papers, and so the digital library couldn't handle that. But they finally figured it out; it's online, you can look at it, and Robin is going to put it on YouTube or something. It's a great keynote talk.
B
So if you do academic work, or if you're in industry and you want to talk to people in academia — and maybe hire their students — come there. We're looking for somebody from industry to give another great keynote like the first one. So if you're planning on doing anything cool with QUIC, especially in terms of large deployments, over the next few months, and you feel like you're ready to talk about that in August, we would be happy to have you come and do that. EPIQ — that's E-P-I-Q.
X
So it gets a response back about ten seconds later — and this is what connection migration does: it will try to solve the parking-lot problem. Before we really get into the details about the deployment data and experience, I think it's very helpful to keep in mind that there are several different types of signals that could trigger a connection migration attempt. The first one is a platform notification, which is initiated by the platform telling you about your network status.
X
For example: your current network is being disconnected, or a new network has been connected or made the default. The second type is called a write error; this happens when you try to write a packet to the socket. Usually it comes with an error code, and some of those errors can be preemptive signals telling you that your network is going to change. The third one is path-degrading detection.
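The three trigger types just described can be sketched as a simple classification. This is purely illustrative — the names and structure are assumptions, not taken from Cronet or any real stack:

```python
from enum import Enum, auto

class MigrationTrigger(Enum):
    """The three signal types the talk distinguishes (illustrative names)."""
    PLATFORM_NOTIFICATION = auto()  # OS says: network disconnected / connected / default changed
    WRITE_ERROR = auto()            # socket write fails; error code may preempt a network change
    PATH_DEGRADING = auto()         # QUIC-layer signal driven by retransmission timeouts

# Where each signal originates:
SIGNAL_SOURCE = {
    MigrationTrigger.PLATFORM_NOTIFICATION: "platform",
    MigrationTrigger.WRITE_ERROR: "socket",
    MigrationTrigger.PATH_DEGRADING: "quic",
}
```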
X
This is implemented in the QUIC layer, based on the retransmission timeout, and it tells you that your network path is probably not working anymore. Next slide, please. So now let's get started with the data. When we first analyzed the application data, about two years ago, we found that about two percent of the requests failed with a network change — meaning a network change during the lifetime of those requests.
X
So with that as motivation, we dug into the connection-level metrics, and we found that about 7.9 percent of the connections are closed due to the network being changed, and about 0.7 percent of connections encounter those preemptive packet-write errors caused by the network change. Adding those together, about 8.6 percent of the connections may be subject to connection migration.
X
With that incentive, we implemented connection migration stage one: we migrated the connections on platform notification and preemptive packet-write errors, and we shipped that in Cronet, the library packaging of the Chrome networking stack, and deployed it in the Android Google Search app, which was also used for the demo. The results came back really reliable — the data shows that the confidence level is about 99% — and we found that this feature...
X
...reduces the text-search failure rate by about 0.7 percent, and the cancel rate is reduced by about 0.3 percent. Some of you may recall that in the previous slides we mentioned that about two percent of the requests fail due to network change, and what we are seeing here is about 1%.
X
In fact, those improvements we listed here are across the board, not just on the two percent of the requests that we were targeting — so you're right, there's a gap between the opportunity size and the deployment data. So we dug into the connection-level metrics of the deployment again, and we found that, of all the packet-write error signals...
X
...99% of the time the connection's handshake hadn't been confirmed, which means that we cannot do the migration on the same connection. And of all the triggered connection-migration attempts, 31 percent of the time the connection didn't have an alternate network, which means your connection has nowhere to migrate to but has to stay on the original network. This is also affected by different versions of Android: the latest Android versions will keep the cellular network on, and those show better data for this feature.
X
...we will migrate to the alternate network; and for the second case, where the handshake timed out on the original network, we could also solve the before-handshake failure cases by spinning up a new connection on the alternate network. But how you spin up the new connection is very specific to the implementation, so different versions of QUIC might have different implementations. Next slide, please. So we shipped that and we experimented: we migrated the connection on path degrading, and we also tried to kick off a new connection on the alternate network before the handshake.
X
We experimented with this feature in the Android Google Search app, and the data came back very impressive: this stage-two feature greatly improved the search latency for both voice search and text search. In particular, for text search, you can see that at almost every percentile the server response time has been greatly improved. Additionally, we also found that the text-search failure rate has been improved by about 1.4 percent, and the cancel rate has been reduced by about 1.9 percent.
X
For those of you who have been following the connection migration work at Google very closely: the version we deployed here is version one, and what we do in version two is always try to probe the alternate network when the original network turns out not to be working. This is trying to ensure that, if the new network is not fully ready at QUIC, we won't fail the request immediately. And the second principle is that you always want to respect the platform's preference for the default network.
X
So, to summarize what we have been doing in the deployment: we take a connection-migration signal, which can be a platform notification, a packet-write error, a path-degrading signal, or a handshake-timeout signal on the original network. Then we check the handshake status. If the handshake hasn't been confirmed, we will spin up a new connection on the alternate network. If the handshake has already been confirmed, then we check whether the original network is completely broken; if it is completely broken, we do an immediate migration to the alternate network.
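The summary above reduces to a small decision procedure. The sketch below is a hypothetical rendering of that flow, not code from the deployment being described:

```python
def on_migration_signal(handshake_confirmed: bool,
                        original_broken: bool,
                        alternate_available: bool) -> str:
    """Decide what to do when a migration signal fires, following the
    flow in the talk: check handshake status first, then network state."""
    if not handshake_confirmed:
        # Can't migrate this connection yet: start fresh on the alternate.
        return "new_connection_on_alternate"
    if original_broken:
        if alternate_available:
            return "migrate_immediately"
        return "wait_for_network"   # start a timer and wait for a network
    return "stay_on_original"
```

For example, `on_migration_signal(False, True, True)` returns `"new_connection_on_alternate"`, matching the 99%-unconfirmed case in the data.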
L
Two slides back — thank you. No, that's perfect. So, I'm intrigued with this data: you say you're getting a very substantial improvement in text search at the 50th percentile, right? But I would assume the vast majority of searches do not involve migration scenarios. So why are you seeing improvement at the 50th percentile?
X
I think this is aggregated data, and in particular for the application's search they also do retries. So a request starts in, say, the application, and if that doesn't proceed, then it will do a retry at the application layer. The data we collected is per user-initiated request.
X
So if the request started, and — sure, you would try to get it done on time. If you don't have connection migration, you would get a timeout and then the retry, and then the retry. But with migration kicking in before the handshake, then the very first time the user starts a request, before you get the timeout, we will already try to do the migration on path degrading.
Q
I would be interested in digging into the data a bit more offline, discussing some of those questions and looking at it. But I also want to echo support for the idea of migrating back when the platform decides — you know, respecting that path selection and saying, if we think it's available now, let's migrate back. I think that's really nice, and it's great to see this data. Thank you for sharing. — Sure, thank you.
Y
Is that because, essentially, if that network is not available at the moment, we give up immediately? We've had some experience with this, and with adding in the ability for migration to have a delay — to say, if there is no available network, at least give it some time, wait for radios to come up. Are you guys doing something like that? — Yes.
X
We do have a migration timeout. When the migration signal is received, we will attempt to do the migration — not immediately do the connection migration, but if there's no alternate network, we will start a timer to wait for a new network to be brought up. I believe today we use about 10 seconds. If a network comes back, then we will try to do the migration onto the alternate network; if the network doesn't come, then we will abort the migration and clean up.
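The roughly ten-second wait described here could look like the following. This is a polling sketch for illustration only — a real stack would use platform callbacks rather than a busy loop, and the timeout value is just the figure quoted in the answer:

```python
import time

MIGRATION_TIMEOUT_S = 10.0  # the ~10 seconds mentioned above

def await_alternate_network(network_up, timeout_s=MIGRATION_TIMEOUT_S,
                            poll_s=0.1):
    """Wait for an alternate network to come up; True means proceed with
    the migration, False means abort the migration and clean up."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if network_up():
            return True
        time.sleep(poll_s)
    return False
```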
Z
I'm wondering about a few issues here. You gave an original use case, which is a parking lot; I can come up with a different use case, which is a car stopping at a traffic light. At that moment, arguably, you may not actually want to do something like this, because once you lose the Starbucks access point that happened to be seen at the traffic light, you may still have wanted to keep your cellular connection, for example.
T
And I'm showing some data here, which is related to an offload and some measurements that we did with QUIC. What we did is we went ahead and created an offload where we can do encryption offload as well as transmit segmentation offload, and we tested this out with the Chromium stack — we have a toy server stack which we optimized to run at 25 gigs.
T
We tested out the standard 64k segmentation against 9k and 1k — and when we did this, yeah, so that's the data I want to show before going back to the other slides. When we did this, we found that segmenting at 9k versus 64k looked almost the same, and digging into it further, we found that the reason they looked similar was that, even though the hardware or GSO could accelerate this a little better...
T
...it's a really good starting point, but we might need to do something more — and that something more looks a lot like TCP, where we send a whole bunch of data to the hardware and let the hardware take care of the framing: not just the L2 MAC, IP, and UDP headers, but also the QUIC header. When we try to do that, we get into some complications.
T
So this is an example of a segmentation offload that is like TCP's. At the top layer you see a QUIC header with one or more stream payloads, maybe an ACK interleaved in there, and we would have to give this information to the socket with some very detailed metadata about what is in that, you know, blue blob of data.
T
In order to do this, we have a little bit of a problem with how we can achieve sending a whole bunch of metadata down the Linux stack — and there's already a little bit of pushback there. So the combined way to do this is: we could modify the ACK header and have an ACK length in there. If there's an ACK length, then it's parsable by the hardware, and it reduces the amount of metadata that needs to flow in the stack.
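To see why an explicit length helps hardware, consider a toy framing where every frame carries a length up front. The layout below is invented for illustration — it is not the QUIC wire format, where an ACK frame's extent is only known after decoding its variable-length ranges — but it shows how a length field lets a simple parser skip an ACK without understanding it:

```python
def split_frames(payload: bytes):
    """Toy parser: each frame is (1-byte type, 2-byte big-endian length,
    body). A hardware-like splitter can walk frames -- and skip ACKs --
    using only the length field, with no per-frame metadata from software."""
    frames, i = [], 0
    while i < len(payload):
        ftype = payload[i]
        flen = int.from_bytes(payload[i + 1:i + 3], "big")
        frames.append((ftype, bytes(payload[i + 3:i + 3 + flen])))
        i += 3 + flen
    return frames
```

For example, a payload consisting of an ACK-like frame followed by a stream-like frame splits cleanly without the parser knowing anything about ACK ranges.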
A
Okay, so that sounds like a request to add an ACK length to the frame. I think what we could do is open an issue — we can help you do that if you need it — and then discuss whether there's at least interest in discussing that as a group. Based on the conversations I've seen in hallways so far, I think there is, and we can move forward with that. Yeah.
I
Martin Thomson: this is good work, and I kind of like the fact that you've done the hard work of working out what's important. The critical thing here is to optimize those things that we send a lot, and you've identified STREAM and ACK as being those critical ones, which I think is correct.
I
There already is a length in ACKs — it's just a different style of length — and I don't think there's any significant problem with moving and recasting it a little bit, from my perspective. But I'd like to make sure that we're confident that this sort of segmentation is necessary, because it might be possible, for instance, to only send ACKs through a known segmentation-optimized path and stream data around another one.
J
Two things very quickly. First, I want to thank Manasi for doing the work, and also for being patient with the group, because she's been coming here for a while trying to get us to actually pay attention to this stuff, and she's persisted — so I really want to call that out. Second, I don't think it's whether we need this or not, but whether it's useful or not, because that's what the question really is here. But yeah.
Z
We are happy about this type of work, and we think it is a way to go forward, at least if you want to make it implementable and enhance it. My single question to you: these slides show an evaluation in terms of how much load you have on the CPU versus your offload engine, in the context of encryption. Do you also have some power numbers — whether there is any benefit in the context of offloading this to a different unit? — Sorry?