From YouTube: IETF105-QUIC-20190723-1000
Description
QUIC meeting session at IETF105
2019/07/23 1000
https://datatracker.ietf.org/meeting/105/proceedings/
A
Okay, let's go ahead and get started. If you're having a side conversation, we'd appreciate it if you do that in the hallway. This is QUIC. We have two hours today and, I believe, two and a half hours tomorrow. We're having a bit of an issue with the screens in the room, so we're going to ask you to follow along another way for now.
A
So we're going to ask folks to follow along on your own laptops. We don't have a tremendous amount of presentation or visual aids. Sorry, this one is fine right there, but these ones are not working, and that one was working.
A
We have a Jabber scribe, we have a scribe. We'll go ahead and circulate the blue sheets; please sign them. Our agenda is pretty monolithic today: we're going to have a brief discussion of the hackathon, the Interop event, excuse me, and then we're going to get straight into issue discussion. We're going to follow the practice that we started.
A
It keeps... no, okay. So if the screens were working, you would now see a lovely slide that starts with a Note Well. We have the small one in the middle, if your eyesight is especially good. If you're not familiar with this concept, please use your favorite internet search engine to search for "IETF Note Well". That should take you to a description of the terms under which we all participate here, regarding important things like intellectual property, and how you share it, or how you disclose it, and your requirements to do so.
A
Also things like the anti-harassment procedures; we expect this to be a professional environment. Copyright, patents, and so forth and so on. There are a lot of process and policy documents around that, so if you're not familiar with it, please do familiarize yourself with it; it's important to what we do here. There's no screen! Okay!
C
These are the slides that I showed at the end of the hackathon. One thing I want to mention, and I sent an email to the list about it: we got feedback that it's hard for people who aren't following every single day, for several hours, to find out what things are happening related to QUIC. These are not chartered work at the moment, but are of a sort of interest, and we started a wiki on the GitHub wiki page where we collect those, and that collection basically has a little...
C
You know, a blurb on what this thing is, and then where do I go for more information, and where do I go to participate. So if you have something that's QUIC related: there are a few things that we know about that are on there, and they could use some detailing. But if you are interested in working on something, or you are working on something and it isn't there, feel free to add it and help other people find that work.
C
We aren't, as Mark said, going to have a lot of time to talk about this in the working group session, because we are still struggling with the issue list, but that shouldn't stop people from having side meetings on things they want to do in the future. Right, and it keeps flickering here. No... so you can go to the next few slides. Not that one.
C
Right, so this is our beloved Interop Docs sheet, which is getting very hard to read, because we had 19 implementations that participated, most of them with both a client and a server. You see some white rows and columns; that is because draft 22 dropped really, really close to the Interop, and some stacks simply haven't had the time to update from 20.
C
But we expect that to change. One new thing is that we now have rows of tests, if you will, where we could record the results in three lines, the third line being HTTP/3 related tests. In the past we just sort of had one test that basically said, you know: if I send you an H3 request, I get an H3 response. Now there are a few more feature tests, and we expect that to broaden out further in the future.
C
You know, keeping up with the change churn, that takes effort, right. One thing that's on the positive side: the next slide, if you can go down one more, you can see that it's a graph. Oh wow, they can't see it. What the hell happened there? So Jana and Marten Seemann have worked on a little side project that lets you plug your existing QUIC client and server implementation into a simulated ns-3 topology that defines a network topology, and it also defines simulated cross traffic.
C
So this is really cool, and this now lets people that have an interest in congestion control, but maybe are not intimately familiar with the different logging formats of the QUIC implementations, actually play around and see if they can improve things for us. So I think it's a nice milestone. It's early days for this still; more work will be needed. I'm pretty sure that Marten and Jana would love for people to help with this, especially students.
C
What I've heard is that if you're in the North American time zones, the virtual interops work quite well; it's a little bit more difficult if you're not, but that's to be expected. I certainly think we don't want to stop doing the virtual interops. I think the question is: do we want to reduce the frequency of the physical ones, because they're obviously costly. Personally, given the results, I'm not sure if we should at the moment, but that's one opinion. All right: Eric Kinnear, Apple.
G
I think this is exactly what's happening, and so I think we've been able to track down a number of kind of finicky, weird issues that we wouldn't have otherwise found when we were face to face. And given where we are in terms of how the implementations are coming along, I think we're in a place where we probably still need some of that, but...
H
David Schinazi, Google. So, plus one to everything Eric said. I think there's a lot of value in both the virtual interops and the in-person ones, if only because we all have day jobs, and having some magic deadlines that force us to get stuff done that matches the drafts is a really good thing. And regarding the in-person meetings, I think London was the first time, at least in my personal experience, that the Interop became more useful than the issue discussion.
J
Lucas Pardue, Cloudflare. Just to reiterate Eric's point: it might not be that we work on those issues, but the attendees at the face-to-face interop are channeling a team that are remote, and they may be in a different time zone, so being able to have another day, at least, to then come back the next day and retest is really useful.
A
Yeah, so related to that, excuse me: we are at this point planning, starting to plan, an interim meeting in the September/October time frame. It's pretty clear that we're going to need to discuss issues; it's pretty clear that we need to do more interop work. And so, I know that we've been doing this for a while, and some people are getting a little exhausted, but we need at least one more in our rotation.
A
You know, we've historically rotated: one interim meeting in Asia-Pacific, one in North America, and one in Europe, and so in our rotation the next one should be in North America. We're having a lot of trouble finding a host for that meeting. We have a few folks looking for rooms at their companies this week, and we're hoping that that will eventuate in something that we can announce to you by the end of the week or early next week.
A
But if you have a meeting room at your company that you think is suitable for our requirements, and if you're not clear about what those requirements are, please come talk to Lars and I. We're really looking for more facilities where we can host these meetings. It's kind of a tough set of requirements, and that's why we're having trouble out there. So, yeah.
A
If you know of something, please do come to us. But with any small amount of luck, one of the avenues we're chasing will become fruitful, and we'll have an announcement of an Interop slash interim meeting in September/October. I think right now we're looking at the week of October 7, but we're also open to other dates, potentially, you know.
A
Okay, so let's now jump to GitHub today. So this is kind of the project board on GitHub that we've been using to keep an eye on the issues list. And if I look at this: as we mentioned last time, Lars and I regularly triage new issues to figure out if they're design issues, editorial issues, or they need to be thought about and discussed a little more before we accept these issues.
A
Right, and to that earlier point about exhaustion: if we keep on doing this for another year or two, I'm concerned, I think we're concerned, that we're going to lose implementers, we're going to lose interest, and the effort will fail. And so we need to focus and get these final few issues done. I mean, it's 22 issues; it's not that many considering how many we've closed to date.
A
So let's do a final push and get these done, so we can really get a good set of drafts out that we can start thinking about working group last call on. Yes, I said working group last call. Not yet, don't be so self-congratulatory. Hey, if we can get consensus to close all these issues, we can close all these issues today. I don't think we're going to get there, see.
A
So let's go ahead and go through these. If you're aware of a dependency or a relationship to another issue, please say so. These aren't in any particularly special order, but we'll discuss them and try to get through as much as possible in the time we have. First, 2792. Marten Seemann, do you want to tell us a little bit about this issue? I can't scroll, I can't scroll. Where's the scroll bar?
L
Sorry, our projector... I thought about this a little bit, not a lot, and it's hard, and I'm not sure that it's worth addressing with anything other than maybe text, which I'd like to do.
N
So I guess, as I scroll, I don't actually understand this issue. Reading this first paragraph: the fact that the key is computed when we see that packet is a side channel; it might be used to recover the header protection key. If it's possible to recover the header protection key from measuring the cost of computing it, then we have problems all up and down the stack, because we're constantly doing that work.
L
You remove packet protection, and that includes removing the header protection. And Martin's point is that, in the case where you see that bit flip, the middle of that process contains a non-constant-time operation, which is generating a new set of keys. The fix would be to generate the next set of keys at some other time. Okay.
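The timing concern and its fix can be sketched roughly as follows. This is an illustrative sketch only, not the draft's actual key schedule: the `hkdf_expand_label` helper, the `b"quic ku"` label, and the class shape are hypothetical stand-ins for a real QUIC key-update implementation.

```python
import hmac
import hashlib

def hkdf_expand_label(secret, label, length):
    # Simplified stand-in for the HKDF-Expand-Label construction that
    # QUIC borrows from TLS 1.3; not the real wire-compatible encoding.
    info = length.to_bytes(2, "big") + bytes([len(label)]) + label
    return hmac.new(secret, info + b"\x01", hashlib.sha256).digest()[:length]

class KeyState:
    def __init__(self, secret):
        self.current_secret = secret
        # Derive the next-generation secret now, off the packet-processing
        # path, so a key-phase flip never triggers variable-time work
        # that an observer could measure.
        self.next_secret = hkdf_expand_label(secret, b"quic ku", 32)

    def on_key_phase_flip(self):
        # Rotation is now just a swap plus rescheduling the precompute;
        # no expensive derivation happens while handling the packet.
        self.current_secret = self.next_secret
        self.next_secret = hkdf_expand_label(self.current_secret, b"quic ku", 32)
```

The point being made in the room is exactly the difference between deriving keys inside `on_key_phase_flip` versus ahead of time in the constructor.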
L
Yeah, can we get rid of the notification and make it fullscreen? Next page, please, when you...
L
You need to use the arrow keys rather than the space; I don't know who designed this system. All right. So we have a number of options that we've been discussing on the issue, and it seemed like it was not going to resolve on the thread, because we had weak preferences in every direction. It's one of those wonderful scenarios where there's weak preferences, so I'm going to go through the three options.
L
This is what Nick proposed, which is to remove the scaling of the ack delay parameter. For those people who are not aware of this: we express the time that the stack holds on to, that is, delays, the acknowledgment of an ack-eliciting packet, and we express that in microseconds, but we have a scaling parameter that's exchanged as a transport parameter. This option would say: remove that. And the advantage of doing that is that it's much, much simpler.
L
The disadvantage here is that, as a result, you're encoding larger values for your ack delay, which will take one, maybe even three, more octets when you encode it. I'll also point out here that if we decide that this was a bad idea in retrospect, adding a transport parameter back for this one is super complicated, so we would be kind of committing to this unless we wanted to revise the protocol completely. Next slide, please: option B. You might want to hit the fullscreen button again, and then...
L
Yeah, that might do it. Terrible thing, isn't it? All right, option B: we express the ack delay with a, thank you, Mary, with a scaling parameter, just like what we have in the document today, with a shift. So it's an integer shift: a shift to the left by 3, which is the default, or a shift to the right when you encode it. This one can reduce the bytes on the wire, and it's very simple to implement; the number of gates that you need to do this...
L
Sometimes, if you want to keep the scaling in use, you then need some multiplication and division in that case anyway. Next slide, please. Right, there we go. And what was proposed on the thread is multiplicative scaling, which basically says we just multiply the number rather than shifting it, and which degenerates to a shift if you choose certain values. Arguably it's the most complex, but really we're not talking about complexity here; none of these are complex options. And that's changed because of a clarification. Yes.
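As a rough numeric illustration of the three options just described (a sketch with illustrative constants, not the draft's wire encoding): option A sends raw microseconds, option B divides by a power of two given as an exponent, and option C divides by an arbitrary multiplier, which degenerates to option B when the multiplier is a power of two.

```python
def encode_raw(delay_us):
    # Option A: no scaling. Simplest, but larger varint values on the wire.
    return delay_us

def encode_exponent(delay_us, exponent=3):
    # Option B: exponential scaling (a shift); 3 is the default mentioned.
    # The receiver recovers the delay as encoded << exponent.
    return delay_us >> exponent

def encode_multiplier(delay_us, multiplier=8):
    # Option C: multiplicative scaling; the receiver multiplies back.
    return delay_us // multiplier

delay = 25_000  # a 25 ms ACK delay, expressed in microseconds
a, b, c = encode_raw(delay), encode_exponent(delay), encode_multiplier(delay)
```

With these values, options B and C encode the same number, while option A sends the full 25,000 and so needs more bytes as a variable-length integer, which is the octet-count trade-off raised earlier.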
L
Those seem to be the options. We have some interesting combinations of people with different preference orders. Oh, one last point: the discussion on the issue made it very clear that we wanted a negative vote. So rather than voting for A, we would want to hear from the people who hated A, because, well, you know, you can't live with A. And we may find that there's silence on all three options, in which case we might have to find some other way. So.
A
It seems like the first question should be to see if everyone agrees that there's not a strong view, and so that this is the appropriate process to use. Ted is stepping up to the mic; is it as the author of those things? Okay, do people have thoughts? Does anyone disagree with that statement, that there are not strongly held views here, and so we just need to make a decision? Oh.
U
That's Ted Hardie, as author of RFC 3929. I would actually suggest one of the other procedures in 3929 for this, which is the "pick somebody and make them decide" procedure. The working group can come to accept that instead of the working group making the choice. If you get to that point, you can simply pick somebody by lot from the working group and say they're going to decide, given, if what Martin said was true, that there are pretty much equal selections among the different...
L
I was going to correct that, okay. From my reading of this, and this is just my reading of this, one of the options is distinctly less preferred than the others, but that may complicate the suggestion. My reading is that A is much less preferred, but there are a few people who are advocating for it.
U
Ted Hardie speaking again. If that's the case, then if you're selecting by random lot, there's a fair chance that you'll get somebody who doesn't like A, because there are more of them. But there is always a chance that you will end up with one of the people who likes A as a result. But the whole thing, all of this, is predicated on the idea that everybody can handle any of them; that it's a preference.
A
Are we ready to make that decision, or at least gather that data? No, we're not doing that. This is a normal, everyday, pedestrian IETF hum, such as it is. So the first hum again is: you prefer option A, which is no scaling; we don't believe we need scaling. The second hum will be: you believe we need some form of scaling, without talking about which particular form that will take. Is that clear? Okay, please hum now if you believe we do not need scaling.
A
So, are we ready to decide between option B and option C? Do we believe we have all the information available? I'm going to ask again two questions: I will ask you to hum for option B, exponential, and then I'll ask you to hum for option C, multiplicative. You can hum for both if you have no preference, or if you have some other state where you believe that both are okay.
T
Martin Duke. I agree with inaction here, just because I don't think it is a big deal; I don't think anyone's going to be dramatically affected, and just reducing churn, change, and PRs and consensus costs at this point is quite valuable. And my preference for B is not at all technical; it's solely to not unnecessarily shift the standard for non-critical reasons. So.
P
I don't want to belabor the technical point, but there is a difference between the two mechanisms. With a shift, it's natural to go from microseconds to milliseconds by dividing or multiplying by 1,000. It's not 1,024. I know it's simple and silly, and perhaps a minor point, but milliseconds to microseconds is a multiplication by a thousand, not a shift by ten; that's simply what it happens to be. If you want to approximate it as 1,024, that's the discussion you want to have.
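The arithmetic behind this point is easy to check (illustrative numbers only):

```python
delay_us = 1_000_000  # one second, expressed in microseconds

exact_ms = delay_us // 1000  # true microsecond-to-millisecond conversion
shifted = delay_us >> 10     # a shift by ten divides by 1024 instead

# The shift undershoots the true value by about 2.4%, which is the
# 1000-versus-1024 discrepancy being discussed at the mic.
```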
I
I completely agree with that. But, on the other hand, people who want precision can always use a scaling of one, though that effectively forces them to go back to option A. And so maybe we can ask the people preferring option C if they can live with using a scaling of one, and then we'll see whether we can live with that.
F
Christian Huitema. Really, if the problem is conversion from microseconds to milliseconds, or having a multiply, you can deduce your multiplier from the shift value; that's not an issue. I am already wondering why people are so concerned about that. Optimize your dumb multiplication, that's all, come on. We have modern CPUs that are pipelining; in many implementations you're not getting into CPU stalls there. I have a hard time believing that it has any effect whatsoever on the grand performance of the thing.
H
David Schinazi, Google. A question about process: this is a change to the transport doc, and the transport doc is under the late-stage process. My understanding of that process, and please correct me if I'm wrong, was that if something doesn't have working group consensus, then we close it with no action and move on with our lives. Clearly, this doesn't have consensus; why are we still going around in...
A
If I'm reading that, and to be clear, we don't make a decision based just on hums. If people have technical arguments that they think will change people's minds, please bring them now. Otherwise, we're not hearing this; we're not hearing the minds change. We're hearing the minds pretty much focusing on option B. Let's drain the queues relatively quickly. Okay, sure.
N
I mean, Martin, that wasn't my primary motivation. My primary motivation was: we have something in the spec, and if it's fine, then I don't want to go over it again and again. I don't think it's a big deal for us; like, as Ian said, if C were in the spec, I'd probably be like: fine. Okay.
H
David Schinazi, Google. So this issue came up from discussions on some other issues, where basically we realized that, as currently defined in the document, the client connection IDs don't actually accomplish what they're trying to accomplish. So in QUIC, the goal of the connection ID is that when you receive a packet, it allows you to identify which connection that packet is for. And there are cases where you don't need those: let's say, as a client, if you have one UDP port per connection, you don't need to multiplex; they all go to the same connection.
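The routing role being described can be sketched as a toy demultiplexer (an illustration only, not code from any implementation; the table names and shapes are made up):

```python
# A receiver can map an incoming datagram to a connection in two ways:
# by the destination connection ID carried in the packet, or, when
# zero-length connection IDs are in use, by the packet's source address.

conns_by_cid = {}   # destination connection ID (bytes) -> connection
conns_by_addr = {}  # (ip, port) source address -> connection

def demux(src_addr, dst_cid):
    if dst_cid:
        # A non-empty CID travels with the packet, so routing survives
        # client address changes (migration, NAT rebinding).
        return conns_by_cid.get(dst_cid)
    # Zero-length CID: fall back to TCP-style source-address routing,
    # where any address change looks like a brand-new connection.
    return conns_by_addr.get(src_addr)
```

Under source-address routing, a NAT rebinding moves the client to a new address and the lookup fails; with a connection ID it still succeeds, which is exactly the repercussion discussed next.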
H
You can use zero-length connection IDs; that was kind of the rationale behind them in the first place, and how I personally have been reasoning about them. Except, in the current document, it says that the server can decide to use a zero-length connection ID even when it cannot multiplex, and then that has a whole bunch of repercussions, such as: oh, when you do that, the client is no longer allowed to migrate, or, if there's a NAT rebinding, everything crashes and burns. And so what I'm proposing, I have a PR...
H
...for this, is to say: look, you can use zero-length connection IDs if you can multiplex without connection IDs. It's pretty simple: a connection ID is something that is only in the direction towards you, and that's how you use them. That makes the spec a bunch simpler, removes a bunch of text about stuff that was disallowed, and I don't think the one use case, of a server who wants to use zero-length connection IDs and multiplexing, is a very useful use case.
L
Martin Thomson. I just labeled this proposal "ready". We've had a bunch of discussion on this one, and it still slipped the net. I think David's solution is correct. Using information that you control to ensure the packets are routed to you is very good. Using source-address-based routing doesn't work uniformly in this protocol, because of the risk of NAT rebindings and migrations.
V
Igor Lubashev, Akamai. It just seems that this proposal excludes the case of two machines that know that they're not behind a NAT of any kind, and are just not using connection IDs. So this requires that the server does not use source IP information from the packets, which seems unnecessary if the server knows that the source is not behind a NAT.
V
Yeah, and, yeah: imagine many machines belonging to, I don't know, Google, whatever, Akamai, that are not behind NAT. They just want to make a point-to-point connection to each other. They use the same port; they use the same destination port. "So that breaks connection migration?" Yeah, they're not going to migrate, right? They're stationary; they're actual server hardware; they're not mobile.
N
I understand David's point here, but I'd point out that the primary use case David is interested in is not something we're chartered to do, and the thing that is, like, emulating basic HTTP servers, is something we're chartered to do. And what I just said: the primary thing is that the peer-to-peer use cases are not something we're chartered to do. Can I finish?
N
We are not chartered to require connection migration; we're chartered to allow connection migration, plus a decision that you could just opt out of connection migration if you wanted to, and so on. While I think it's certainly valuable to be able to indicate to the peer that you'd like them not to use it, or that they won't see it, do you let them have a nonzero-length connection...
L
In order for someone to decide that it's possible that they can use some aspect of the source address to determine which connection the packet belongs to, they have to be certain that every single client that they will ever talk to is not going to be subject to migration and is not going to be subject to NAT rebinding.
W
Tommy Pauly, Apple. So I was just thinking, maybe one solution we could do, to modify the existing pull request for this to take into account the thoughts just raised, is to specify that the server can have zero-length connection IDs for itself if it, in its transport parameters, also disables migration. So we can just say that if you want to do this, explicitly disable migration to not allow it, and that way you cover this point-to-point server use case.
G
Eric Kinnear, Apple. Just a brief note on that: I think disable_migration is becoming disable_active_migration, because it has nothing to do with NATs, and so I think those things are orthogonal. So if you really wanted to do this, allow the zero-length connection ID and remove the transport parameter entirely.
N
So I think, scroll again, what Ian said is actually what I was trying to get at, which is, like, the way your standard TCP server works now is: the client picks an ephemeral port, the server has a static port, and the server uses the 5-tuple to distinguish clients. And if a client's port or address changes, then TCP is hosed and things fall over, and so on, which is great.
N
QUIC allows you, has a mechanism to allow you, to survive that, but it is not mandatory. My point is, it's probably reasonable to say: I don't care about this mechanism; I just want to say that this is more or less TCP, and in those cases, why should I have to bear the overhead of any connection ID? So I certainly understand, now...
N
...David's point, which I agree with, which is that if you want to make a peer-to-peer protocol where everybody uses the exact same ports all the time, then, you know, you should use the connection ID to demux. Actually, I'm a little surprised you're having that problem, but I'm prepared to believe it if you do. But it seems like the straightforward way to do that is to have an extension that says, like: no, listen, you can't even send me a connection ID. I'm also a little surprised, and it seems like that...
N
I'm not quite sure I understand what your design is, such that you wouldn't know a priori. That wasn't saying you should, but maybe that's different. But it seems like this is a client meeting a server; we can easily negotiate whatever flag it happens to be. So I'm sort of puzzled at pulling the feature out at a MUST level here, whether you negotiate it or the converse.
X
Brian Trammell. I think I'm here to say what Ekr said, but he trailed off at the end; the acoustics in here really suck. The TCP semantics do have the downside that things get screwed up. The world has lived for the past thirty years with those semantics, and for the past 28 years with NAT, or however long that's been going on. I mean, these are risks that pretty much everybody... there's connectivity risk in the internet. Yeah, great.
X
We know that. I would essentially support a slight change to this PR that essentially just notes that: hey, if you want to use TCP semantics, great, you get TCP guarantees.
S
And yeah, so it doesn't matter if we have functionality that tries to get us around the NAT, because invariably somebody is not going to support all the features that actually make that work, which means that the application stacks are always going to have to have their timeouts and their reconnects. The timeout and the reconnect is ultimately the only way you're going to get reliability in a vast number of scenarios. Anyway, it's going to have to be there.
H
Otherwise you don't need them, and most, like, browsers today will use ephemeral ports. That's why the Google QUIC encoding didn't have those, but we added those client connection IDs at some stage of gQUIC to allow this use case. In the current spec that doesn't work, because if, as a client, I want to create a connection, I don't yet know if the server is using these TCP semantics, and then everything falls over if I put multiple connections on my same local port and the server is using the 5-tuple.
L
And I think Ekr's point there is a really good one, and mostly convincing for me. I'd be, I think I'd be, almost okay with the idea of what Brian said: if you want these TCP semantics, you get TCP guarantees. The only reservation I have, and I'd like people to think about this a little bit, is that this is a unilateral decision in a two-party protocol, and whether or not we need to have some sort of negotiation for that property or not, I...
S
Roberto Peon. So I'll say again that we, as a group of people making a protocol, what we are doing is providing potential functionality to endpoints, and these endpoints can decide to not use this functionality regardless of what we spec. In particular, even if we have a connection ID, it can be ignored by the server; I mean, connection IDs are there for routing for the server in this particular direction.
S
So having this does not guarantee that we get anything better than TCP-like functionality, regardless. I will also state that servers who are silly enough to do this globally are going to see much worse outcomes than servers that actually put the connection ID on there. If we're worried about performance, this will fix itself; I don't think we need to talk about this deeply, because it's going to fix itself regardless. So.
V
Igor again. Yeah, basically similar to what Roberto said: if a server has clients that want to migrate and it has zero-length connection IDs, it is just a broken server. But not all servers are designed to serve mobile clients. Some servers are designed to serve stationary clients that they really know about, and they know what their properties are, and those that fit that profile should not be penalized by being required to have this connection ID that they don't need.
I
Firstly, I'd like to add my plus one to what David says. If we eventually make this change, it really matters for having one implementation that does both client and server, because the requirements on how the sockets are being mapped to connections become different on the client and the server side.
W
Tommy Pauly, Apple. One other proposal for how we could deal with this, given the current conversation, is to kind of take this PR and maybe move it out of the transport, to allow the transport to kind of be open to any use of connection IDs, and put this into the, like, manageability document, I'm sorry, applicability document, to give guidance about how to deploy it. Now, that's fine as...
H
Schinazi. I think what it boils down to, and Martin described it really well, is that it is a property of the server to want to use old TCP semantics or new QUIC connection ID semantics, and if we allow it to unilaterally make that decision, then we end up having to negotiate this. The problem is: this is something that, as a client, you need to decide before you start your connection, and so you can't negotiate something when you haven't talked to the other side before, and that's why, like, this breaks down pretty quickly.
G
Yes, I think the proposed change is a strict improvement over the wording in red, with a one-word edit on top, which is: an endpoint SHOULD NOT use a zero-length connection ID. In other words, this is a clearer explanation of what this means, and it expresses a preference. But if you don't make it a MUST requirement, then it achieves the same thing as the other wording and lets everybody know what the consequences are.
L
So, Martin Thomson, while David is going up there: I think that is a good improvement. I don't think it covers the unilateral decision-making process, because the problem here is that we have one endpoint that can make a unilateral decision that affects the other endpoint. Now, it's looking like we're comfortable with that, but I don't think it addresses that concern.
H
So the PR does multiple things. One of them is this MUST that Andrew suggests we change to a SHOULD. Another thing is that, because we have this MUST, it allows us to remove a bunch of text, I think further down, that says: oh, if you receive the signal that the server is using this zero-length connection ID, disable these features. And the problem is, if we turn the MUST up there into a SHOULD, then you end up leaving the clients guessing, and maybe we're...
N
I think it's important to recognize we're actually talking about two features which are sort of coupled. One feature is sustaining migration on the peer's side, and one feature is having multiple connections on the same host and port pair on the peer's side; I mean, there's having one connection on two local ports, as opposed to... And those might seem like two parts of the same feature, but they're very different from, like, the server's perspective, and they're very different inputs to the design of the system.
N
So I guess I'm trying to understand: it's basically the restriction in TCP that you can't survive that; you can't have two clients with the same, like, this important parameter, right?
N
If you're the server, that's right. And so I think the question, the way to turn around Martin's question, is: should we require all QUIC servers, regardless of the application they run, to support the richer semantics of allowing the same host and port on their side, or should we allow them to operate with the easy-peasy TCP semantics? And I guess the thing I think is probably worth pondering, to echo Roberto's comments and David's comments, is like...
N
Is this really something which has to be done at QUIC scope, versus something that belongs at an application scope? And in particular: does the HTTP application we were chartered for need that, or is that really another application which is coming out later, which we can then layer new semantics on?
L
L
H
So, I don't think we should put this at the application layer, because all of the concepts that this impacts are at the transport layer: connection IDs, migration. And also, like Martin was saying, this is a property of your deployment. Is your client sharing a port? My HTTP browser can share a port. It's not a property of the fact that we're running HTTP or something else. That said, moving this to a SHOULD, and maybe adding some text... I can add, like, a sentence on: here's...
N
...as long as it does not interfere with closes as well. I guess I'd be comfortable with the SHOULD piece of the part that Andrew pointed out, but not the rest of it, because I think certainly encouraging people not to rely on the wider semantics is a fine plan.
N
If it turns out that everyone wants to do that, we can pull that off later, but I'm less certain about the other pieces. I think that problem won't go away, but we can study it later. I think I'm less comfortable with this sort of "SHOULD, unless you know that the network doesn't have NATs", with those kinds of things, personally.
N
The point is that the servers want to tolerate those behaviors, so I think it's fine to say here: here are the consequences that will happen.
N
If you don't do this, you know, I'm happy to have that consequence text be arbitrarily long and scary. But on the point of having normative text that says you can't do it unless, you know... it would also be fine with wording like "don't do this unless you can tolerate the consequences", but not with saying "don't do it" when those consequences don't apply. Okay, let's.
A
R
I'm not sure if I'm understanding the issue here correctly. Let's assume I have a client that always uses the same port to make new connections, and wants to use connection IDs to demultiplex. And now the client wants to establish two connections to the same server. Then this will just break, right?
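The breakage being described can be sketched as follows (a minimal, hypothetical model, not from any implementation: the helper names and table layout are invented). A server that demultiplexes by destination connection ID has to fall back to the address when the client uses zero-length connection IDs, and two such connections from one client port then collide:

```python
# Hypothetical sketch: why zero-length connection IDs break demultiplexing
# when a client opens two connections from the same local port.

def demux(conn_table, src_addr, dst_conn_id):
    """Route a packet to a connection, preferring the connection ID."""
    if dst_conn_id:                      # non-empty CID: unambiguous
        return conn_table.get(("cid", dst_conn_id))
    # Zero-length CID: fall back to the source address (the 4-tuple).
    return conn_table.get(("addr", src_addr))

conn_table = {}
client = ("192.0.2.1", 4433)

# First connection from this address, zero-length CID.
conn_table[("addr", client)] = "conn-1"
# Second connection from the *same* address also uses a zero-length CID:
# the server only sees the 4-tuple, so the table entry collides.
conn_table[("addr", client)] = "conn-2"

assert demux(conn_table, client, b"") == "conn-2"   # conn-1 is now unreachable
```

With non-empty connection IDs, each connection keys on its own `("cid", ...)` entry and both connections stay reachable.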
H
L
R
So, so we should make it a MUST, because this is a decision that the server makes, and the client has no way of knowing. Let's assume the client really wants to open two connections, because the server is serving two domains. It has to know whether it's allowed to establish those two connections from the same port, or whether it has to open a new socket and establish the two connections from two different ports.
H
Sure, so to explain the SHOULD: my personal preference was a MUST; that's why I put it in there. But the reason I could live with a SHOULD is that you only violate that SHOULD when you know your clients and your deployment, when you know your network works like that and you are aware that this condition won't happen. That's what allows those use cases to still survive and not have to violate the MUST. That's why we care about it, but...
C
C
A
N
L
L
L
L
M
L
L
They share the same network path, and there are recommendations here about doing things like coordinating spin. I think we have agreement, amongst the people that participated in the discussion anyway, that this is aspirational at best, and there is a PR (I will find it; I apologize for not linking it properly) to simply remove that paragraph. Okay, is...
A
Q
Hi, Nick Banks, Microsoft. I opened this issue and we had a lot of discussion. Initially it was essentially stating that statelessly load-balancing the long header packets based off of the destination IP address was not possible in the load balancer, but we came up with a solution, thanks to Kazuho, that you can use the long header source and destination connection IDs, and it achieves the result you need. I don't have any further need to keep this open; there was a little bit more discussion, but I'm fine with closing this now.
A
R
O
V
O
A
H
It's me again. So the current spec allows that, for each connection ID you have locally, you tell the other side what the corresponding stateless reset token is, so that if you receive something later there, you just send a stateless reset, and they know that it's a valid stateless reset and they close the connection.
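The mechanism just described can be sketched roughly like this (a simplified, hypothetical model; class and method names are invented, and real endpoints compare tokens in constant time). Each connection ID advertised to the peer carries a 16-byte token; a later packet that fails decryption but ends in a known token is treated as a valid stateless reset:

```python
# Hypothetical sketch of the stateless reset flow described above: each
# connection ID we issued carries a 16-byte token; an undecryptable packet
# that ends in a known token is a valid stateless reset, so we close.

TOKEN_LEN = 16

class Connection:
    def __init__(self):
        # token -> connection ID, as advertised in NEW_CONNECTION_ID frames
        self.reset_tokens = {}
        self.closed = False

    def advertise_cid(self, cid, token):
        assert len(token) == TOKEN_LEN
        self.reset_tokens[token] = cid

    def on_undecryptable(self, packet):
        """Called when packet authentication fails; check for a reset."""
        if len(packet) >= TOKEN_LEN and packet[-TOKEN_LEN:] in self.reset_tokens:
            self.closed = True   # valid stateless reset: tear down state
            return True
        return False

conn = Connection()
conn.advertise_cid(b"\x0a\x0b", b"T" * TOKEN_LEN)
assert conn.on_undecryptable(b"garbage" + b"T" * TOKEN_LEN)
assert conn.closed
```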
H
At the interop, one of the implementations was using all zeros for their stateless reset tokens, and people were like: oh, this is breaking my code. That prompted some text saying, oh, you could do that; and Martin Thomson added some really good text explaining the downside of doing that, which is mainly that if you reuse them, then there are some things an attacker can do; it loses the security properties of stateless resets.
H
L
Yeah, so in those terms, it's not about losing security properties per se. It's simply about additional steps that you have to take in order to ensure that you retain those security properties, and it goes through what those are. Specifically, if you have two connection IDs that have the same stateless reset token associated with them, you can't forget one or the other; well, you can't forget one while the other is still in play. Otherwise, you end up with a nice oracle.
I
First: I think we should disallow reuse, because if we allow it, then we need to reference-count the stateless reset tokens on the receiver side, because many connection IDs might map to a single stateless reset token. Then, if you are retiring one of those connection IDs, you need to decrement the stateless reset token's reference count, and keep the token until the count becomes zero. So this is an added complication, I'd say, with a marginal benefit at best.
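The receiver-side bookkeeping being objected to could look roughly like this (a sketch under the assumption that token reuse is allowed; the class and method names are invented). Retiring a connection ID only decrements the token's count, and the token may be forgotten only when no live connection ID still uses it:

```python
# Sketch of the bookkeeping described above: if several connection IDs may
# share one stateless reset token, retiring a CID decrements a reference
# count, and the token is forgotten only at zero. (If reuse is disallowed,
# this collapses to a plain one-to-one map.)

class ResetTokenTable:
    def __init__(self):
        self.refcount = {}   # token -> number of live CIDs using it

    def add_cid(self, token):
        self.refcount[token] = self.refcount.get(token, 0) + 1

    def retire_cid(self, token):
        self.refcount[token] -= 1
        if self.refcount[token] == 0:
            del self.refcount[token]   # safe to forget only now

    def is_reset(self, token):
        return token in self.refcount

table = ResetTokenTable()
table.add_cid(b"tok")            # two CIDs share one token
table.add_cid(b"tok")
table.retire_cid(b"tok")         # one CID retired...
assert table.is_reset(b"tok")    # ...but the token must stay valid
table.retire_cid(b"tok")
assert not table.is_reset(b"tok")
```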
L
L
Here, the requirement would be that they are unique, or at least probabilistically unique. I'm not going to require that they be unique in absolute terms; they are unique within that connection, ideally. Okay, the scope of uniqueness is another problem that I don't want to talk about right now, because that's kind of complicated. But you would allow people to generate a protocol violation if they saw fit on the connection, but ideally not.
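One common way to get probabilistically unique tokens without keeping per-token state is to derive each token from the connection ID with a keyed hash. This is a deployment technique some stacks use, not something the discussion above mandates; the key and function names here are invented:

```python
# Sketch: probabilistically unique stateless reset tokens derived from the
# connection ID with a keyed hash, so the endpoint needs no per-token
# state. Collisions are cryptographically unlikely.
import hashlib
import hmac
import os

STATIC_KEY = os.urandom(32)   # per-endpoint secret, kept stable across restarts

def reset_token_for(cid: bytes) -> bytes:
    # HMAC-SHA-256 of the connection ID, truncated to the 16-byte token size.
    return hmac.new(STATIC_KEY, cid, hashlib.sha256).digest()[:16]

t1 = reset_token_for(b"\x01\x02")
t2 = reset_token_for(b"\x03\x04")
assert len(t1) == 16
assert t1 != t2                              # distinct CIDs give distinct tokens
assert t1 == reset_token_for(b"\x01\x02")    # deterministic per CID
```

Because the token is a pure function of the connection ID and a static key, the endpoint can recompute it on demand, which is what makes resets stateless.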
A
Y
A
N
A
P
In general, there's a PR and it's got some comments. It's on me; I'm going to go update it and try to get it through this week. Okay.
A
P
P
P
A
L
Martin Thomson. I will want to check that this is fine, but I'll wait for the PR.
N
This next one has slides. Is the author here? Can we do it in 25 minutes? We can introduce it; we shouldn't start it entirely.
C
N
M
A
We have a decent but not impossible list for tomorrow. I think you missed a couple while you were away.
E
A
N
A
N
N
A
A
A
G
Yeah, because I sent a PR for this. There's a lot, a lot, a lot of discussion on this issue about a whole bunch of stuff that has been addressed by other issues. So the final thing we talked about in the interim was basically: we should have a threat model for migration, of what we think different attackers can and cannot do. And so this is purely documenting the capabilities that we think exist today. If we don't like them, we should open an issue to change those capabilities, but this should not be any new information.
G
It's things that are stated elsewhere in the draft, collected. So the contents of the PR are also right there on the issue, so we don't have to look at it. Basically, I can go through it in about two minutes. You have two, which kind of turned into three, types of attackers. There's somebody who's on your path, and that's like a middlebox.
G
That's routing your packets, and that's great, because they can do anything they want to your packets, like not let them go through, and that's a problem. So they can inspect packets, modify packet headers (obviously they can't change what's inside the ciphertext), they can inject new packets, they can delay packets randomly or with purpose, and they can drop packets. So that's somebody who's on path.
G
Somebody who is off path has the ability to see the packets going by: they basically have a tap into some link the packets traverse, or they can observe them some other way. They can't cause those packets to be dropped or delayed or modified in any way, but they can copy them and inject copies with their own modifications.
G
So if you keep scrolling to the guarantees, yes, that one will become much more clear. If you're asking about the third one: yeah, so an on-path attacker can screw you completely, because they're on path and you need them to move your packets, and if they don't move your packets, you don't have a connection.
G
So that's pretty much what 1, 2, 3 and 4 say. There's a little bit more nuance there around migration, where, like point 2: if you are an on-path attacker, and you are on path for both the path that the connection is currently on and the path that an endpoint is trying to migrate to, you can prevent that migration.
G
However, you should not, and we believe currently cannot, prevent migration to a path for which the attacker is not on path. So if you are on one path but not on the other, you can't stop me from switching to that other one. And point 4 here is kind of the obvious one: yeah, you can screw up the connection if you delay packets or drop them, or something like that. So that's on path.
G
That's what on-path attackers do: you're basically screwed, because they're on your path. Off-path attackers can race packets, and we make a couple of assumptions about an off-path attacker in order to have it be the absolute worst case possible. So an off-path attacker, we're going to assume for now, has some magical better route to your actual destination, where they can race a hundred percent of your packets and have a hundred percent of those packets get to the server first.
G
N
G
Just one, and we should indeed rename it, because there's some other text in here about the difference between an active and a passive attack. And yes, thank you. So there's:
This is strictly defined as an active attack, and this is somebody who's getting copies. So yes, this is a man-on-the-side, and I'll actually go through in the PR and rename it more fully. I have a reference to the definitions of those names, and I use them in one place, but it'd be nice to use them everywhere consistently.
G
So, yes. So, let's see, point 2 here: your off-path attacker can forward packets and offer you improved routing, and you will use that improved routing. The difference is at which point they become kind of on-path, which is why I call it "limited on-path", because there's still a copy of your packets going over your original path. And so an on-path attacker can just drop your packets and then you're screwed; a limited on-path attacker can drop your packets and you're just fine, because the original copy still gets there. So that's...
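The on-path versus limited-on-path distinction just described can be reduced to a toy model (hypothetical names; this only illustrates the delivery property, nothing else about the attacks):

```python
# Toy model of the distinction described above: an on-path attacker who
# drops a packet breaks delivery, while a "limited on-path" attacker only
# races a *copy*, so dropping the copy leaves the original intact.

def deliver(packet, attacker):
    """Return the packets that reach the peer given the attacker type."""
    if attacker == "on-path":
        return []            # attacker *is* the path: dropping means loss
    if attacker == "limited-on-path":
        # The original still traverses the real path; the attacker merely
        # chose not to forward its raced copy, so nothing is lost.
        return [packet]
    return [packet]          # no attacker

assert deliver("pkt", "on-path") == []                   # connection is stuck
assert deliver("pkt", "limited-on-path") == ["pkt"]      # still fine
```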
G
The difference there: so, for example, if I'm sitting here on the side and forwarding all your packets, I can delay them to slow you down, but only as slow as the original one, until I start losing the race. And this gets kind of finicky and tricky, so we can draw some pictures and stuff. But let's see: so an off-path attacker should not be able to cause a connection to just straight-up close.
G
They should not be able to cause migration to a new path to fail if they can't also observe, get copies of, and race packets for the new path. If they can observe that, they can become a limited on-path attacker if you migrate to that new path, and they can also mount attacks there, and that's pretty attractive. Yes, in the interest of time, that may be the point, so we're good. Oh, and no: most of, pretty much all of, those attacks are already described in the text.
Q
D
A
Q
A
Okay, so what we might do is let folks have a chance to catch up with it, and then, if there's no further comment, we'll go ahead and see if we can get consensus on the list. Any other thoughts about it right now? Okay, thanks for doing that, Eric.
L
I would like feedback on that one, 2880. Yeah, I got some positive feedback; none of it was particularly negative, I don't think, although there was a request to do the same for HTTP. HTTP has the distinction of having both connection- and stream-local error codes, and so it would be slightly expanded text. But this both defines what the principles are and also expands on some of the error-handling advice, I guess, that we have in the document.
M
A
G
It's almost the same text. There was some discussion at the previous interim about building on top of this with some of the other sections that are in there. So there's some TBD in the PR, for like: here's a bucket. But that should probably have a separate issue from that one. Sorry, there was a lot of noise; did you say it's ready or not? The current PR covers the migration cases here.
N
I can take that one; I dropped that ball. Too quickly: basically I needed a taxonomy for it, which I had kind of half in my head, and which Eric has now done, and so I can go through that quite quickly. So I'll review his and then write some sections. It sounds like a new section rather than a PR, okay. So we'll wait for...
O
O
L
P
P
L
My recollection, and it's on the issue, so if I get this wrong I apologize, was that we would not require that the largest acknowledged continue to monotonically increase, but we would note the fact that, if it does not, then you run the risk of disabling ECN, and so there would be an encouragement to have the value increase. This is to accommodate all of those absurdly limited devices that only send an acknowledgment for a single packet, and things like that. Yeah.
P
L
G
A
V
G
Yeah, that's an interesting point. So there were some questions raised about whether or not that transport parameter should exist, and, as part of that: if it continues to exist, we should have some editorial text explaining why it exists and when you would use it, and if it doesn't exist, then we don't need that. So I didn't add that text here, because if the outcome of that is that it doesn't exist, then there's no point.
A
A
O
A
We have a number of them that we deferred to discuss tomorrow. Hopefully we'll have a nice chunk of time to do so, and then that leads me to believe we might be in a state where those drafts, at least, will be able to be moving a little bit towards working group last call, or at least stability, shortly after we get all those incorporated. I think tomorrow, at the end of this, we'll have a planning discussion.
A
In that planning, one of the things we'll talk about is the other drafts, the HTTP and recovery drafts, and getting them into that late-stage process. Offline, I think, maybe we should talk to the editors about whether there are any HTTP or recovery issues that we need to carve out some time for tomorrow, because the ones that we did defer to tomorrow could take a significant chunk of time, based on the interrelationships there.
P
A
Right, then we probably need to go away and come up with a run list for tomorrow that incorporates all those issues we deferred, but leaves time for that and the planning discussion that'll be the bulk of the day. We've got eight minutes left; I don't know that we can really use that productively, unless we want to try and have the planning discussion now.
X
Great. Hi, Brian Trammell, editor of the manageability and applicability drafts. There are a few open issues on these. Many of these are things for which, just before the draft deadline, Mirja and I went through and basically put some text in for them.
X
There are some where we will need some help from people. You will see that, aside from one of these, which is assigned (I think that's the picture that means it's assigned), for two of these there are no people assigned. We may be coming to some of you in the hallways and press-ganging you into providing us some text for these.
X
If you would like to look through this issues list and see if there's anything you can volunteer to add text for, that would be even more pleasant for us, and for you. Thank you very much. Thank...
A
C
We can take a minute or two also to talk about sort of what the plan would be going forward for the work that remains, right? I mean, we still have quite a few issues open; that's all we did today, and we didn't get through them all. There is also still quite a long list of editorial issues. Some of them are not, you know, spelling fixes, but actually require some work; but it's work that can happen on the draft. So, but we hope that soon the design issues are more or less done.
C
Hopefully this could be after the September/October interim. So, as we said before: maybe in Singapore, if that plan pans out, we can actually spend the session on some things that are not v1-related, and there are quite a few proposals around. I think the ones that certainly seem to draw a lot of interest inside the IETF are things related to RTCWeb.
C
There was the whole dispatch thing yesterday. But there are other things that we have: for example, multipath is an item on our charter, and it feels like that will be a much harder, much longer-term thing, and it's also maybe not quite as urgently needed as the RTC stuff. But RTC could be a topic that we spend a session on in Singapore, if you guys stop opening new design issues for a moment. Eric.
A
But by saying we'll spend some time talking about something like RTC or whatever, it doesn't mean that that work will be done here. It just means we'll start to have a discussion. We're talking to the area directors about what the proper disposition of things like extensions versus new applications using QUIC might be.
T
T
T
Oh, QUIC-LB specifically; the QUIC-LB spec, the load balancer spec. Yes. So, probably not Montreal, but I would like, certainly at the interim, to get some time to discuss it with the group and move for adoption, and then we can iterate further off of there. I think it's important that it at least be an adopted document before we get there, so that people have guidance on how to do this.
A
C
That would probably require some small amount of rechartering, because at the moment we have a pretty specific list of work items, and that isn't one. But, you know, we wrote that charter years and years ago, when we didn't know about this stuff. I'll make you a deal: we do a call for adoption when you keep your hands off the invariants for a couple of months. I'm never...
S
I do have a bit of a worry that we're going to start discussing RTC, WebRTC, etc., before we start discussing what we currently label as v2 features. Either getting v2 chartered, or something like that, would be a really good idea before we start discussing RTC and QUIC in particular, because otherwise I think we're going to end up with outcomes that would be very different than what would happen otherwise.
P
Jana, you go. I don't think that this needs a change in the charter. We already have some text that says we will discuss specific extensions to QUIC and describe the deployment and manageability implications of the protocol. It goes a little bit past just the implications, I guess, but...