From YouTube: IETF99-MPTCP-20170718-1550
Description
MPTCP meeting session at IETF99
2017/07/18 1550
https://datatracker.ietf.org/meeting/99/proceedings/
Speaker A: So that should be really interesting, and we've got some news about the iOS implementation, which is also really good. We've then got a little slot on our protocol-bis work that we've been doing: a short update from Alan, and then some room for discussion about that, to try and work out when we think we'd be able to push it through to the IESG.
We've then got a longer session about proxies and related proxy work. There are a couple of new drafts there, from Olivier and from Vlad, so hopefully there'll be enough time to get some really good discussion going. Then on Friday we've got a couple of other interesting talks: one from Markus about a proposal on how to do robust session establishment, and one from Quentin, which is a proposal for fast subflow creation.
So both of those, in theory, can have some impact on the protocol bis; I think they're the sort of proposals that could impact it. I didn't think there was time to squeeze all of that into one session, so we'll have a little wrap-up at the end of Friday, just to revisit what we think we need to do on the protocol bis. That was the agenda. Did I forget anybody who requested a presentation slot or anything? Everybody happy? Excellent.
We need to complete agreement on some of the items that the bis has raised, which we've been working on over the last few meetings. We had a discussion about this at the last meeting, in Chicago, about our different choices, and we basically decided to wait for implementations to catch up with one or two of the new things that have been added in the bis document, because clearly, in order to be on the standards track, we need some implementations to go forward.
However, when we did a more formal look at that on the list, we didn't really find an approach that, quote, "everyone can live with", which is the definition of consensus, or one of them; it's the definition that we've suggested we use. So from that, we think that more work is needed. That's why we're having a lot more discussion here at this meeting. And just to alert people: I guess most people here will know about the [inaudible].
Speaker C: As most of you know, we have been using MPTCP at Apple since about iOS 7, and we have been using it for Siri. The use case for Siri is mostly that users have a tendency to use Siri while they are walking out of their home. They use it while they are walking; they ask Siri to, I don't know, navigate to a certain destination, and so it's very common to basically lose Wi-Fi connectivity.
If this connection gets dropped or gets closed, then basically Siri doesn't work, because we would need to recreate a new connection and resend all the traffic again to the server, and so MPTCP is really a perfect use case for Siri. We have seen over time that we have certain metrics for Siri where we are measuring the performance, and we have one metric that is called the time to first word. That means the time that it takes for the first spoken word to appear on the screen.
So it's the time between when you speak until it appears on the screen, and in the 95th percentile this has been reduced by 20%. It has been reduced because, with MPTCP, we are able to choose between Wi-Fi and cell, and in the latest iOS releases, I think since iOS 9, we didn't only use MPTCP for handover.
We also used it for latency reduction: basically, whenever we saw that Wi-Fi was being too slow, beyond a certain threshold, we started using cellular data. Especially in the worst-case scenarios, like the 95th percentile, we saw the improvement of 20% in time to first word. And then there are also the network failures, which is basically what you see when people are walking outside of their home and Wi-Fi drops.
In those cases, we saw a 5% reduction of those failures, which is pretty huge. It was already very low, but now the number of network failures that we see with MPTCP, for Siri specifically, is really almost negligible. Then we also have a bunch of other multipath technologies in iOS, already for quite some time now. One is Wi-Fi Assist, since iOS 9, and Wi-Fi Assist is very similar; I brought this slide actually just because we also have another presentation coming up
that is also very similar to our Wi-Fi Assist. Wi-Fi Assist is the technology that chooses the interface for the initial subflow. So whenever we create a connection and Wi-Fi is in a marginal state, meaning the signal strength is too low, or some other factors make iOS believe that Wi-Fi is not good enough,
then we first create a connection on Wi-Fi, and after a certain timeout we create the connection over cell, and we take the one that wins this race. This is kind of the mechanism that allows us to choose which initial interface to select. There's a presentation coming up later, MPTCP RobE, from T-Mobile, I think, or Deutsche Telekom, and it's very similar to this. Now, as part of Wi-Fi Assist, we have added something new in iOS 11.
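The race described above can be sketched as a small "happy eyeballs"-style helper. This is a minimal illustration only, not Apple's implementation; the function names, the grace period, and the use of threads are all assumptions made for the sketch.

```python
import concurrent.futures as cf
import time

def race_interfaces(connect_wifi, connect_cell, cell_delay=0.25):
    """Try Wi-Fi first; if it has not completed within `cell_delay`
    seconds, also start cellular and take whichever connection wins."""
    with cf.ThreadPoolExecutor(max_workers=2) as pool:
        wifi = pool.submit(connect_wifi)
        done, _ = cf.wait([wifi], timeout=cell_delay)
        if done:                        # Wi-Fi won within the grace period
            return wifi.result()
        cell = pool.submit(connect_cell)
        done, _ = cf.wait([wifi, cell], return_when=cf.FIRST_COMPLETED)
        return done.pop().result()

# Example: Wi-Fi is marginal (slow to connect), so cellular wins the race.
slow_wifi = lambda: (time.sleep(1.0), "wifi")[1]
fast_cell = lambda: "cell"
print(race_interfaces(slow_wifi, fast_cell, cell_delay=0.05))  # prints cell
```

The timeout before starting the cellular attempt plays the role of the "after a certain timeout" step in the talk.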
We also added MPTCP into it; you can go to the next slide. So in iOS 11 we are opening up MPTCP as a public API, so people can start using it, and MPTCP will be steered by Wi-Fi Assist. That means Wi-Fi Assist is basically imposing some limitations on the amount of cellular data that an application can use when Wi-Fi is in a marginal state. Usually, when Wi-Fi is available, people don't expect that an application uses cellular data, but Wi-Fi Assist enables the usage of cellular data
even when Wi-Fi is there. So Wi-Fi Assist is simply putting a cap on the amount of data one can send on cellular when Wi-Fi is available, and MPTCP is just going to be part of this limitation, and part of those triggers that send traffic over the cellular data. In iOS 11, the opening up of the API will be done in three different ways.
So, the handover mode: whenever Wi-Fi is available, we only create a connection on the Wi-Fi interface and we don't use cellular at all. We will only bring up the cellular subflow if Wi-Fi Assist is telling us that Wi-Fi is not good enough and has a chance of dropping out, so basically based on the signal strength. Only then do we bring up the cellular subflow and start sending on cellular,
or if we are getting retransmissions over Wi-Fi. We recommend that developers use the handover mode, for example, for long-lived persistent connections. In the interactive mode, we use Wi-Fi and cell at the same time: we bring up both together and we schedule the traffic so that we minimize the latency. We still send more traffic over the Wi-Fi interface, but if the latency seems a little bit too high on Wi-Fi, we send traffic on the cellular interface. That's the mode that we at Apple
are using for Siri, and that's the one where we saw the 20% latency reduction in the time to first word. For this particular mode, we recommend using it only for latency-sensitive, low-volume flows, because it can actually send quite a lot of data on the cellular interface if an application developer were to use this mode and send too much data.
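A toy model of the interactive-mode decision described above; the threshold value and the function shape are assumptions for illustration, not Apple's actual scheduler.

```python
def pick_subflow(wifi_rtt_ms: float, cell_rtt_ms: float,
                 wifi_penalty_ms: float = 50.0) -> str:
    """Prefer Wi-Fi, but spill latency-sensitive traffic to cellular
    when Wi-Fi's RTT exceeds cellular's by more than a threshold."""
    if wifi_rtt_ms <= cell_rtt_ms + wifi_penalty_ms:
        return "wifi"
    return "cell"

print(pick_subflow(30, 60))    # prints wifi  (Wi-Fi is fine, keep using it)
print(pick_subflow(300, 60))   # prints cell  (Wi-Fi latency is too high)
```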
What use cases do you see, or the developers see, or maybe people here in this room see, where this tiny little bit of additional capacity would be worth the cost of sending so much data on the cellular link? Because at best one can only double the capacity of the Wi-Fi link; that's the maximum one can do. And so the use cases seem to be very narrow and very specialized, and not for the general public.
Let's get to the next one: the Linux implementation. In June we released a new MPTCP stable kernel; it's based on version 4.4 of the upstream Linux kernel. The new features in this release were two new socket options that allow applications to select the scheduler and the path manager. It has been several years now that we have had multiple different schedulers and path managers to use in Linux, and one could choose them with a sysctl.
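As a sketch, selecting a scheduler and path manager per socket might look like the following. The option numbers here are placeholders (assumptions); the real values come from the out-of-tree MPTCP kernel's headers, and the setsockopt call only succeeds on such a kernel.

```python
import socket

# Placeholder option numbers (assumptions): the real constants are defined
# by the out-of-tree MPTCP kernel, not by upstream Linux.
MPTCP_PATH_MANAGER = 10001
MPTCP_SCHEDULER = 10002

def mptcp_options(path_manager: str, scheduler: str):
    """Build the (level, optname, optval) triples for setsockopt()."""
    return [
        (socket.IPPROTO_TCP, MPTCP_PATH_MANAGER, path_manager.encode()),
        (socket.IPPROTO_TCP, MPTCP_SCHEDULER, scheduler.encode()),
    ]

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
for level, opt, val in mptcp_options("fullmesh", "redundant"):
    try:
        s.setsockopt(level, opt, val)  # succeeds only on an MPTCP kernel
    except OSError:
        pass                           # plain TCP kernel: option unknown
s.close()
```

"fullmesh" and "redundant" are among the path-manager and scheduler names shipped with the out-of-tree Linux MPTCP stack; previously they could only be chosen system-wide via sysctl.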
One use case from the community was where people were setting it up on a home gateway router, for example, and basically doing bonding of multiple interfaces, and they saw that sometimes some subflows were timing out because, for example, the NAT mapping got lost. When that happened, the Linux MPTCP implementation was basically not recovering this subflow. So in this new release, what we added is basically: whenever a subflow dies because of timeouts or some other reason…
Speaker G: So, first an update on the draft that summarizes load-balancing techniques and the adaptations we can make to MPTCP to make it work with load balancing. We added a part on application-layer authentication that was made by Alan. Basically, it decouples the token signalled in the option from the key used for the authentication; so basically it's application-layer authentication. I think there are some words missing; it's okay, but with a figure it will probably be easier to follow.
The next step for this draft is to add some security considerations, because we haven't done that yet, and it's still important. Next slide, please. About the address advertisements: we discussed that a lot during IETF 96 and 97. Out of the five proposals we made, we kept two, which were the "no join" flag in the MP_CAPABLE and the echo flag in the ADD_ADDR. So why did we make it like this? Just a quick recap of the ideas. The "no join" basically allows a load balancer to tell a client:
please don't send an MP_JOIN to that address, because it cannot be used to join. So in the example, the green IP cannot be used to send joins. Why? Because the load balancer is routing, most of the time, based on the 5-tuple. So if a join were attempted on this address, it may reach another server, and then it would not join. So the "no join" flag is just a way for the load balancer to tell the client:
please don't use this address. Next slide, please. The other one was making the ADD_ADDR reliable. This seemed important in load balancing, and I will show why afterwards. To make the ADD_ADDR reliable, we basically use the echo technique. So if you look here, we have an ADD_ADDR with the echo bit set to zero, meaning it's a real address advertisement; otherwise it's an echo of an advertisement. This one is lost, so without receiving any echo with the bit set to one, the server will just send the address again until it receives an acknowledgement; the acknowledgement is the echo bit. Next slide, please.
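The retransmit-until-echoed behaviour can be sketched as follows. The message layout and the `send` callback are simplifications for illustration, not the on-wire ADD_ADDR format.

```python
def advertise_reliably(send, max_tries=5):
    """Retransmit an ADD_ADDR advertisement (echo bit = 0) until the peer
    returns a copy with the echo bit set to 1, or give up after max_tries."""
    for attempt in range(1, max_tries + 1):
        echoed = send({"addr": "198.51.100.7", "echo": 0})  # may be lost
        if echoed is not None and echoed["echo"] == 1:
            return attempt        # number of transmissions that were needed
    return None                   # never acknowledged

# Simulate a path that drops the first two advertisements.
drops = iter([True, True, False])
def lossy_send(adv):
    if next(drops):
        return None               # advertisement lost in transit
    return {**adv, "echo": 1}     # peer echoes it back with the bit set

print(advertise_reliably(lossy_send))  # prints 3
```

This is the same idea as the echo mechanism in the talk: the advertisement with echo = 0 is repeated until an echo = 1 copy comes back.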
So with this, we try to make Multipath TCP friendlier to load balancers and anycast. What was the point? We were trying to change Multipath TCP slightly, to make it work with existing load balancers. Work has also been done to modify load balancers to make them work with MPTCP; here we are taking the problem the other way around:
okay, we don't change the load balancer, because nobody is going to upgrade them to support MPTCP, and instead we change MPTCP to make it work with those load balancers. How did we do that? First, we implemented the two proposals that I presented just before, the "no join" and the reliable ADD_ADDR, and we designed a path manager that is specific to the load-balancing use case. Next slide, please. So what's the general idea of this load-balancing path manager? It's actually pretty simple.
You have to add a couple of unique IP addresses to each server. I know it's sometimes complicated, but this is the design we're trying to make. So you add a public IP address that doesn't pass through the load balancer on each server. It could be an IPv6 address, for instance, and this is more realistic: say a /64 or a /56, so you have a range on the server, and the server can generate IPs and send them to the client. So the server needs to advertise it reliably to the clients.
Why? Because the idea is to put the load balancer off-path. So when the first subflow is established, the first thing the server will do is advertise the new IP address, the one that's directly connected to the Internet, so the client can join on this address. Then you put the first subflow in backup mode, so you don't use it. You don't close it, because it can still be useful if the new subflow loses connectivity, but you basically put the load balancer off-path.
So you expect all of the traffic, except the three or four packets that you have used for the three-way handshake and the ADD_ADDR, to go over a separate link. The load balancer is kind of only used to match the client and the server: you use the load balancer to make the decision "you go to this or that server", then you connect directly to the server, and you don't use the load balancer connection again. The implication of that is that you don't need big hardware for your load balancer.
You don't need a lot of availability, because if the load balancer goes down during the connection, the connection continues; that's the idea. It's a bit of a stretch, but you could even bet that we could run it on a Raspberry Pi, and for all I know it would work. Slide, please. So the pseudocode for the load-balancing path manager is pretty simple. It's basically generating a new IP; why generate a new IP, when you have a range? For some security purposes.
So, on the fly, we generate a new IP in an IPv6 range, and we advertise that new IP only to that client, so the client will connect to that one, and when the client disconnects, this IP address is removed and it's not used again. So: you generate the IP, you advertise it to the client, and you set backup mode on the first subflow. Next slide, please.
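A sketch of that per-connection logic, under the assumption of a simplified connection object; the class and method names here are invented for illustration and do not come from the Linux implementation.

```python
import ipaddress
import secrets

class _Subflow:
    def __init__(self): self.backup = False
    def set_backup(self, flag): self.backup = flag

class _Conn:
    """Minimal stand-in for an MPTCP connection (assumption)."""
    def __init__(self):
        self.initial_subflow = _Subflow()
        self.advertised = []
    def add_addr(self, addr, reliable=True):
        self.advertised.append(addr)

class LoadBalancingPathManager:
    def __init__(self, prefix="2001:db8:42::/64"):
        self.prefix = ipaddress.ip_network(prefix)
        self.in_use = set()

    def on_connection_established(self, conn):
        addr = self._fresh_address()           # 1. new IP from the range
        conn.add_addr(addr, reliable=True)     # 2. advertise it reliably
        conn.initial_subflow.set_backup(True)  # 3. balancer path -> backup
        return addr

    def on_connection_closed(self, addr):
        self.in_use.discard(addr)              # the address is never reused

    def _fresh_address(self):
        while True:
            addr = self.prefix.network_address + secrets.randbits(64)
            if addr not in self.in_use:
                self.in_use.add(addr)
                return addr

pm, conn = LoadBalancingPathManager(), _Conn()
addr = pm.on_connection_established(conn)
```

The three numbered steps mirror the pseudocode in the talk: generate, advertise, set backup mode on the first subflow.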
So how does this work with a layer-4 load balancer, an unmodified load balancer? On the leftmost part we have the traditional NAT setup, where basically all the traffic goes through the balancer, upstream and downstream. Everything goes through the balancer; in that case the balancer is a bottleneck, because all the traffic is going through it. In the middle you have direct server return, which is heavily used in the industry: basically, all the traffic coming from the client to the server goes through the load balancer, but the traffic coming from the server to the client uses a dedicated link.
So if you look at the MPTCP figure, the red traffic is the establishment of the first subflow, so the packets used to advertise the address, and then in blue you have all the traffic that goes both ways directly between the client and the server. So the load balancer is not used. Next slide, please.
So to evaluate this, we took a very simple setup where you have a 100 megabit per second link between the client and the balancer. The balancer is a traditional, unmodified one; we didn't change anything, it's just used as-is. Then all the servers have a direct 1 gigabit per second link. We run a lot of HTTP clients that are downloading or uploading files between the client and the server. So what you would expect is that the clients download at hopefully 1 gigabit per second with MPTCP, and hopefully 100 megabits per second with TCP. Next slide, please. So the results: there were a lot of figures.
So what we see here is basically that MPTCP is not affected by the loss. Clearly it's not ramping up fast, but that is because of the latency; we added a couple of milliseconds of latency. If I had the figure without the latency, you could see that MPTCP is actually not affected by the loss. It could be affected if the address advertisement is lost, and that's why this proposal was important: if the address is not available when you have a loss,
this will impact MPTCP a lot for bigger file sizes, because we were downloading 40 gigabytes in total, split into 1 KB, 10 KB and 100 KB files. So when you have very large file sizes, if the ADD_ADDR is lost, the faster subflow will not be established, and then the performance drops a lot, because you will download a big file via the 100 megabit interface.
So if the address is not available (I can't show the plot here, because I don't have time), it affects MPTCP a lot: instead of ramping up, it just crashes, because each ADD_ADDR loss means a big file that has to go over the 100 megabit per second link. So the bigger the file, the bigger the cost of losing the address. In that kind of situation, you really want the ADD_ADDR to be reliable. Next slide, please. So, something else.
Now we also applied that to anycast. So in this simple schematic, the two servers are announcing an anycast address, the same anycast address. Before the red cross, if a client wants to connect to the anycast address, it will be routed directly to F2 and then to the server. If, during the TCP connection, the link between F1 and F2 fails, then the client will be sent to the node that is connected to F4.
Then what happens to the TCP connection? It's reset, and we don't want that, because if I was downloading a huge file and it got cut in the middle and the receiver got a reset, I would probably have to start again, and I don't want that. So, next slide, please. We made a setup to simulate that. Here you have the anycast address that is routed to the servers; each server advertises the anycast address and a public IP on the same interface, so it's just the routing configuration.
We are not removing a link, but each server has the anycast address on the loopback and the public IP prefix. So what do we do? We send packets, the router does some ECMP and basically just load-balances the traffic among the three servers. So what happened? Next slide, please.
So what happened? We simulated a reconfiguration of the network, like I showed two slides earlier, when a link goes down. So we have three servers, and every 10 seconds we remove a server from the ECMP pool.
With TCP you get the full bandwidth, but when one server goes down you go to 2 gigabits, which is the sum of the two other links; that's expected. So you see that every 10 seconds we have a drop of hopefully 5 seconds, because that's what we do, and then we add the server back, and that's why it's ramping up again. Below, you see the resets that we have received, and that's the problem that I described earlier: when you are reconfigured to another server, you will get a reset.
You see a huge spike in resets when the server is removed from the ECMP pool, which is logical, because that server cannot be reached and the flows are hashed to another server. But we also see a spike in resets when the server is started back again, because ECMP just recomputes all the hashes, and then a fraction of the flows will be sent to the server, so you get resets in that case too. Next slide, please. With MPTCP, that's what we expected:
we have no drop in the bandwidth and no resets. Why? Because even if the anycast routing fails, the direct connection to the public IP address is not disconnected. It means that no clients will be interrupted during a download. Sure enough, if the server stays down and the connection from the client is terminated by the client and re-established, then it will reach another server. But with MPTCP, you basically allow your clients to finish what they are doing while you are reconfiguring the network.
So when you reconfigure the anycast network, you don't disconnect people; they will just have the time to finish their download, and then they will be hashed to another server. Yes, okay. So, in conclusion, we made some simple changes to MPTCP, and I have to insist that it's very simple, and that was the idea: to make it work with unmodified load balancers. We are improving the performance and the reliability, and we are solving the bottleneck problem.
Speaker J: So we did a few things related to Multipath TCP: some work on the socket API, about documenting Multipath TCP, and other things that are related to new features added in the bis version of the RFC. Next slide. So this presentation will mainly focus on these three propositions in the bis. Next slide.
So first, the Multipath experimental option. Basically, this allows us to exchange opaque data between hosts in an option that can contain basically anything. During the hackathon, we partially implemented it.
So we handled the transmission of the option, and the parsing of the option on reception. The main thing is that we need some clarification about this option: what about the S bit, the synchronizing bit? Basically, what it does is this: when you set the synchronizing bit, you require, or at least you expect, some response from the remote host, and the question is whether that is said to be reliable.
About the MP_TCPRST option: basically, this allows a host to explain why a subflow was reset on an MPTCP connection. It is implemented in Linux, so that's okay, and there was additional discussion about this. The first point is about the T bit, which says whether the reset is transient, to know whether it is worth trying to re-establish the subflow: does it have to be taken into account if the code is different from the MPTCP "unspecified" reason for a reset?
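For reference, here is a small parser over the subflow-reset reason codes. The code values and the T (transient) flag shown here follow the shape the option eventually took in RFC 8684, so treat the exact numbers as illustrative for the bis draft under discussion here.

```python
# Reason codes and the transient (T) flag as they later appeared in the
# MP_TCPRST option of RFC 8684.
REASONS = {
    0x00: "Unspecified error",
    0x01: "MPTCP-specific error",
    0x02: "Lack of resources",
    0x03: "Administratively prohibited",
    0x04: "Too much outstanding data",
    0x05: "Unacceptable performance",
    0x06: "Middlebox interference",
}
FLAG_T = 0x1  # transient: re-establishing the subflow later may succeed

def parse_reset_reason(flags: int, code: int) -> dict:
    """Interpret the (flags, reason code) pair carried with a subflow reset."""
    return {
        "reason": REASONS.get(code, "Unknown"),
        # The open question from the session: should a peer retry only when
        # T is set, or also for inherently transient codes like 0x02/0x05?
        "transient": bool(flags & FLAG_T),
    }

print(parse_reset_reason(FLAG_T, 0x02))
# prints {'reason': 'Lack of resources', 'transient': True}
```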
Also, another thing is that the option today is only sent when we decide to send the reset, but we don't keep state about why we reset the subflow, for memory and resource reasons. Next slide, please. About the reason codes that were defined in the RFC: we found some cases where we could define additional codes. The first one is: what about if you try to do an MP_JOIN with a token, but the token does not exist anymore on the server when you try to establish a new subflow?
Speaker I: Any bigger discussion should be taken to the list, but the general point is that the MP_TCPRST option was to be analogous to, and actually to go on, a TCP reset on that connection, to carry some additional semantics. So for an unknown MPTCP connection, there really is no point in sending that out.
Certainly we have a limited number of bits to play with here as well. And on the previous slide you had "should T be set for anything other than zero?" Well, yeah, because it's got things like lack of resources or unacceptable performance; those are exactly the kinds of reasons that would be transient, where you'd want to look at it later. Okay, but I think that whole list of questions is ideally suited for an email rather than here.
Speaker J: Yeah, we can continue the discussion on the mailing list. And also, we did not yet use the code about "administratively prohibited" for the creation of a subflow, when you have, for example, a middlebox acting on a given subflow, but maybe we will find where to put this. Next slide. And there is also the new option that was made by Christoph; basically, it allows the use of the connection key that is defined at the connection level.
Speaker A: Thanks, Quentin. So, I skipped over something I shouldn't have: just an open call. Are there any other updates about implementation news that anybody wants to share? In the past, we've had several other operating systems that have talked about, or got some of the way through, or a lot of the way through, implementing Multipath TCP. So we traditionally have a moment for anybody who has any other implementation updates and wants to share anything with us to do so, if they want to.
Speaker I: There is one slide, apart from the title slide, which obviously we're trying to get to. Thank you, Phil. (This is really weird, this screen down here by my feet.) So: three changes in -08, based on implementation feedback, which is of course the driving motivator here. Christoph's implementation has pointed out, quite significantly, that there are some issues around
clarifying what happens when you send data on the third ACK. If the MP_CAPABLE option is being used to provide the data sequence mapping, you could end up with a situation where one end sends the data sequence signal, the other end sends an MP_CAPABLE, and they cross; one end was expecting one, and the other end was expecting the other. So it's just to clarify that these mappings are identical and should be treated as identical when they turn up in two packets.
Similarly, there is the retransmit logic for an MP_CAPABLE that always has a data mapping in it; just saying they're equivalent means that the problem kind of goes away.
I said "thanks, Olivier" on the previous point, but I realized it was actually Quentin, not Olivier; my apologies, Quentin, but thank you for clarifying that. The TCP reset in the fast-close situation has also been put in, and finally the update to SHA-256 has gone in, although I looked again and realized my search-and-replace wasn't quite as comprehensive as it should have been; there are still a couple of SHA-1 ones left.
Speaker A: …with two new drafts, one from Olivier and one from Vlad, so we've got a good deal of time for discussion about them both. We've got about 60, 65 minutes or so, just over an hour, so we should be able to get to the discussion about them both. Now, how are the blue sheets doing? I haven't seen them come down. Oh, they have. Who has not signed?
Speaker M: Okay, so let's go back to the new proxy work, and let's first come back to the motivation. If you remember, if you were in MPTCP at the beginning, you know that we've had a lot of trouble with middleboxes; we had to fight a lot with middleboxes, and we managed to have something which works over the global Internet. Now the next question is: can we design a middlebox that would be a benefit to MPTCP?
Can we get something more, thanks to a new generation of middleboxes? This is part of the charter; at the bottom you see the current charter of the MPTCP working group, and during the last year there have been several proposals that have been discussed. There is the draft "plain mode", there is the draft "transparent mode", there is SOCKS5, and there will be SOCKS6, which will be presented by Vlad later on.
So there is a family of solutions that try to address the problem, and in this talk, and in the draft, we have tried to restart from scratch, based on all the discussions that have been made on the mailing list, towards a design which is cleaner, which we hope is easier to understand and easier to implement, and which supports all the use cases that we would like to work with MPTCP proxies. And to distinguish it from what you might think about when you hear "proxies",
we have decided to use another name, which might change later on, but the name that we use is "converter". Our motivation is that there are now far more MPTCP-enabled clients than MPTCP servers, and for all those clients it's very beneficial to be able to use MPTCP on a part of the network when reaching a server which does not support MPTCP. So there is a clear use case, and there are other use cases for those kinds of converters that have been discussed earlier.
So let me first try to summarize what the discussions have been in February, March and April on the mailing list. The first point is that if we use a converter, one key point is that we do not want the converter to significantly increase the connection establishment delay. We don't want to lose 100 milliseconds because we are going through a converter. So that's the motivation for having a solution that requires zero additional round trips, in contrast with SOCKS5, for example.
And we see that there is more and more support for TCP Fast Open. Finally, the design should be extensible, and we should avoid defining new TCP options, because you know that putting new TCP options in a SYN is a mess when you already have the MP_CAPABLE, the timestamp and all the other options that are already present there. So what is the cleaner design that we came up with? We have an application-level protocol that uses a specific port.
We leverage TFO so that we can put the commands directly in the SYN, which is sent by the client, like the SOCKS6 proposal that will be presented later. So the commands and the responses are sent inside the SYN, and we have a solution to allow the clients to learn what TCP options are supported by the final servers. This is required if we want to be able to bypass the converter, because we will bypass a converter if we know that the final server supports MPTCP, in the case of MPTCP.
So, simply, let's start with an example. There are three hosts in this example: you have the client at the bottom, the server, and the converter, which could be anywhere. If you want to create a connection towards the server via the converter, what do you do? You first send the SYN from the client to the converter, by using the converter's IP address and the port number of the converter. Inside the SYN that you send, you place a TFO option with the cookie of the converter.
Let's call this cookie T. Inside the payload of the SYN (all the information that I show in blue will be TCP options, and the information that I show in red in the packets will be payload information, which is encoded as TLV messages), I have a TLV message that says: please connect me to the server S on port P. Nothing very strange. So when the converter receives the SYN, what does it do? It will immediately create a SYN towards the server.
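The "connect me to server S on port P" TLV can be sketched as below. The type number and field layout are assumptions for illustration, not the draft's exact wire format.

```python
import ipaddress
import struct

CONNECT_TLV = 1  # hypothetical type number, for illustration only

def connect_tlv(server_ip: str, port: int) -> bytes:
    """Encode a 'please connect me to server S on port P' TLV:
    a one-byte type, a one-byte total length, then port and address."""
    value = struct.pack("!H", port) + ipaddress.ip_address(server_ip).packed
    return struct.pack("!BB", CONNECT_TLV, 2 + len(value)) + value

# This payload would ride inside the TFO SYN from the client to the converter.
tlv = connect_tlv("192.0.2.10", 443)
print(tlv.hex())  # prints 010801bbc000020a
```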
The server will receive a normal SYN, and it can reply with a SYN+ACK that goes back to the converter, and the converter will return the SYN+ACK to the client to confirm the establishment. At that point we have two TCP connections: there is one connection from the client to the converter, and one connection from the converter to the final server, and these two connections are glued together in the converter by the application running
on the converter. So all the data that comes from the client will go to the one connection, and all the data that comes from the server will go to the connection towards the client. Nothing really difficult. Okay, so let's look at that in more detail, with the same example with different TCP connections. Our motivation is MPTCP, so let's look at what happens when you try to open an MPTCP connection through the converter. So, to open an MPTCP connection,
the converter will send a SYN which contains the MP_CAPABLE option to the final server, and let's assume that the server supports MPTCP and uses RFC 6824bis, which means that it provides its key in the MP_CAPABLE option, while there was no key in the MP_CAPABLE of the SYN. So you have the key of the server, which is sent as a TCP option.
The information arrives at the converter, and what the converter does is accept the establishment of the connection. Once you have accepted the establishment of the connection, you need to inform the client that the connection can be established, and you need to let the client know that the server supports MPTCP. To do that, what we do, simply, is copy the content of the extended TCP header:
M
All the TCP options that were in the SYN/ACK, we put them inside a TLV message, which is part of the payload of the SYN/ACK that is returned to the client. Then, when the client receives the SYN/ACK that contains this TLV information, it will know, by parsing the TCP extended header, that this connection to the server is using MPTCP from the client to the server, so that the next connection that the client wants to open to the server can use MPTCP.
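The TLV carrying mechanism just described can be sketched in a few lines. The type/length/value-with-padding shape below follows the draft's TLV design (length counted in 32-bit words, including the 4-byte TLV header), but the type numbers and the sample option bytes are placeholders, not registered codepoints:

```python
import struct

TLV_EXTENDED_TCP_HEADER = 20   # placeholder codepoint, not the registry value

def encode_tlv(tlv_type: int, value: bytes) -> bytes:
    pad = (-len(value)) % 4                 # pad the value to a 32-bit boundary
    total = 4 + len(value) + pad            # 4-byte TLV header + padded value
    return struct.pack("!BBH", tlv_type, total // 4, 0) + value + b"\x00" * pad

def decode_tlvs(payload: bytes):
    """Return (type, padded_value) pairs; per-type code strips its own padding."""
    out, off = [], 0
    while off + 4 <= len(payload):
        t, words, _ = struct.unpack_from("!BBH", payload, off)
        out.append((t, payload[off + 4 : off + 4 * words]))
        off += 4 * words
    return out

# The converter copies the server's SYN/ACK TCP options into such a TLV
# (fake option bytes here, purely illustrative):
synack_options = b"\x02\x04\x05\xb4\x1e\x10"
msg = encode_tlv(TLV_EXTENDED_TCP_HEADER, synack_options)
decoded = decode_tlvs(msg)
```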
M
We want to use TFO here, that's easy, but we also want to be able to use TFO from here to there. So let's look at what happens if we want to use TFO through the converter. Again we assume that we already have a cookie from the converter, because we have had many connections to the converter, and I want to do TFO with the server. You remember that in TFO we do that in two steps: first I send a TFO option which is empty to the server.
M
The server replies with a cookie, and then I can use the cookie to send data in the next TCP connection. So how do we do that when we have a converter? I will send a SYN which contains the TFO cookie of the converter inside the TCP options, and in the TLV message of the SYN I have the Connect command and additional information which contains an empty TFO option.
M
The converter will recognize that, and it will send a SYN with an empty TFO option to the final destination. So the final destination receives a SYN with an empty TFO option; what does it do when it receives that? It returns the cookie that corresponds to this TCP connection. Let's call this cookie S. The cookie S is returned back to the converter, and the converter does the same as with the MP_CAPABLE option.
M
It simply copies the TFO option and puts it back inside the payload of the SYN/ACK which is returned to the client, and the client can extract the server cookie and use it for the next connection that it will create to the same server. So let's look at the next connection; this is the second connection. In the state of the client we have the server cookie, which is known, so we send the SYN to the converter.
M
This is a SYN that contains data, because we are using TFO, and before the data we have the Connect command, and we have the TCP option which contains TFO and the server cookie that we received in the previous connection. The converter will copy this information into the SYN that it sends to the server, and it will copy the data. The server sees the server cookie that it has sent for this specific client.
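The two-connection cookie-learning flow just walked through can be modelled as a toy state machine: connection 1 sends an empty TFO option through the converter and learns the server's cookie from the SYN/ACK payload; connection 2 then carries data on the SYN. All class names, field names, and cookie values below are illustrative inventions, not draft terminology.

```python
class ToyServer:
    COOKIE = b"srv-cookie"

    def handle_syn(self, tfo_option, data=b""):
        if tfo_option == b"":              # empty TFO option: hand out a cookie
            return {"cookie": self.COOKIE, "accepted_data": b""}
        if tfo_option == self.COOKIE:      # valid cookie: accept data on the SYN
            return {"cookie": None, "accepted_data": data}
        # wrong or stale cookie: normal TFO fallback, data must be resent later
        return {"cookie": self.COOKIE, "accepted_data": b""}

class ToyConverter:
    """Relays the TFO option to the server, then copies the server's cookie
    back into the payload of the SYN/ACK returned to the client."""
    def __init__(self, server):
        self.server = server

    def connect(self, tfo_option, data=b""):
        return self.server.handle_syn(tfo_option, data)

server = ToyServer()
converter = ToyConverter(server)

r1 = converter.connect(b"")                  # connection 1: request a cookie
learned = r1["cookie"]
r2 = converter.connect(learned, b"GET /")    # connection 2: 0-RTT data on the SYN
```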
M
…talk directly to the server at this point, since both ends support it. So when you have that, you have two TCP connections, or two MPTCP connections in this example: one from the client to the converter and one from the converter to the final server. So if the client adds a subflow, the subflow will be added between the client and the converter; if the server adds a subflow, it will be added on the converter-to-server side.
M
Going direct from the client to the server would add lots of complexity for a single MPTCP connection, and I think it's easier to just test the first connection: if you determine that the destination is MPTCP-capable, then you can open the next MPTCP connection to the server directly. In most cases you have a large number of connections to the server anyway; and if not, you can apply a kind of Happy Eyeballs solution, or you can try to open two connections in parallel, one via the converter and one directly to the server, and you just take the one that does MPTCP and works correctly.
Okay, so the presence of all this red stuff…
M
…all the nasty stuff with it, yeah. So the question is: why do you give the TFO cookie to the client and not keep it on the converter itself? Well, it really depends on the type of deployment that you have for the converter. If you think of a converter that has a single IP address, and this single IP address is shared by all the clients, then you don't want to reveal the cookie which is mapped to this IP address to all the clients, because they could then be able to send traffic directly.
C
I see that there's the TFO cookie inside the SYN sent to the converter, and I think the draft recommends to proactively refresh the cookie. So my question is: how much do we rely on TFO for this SYN? If, for whatever other reason, even if it's not because of the cookie, we fall back, does it still work?
M
I think so, but…
M
So the question is: what happens if, for example, the converter changes its cookie, and so we have a wrong cookie on the client side? That's the normal procedure of TFO that would apply there: you would not ACK the content of the data, so you would not ACK the message, that would be a fallback, and you would restart, and you would have two round trips instead of one in this case. And that's the normal TFO procedure.
M
One of the objectives of the design was extensibility, and if you think about extensibility, there are two dimensions that need to be considered. The first dimension is the extensibility of the application-level protocol, which includes the TLV messages, and we get that by using TLVs and version numbers. So we have a solution which is easily extensible. And then you have to look at extensibility from the TCP viewpoint.
M
What happens if TCP is extended, and someone has another extension than MPTCP, and we would like to use that through the converter? Well, this is feasible as well, because we can detect which TCP options are supported by the server. So we have flexibility and extensibility in the design of the protocol, and the client can decide to bypass the converter once it has detected that the server supports the options that it would like to use.
M
So now let's go back to the criteria that Phil and Yoshi sent on the mailing list about how to compare the different solutions for proxies. I tried to summarize the email, because I could not fit the content of the email on a slide; the sentences were pretty long. The first comment was "no changes to MPTCP": there is no change required to the current spec. "Proxy simple to operate and deploy": well,
M
this is like SOCKS: we need an IP address and a port number, so you can place the converter anywhere in the network; this is existing best current practice. "Session can be initiated from either end": we believe this is feasible, but it is not part of the first draft, because we want the draft to be simple. "Setup time is minimized": we provide that thanks to the use of TFO. "Design minimizes the amount of overhead on data":
M
basically, the only overhead that we have is the TLV messages that we place inside the SYN and the SYN/ACK, so the overhead is only for the SYN; we don't need any encapsulation scheme. "The solution works if end-to-end encryption is in use": the solution is compatible with end-to-end encryption.
M
The only issue is that, since we use different IP addresses on the client and on the converter, you would be in the same situation as with a NAT, and so if you have an encryption scheme that relies on the validation of the IP addresses, then you will have issues. Other criteria were mentioned for single-ended proxies. "The host-to-proxy path is likely to be the default path": so this is the case;
M
the converter can be anywhere in the network. "Clarify whether the proxy simply forwards or terminates the TCP connection": it terminates the TCP connection and maintains state for the two TCP connections, from the client to the converter and from the converter to the server. "Allows some traffic that doesn't get proxied": this can be based on policies or on automatic bypass,
M
once you have detected that the server supports a TCP option that you want to use. "The host and proxy need to authenticate": we believe that it's possible to put an authentication scheme in the protocol by using the TLV messages. But if we say the word "authentication", then we will have 20 people in the IETF saying that they would like to support their authentication scheme, and it will not be one draft but five drafts, and I want to keep the solution simple. But we can add protocol authentication schemes inside the draft as well.
M
So, to conclude: this is a new design which takes into account all the comments that have been raised on the list. It's an application-level protocol, so we will have to request a service name and port from IANA. It provides 0-RTT using TFO, which was the key issue, and the client can bypass the converter if the server supports the options. Our request to the working group would be adoption of this document. And with that I've covered all the basics; the details are in the draft.
D
Juliusz Chroboczek. I'll make some more substantial comments perhaps after the SOCKS 6 talk; here I have just two minor comments. One is about extensibility: I haven't found in the draft where you say what to do when you see an unknown TLV. The intent is pretty clear, but you don't actually say what to do when you see an unknown TLV.
M
Yes.
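The behaviour being asked about is commonly handled by skipping unknown TLVs using their declared length. A minimal sketch, assuming the same length-in-32-bit-words TLV shape used earlier in the talk; the type numbers are placeholders, and this is an illustration of the usual pattern, not text from the draft:

```python
import struct

KNOWN_TYPES = {10, 20}   # placeholder codepoints for "known" TLV types

def parse_known(payload: bytes):
    """Collect known TLVs and hop over unknown ones by their length field."""
    known, unknown = [], []
    off = 0
    while off + 4 <= len(payload):
        t, words, _ = struct.unpack_from("!BBH", payload, off)
        if words == 0:
            break                        # malformed length: refuse to loop forever
        value = payload[off + 4 : off + 4 * words]
        (known if t in KNOWN_TYPES else unknown).append((t, value))
        off += 4 * words                 # the length lets us skip unknown TLVs
    return known, unknown

payload = (struct.pack("!BBH", 10, 1, 0)               # known, empty TLV
           + struct.pack("!BBH", 99, 2, 0) + b"abcd"   # unknown, skipped over
           + struct.pack("!BBH", 20, 1, 0))            # known again, still reached
known, unknown = parse_known(payload)
```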
D
This needs to be said. And the other thing, again, is something where the intent is perfectly clear. You say that it's a normal user-space proxy, but you are doing some rather exciting stuff with TCP in here, for very good reasons, and I understand very well how you're doing it. But you're returning the unchanged SYN/ACK header, and you actually reply with a SYN/ACK only after you've received the SYN/ACK from the other side.
M
Yeah.
M
And the reason why we reply with the SYN/ACK only when we have received a SYN/ACK is that we want to be able to test whether the connection to the server works correctly, because we know that there are many Happy Eyeballs solutions where you try to open multiple TCP connections in parallel to a given server, and it's important that you are not in a situation where some of the connections always succeed even if the server is not there, because this would produce new types of failures.
N
I have to admit I don't get the question, so I'll just read it out loud: in the scenario of a middlebox between an end device with only plain TCP support, and the middlebox talking to a server in the platform with MPTCP, is the client the middlebox?
L
As you say correctly, this is kind of an application-layer protocol, and it can also be used for other things than just MPTCP, so I assume this is not the right group, sorry to say that.
M
Yeah, it depends what ambition we have. If you want this solution to be specific to MPTCP, the document will talk only about how we handle the MPTCP connections, and we won't talk about other usages outside MPTCP. So it's just a matter of, I would say, scoping of the document. So if that's what we want, we can do that.
P
We have to remind ourselves where we ended up with the application-layer proxy. This is one of the main comments many people made: if you wanted a proxy, just reserve a port number. And that's what we are hearing now, and that's why we were asking to iterate on that. If now you say no, we've decided the direction we go, because we have this…
L
But then my question would be: do you think this will fit in the last charter item, which is that the working group will explore whether an MPTCP-aware middlebox would be useful when a node is MPTCP-enabled? So if you look at the charter on the first slide…
M
Yes, I think we are answering the charter.
L
But it doesn't mean that this has to result in an MPTCP document or in any particular kind of outcome. It will explore the options, and probably what's really in scope would be maybe an informational document that describes how to use options that may or may not be developed in other groups.
A
It may surprise you, given how much I've been banging on about application-layer protocols for a while here, but this one strikes me as being very MPTCP-specific, because it's all about shifting around the MPTCP-specific options. You haven't looked at how to use it for other options and, what's more, that's out of scope; it's in scope as an MPTCP solution and, as you say, it matches the charter. So actually I think it's quite a good match for this working group.
L
What I'd push for, or propose at this point, is that you actually go and contact other working groups, and maybe even present this in area meetings and such, and get some feedback about it. Then we can see if there's any other working group that thinks it's in their scope and should be done in their working group. And if not, we can do it here.
L
On another issue: no, I was kind of waiting to see if there were more people coming to the mic, and there weren't. But I have a little bit of a technical question, which is related to this: you did say this can be used for other TCP options, and when I look at it, I think it would be super easy to use it for other TCP options; it would work straight away like this, with actually no change required. So I think this is actually not MPTCP-specific.
M
Yes.
M
But they never had this issue. They have produced window scale, timestamps, then lots of drafts and lots of solutions, and they never had this issue, because there was no use case where you have two access networks and there is a strong benefit of having MPTCP on that part of the network. They've never seen that use case. So this use case comes from MPTCP, from the utilization of MPTCP.
D
Juliusz Chroboczek. I would just want to add one element to the discussion. This is actually a quite manageable document; I've actually read all of it, and it's written with an eye to something that we sometimes neglect, that is, implementability. Okay, there are a few things; for example, there is one thing: you cannot take a domain name, which makes it… and since you are doing strange things with TCP, you might want to implement it in a specific environment.
M
The reason why there is no domain name in the Connect message, in contrast with what we saw with SOCKS, for example, is that we want to provide zero RTT. We don't want to be in a situation where, when the client sends a Connect request to the converter, the converter has to do a DNS lookup before sending a SYN. And that's a feature, not a bug.
A
Just to answer Juliusz's comment: it is nice and readable, this draft, it's good, so thank you and congratulations to the people who wrote it. It's got nice protocol-modelling stuff, which is how people are advised to write this sort of document. Basically, I'm trying to encourage everybody here to go away and read it.
A
And, yeah, thank you to the authors and contributors for really listening to the feedback that they got on the previous documents, and to the discussion we had last time and the follow-on discussion to that, because it really comes over that you've tried to react to that.
A
We've heard from at least one person who'd raised some issues with some of the previous approaches, and maybe it's worth trying specifically to get comments from some of the other people who had comments, to see if they can live with the approach that's proposed. Because, does that answer your question about the adoption you wanted? I think, well:
A
First thing: we need to give people more of a chance to read it and get some comments on the mailing list, be they supportive or with suggestions. And secondly, in order to have consensus, whether it becomes a working group item here or somewhere else in another working group, we've got to make sure that it's a solution that everyone can live with. I mean, that's kind of the definition, isn't it? So we've got to make sure that happens, or that people can come to a decision on that.
A
We probably, on Friday, have a little bit of spare time. Looking at the agenda again, we've got an hour, and we've probably not got an hour's worth of stuff. So if people have the chance to read it before Friday, either this or the SOCKS draft, we can probably have a little ten-minutes-or-so slot, where people can come up with the questions that they've thought of in whatever time they've got during the busy week to read the draft and think about it a bit more.
O
I'm going to talk about the SOCKS protocol version 6. Next slide, please. So SOCKS 5 is 20 years old and it really needed a facelift. The main problem with it is that it makes liberal use of round trips: we've got one round trip for negotiating the method of authentication, then another round trip, or maybe more, for authentication, and then the client can finally request the establishment of a connection.
O
Not only is it very chatty, but in the meantime people have been developing 0-RTT authentication methods, so if the two entities have already communicated, they can authenticate in just one message on further connection attempts. And now we have this hot use case where mobile users might want to use both the cellular interface and their Wi-Fi to basically get more bandwidth.
O
And finally, it's also extensible: we've added TCP-like options in order to do all kinds of cool stuff, like maybe asking the proxy to use a certain MPTCP packet scheduler or whatnot. These still remain to be discussed and standardized, and the 0-RTT authentication methods that I have mentioned can be done via these options. Next slide, please.
O
So let's take a look at how SOCKS 6 looks when compared to SOCKS 5. In SOCKS 5, you can see that the client first advertises its authentication methods, then the authentication proceeds, and then it sends a request, gets a reply, and then it can finally send data. We've taken those three bits, the blue bit, the green bit and the red bit, and packaged them into one single message, the request, which the client sends at the very beginning of the connection.
O
After the authentication step has concluded, the proxy attempts to create the connection to the server and then sends an operation reply to the client, telling it whether it has succeeded or not. Next slide, please. On subsequent connection attempts, the client can do 0-RTT authentication, in which case the server can attempt to initiate the connection as soon as it sees the request. So it sends an authentication reply, and then an operation reply
O
after the connection has been established. Next slide, please. So let's take a closer look at the request. It's basically a mishmash of the method advertisement message and the SOCKS 5 request, along with some data. Please note that the client can include some initial data as part of the request, while still asking the proxy not to use TFO. All the options are in TLV format and, aside from the authentication and 0-RTT authentication data, they can carry all kinds of other stuff that has yet to be standardized.
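The combined request described here, version plus command plus target address plus TLV options plus initial data in one message, can be sketched as follows. The field sizes, option codes, and exact layout below are invented for the illustration and do not match the SOCKS 6 draft byte-for-byte:

```python
import socket
import struct

SOCKS_VERSION = 6
CMD_CONNECT = 1
OPT_AUTH_METHODS = 1   # placeholder option code, not a draft-assigned value

def encode_option(code: int, value: bytes) -> bytes:
    """One TLV-style option: 1-byte code, 1-byte length, then the value."""
    return struct.pack("!BB", code, len(value)) + value

def encode_request(dst_ip: str, dst_port: int,
                   options: bytes, initial_data: bytes) -> bytes:
    """Version, command, target port/IPv4 address, options length, options,
    then the client's initial data, all in a single message."""
    addr = socket.inet_aton(dst_ip)
    head = struct.pack("!BBH4sH", SOCKS_VERSION, CMD_CONNECT,
                       dst_port, addr, len(options))
    return head + options + initial_data

opts = encode_option(OPT_AUTH_METHODS, bytes([0]))   # "no authentication required"
req = encode_request("192.0.2.1", 80, opts, b"GET / HTTP/1.0\r\n\r\n")
```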
O
Next slide, please. The authentication reply basically tells the client whether further authentication is needed or not and, in case further authentication is needed, which method it must proceed with. In case authentication succeeded, it informs the client via which method it got authenticated.
O
So, for example, if a client connects to the proxy via its home Wi-Fi, the proxy probably expects some kind of authentication. But if it connects to the proxy via its cellular network, the proxy knows that the client is actually a paying customer, and it basically tells the client: "hey, we authenticated you via the 'no authentication required' method", so that the client gets to learn that it shouldn't attempt to do any kind of authentication when using its cellular network. Next slide, please.
O
The operation reply informs the client why the connection hasn't succeeded: maybe the remote server has reset the connection, or the connection timed out, or the server is unreachable, or some other reason. Now, there's one field of particular interest here: it's called the initial data offset. If you recall, the request includes some initial data; now, if further authentication is required, the server has to buffer that initial data.
O
This initial data offset basically gives the server carte blanche as to how much of that initial data it buffers. So it can choose not to buffer any of the initial data while waiting for the client to authenticate, if buffering it would be largely useless. Next slide, please. So let's see how SOCKS v6 fares in action, in case we don't have TFO on any leg.
O
The client basically initiates a three-way handshake and, after one client-to-proxy RTT, sends the request along with initial data. The proxy also initiates a three-way handshake to the server and sends the data as soon as it receives the SYN/ACK. In this case it takes two end-to-end RTTs to get a data response, same as with vanilla TCP, so there's no overhead in terms of delay. Next slide, please. Now, in case we have TFO on the client-proxy leg, things get interesting: the client gets to send data immediately.
O
So the client sends a SYN that also contains the request and some data, which counts towards what is sent to the server. The proxy starts initiating the connection to the server as soon as it receives this SYN from the client, and as such we shave off one client-to-proxy RTT. In other words, if the proxy is on path, we actually have negative overhead.
O
This is highly advantageous for mobile networks, where we have high delay at layer two. Next slide, please. And finally, if we have TFO on both legs, the SYN that contains the request basically gets translated into a SYN that contains the initial data, headed for the server, and the server can reply immediately, and we get the data response in one RTT, same as with vanilla TCP with TFO without any SOCKS in the middle. Next slide, please. So, can we use multiple proxies? Yes.
O
SOCKS can be stacked over SOCKS multiple times, and no matter how many times we stack it, the RTT overhead is the same. We get the data response in two RTTs if there isn't any TFO available, or in one RTT with TFO on all legs of the path, regardless of how many times we stack SOCKS over SOCKS over SOCKS. So no matter how many proxies we have on path, we've got the same overhead.
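The delay claims above can be captured in a back-of-envelope model: without TFO, each leg pays one handshake RTT before data can flow, and then one end-to-end RTT is spent on the data exchange itself. This is a sketch of that accounting only; the function name and parameters are invented for the illustration:

```python
def time_to_first_response(rtt_cp: float, rtt_ps: float,
                           tfo_cp: bool = False, tfo_ps: bool = False) -> float:
    """Time until the first data response through one proxy, given the
    client-proxy RTT, the proxy-server RTT, and which legs have a TFO cookie.
    Each leg without TFO costs one extra handshake RTT; with TFO everywhere,
    data rides the SYNs end to end."""
    handshakes = (0 if tfo_cp else rtt_cp) + (0 if tfo_ps else rtt_ps)
    return handshakes + (rtt_cp + rtt_ps)   # plus one end-to-end RTT for data

# Example: 50 ms client-proxy, 30 ms proxy-server.
no_tfo = time_to_first_response(50, 30)                              # 2 e2e RTTs
all_tfo = time_to_first_response(50, 30, tfo_cp=True, tfo_ps=True)   # 1 e2e RTT
one_leg = time_to_first_response(50, 30, tfo_cp=True)  # shaves one handshake RTT
```

In this model, stacking more proxies leaves the totals unchanged as long as every extra leg has TFO, which matches the "same overhead regardless of stacking" claim.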
O
Alternatively, we could just configure the first proxy to go via the second proxy, and the second proxy doesn't necessarily have to be a SOCKS 6 proxy; we can use any other proxying technology. Next slide, please. We do have an early prototype: we have modified Shadowsocks. Some of you might be familiar with that proxy.
O
Finally, I'd like to say some words about how this compares to the other solutions. They are similar in that, when you strip the initial exchange between the client and the proxy, you end up with a plain data stream. However, we use an entirely different starting point: this is a purely layer-5 protocol. We do not use SYN/ACKs to signal whether the remote server has accepted the connection or not; we actually use some TCP data for that, in the TFO SYN or…
D
Juliusz Chroboczek. First, thank you, because when, in the late 90s and early 2000s, we were upgrading our client software to use SOCKS 5 instead of SOCKS 4, we realized there was one thing that was shocking: upgrading to SOCKS 5 added one RTT to the unauthenticated case, with no benefit whatsoever. So now we've recovered this RTT.
D
It took 20 years, but, well. The comment I wanted to make is that I think it's important to stress how much Olivier's draft and your draft live in different spaces, and I think there is space for both. This is something where I would estimate that it would take 20 minutes to upgrade an existing SOCKS 5 proxy to use SOCKS 6, without authentication and without 0-RTT.
D
The thing would be very simple. I can imagine putting that in software that is not doing proxying, just as a way of connecting to other software; I couldn't imagine using Olivier's technology for doing the same. It also has the domain name, so it fits the needs of SOCKS 5. So I think this has a reason to exist even if Olivier's draft exists too; they are not at all mutually exclusive. And the other comment is: suppose that I'm writing a client for SOCKS 6. Now, my client will probably want to speak
D
SOCKS 5, will want to speak SOCKS 6, and will want to be able to speak to protocol converters. It doesn't care who it's speaking to; it just wants to establish a connection. It wants the weaker guarantees that SOCKS gives with respect to a transport protocol converter, so it doesn't care which one it speaks to. But the packet formats of the protocol converter and of the SOCKS protocol are very different, so on the client side I have to make two completely different implementations.
D
Now, I'm not saying it's a good idea; I don't know if it's a good idea. Generalizing often yields completely unmanageable protocols. But I wonder whether it would be possible to consider having the same packet format for both techniques, and making it possible for a client, you know, to connect somewhere and discover whether it's speaking to a protocol converter or a SOCKS 6 proxy.
O
So, if the slide has the request on it: yes. The first byte is actually the major version number, same as in SOCKS 4 and SOCKS 5. So if a SOCKS 6 server gets a request that starts with a five, just as in SOCKS 5, then it can start speaking SOCKS 5 to the client.
D
Yes, that's there, mm-hmm, just to be clear.
H
Now, once there's a user base that relies on them being demultiplexable, they can no longer evolve to modify those bytes independently; they have to be considered as a single protocol. So that's a point that sounds very scary to me; I wouldn't want to do that lightly. The second thing I wanted to point out is that SOCKS 5 is an incredibly widely deployed protocol, in essentially every web browser in the world and
H
in quite a number of other places. I think your proposal is an improvement, and that's good, but agreeing on a new major version number of SOCKS seems almost as big as agreeing on a new major version number of HTTP, and again, that's not something that I think we should just do within the MPTCP working group.
H
So I hope that we don't just move ahead with an incremental change when there are a lot more things that need work. In particular, I want to point out that this protocol's major improvement is a reduction in round trips. SOCKS proxies in general are used primarily in two cases: they're used on localhost
H
to a very large extent, and the remainder is largely on a LAN with extremely low latencies. So that latency saving is useful, but it's only really useful if the SOCKS server is at a large distance from you; and if the SOCKS server is at a large distance from you, then you do not have a trusted network anymore, which in my view means that encryption using TLS or an equivalent protocol should be mandatory.
H
Your authentication data includes a username and password; those are sent in clear text between you and the proxy. So I really think that you appear to be building something that is useful over large networks, not inside private LANs, and whenever you're outside of a private LAN, TLS, or an equivalent, should be mandatory.
H
The final thing I want to point out is that this use case is already served by proxies over HTTP, including all of the zero-RTT functionality. So I think we should think very carefully about whether we want to duplicate the functionality of modern HTTP proxies, or whether we want to basically invest in improvements to those proxies.
M
Many comments, Ben. The first one is that with MPTCP there are already at least two deployments using such solutions to do the bonding of any type of access networks together through a SOCKS server. One of them is OverTheBox, the solution from OVH in France, and they use Shadowsocks with a solution where there is encryption on the different links. So this addresses the comment from Ben, and this is a solution which is…