From YouTube: IETF114 TSVWG 20220725 1900
We can hear you, thank you, Rona. Okay, excellent! Let's do a few more quick checks here. Marcus, MP-DCCP, are you here? Yes, okay, he's waving at us. Thank you. Sorry, I apologize for what I'm about to do here, but I cannot recognize anybody behind a mask.
C
So we've checked who's going on, and they're here. I'm looking at Greg, so he's here. We just checked with Lewis; some of your long-standing group chairs handle that item. The CAN drafts: Peng, are you here?
C
Thank you, Martin. And the job isn't voluminous notes, please; we're mostly interested in capturing important things: important things said at the mic and important conclusions reached. And with that, let's go ahead and get started. This is the Transport Area Working Group at IETF 114 in Philadelphia.
C
Your working group chairs are Gorry, who is remote, out there somewhere; there he is, waving at us. I'm David Black, and this is Wes. This is Wes Eddy, okay. This is the Note Well: you've seen it before and you'll see it again this week. I'll make sure you've read it at some point and apply it. It applies to you.
C
Okay, we have a note taker. In general, folks, the working group runs on reviews of drafts. Please, if you find a working group draft that's interesting, review it and send comments to the list; the authors are almost always grateful. A reminder in general: if you want a draft brought to the attention of TSVWG, put tsvwg in the name of the draft, in the usual place.
C
At this point, the design work is pretty much complete and it's time to go over the details and make sure we've got them all right. Okay, accomplishments and status. As part of this, this is going to be the updates on working group drafts that don't have a slot on the agenda, because we just need to tell you what's going on. Okay: one RFC has been published since IETF 113.
C
That's the new SCTP document. Thank you, Michael, for getting through that. We have no drafts at the RFC Editor. The L4S drafts have completed IETF last call; there are three of them. Issue resolution is in progress, and you're seeing it on the list as Bob and the author team respond to the inbound comments they've received. I think working through the area reviews is going well. Yes, Bob.
H
All right, Bob Briscoe. Yeah, I'm editor of all three, and so far everything that's been sent has been resolved, but there were 12 reviews lined up and I've only seen three. So I don't know how long we wait before I issue another draft.
C
I think you get to make the call: when you think you've got enough material done, issue another draft. Ultimately, the decision on which reviews to wait for rests with our AD, who is now nominally responsible for the draft.
C
And I want to say thank you to Valery Smyslov, whose last name I probably just butchered, because I thought the discussion around that security review was a great example of how things ought to happen when there are some solid technical issues raised.
C
Okay, we have two drafts that are with the working group chairs and authors after working group last call; they're with me. These are the ECN encapsulation drafts. When Bob comes up for air from working on the L4S drafts, we have a little more text to write for one of these drafts.
H
Yeah, some more text is needed, but I detect it's going to be more along the lines of: we can't agree, so we're just going to agree to disagree, and this will have to be sorted out later.
C
I will take a look at it. I believe, before you sent those emails, my impression was that we just need a little bit of scoping text to explain when reframing happens and when you have to do the congestion accounting as part of reframing, and with luck we'll get there. Okay, one draft just completed working group last call: assigning a new recommended DSCP. That one has an agenda slot; you'll see it shortly. Remaining work.
C
Working group IDs, and this is where we do the low flyby to explain what's going on here. Okay, L4S operational guidance: Greg says that one's in pretty good shape. The star here is something I need to update. We took it off the agenda, and it's going to be a living document; I think we're going to keep it alive for a while as people move forward with hacking on L4S, getting it to work and getting it deployed. Greg?
I
Yeah, I just also wanted to add that it is in effect a living document at this point. So as deployments start and we gain experience, as the editor of the document I would love to hear about things the document is missing, things we could add to improve it.
C
Great, thank you. Non-Queue-Building, the NQB PHB: that will have an agenda slot this afternoon. UDP options, and datagram path MTU discovery for UDP options. Gorry, do you want to say a word or two about those?
J
Sure, yeah. Both of the UDP options drafts are simply waiting for Joe to say this is the last version that he wants to put to the working group. I believe that all the issues are known by Joe, and the text is pretty much there; I'm just waiting for Joe to say it's ready.
C
Okay, sounds good. DTLS over SCTP continues to move forward. We know we have to publish the new version of it, because 3GPP needs it to deal with large certificate chains.
C
SCTP NAT is expired. Gorry or Michael, any words of wisdom on that?
K
So I had some discussion with, I mean...
K
Again, the feedback from the former transport AD and the current one: basically, the way to resolve most of the issues is removing any SCTP-specific thing from the document and just translating the IP address, but not translating the port number.
K
That seems to be the simplest way to address both of their comments. I also chatted with Claudio, and he sort of agrees; they are focusing on what's available right now. And then the question is whether we really need a document describing the obvious thing or not, because the core of the SCTP NAT document was how to do it in an SCTP-specific way, to deal with not being able to...
K
All the other stuff is how to take the verification tag into account and deal with port number collisions, yeah. And you said, well, that affects endpoint behavior, which is difficult to implement, and...
K
Basically, the same major points came from Martin, and they are valid. So it's okay.
C
Sounds like we ought to take this to the list, with the notes reflecting that the value of a draft that only does the IP address translation and doesn't contain SCTP specifics is unclear. But I think we need to move along here in meeting time. Thank you, Michael.
C
Yes, please. Let's take it to the list and make sure the minutes reflect that the value of the draft is unclear. I think Gorry is heading in a good direction, which is: if it's going to get used, we ought to document it, but let's make sure we can actually write up something usable. Multipath DCCP has an agenda slot later. Related IDs to the working group: these are all individual drafts. Use reports for experiments.
C
It's on the agenda for later; we'll talk about it then. Enhanced port forwarding functions with CGNAT: also on the agenda later, we'll talk about it then. Then there's a draft, Wang, the SW SLC FEC scheme. It's been posted to the list and we've not seen any interest in it; if you're interested in this draft, please post to the list or talk to the chairs about it. That one is not on today's agenda.
C
Milestone review. Okay, the two milestones here that say April: those milestones are about to be achieved, so I'm not going to go work on putting new dates on them. I'm going to guess that between Bob and myself and the folks on the list, we'll probably have those drafts submitted sometime next month, or by sometime in September at the latest. Gorry, can you reach out to Joe? Between the two of you, figure out what the dates ought to be for the UDP options drafts. Thank you.
C
Everything else is still on course, one way or another, although we'll talk about the first two. Assigning a new DSCP and NQB PHB we will talk about in this meeting; SCTP NAT support you just heard about; and considerations for a new recommended DSCP is on the meeting agenda. Next year: DTLS over SCTP and multipath DCCP, and right now we have a July 2023 tentative date for L4S ops. That one's very, very malleable; we can easily make calls as we go along.
C
Okay, so, having done a trip through the working group's program of work, let's go look at the agenda. These slides are the agenda. We just did the milestones review; the chairs' update is these slides we've just managed to get through: ECN encapsulation draft status, L4S draft status and active working group draft status in general.
C
We have four working group drafts, four sessions on working group drafts. Marcus is going to talk to us about multipath DCCP in a couple of minutes; Ana and Gorry on considerations for assigning a new DSCP; Greg White on NQB; and the L4S interop update, where good things have been happening around here.
C
Then we get to the introduced drafts. The chairs are going to talk about use reports for experiments. Heads up, folks; let me explain what is going on here. This is a process draft that has basically come out of the port review team, about something they think would be very useful in terms of managing ports, and we're probably going to move fairly quickly on it.
N
Yeah, Wes Eddy was brought on as TSVWG chair quite a bit before I became AD, but specifically as a third chair to help with the very administratively intensive L4S process that we're all aware of. Many of you just heard this spiel 45 minutes ago in the TSV area meeting, so I guess I could be brief. But Wes, thank you very much for all you did to make that happen.
N
It looks like it's going to be a successful process, and it's out in the Internet. Wes senses mission accomplished now, so he's going to step down as TSVWG chair after this meeting, and I'd like to thank you on behalf of the community for everything you did, both there and really other things. I mean, you weren't just the L4S guy; you certainly lent your expertise to everything going on in the working group, and it's very much appreciated. So thank you very much.
C
Okay, and then the rest of what's not yet on this slide: this is David. I am going to be stepping down as a working group chair after the next meeting in London. So the game plan is that Martin is going to see about finding another working group chair, and then Gorry, that new chair and myself will make London happen, after which I'm going to step down. I have been a chair for quite some time and helped a number of drafts along. It's been an adventure; it's been a pleasure.
C
You should be fine, just tell us when to advance the slides. I think I have to do it from here. Okay.
O
Yeah, first, starting with the status update: what has changed since the last IETF? We made a version bump from version four to five and committed a lot of changes to our GitHub repository, where we maintain the multipath DCCP draft. You see at the bottom a link to the full change log; I think it was more than 100 commits this time. We organized everything as pull requests.
O
A number of pull requests addressed editorial work, as you can see at the top, but we also made a lot of changes regarding the multipath features we have defined for multipath DCCP. That was mainly based on implementation feedback we received. So, for example, we redefined the MP_RTT age parameter to specify the actuality of the RTT measurement.
O
We convey this in the MP_RTT option. We added a new section on the closing procedure, giving a description accompanied by some diagrams. Along with that, we enhanced the MP_CLOSE definition, including connection and subflow socket states, and we give a really detailed overview of which states the socket moves through.
O
I think priority value 1 is to put a path into standby mode, so it's only used when there's no other path available. Then we have value 2, which defines a secondary path, and from value 3 onwards we have a primary path definition: you can use values 3 up to 15 for giving priority information to the scheduling process.
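The priority scheme described here can be sketched as a small helper. This is an illustration based only on the talk's description; the function name and the handling of value 0 are my own assumptions, not taken from the MP-DCCP draft or its reference implementation:

```python
def classify_mp_prio(value: int) -> str:
    """Classify a 4-bit MP_PRIO value as described in the talk (sketch)."""
    if not 0 <= value <= 15:
        raise ValueError("MP_PRIO is a 4-bit field (0-15)")
    if value == 1:
        return "standby"     # used only when no other path is available
    if value == 2:
        return "secondary"
    if value >= 3:
        return "primary"     # values 3..15 also carry priority info
                             # for the scheduling process
    return "unspecified"     # value 0 is not described in the talk
```

A scheduler could then, for example, prefer the primary path with the highest priority value and fall back to standby paths only when nothing else is available.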
O
We updated the IANA section, proposing multiple things to be registered: the multipath options, a new DCCP reset code we use for closing a multipath DCCP connection, and, last but not least, the key types we use in MP_KEY during the handshake of a multipath DCCP connection.
O
We enhanced the fallback section, where we now also include version, MP_KEY and checksum mismatch and their impact on either the MP-DCCP connection or individual subflows. And what I would like to highlight here is the enhanced description of the secured multipath add address and remove address. For those of you who are familiar with MPTCP, there are also add address and remove address options specified there to add new subflows or to remove subflows when a link is, for example, coming up or going down. Different to MPTCP,
O
I think we secured both the add address and remove address, so that only the multipath DCCP client and server are able to generate and exchange this information; no man in the middle can generate or make use of those options to impact the multipath DCCP connection. Next slide, please. Yeah, the maturity state of the MP-DCCP draft.
O
We are pretty much convinced that we have everything which is required for a successful implementation and usage. So the focus is now, as you have already seen on the introduction slide, on editorial fixes and incorporating more feedback from external review. For example, Matt Buccadeira has already done a great job and gave a very comprehensive first round of feedback, pull requests and issues; you'll find the links at the bottom of the slide.
O
You will see on the next slide that we are working hard on completing the prototype and incorporating all the multipath options which are specified in the draft, and the feedback we get from this implementation helps us to improve our MP-DCCP draft. Next slide, please. Yeah, that is exactly the slide I referred to. This gives an overview again of what is completed in the draft.
O
What I also outlined throughout the last two slides is where we put some focus this time. In this current version five, it was to complete MP_RTT, add address, remove address and MP_PRIO, but also the fallback mechanisms and MP_CLOSE, most of them due to our work on the prototype. And if you look into the prototype column, you see that more or less everything is now incorporated and available in the prototype.
O
This is the Linux reference implementation we have of MP-DCCP. Where we put a lot of effort in is implementing MP add address, remove address and also MP_PRIO. What we improved in our prototype implementation, because it was not fully implemented yet, was the MP_JOIN, which did not yet cover the address ID, and the MP_RTT, where the type and age parameters were missing; that is completed now.
O
You also find here on the slide the individual pull requests in our public GitHub repository, where we maintain the reference implementation. We have now also started to implement MP_CONFIRM; MP_CONFIRM is used for reliable exchange of multipath options, and so far in the draft this is defined for add address, remove address and MP_PRIO. What is still missing is to have all the fallback mechanisms implemented, and what is completely missing are the fast close and the close. Our goal is to complete all these missing things by the next IETF in London.
O
Nevertheless, we are happy if someone contributes to our code. All the information is available in the draft, and we also have a website, multipath-dccp.org, where you find the relevant information to contribute to or to use the MP-DCCP prototype. Next slide, please.
O
Okay, giving some more status information on the reference implementation, more or less. This is something I already presented at the last IETF, but I think it's worth repeating here. The published prototype is, for sure, providing MP-DCCP as it is specified in the current draft, so the multipath transport protocol itself.
O
On top of this, we also provide some more implementation to make it usable in other scenarios, so not specifically for DCCP transport, which is for sure the main target: with the encapsulation framework we provide on top, we are also able to transport non-DCCP traffic and make it multipath capable. We have a number of scheduling algorithms available; you see them in the list on the left.
O
We tried to drive this in ICCRG, but I think from the previous TSV area meeting, yeah, we all know that it is at the moment a little bit difficult to drive a new congestion control mechanism. So let's hope that this will be solved soon, so that we can really specify CCID5, which we think is very valuable for MP-DCCP. And, last but not least, we are also able to establish and destruct subflows during operation.
O
We are also convinced that these enable a range of use cases and can be used to demonstrate the effectiveness of multipath DCCP. And an announcement: we plan an additional reordering mechanism to be published, using MP_RTT for dynamic path latency difference determination and equalization at the end. Next slide, please.
O
Yeah, some general updates from the MP-DCCP ecosystem. I think that is also something I presented in detail at the last IETF. MP-DCCP is proposed as a so-called lower-layer solution in the ongoing Release 18 study phase on ATSSS enhancements. We matured this lower-layer solution, and you find the latest update in version 0.2 of the technical report linked here on the slide.
O
So again, if you have multiple links with different characteristics and you use multipath DCCP, or a solution like multipath QUIC using MASQUE and the datagram extension, which is also discussed as an alternative for ATSSS, then you have this challenge of packet reordering when you simultaneously use paths with different latencies. That's quite bad for the application or end-to-end services using the multipath tunnel, and we were able to prove that in-network reordering mechanisms are required, so probably something that has to be specified in 3GPP.
O
We will present those results during the ICCRG slot on Thursday. What we especially verified is the impact on QUIC end to end, but also the effect on traffic using different congestion controls end to end. So quite interesting work, and I encourage you to listen to the ICCRG presentation.
J
I was going to try and answer the question on the side, which I think probably goes to the working group chairs. The charter, I think, says July '23; that's the target. If we can beat the target, I'm okay with this.
C
I mean, this is David, working group chair. I'm noticing that the whole discussion is now at a much more detailed level; these smaller things are getting done, and we think we've got the draft functionally complete. So yes, I think it is getting very close to the point at which working group last call makes a lot of sense.
C
Hang on a minute, I need to look at my cheat sheet. The DSCP one: Ana and Gorry.
F
Perfect, sorry about that. Right, so thank you to all the people who reviewed, in particular Ruediger Geib, who raised many issues. The two major issues we had were that the document was not clear in its scope and intended audience, so we clarified that it is not intended for application developers who want to choose a DSCP, but rather for the IETF and IESG when dealing with a request for a new DSCP.
F
There have been 17 minor issues raised as well, and I've included here just the highlights. We made extensive use of the word pathology to describe the observed remarking behavior, but there was a slight medical implication there that something is wrong with this remarking, which is not the case. So now we just use observed remarking behavior instead of pathology.
F
It describes a local use for code points that have the three most significant bits zero and end in one, so in decimal that means code points 1, 3, 5 and 7. This local use, as I understand it, was proposed for networks that used IP Precedence, I think because of concerns around the equipment running at the time. But since then, code points in Pool 3, which includes the ones I've just mentioned, were assigned to Standards Action, and then also code point 1 was assigned to Lower Effort. So, going on to the next slide.
F
Here's a question for everybody here. We want to add a footnote that mentions and describes, essentially reflects, this text. So here is the proposed text that we have on this slide; it mentions this local use described in RFC 4594.
F
Is there anything here that you see that you object to with regards to this text? If so, just let me know; at the end of the slide deck I'll go back to this one. Also, to clarify: the assignment of DSCP 5 and the discussion around that is not in scope for this draft.
F
Just a final slide to say the new revision has been uploaded; check it out. If there are any issues or comments, we welcome them either on the mailing list, or here's a GitHub link where you can submit them.
C
So this is David. I think you found something, and I'm not sure what to do, but I think it is important to recognize that we now have a recommended use for DSCP 1. So I think a note saying that things have changed since RFC 4594 is probably in order, and I don't want to spend the next 15 minutes trying to wordsmith it. But I do think you found something there. Good catch.
F
Okay, then, I guess we'll work with the document shepherd to include this text in the draft. Thank you.
C
Let's take a careful look at it: taking the lid off of RFC 4594 in some ways might be taking the lid off Pandora's box. Let's make sure we think very carefully about what we want to accomplish before going there.
M
Looking at the quoted part of 4594, it seems as though this behavior of remarking to 1 might actually be compatible with the current use as the Lower Effort code point, but I would need to look at it in much more detail to be absolutely sure about that.
N
Martin Duke, Google, no hats. I would say, at a minimum, assuming that this is in fact an update, I mean not formally, but in fact: if 4594 has been superseded by other documents, then writing it down here is useful, just as a matter of having text, leaving aside all the technical document status questions. I will also note that 4594 is Informational, so if we decide to open up the box, we could just update it and help clarify things.
C
I'm just more concerned that one of the things I definitely do not want to do is announce that any and all modification of 4594 is open, that this draft is going to do any and all modifications. I think that would be a major mistake, so I want to proceed with great care here: make sure we understand what has to be done, do it, and do approximately the minimum necessary.
J
Yeah, my question is simply to our document shepherd: are we just going to try and take input here and then wrap the document up? Is the working group last call leading towards completion?
B
Yeah, it sounds like we need to address this one question and then confirm that all the working group last call comments are addressed, for some short amount of time, and then we'll be able to forward it to the AD.
B
I don't know how soon. If I step down, I can change the document shepherd to David; Gorry is conflicted. So, yes.
N
So, I'm sorry, just to labor the point a little more about 4594: I agree it is not in scope to revise the way the DSCP works in any way in this draft. That said, if there's a discrepancy and we can just update it with, like, four lines of text, does that really open Pandora's box?
C
One of the things that has unfortunately happened as a result of this last-minute double-click is that I can't watch the queue, so Wes, you're going to have to watch the queue. I see Mike Heard in the queue; you want to say something quickly, Mike?
P
Yes, your volume from the room is very low for the remote participants. Thank you.
I
So, the status here: the last published update was draft 10, which came out on March 4th; there's not been an update since then. The milestone here is to submit this as a Proposed Standard RFC by September. And what is left to do? Next slide. It turns out that selecting a diffserv code point to recommend in the draft that can reach consensus amongst all the people in the working group has been more challenging than maybe I anticipated it would be.
I
I think, in part, that's because the goal of this is to encourage and enable its use end to end across the Internet, from senders to receivers, and there are an awful lot of existing practices in place around diffserv code points in different networks. Finding a single recommendation, or a single set of recommendations, that everyone can live with has been challenging.
I
But currently, what we have in the document, just as a refresher, is that the recommendation for applications, if they comply with the NQB description, is that they should mark their traffic with the value 45. Then, on the network side, networks that don't support the PHB should treat NQB traffic as default but should preserve the marking distinction, so traffic passes through their network with that marking distinction intact. And then the third major bullet: for networks that do support the PHB, things get more complicated.
I
So they must maintain the marking distinction, but they should not send NQB-marked traffic as 45 across interconnections.
I
Instead, they should use the value 5 across interconnections, and if they're sending traffic across a customer access network link, so into a customer's home network, for example, they should use the value 45 there. That involves remarking in that ISP network between the values 5 and 45 in order to make all that work, and that's where, on the next slide, there's been some discussion. Actually, before I get to the discussion: the rationale for that was presented at IETF
I
110. The font is probably a little bit small to read on the screen, but the rationale for the value 45 is that there are hundreds of millions of Wi-Fi networks around the world today that will meet many of the requirements for the NQB PHB today, with no modification, no reconfiguration, nothing, if we choose a value that's in the range 32 to 47; hence the value. Originally we were using 42, but now 45.
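The Wi-Fi behavior being relied on here is the common default in which the 802.11 user priority is taken from the top three bits of the DSCP. A rough sketch of that default mapping (my own simplification for illustration, not RFC 8325's recommended mapping and not text from the draft):

```python
def default_wifi_ac(dscp: int) -> str:
    """Common default DSCP -> Wi-Fi access category: UP = DSCP >> 3,
    then the 802.11 user priority selects the WMM access category."""
    up = dscp >> 3
    return {0: "AC_BE", 1: "AC_BK", 2: "AC_BK", 3: "AC_BE",
            4: "AC_VI", 5: "AC_VI", 6: "AC_VO", 7: "AC_VO"}[up]

print(default_wifi_ac(45))  # AC_VI: any DSCP in 32..47 lands here
print(default_wifi_ac(5))   # AC_BE: the low value gets default treatment
```

This is why a code point in 32 to 47 gets separate (video-queue) treatment on many deployed access points with no reconfiguration, while a value like 5 does not.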
I
That is a significant incentive for adoption of this marking: there are networks around the world that today will go a long way towards providing NQB treatment for packets that are marked with a value in that range, the value of 45. And since we've been talking about Wi-Fi links: for applications that are running on Wi-Fi clients, there is in many cases no opportunity for a network operator to remark that traffic between when the application generates it and when it crosses the Wi-Fi link. So that's a very valuable property
I
to have for the application recommendation: that it will get NQB treatment, more or less, on many networks.
I
Now, the rationale for the value five came about, I guess, right before IETF 110. It must have been that there is at least one network, one large transit network, whose operator said: hey, if you want us to carry traffic across our network, treat it as default and not remark it,
I
it would make our lives a whole lot easier if you chose a value between one and seven. And so it seemed that meeting that middle requirement on the previous slide, of networks that don't support the PHB carrying it through with the marking distinction intact, would be advantaged by choosing this value five. So we came up with this compromise of five at the core, 45 at the edge, and the requirement on the ISP in between to do that remarking. All right, now, next one.
I
So, the comments recently on the list, and also some off-list discussion, point out that there are folks that are not so happy with that recommendation, or set of recommendations. First, there's an interest in ensuring that end-to-end traversal of unmodified DSCPs remains RFC-compliant.
I
In other words, this draft is the first draft in the RFC series that would effectively recommend that all networks that want to support it do remarking, whereas previously a network that just passed diffserv code points through end to end would be complying with all RFCs. So that's one concern.
I
The second one is an interest in just minimizing the complexity. Some networks have said: hey, it doesn't matter to us, this value, you know, five, or one to seven; why can't we just keep it 45 all the way? In particular, a lot of ISP networks peer directly with application providers, so there's not a lot of traffic that's going across multiple networks, going across transit networks, and it's unclear really what percentage of transit networks would prefer that value one through seven to begin with.
N
Are there provisions against that in the draft? And if so, do we think that's realistic, given what we know about the community?
N
Assuming that the draft is, well, let me put it this way: I think the draft should say, you know, that they'll not just do this for everything greater than, what is it, eight, just bleaching these bits, but actually check for 45 and replace it with 5, and vice versa; not apply a broader filter.
I
Yeah, well, the draft definitely does not recommend just bleaching the top three bits for all diffserv code points. It specifically talks about mapping 45 to 5.
I
We could make some statements around that, I suppose, you know, that this is recommended specifically for just this, to hammer that point home, I guess: that it's not intended to mean that bleaching the top three bits is the desired behavior for other code points.
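The distinction being drawn here, exact-match remarking versus a broad bit-clearing filter, can be sketched as follows. These helper names are hypothetical, for illustration only, and not taken from the draft:

```python
NQB_EDGE, NQB_CORE = 45, 5  # the two NQB code points under discussion

def remark_exact(dscp: int, toward_core: bool) -> int:
    """Swap only the NQB code points; every other DSCP passes untouched."""
    if toward_core and dscp == NQB_EDGE:
        return NQB_CORE
    if not toward_core and dscp == NQB_CORE:
        return NQB_EDGE
    return dscp

def remark_bleach(dscp: int) -> int:
    """The broader filter NOT being recommended: clear the top three
    bits of every DSCP of 8 or above, with collateral damage."""
    return dscp & 0b000111 if dscp >= 8 else dscp

print(remark_exact(46, True))   # 46: EF is left alone by exact matching
print(remark_bleach(46))        # 6: EF would be damaged by bleaching
```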
I
All right. Yeah, sorry; as part of the discussion, I guess I'd invite other operators who may have opinions on that to chime in. The last point that was made, this was off-list, or maybe on-list at one point, was that some applications might be hosted in locations where they're on a network that would prefer the value five, and so: is the recommendation to use 45 for all applications
I
the right recommendation? I mean, currently it's a should; it's a recommendation, so local practices are allowed to preempt that. But there might be some value in actually having a recommendation for the value five in certain situations; what those exactly get written as, I think, is the open question. Anyway, the next slide goes on to a proposed compromise.
I
I don't know if this meets everyone's requirements, but the idea is to change the recommendation to recommend 45 across interconnection, so 45 end to end, and then have language that says, you know, in some cases this may not be the ideal code point in some networks, but if remarking is necessary, recommend the value five. So we still have two code points that are identified in the draft and, hopefully, some consistency, at least between networks, in that NQB traffic is either 45 or 5.
I
But here the onus would be on the receiver of the traffic to re-mark from the value 45 to five, unless they have some agreement with their interconnection partner ahead of time. And then the bottom bullet, on the application guidance: update that to mention that in certain cases five might be preferable, again providing a recommendation there and not just leaving it totally open to the application developer, but recommend 45, and also recommend that they consult the network in which they're deployed in order to make that decision.
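As a hypothetical illustration, not text from the draft: the narrow re-marking being discussed (swap only the exact NQB code point at a boundary, never a broad top-three-bits bleach) could be sketched like this, where the structure and names are assumptions for the sketch.

```python
NQB_DSCP = 45    # NQB code point recommended across interconnection
NQB_LOCAL = 5    # alternative code point for networks that cannot carry 45

def remark_nqb(dscp: int) -> int:
    """Re-mark only the exact NQB code point at a network boundary.

    Deliberately NOT a broad filter such as clearing the top three
    bits of every DSCP >= 8, which would clobber unrelated PHBs
    (EF at 46, for instance, must pass through untouched).
    """
    return NQB_LOCAL if dscp == NQB_DSCP else dscp
```

A receiver applying this at ingress would carry NQB as 5 internally while leaving every other code point alone, which matches the "two code points, consistent between networks" framing above.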
C
So this is David, speaking from the floor, wearing no hat but as one of the authors of the original diffserv RFCs. Let's see. Oh, I
C
basically have to put the mask under the mic. Okay, so: David, speaking on the floor, wearing no hat, but as one of the authors of the original diffserv RFCs.
C
C
This started out as: let's provide a PHB for traffic that is effectively default but doesn't build a queue, and therefore we can handle that separately. Right, so far, so good. Then along comes Wi-Fi, and we can't do that on Wi-Fi, at least as I understand it, not without completely changing all the deployed Wi-Fi gear. So in order to make this work over Wi-Fi we have to do something else, and 45, with its mapping to AC_VI, is not default on Wi-Fi.
C
I
C
I think, I don't expect an immediate answer, because I've put you right on the spot, up on stage in front of us. So it might be that I've said my piece and we need to think about it and bash on the list, but something is still bothering me very much, architecturally, about this situation.
I
I
Maybe we could soften that to say, in some networks where this is difficult or infeasible (you know, pick your word), it's okay to treat it with higher priority, and blah blah blah, some protection of that area. The idea here is not to create a high-priority path, right? It's to create a path for sparse applications that would like not to be subjected to latency caused by, okay.
C
We should have a discussion in more detail on the list, because the glass-half-empty view of that is: independent of what the original goals may have been, this does not work on Wi-Fi, and hence does not work at all unless you create the high-priority path, at least there, raising the question of whether we should admit that the right thing to do is a high-priority path everywhere.
I
Yeah, well, to say it doesn't work on Wi-Fi is a little bit strong. I mean, it's the default behavior of existing Wi-Fi gear to put it in a high-priority queue, but with some configuration changes most Wi-Fi access points can map the value 45 into best effort, and some of them, with code changes for the more recent ones, can actually create a separate queue alongside best effort and map the traffic there. So yeah, it's more the existing, unmodified Wi-Fi where the high-priority aspect applies, as you said. So, let's.
J
Yeah, I was going to say: thank you ever so much for sticking at this. What I see here, with no hat on, is definitely much better than what we had the last time we talked about this. Using 45 everywhere and allowing five as a difference within a particular domain seems a much more straightforward solution to this than the original one we had. So this looks promising; thanks for working on it. My question would be: has Rudiger seen this particular flavor of combinations?
J
I
I think he wasn't totally happy with the wording of it, but I was just trying to channel him.
J
C
David? Yeah, this is David, back on the floor, still wearing no hat. Something Gorry said might provide a way out of this, which is: we are circling something by trying to recommend both values, and I heard Gorry say the magic words "local use".
C
If we recommended 45, and then discussed the necessity in some networks of local use of a different DSCP (which is completely consistent with the diffserv architecture and doesn't require us to ask IANA to do anything), and then discussed that perhaps five could be that local value, somewhere in there, then fitting this into the notion of local use, where a network operator can use DSCPs other than zero for anything they want, might get us something considerably cleaner and acceptable.
C
J
C
J
Yeah, and yeah, let's continue to discuss this. My comment on Rudiger is simply to say that if you really are interested in this, please join in and send comments to Greg, joining the discussion. There's a lot of discussion going on around this; it's not just Greg coming up with a new solution each time, and I'm sure we'll get there pretty soon.
I
All right, thanks. Yeah, I will, I think, after the meeting, send a post to the mailing list with some more of the comments that were discussed off-list as well, to make sure everyone is up to speed on that.
C
I
All right. So for those of you who attended the conclusion of the hackathon yesterday, these slides will look very familiar. I had intended to include a lot more new data that we've collected today, but time kind of got away from me. We were working.
I
Yeah, so I did do some updates, but the graphs will look pretty familiar if you've seen them before.
I
So this was the first L4S interoperability event that brought together a bunch of different implementations of L4S: both network equipment, as well as congestion control implementations and TCP receivers that can feed back using the Accurate ECN functionality. The drafts, everyone in this room is probably familiar with those already: the three L4S drafts, which have completed IETF last call, as well as the Accurate ECN draft. Those are the ones we've been testing in the
I
And I guess the last part, everyone, you're probably familiar with that too: it involves the sender, the bottleneck, the receiver. Next slide. So our plan: the hackathon, strictly speaking, was just Saturday and Sunday; we came in Friday afternoon to begin setup. We have quite a bit of equipment. If you haven't gone into the hackathon room, I'd encourage you to take a look over there. Actually, this evening after this session is the hackdemo happy hour, so please come over to Liberty A, two doors down.
I
Take a look: we've got five tables set up at the front of the room with all sorts of gear stacked up on them, cable equipment, Wi-Fi equipment, some network emulators and a bunch of clients. What you won't see directly in the room, but right outside the room in the back, are two large racks of equipment: the cable CMTS equipment, the head-end equipment, that implements L4S as well. So, in terms of the plan.
I
I
So, next slide. In terms of the implementations that we've got on the congestion control side, the sender side: we have the Apple QUIC Prague implementation, TCP Prague, Google's BBRv2, NVIDIA's GeForce NOW (which is a proprietary UDP implementation), and Nokia's real-time Prague, which is integrated directly with their codec.
I
I
The next slide, on the bottleneck link implementations: we have seven of those there in the hackathon. Four of them are Low Latency DOCSIS: two cable modem chipsets doing the L4S functionality in the upstream direction, and then two CMTS implementations doing it in the downstream direction.
I
I
I
In terms of combinations that we've tested so far, I'm not going to go through the whole list, but the aim was to try as many combinations as we can: in some, a single congestion control implementation using a bottleneck; in other cases, pairs of congestion control implementations, or even, in one case, three congestion control implementations simultaneously sharing a single bottleneck.
I
For each of these there are all sorts of different configurations (particular DOCSIS traffic rates, whether queue protection is on or off, marking thresholds, and things like that) that we've been testing and looking at results on, and more will come. So, next slide.
I
Here's a snapshot of some of the results, probably not too readable from the back of the room. But actually, let me have a look here. So this is, for comparison, classic traffic in the upstream on DOCSIS. You can see at the top a latency time series; on the left, in the middle, the CCDF of packet latency; and on the bottom the throughput time series. Circled are the P99 and P99.9 packet delay variation statistics. So, with classic traffic, so this is Cubic.
I
I
So with the L4S implementation, P99 packet delay variation is now nine milliseconds, and P99.9 is now 10 milliseconds. And that packet delay variation is not just queuing delay; it includes the media access delay on DOCSIS, which in this case is doing the request-grant mechanism on the upstream. So there's, you know, four or five milliseconds of delay variation, sometimes even more, just from the media access on that. Right, and.
C
I
Yeah, pretty close to you. And then the next slide, a similar result for downstream; move on to that one. Unfortunately, we actually realized after we put this data in the slide that the classic AQM configuration, actually the AQM implementation on the CMTS that was used for this testing, had a bug. It was a bug that had actually been discovered earlier, but the corrected firmware image didn't make it to the hackathon, unfortunately. So that's why we see the incomplete utilization of the downstream channel.
I
There was quite a lot of packet loss at the beginning, which kind of tanked the classic TCP performance, but on the latency side, that made latency actually pretty good for the classic flow after that initial startup, because it was under-utilizing the link. But still, we see a 99th-percentile PDV of 55 milliseconds and a P99.9 of 96 milliseconds, compared to (next slide) L4S, with a P99 packet delay variation of 1.1 milliseconds and a P99.9 packet delay variation of 7.8 milliseconds. So, again, and great throughput as well.
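For readers who want to reproduce statistics like these from their own captures: the P99 and P99.9 figures quoted here are simple percentiles over per-packet delay samples. A minimal nearest-rank sketch (the data below is synthetic, not the hackathon measurements):

```python
def percentile(samples, p):
    """Nearest-rank percentile: the smallest sample value such that
    at least p percent of the samples are at or below it."""
    s = sorted(samples)
    k = max(0, min(len(s) - 1, int(round(p / 100 * len(s))) - 1))
    return s[k]

# Synthetic per-packet one-way delays in milliseconds:
delays_ms = list(range(1, 1001))
p99 = percentile(delays_ms, 99)     # 99th percentile delay
p999 = percentile(delays_ms, 99.9)  # 99.9th percentile delay
```

Packet delay variation (PDV) would use these percentiles over the per-packet delays minus the minimum observed delay, per the usual IPPM-style definition.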
I
So, just one snapshot of a work in progress. Obviously there's a lot of testing going on in different conditions; these were some graphs that we pulled out as being a nice illustration.
I
In terms of the participating organizations: 15 different organizations participated that brought gear or implementations of congestion control, or otherwise facilitated the testing. And then, on the next slide, the number of people involved: 32 folks all together who've been involved.
I
So, definitely a success, and we've started having discussions about next steps, potentially another interop event at London. I think that's TBD, but if there's sufficient interest we'll try to organize one there. Any questions or comments?
A
C
Yes, my cheat sheet says, my cheat sheet says next up is the use-of-ports-for-experimentation draft.
A
C
Okay, well, I decided to pretend to be Joe, and I think we've talked to the AD, and we think we know what we want to do with this. Okay, but first we should probably explain what it is before we explain what's going to happen to it.
B
B
So that's really all there is to this: just asking for two more ports to be assigned for experiments. And there's one extra bit that's of interest, in terms of answering the question of how you avoid conflicting use of those two ports. There, Joe has basically borrowed the idea that we have in the TCP experimental options of a 32-bit experiment ID that you can get first-come, first-served, so nothing really required there in order to get an experiment ID allocated from IANA, and you can use those as a way to put a cookie, or magic number, into your packets and basically not have collisions between different experiments using those two port numbers.
Q
Hi, Lars Eggert. So you stick those 32-bit values into a TCP option, I assume, right?
B
So, to be blunt, it's not my draft, but in reading it I actually had the same kind of question about how specifically to use those 32-bit values, because you could think it might need to work differently for UDP, for TCP, for other protocols.
B
So I think that's something that Joe probably needs to address more deeply.
Q
C
Yeah, I mean, my reaction to that was that one possibility would be to just stuff the experiment magic number at the start of the payload of the experiment.
P
Getting back to the right draft: it was to do exactly as you said, David, to put it at the front of the payload. And I might note that this is supposed to be transport-agnostic, so it wouldn't work to put it in a TCP option; that wouldn't work for UDP, for example.
C
You start off with TCP, you start off the stream with this magic number, right? And so, if you've got a protocol that thinks it's doing framing atop TCP, this is a good thing to put in the frame header.
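As a sketch of the idea being discussed (the draft's exact encoding may differ, and the 32-bit experiment ID below is a made-up value, not an IANA assignment), the magic number would sit at the very start of each datagram, or of the byte stream or frame header:

```python
import struct

EXPER_ID = 0x4E2A7F01  # hypothetical first-come-first-served experiment ID

def frame(payload: bytes) -> bytes:
    # Prefix the payload with the 32-bit experiment ID in network
    # byte order, so receivers can tell experiments apart on the
    # shared experimental port.
    return struct.pack("!I", EXPER_ID) + payload

def deframe(data: bytes):
    # Accept the data only if it carries our experiment's ID; traffic
    # from a different experiment on the same port returns None.
    if len(data) < 4 or struct.unpack("!I", data[:4])[0] != EXPER_ID:
        return None
    return data[4:]
```

As raised at the mic, this only works when the sender controls the first bytes of the payload, and middleboxes that repacketize a stream can still split the prefix across segments.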
Q
That means you can't experiment with stuff that is a library or something, right? Because if you don't control the byte stream directly, you can't put the number there. So I guess I'm sort of wondering about the usefulness of the number, because it limits some experiments I might want. Is there an option to not do that and just use the port?
N
Martin Duke, Google, no hats, although with a hat I apologize, because you sent me the draft and I had a warning on it. But yeah, the middlebox, I mean, the payload approach also has a middlebox problem: as everyone knows, it is extremely common to chop up the payload and repacketize it, and all the dumb middleboxes in between will be very sad.
C
Okay, so we're going to start an adoption call on the list, probably run the adoption call during August, and see if we can get more discussion going on how to use the experiment ID.
A
C
Sure, why not? Can I see a show of hands: how many people, how many people have... Actually, let me get this, let me get the slides back up. One thing at a time.
A
C
...read this draft? Raise your hand in the room or on the list, if you've read this draft.
C
I see maybe five people in the room have read the draft. Maybe one person, hand at his head, I can't tell whether it was raised or resting. So, okay, I think we need to run the adoption call on the list, and we'll have more discussion of appropriate use of the experiment number, magic number, cookie, whatever, on the list.
C
Okay, let's see.
C
Lewis, are you online for port forwarding, enhanced port forwarding with...
E
Thank you. Thank you. So, my name is Luis Chen. Today, actually, I'm going to give you a kind of, I mean, a draft which solves a problem which we have in NAT444 situations, and this is a long-standing problem for TCP: if you want to do P2P connections with TCP, you'll face some trouble when you're behind a CG-NAT or anything like that. Next slide, please.
E
Okay, so the problem statement, actually, is where we have NAT44 or NAT444 with RFC 5...
E
They provide a method to set up P2P connections behind any NATs, even CG-NAT. However, it usually works only for UDP, because they use a kind of method called hole punching, and for TCP this has a very low success rate. So normally I haven't seen people, I mean commercial products, using TCP for this kind of hole punching; usually it's UDP. I think, for example, some game consoles only use UDP for the hole punching. And the problem with hole punching is that they need a common third-party server.
E
E
So this is how hole punching works: from PC1 you connect through the RG and the CG-NAT (there's NAT444 in this case), and then from PC2 you want to have an inbound connection through to PC1. So today the only working mechanism is hole punching. Hole punching over UDP has a high success rate, but it really depends on two things. First of all, they need a common server to exchange the information, so that they can punch the hole at the same time.
E
That means there's a timing constraint between PC1 and PC2, the traffic through to the other side to punch a hole, and also, because of this limitation, PC1, PC2 and the common server usually belong to the same entity. For example, PC1 is a webcam kind of device, PC2 is the software, and the common server is operated by that webcam kind of provider.
E
So in this case, in this diagram, because of the CG-NAT and the RG, it's difficult for TCP to actually punch through successfully, because of the synchronization, the kind of timing synchronization, and also some CG-NAT implementations or RG implementations. So TCP usually has a very low success rate in this case. So here I'm trying to solve this problem with the next one. So, next slide, please.
E
So this is the kind of function I implemented in the CG-NAT. Very simple: it just allows TCP/UDP incoming connections. The only thing is just a knob: actually, just do not change the destination port when traffic is incoming, which I'm going to illustrate in the diagram on the next slide. And it actually allows a chain of forwarding on the same network, from the CG-NAT to the RG and hence to the end device. Next slides.
E
So this is actually a kind of lab setup testing this. It's under NAT444. If you look at it, there's PC1, and then between the PC and the RG, which is the home gateway, the network is 192.168.20.0.
E
So the public address here is 201.1.1.10 and the port range assigned is 1024 to 1055. So usually, the way PC1 allocates the port is using UPnP or NAT-PMP with the RG: okay, I'll reserve the port over there, either a TCP or UDP port, and then the port will be forwarded to PC1. That is normal operation.
E
Here I added an additional procedure: I need to detect which port can be used. So, for example, here PC1 will send a kind of detection via a third-party STUN server; there are a lot of STUN servers in the network, in the internet, right now. You just detect this using a STUN server, and it will tell you: okay, now you are using this IP, the public address, and the port number is 1024.
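The STUN detection step can be sketched as follows: build an RFC 5389 Binding Request and decode the XOR-MAPPED-ADDRESS attribute from the Binding Response. This is an illustrative sketch, not the draft's text; in practice the request would be sent over the same UDP or TCP socket whose public mapping you want to learn, and the response bytes would come from a real server.

```python
import os
import struct

MAGIC_COOKIE = 0x2112A442  # fixed STUN magic cookie (RFC 5389)

def build_binding_request() -> bytes:
    # type=0x0001 Binding Request, length=0, magic cookie, 96-bit txn id
    return struct.pack("!HHI", 0x0001, 0, MAGIC_COOKIE) + os.urandom(12)

def parse_xor_mapped_address(resp: bytes):
    """Walk the attributes of a Binding Response and return the
    reflexive (public) IPv4 address and port, or None if absent."""
    pos = 20  # skip the 20-byte STUN header
    while pos + 4 <= len(resp):
        atype, alen = struct.unpack("!HH", resp[pos:pos + 4])
        if atype == 0x0020:  # XOR-MAPPED-ADDRESS
            _family, xport = struct.unpack("!xBH", resp[pos + 4:pos + 8])
            port = xport ^ (MAGIC_COOKIE >> 16)
            cookie = struct.pack("!I", MAGIC_COOKIE)
            ip = ".".join(str(b ^ m)
                          for b, m in zip(resp[pos + 8:pos + 12], cookie))
            return ip, port
        pos += 4 + alen + (-alen % 4)  # attributes are 32-bit aligned
    return None
```

For the example topology above, the decoded result would be the public address 201.1.1.10 with the externally visible port, which PC1 then reserves via UPnP in the next step.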
E
Okay, once you know that 1024 is the available port, then, step two, you actually use UPnP to reserve 1024 for this port in the RG, so anything sent to 1024 goes to PC1. And then in the CG-NAT, here, this is the new function, the knob I need to implement: whenever it sees some incoming connection to 1024, I mean that port number, within the range of ports assigned to that RG, it won't, it won't modify it.
E
It will just keep the port as it is and not change it in this case, because there will be other sessions actually created under the CG-NAT; you just make sure that this port number will not change for incoming traffic. And then from PC2, here, the original packet is sent to the public IP address, 201.1.1.10, which is going towards the CG-NAT, with the destination
E
port 1024, which is a TCP port, for example. And it will actually go through the EIPF, sorry, the CG-NAT, and with the EIPF function this one will leave the port unchanged. Okay, usually the port would change because of CG-NAT functions, but this port is now unchanged when the 1024 port actually reaches the RG.
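The CG-NAT knob just described can be modeled as follows. This is a hypothetical sketch, not the draft's specification: each RG gets a public IP plus a port-range slice (the addresses are taken from the slide), and inbound packets whose destination port falls inside a slice are forwarded to that RG with the destination port left unchanged.

```python
# public IP and inclusive port-range slice -> RG private address
PORT_SLICES = {
    ("201.1.1.10", 1024, 1055): "192.168.1.11",
}

def inbound_target(dst_ip: str, dst_port: int):
    """Return (rg_address, port) for an unsolicited inbound packet,
    or None for the classic CG-NAT drop."""
    for (pub_ip, lo, hi), rg_ip in PORT_SLICES.items():
        if dst_ip == pub_ip and lo <= dst_port <= hi:
            return rg_ip, dst_port  # destination port preserved
    return None  # no session and no slice: normal CG-NAT behavior
```

Because the port survives both translation stages, the RG's ordinary UPnP port-forward to PC1 completes the chain with no control protocol between CG-NAT and RG.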
E
E
So this is a very simple, easy method to actually make it work under CG-NAT as well, because right now, whenever we have CG-NAT, this kind of hole punching becomes a lot of trouble, and a lot of things like BT are not working. Actually, I mean, something is not working today because it's under two NATs, actually. So, next slide.
E
E
You turn on the knob, actually saying that for this address range, or this user, whenever assigned from this IP pool or whatever, it will have this behavior: that means it will allow incoming connections. When you see that, okay, there is, I mean, an unknown connection, an unknown session, arriving at the CG-NAT, it is passed towards the PC through the RG's direction.
L
E
That's the function. This is why the new function is not something... it's actually a procedure, and you need a knob to turn on in the CG-NAT to allow incoming traffic this way. Otherwise, it's just looking at the session table to see whether that session was already created or not, right? So right now you need a knob to actually enable this function, so that it will allow this traffic to pass through.
C
All right. It sounds like you actually need a config per RG to say whether the RG is offering services, and hence the CG-NAT must behave this way.
E
C
You're missing the point. So, Magnus's point, you're missing the point. What Magnus pointed out is that there is a current CG-NAT behavior, and what is shown on this slide differs from the current behavior. How does a CG-NAT know whether to do the existing or the new behavior? And it sounds like the answer is that it's a per-RG configuration of how the CG-NAT behaves.
E
Not really. Okay, so, okay, first of all, the RG is independent, right? The RG works; so long as it supports EIM, it will work. The problem, actually, is how the CG-NAT implements it. So if the CG-NAT implements it the way that follows the, I mean, the RFC, with EIM and all these functions...
L
E
L
E
E
L
E
E
So, basically, I mean, there's a difference between, I mean, you have a public IP address from the RG's point of view. Right, from our point of view, you have all the usable ports, from, I mean, from zero or one up to actually 65k. But usually with CG-NAT today there's, I mean, usually port-block allocation. That means you allocate a certain port range for a particular, I mean, public IP address; you specify a range, for example.
E
I mean, that depends on the port range, right? And sometimes the port range may be narrow, maybe even a single port, right? But it doesn't matter for, for this kind of, I mean, the draft: I don't care whether it's a PBA, a port range, or a single port. Having multiple discrete ports allocated is also fine, because I can just repeat the procedure to allocate different ports.
E
For example, this one is 1024; maybe the next one is eight thousand, then nine thousand; so long as the behavior is still the same as before, actually. So here it just tries to, actually, here, if it's 1024 to 1055, then it's actually picking one of the ports within this range; we detect one of the ports and then just allocate that port to itself, right?
E
There is another thing that I put in the draft, saying that, okay, you can let the user or the RG use an HTTP kind of method to retrieve the IP, the public IP, and then the port, or the public IP and then the port range. So here I use a very simple URL kind of thing, just using HTTP; you can integrate it in your software.
E
In the RG, okay, to get the port range; or you can use it in the web browser on your PC just to get your public IP address and the port range. Today it's easy to get the public IP address, but you cannot get the port number actually presented in the public, I mean, the network. For example, this port: maybe you can use it for BT. Okay, this public address and the port number you can use for BT, and then manually enter them into your games or the BT software. You can do it that way.
G
Slide six is not described well in the internet draft. In fact, I didn't see a discussion of it at all, and I'm confused about the order of events for getting the punch-through to happen and the incoming port to happen. So, slide four or five, I've forgotten now, but the slide Magnus commented on seemed to show the state after the hole punching and the RG communication and stuff had occurred, but there's no discussion in the draft of how that ordering happens.
G
E
Yeah, in fact, actually you can see it like, I mean, a series of port forwardings. For example, if you know port forwarding, right, in the home gateway: if you allocate 1024, whenever traffic goes to 1024, this port will be presented to PC1. Okay, I'll just say that between the RG and the CG-NAT there's no control plane protocol.
E
There's nothing. So here, the reason why I designed it this way: I don't want any kind of control protocol running between all these entities, PC1, RG and CG-NAT. There's no control protocol; that makes it more scalable in real deployments. So here, actually, the 1024, actually you can receive it like this.
E
If I see 1024, this one, I will send it to the RG, actually. Because, whenever you allocate, I mean, you have a private address, which is 192.168.1.11, which is assigned; when this one, 1.11, is assigned to the RG, immediately, actually, a kind of public IP address and port number, a port range, will be allocated as a copy,
E
a PBA, in this kind of allocation. And this 1024, actually, is part of the port numbers that can be forwarded towards the RG, that portion. That means this EIPF only enables this: anything within 1024 and 1055 is just forwarded to this RG, to this 192.168.1.11!
E
In this diagram, I mean, this is just as expected: whatever goes to these ports goes to this guy only. So you can see this is like a series of port forwardings only, I mean, just in the PC2-to-PC1 direction. It's a series. I'm not sure that answers the questions, but I hope.
D
C
I think I'm going to call time on this question; it'll be taken to the list. If you think the working group ought to work on this draft, or something like it, please post to the list or come talk to one of the working group chairs.
C
Yes, keep speaking.
R
Yeah, thanks. Hello, this is Pauline from China Mobile, and it's my pleasure to introduce the work on computing-aware networking. We had a CAN BoF at IETF 113, and there were some questions related to the upper-layer protocol. So that is the reason why I'd like to present this work, to see if there are any comments from the working group. Next slide, please.
R
First, the motivations: computing-aware networking aims at joint computing and network resource optimization by steering traffic to the appropriate computing resources, considering not only network metrics but also computing resource metrics and service variation. So it is to solve the problem that the closest is not the best: providing the best user experience of low latency, high reliability and a stable service experience when moving between different areas, based on the increasing development of integrated computing-network infrastructure such as CDNs and edge computing. Next slide, please.
R
R
The DNS was not designed for dynamic scenarios with fast changes of the service instance, and there is caching; it should also learn computing status, among other additional problems; and the load balancer should also learn about network status. So we also had two potential solutions: one is dynamic anycast, based on the edge routers getting the computing metrics and selecting the service instance; and there's also an on-path load balancer.
R
R
R
The first category is: what's the relationship between traffic steering and service deployment, discovery, and upper-layer protocols? The answer we got is that the whole process includes service deployment, discovery and traffic steering, and CAN assumes services have been deployed and discovered in multiple edge sites before traffic steering. On the one hand, CAN should coordinate with the upper layer for service deployment and discovery; on the other hand, the computing metrics collection in CAN may also benefit them. And the second issue is about the relation between CAN and ALTO; ALTO also solves
R
the problem of service instance selection, as an off-path solution. Like the DNS, what the load balancer replies to the applications or services before traffic delivery might not be optimal or valid after a handover, so multiple queries are needed, or some extensions to support multi-deployment, quick interaction, and integrating more performance metric information. And these are just the issues about the upper-layer protocol; more details can be found at the GitHub page. Next slide, please.
R
As a discussion point after submitting this draft: the instance selection mechanism should be another item, with some modeling, which we will continue working on, and a nice interface.
J
R
Yes, we had other issues, such as: maybe service deployment or service discovery, for those not requiring a very low-latency response, may use an upper-layer protocol. Because we got issues from the community, we would like to clarify the work and also gather more comments from the area.
J
Okay, I didn't get from that what you wanted the transport area to do. I enjoyed your talk, so if you want to follow up on the list, I think that would be a great conversation on the mailing list.
D
N
So I think, in my understanding of this: Peng was told to go get some feedback from the transport area, and he's doing that. It is true that, like, I mean, ALTO is in the transport area. I'm not sure there are a lot of ALTO proponents in the room, unfortunately. As the one responsible for ALTO, I will say that I think there are a few concerns about how that's going, and there are certainly similarities.
N
There's a fair amount of concern about the speed at which you want to do these updates, and the other issue with ALTO is that it has not really been deployed very much. And I think, I mean, I personally would like to see some assurance that we're not going down that same road again and doing another application-level pathfinder that no one deploys. So, I mean, thank you. I think on your first slide you had, like, use
N
R
Yeah, we have addressed some relationships with others in the gap analysis draft, and I think it might be kept open for more discussion; this discussion is also copied to the ALTO list. Yeah, thanks.
S
S
S
For us, and also, if you attended the big 5G event this past May, there was a Tier 1 provider talking about a similar concept; it's like the BIOD.
S
T
T
If you look at the mechanism proposed: ALTO is, and we describe this in the gap analysis, an off-path solution, and CAN is proposing an on-path solution, which is why it's positioned in the routing area. And the lessons learned from ALTO, I think, are very, very important to take into this, which is why we're focusing also, in the gap analysis, on looking deeper at systems like ALTO and on understanding where, for instance, the frequency and the dynamics of the updates are an issue with off-path solutions like ALTO. This is why CAN is proposing an on-path solution, where these frequencies of change of instance selection can be much higher; and in some of the use cases (you can see those in the use cases document) the higher frequency is required, and it's one of the benefits that CAN is promising, at least as a solution,
T
in the use cases. So there is a relation to ALTO in terms of the insights, in order to really prevent what Martin was referring to: going down the path of yet another similar solution which suffers from the same drawbacks. And I think that's why we're reaching out to the transport area, because the ALTO work is located and conducted in the transport area. There are a couple of possibly also transport-related, transport-protocol-related aspects which we haven't discussed more deeply
T
yet in CAN. But personally, in some of our own work, we've been looking into some of those issues that have to do with the existence of several service instances: how do you handle transport connection change really quickly? These may be things to come back to the transport area with later on, if this work progresses in the routing area. So it's another potential connection point. Thank you.
N
I'll try, I don't want to stand between you and dinner, I'll try to be brief. Right, so my personal mission is to figure out what the status of ALTO is in the real world. I have not completed that mission, but I will say that, like, issue number one is that it was designed for peer-to-peer, and that use case sort of imploded. They're trying to get over to CDNI and some other things that seem to make sense, and that draft just made RFC.
N
So I don't know how that's going to go. Yeah, so I would encourage you to engage with that community and find out what their pain points have been, in particular. And absolutely, if you have very specific transport questions related to, you know, metrics, to using transport-style metrics and so on: ALTO and IPPM are not particularly represented in this room, but are in the transport community and would have a lot of the knowledge points there. TSVWG is open.