From YouTube: IETF111-V6OPS-20210729-1900
Description: V6OPS meeting session at IETF 111, 2021/07/29 19:00
https://datatracker.ietf.org/meeting/111/proceedings/
A
You've
seen
the
agenda,
I've
posted
it
to
the
list.
It's
been
posted
at
the
itf
site
for
meeting
material
site
for
a
while
and
okay.
Now,
a
new
deck
is
being
shared.
A
A
Now
I
think
we
can
move
on
from
that.
The
first
agenda
item
is
deployment
status.
D
B
A
To
spend
about
10
minutes
on
or
15
minutes
on
this
ip
deployment
status,
draft.
A
So,
who
is
presenting.
D
So
basically,
this
is
the
update
of
the
deployment
status
about
ipv6
and
I'm
presenting
on
the
alpha
of
the
authors
named
on
this
first
slide
ron.
If
you
can
go
to
the
next
one.
D
Please,
okay,
so
history
of
the
draft
we
submitted
the
personal,
the
individual
draft
in
october
last
year.
We
had
a
couple
of
revisions.
Probably
you
remember
that
after
atf
110
there
was
the
week
requested
by
the
chairs
exchange
comments,
and
after
that
I
think
the
list
decided
to
adopt
as
a
work
group
draft.
D
D
Next,
one
please:
okay
status.
We
have
added
three
more
authors
from
the
previous
version.
Let's
say
that,
thanks
to
the
many
comments
we
have
received.
D: On the list we tried our best to review the draft entirely, and you see we have grouped all the comments received into four major areas: the first one being removing what is considered marketing or sales language; specifying a bit better the scope of the draft; removing some unreferenced or unproven advantages of IPv6 that were explained but probably not referenced the proper way; and introducing some missing references.
D
Next
one,
please,
okay,
so
you
see
here
the
main
differences
between
version
zero,
one
and
version
sorry
version
o2.
Basically,
we
have
maintained
the
overall
structure
of
the
draft.
We
changed
some
specific
sections
and
all
those
sections
are
highlighted
in
light
blue.
You
see
there,
I
would
say
main
intervention
in
the
text.
D
We
have
rewritten
the
introduction
and
we
changed
the
final
part
of
the
draft
that
was
called
call
for
action
into
something
that
probably
is
more
accepted
by
the
community,
which
is
common,
ipv6
challenges,
and
I'm
going
to
explain
just
in
the
next
few
slides
next
one.
Please
run
okay,
something
about
the
changes.
D: We have widened a bit the description of the surveys, and I'm going to talk about that in the next slide. Then we have differentiated between the usage of IPv6-only at the service layer, which we call the overlay, and in the physical network, so in the underlay; and by the way, the section about the underlay has been improved quite a lot thanks to the help of Gyan of Verizon. And we have better clarified, let's say, the description of the challenges, which are contained in the final section of the draft.
D
Somebody
let's
say
pointed
out
that
probably
it
was
not
completely
clear.
Let's
say
the
the
poll
that
we
did
last
year
didn't
cover.
Actually
the
majority
of
network
operators
so
say
that
looking
at
the
customers
of
the
operators,
we
have
interviewed
they,
the
number
is
quite
high.
D
Our
feeling
is
that,
probably
now
it
should
be,
let's
say
suitable
for
the
draft,
but
if
anything
is
still
missing
or
something
is
not
completely
clear,
we
are
open
to
discuss
again
and
we
are
also
proposing
if
you
think
that
they
could
be
even
enlarged
in
its
scope,
we
could
propose
a
poll
even
through
the
v6
ops
mailing
list.
Just
to
you
know,
get
more
feedback
and
more
input
next
one
please
now
challenges
I'm
not
going
through
the
entire
list
of
the
challenges
that
we
have
listed
in
in
the
draft.
D
But
basically
you
see
that
we
try
to
point
out
what
is
still
missing,
what
it's
perceived
as
the
probably
the
factor
that
is
still
hindering
the
deployment
of
ipv6.
We
touched.
Let's
say
almost
all
the
technology
and
operational
areas,
including
service
providers,
enterprises,
devices,
governments
and
then
moving
to
the
operational
side,
so
management
performance,
security
and
so
on,
and
next
one
please,
okay.
D: So, next steps. Again, we are completely open to receiving feedback and criticism, as usual. What is still missing in the draft when reviewing the text? Is there any operational input which is missing? In that case, feel free to just send an email and exchange any material that could be relevant for the discussion. And then one final question, which is again for the audience: whether it makes sense to share this analysis externally. I mean not just advertising yet another draft, but whether it could be meaningful to say, well, there is, let's say, an analysis.
C: Okay, and we are done, so I will keep sharing for the next draft, which is Pros and Cons of IPv6 Transition Technologies.
E: Thank you. So, on the one hand, this draft should be published as soon as possible to assist network operators with a stable document; on the other hand, our draft still has two incomplete parts. One of them is Section 5, which is currently a stub. It would be about benchmarking different implementations of all five IPv4-as-a-Service technologies.
E
However,
it
would
last
two
or
three
more
years
to
do
it
because
most
ifc
8219
testers
are
yet
to
be
implemented,
and
I
understand
that
this
is
way
too
long.
We
discussed
it
with
the
authors
and
with
the
working
of
chairs
that
we
may
not
wait
so
long
time,
so
it
must
be
left
out
and
we
recommend
to
put
a
pointer
to
a
new
draft
which
will
be
about
this
topic,
and
the
other
issue
is
more
more.
E
It's
not
so
easy
because
there's
a
undecided
question
in
section
4.2
it's
about
the
scalability
of
the
stateful
technologies
and
I
think
that
meaningful
results
can
be
produced
for
the
next
ietf
meeting
and
we
recommended
that
for
the
working
group
chairs
who
accepted
it.
However,
I
would
like
to
to
have
the
consent
of
the
anti-working
group
debate
until
the
next
ietf,
or
just
to
leave
it
out
and
put
a
pointer
for
that
too
and
start
the
working
loop
last
call.
E: Yes, so that should be measured, or rather could be measured. Of course, we are talking about the five IPv4-as-a-Service technologies, and of the five there are two that are called stateful, because they have state in the core network. One of them is 464XLAT, which is a combination of stateless NAT46 and stateful NAT64, and we have a methodology and an RFC 8219-compliant tester for benchmarking stateful NAT64 implementations.
E
However,
we
do
not
have
a
tester
for
dslide,
so
we
hope
that
representing
the
class
by
n864,
we
could
give
meaningful
answers
to
the
question.
But
it's
up
to
you.
If
you
accept
this
answer
or
not
so
please
go
to
the
next
slide
and
here's
our
draft,
which
was
proposed.
It's
benchmarking
methodology
for
stateful
nat
xy
could
be
six
four
and
another
as
a
gateway
is
using
ist
48,
14.
E: Could you go to the next slide? So the question is: what can we measure, and how can we use it for making a decision? What we can measure is how the number of connections stored in the stateful NAT64 gateway influences the throughput of the stateful gateway. So what parameters could we use? We can specify the exact port ranges for both the source port and the destination port.
E
And
if,
if
you
multiply
with
the
size
of
the
source
port
range
with
the
size
of
the
destination
voltage,
we
can.
You
can
calculate
the
number
of
combinations
potential
number
of
connections
to
be
stored
in
a
in
the
state
for
gateway.
So
I've
tested
this
tester
in
a
self-test
setup
up
to
400
million.
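A rough sketch of the arithmetic just described: the potential number of connections the stateful gateway must store is the product of the source and destination port range sizes. The concrete ranges below are illustrative assumptions, not values from the talk.

```python
def potential_connections(src_ports: range, dst_ports: range) -> int:
    """Number of distinct (source port, destination port) pairs the
    tester can generate, i.e. potential connections in the gateway."""
    return len(src_ports) * len(dst_ports)

# Example ranges in the spirit of RFC 4814 pseudorandom ports (assumed):
src = range(1024, 65536)   # 64512 possible source ports
dst = range(1, 50001)      # 50000 possible destination ports

print(potential_connections(src, dst))  # 3225600000, i.e. ~3.2 billion
```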
E: Yes, so this is just artwork, not actual results, but I could presume this kind of result. On the horizontal axis you can see the number of connections in the stateful gateway, let's say from one million to one billion, and on the vertical axis you can see the throughput in million frames per second. So I just go to extremes.
E: The blue one shows good scale-up, because the throughput doesn't really decrease with a huge increase in the number of connections, and the other is a very bad one, because the throughput decreases significantly. So we could see something like this, this kind of result. Could you please go to the next slide?
E: Yes, and the challenge is: what parameters should we use to provide meaningful results for the network operators? For example, the range of the number of connections, from 1 million to one billion: is that okay? And if you want more, how does it break down into source port range and destination port range? What frame sizes to use? What NAT64 implementations to use? I could use some free software such as Jool. Or is it enough to use 10 gigabit per second Ethernet, because I have nothing faster than that?
E: So it is very clear that we either go option A or option B. Option A means that we include the scalability test, and I hope to perform the measurements before the next IETF meeting; for this reason we delay the working group last call until that time, and we include the scalability test results. Option B: we leave out the scalability test, we just add the pointer to the new draft, and we can initiate a working group last call right now, I mean next week.
F: Thank you, and I'd like to thank you for the great work. It's going to be helpful to those of us in enterprises deploying, when we finally really do so, so I thank you in advance. I'm very much in favor of the scalability testing, and my question is: would that include CGNs? And again, thank you.
A: You are asking for NAT64 testing, would you not?

E: Okay, so sure, I can do both NAT64 and NAT44; I'm happy to do both.
G: Thank you, chair. I have a comment about lightweight 4over6. I think that it is stateful, not stateless, which makes it the same as DS-Lite.

C: Okay, any response?
H: I have a question about a couple of the previous slides, because I have seen some parameters which are accounted for. For any stateful device, and here we definitely have a stateful device, there are four primary parameters which should be counted: gigabits per second, packets per second, sessions per second, and the upper limit of active sessions. Four parameters.
H: The upper limit of active sessions is just memory and is typically not much related, but the previous three, and especially the one which is missing on all the slides, new sessions per second, really affect the performance. You cannot test just one parameter; it does not make sense.
H: You need to understand, for some particular scenario, for some particular use case, what the combination of sessions per second, gigabits per second and packets per second is, and try to establish all parameters at the same time. Only in this case will your measurement make sense, because if you measure just one parameter, it does not make sense: a stateful device is very much affected by all three.
E: Yes, I completely agree with you. I have shown you a link to a draft paper; in that paper I made some measurements. The measurement method, which I don't want to detail, has two phases. The first phase is what I called the preliminary phase, when we fill up the state table of the tester and also the connection tracking table of the stateful NAT device. I have made some experiments, and I found that the connection establishment rate is lower than the throughput.
E: I mean that if I have to fill up the state table, and every packet belongs to a new connection, the rate is lower than when the state table is already filled and I just send packets through existing connections. So I'm aware of this situation, and of course we can test different combinations.
E: However, my experience says that we may test only a few combinations, because we have so many parameters; if we take the Cartesian product — say one parameter may have three values, another may have four values, and we have five parameters, and we combine all the possibilities — it is such a huge number of measurements that it would take weeks or months to execute them.
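A rough sketch of why the full Cartesian product is infeasible. The parameter names, value counts, repetition count and per-test duration below are illustrative assumptions, not values from the talk.

```python
from itertools import product

# Hypothetical test parameters, each with a few candidate values:
parameters = {
    "num_connections": [10**6, 10**7, 10**8, 10**9],  # 4 values
    "src_port_range":  [10**3, 10**4, 6 * 10**4],     # 3 values
    "dst_port_range":  [10**3, 10**4, 6 * 10**4],     # 3 values
    "frame_size":      [64, 512, 1518],               # 3 values
    "implementation":  ["Jool", "other"],             # 2 values
}

combos = list(product(*parameters.values()))
print(len(combos))  # 216 combinations (4 * 3 * 3 * 3 * 2)

# Assuming 20 repetitions and ~10 minutes per binary-search throughput
# test, the full product alone is a month of machine time:
print(len(combos) * 20 * 10 / 60 / 24)  # 30.0 days
```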
H: You don't need too much, you don't need too much. You just need to assume a couple of use cases: for example, a mobile traffic use case, a fixed broadband traffic use case, maybe a couple of different scenarios. From my point of view, you should assume a few typical scenarios. Ten years back I had such experience; ten years back I had an understanding of the typical relationship between cps, pps and Gbps for mobile, and of the typical case for fixed broadband.
H: I am not sure that I know the current numbers, because traffic may have changed a little bit, but you should just assume a few typical use cases, maybe an additional enterprise use case, for example, a big corporation. Then, after you have assumed three or four typical use cases, you could test just those three or four use cases. No, no, not 100.
E: Well, I'm smiling, because this is what I hoped to learn from the working group. I'm an academic researcher, and this is what I don't have: the typical numbers, the typical use cases. So if you or others can tell me, I'm very happy with that, and I'm happy to make the measurements according to your parameters. I'm very happy with that.
I: Okay, just quickly, as a co-author, I want to say something about the first two questions that have been raised, the one about the CGN. We need to understand that this document is only about IPv6-only networks, so NAT64 is covered, of course, but CGN should not be covered. Now, I'm happy for us to include it if we want to use it just as a comparison point, but not as a main question of the draft, because then the draft is totally different, right?
I: The second point I want to make: if I understood correctly, it was mentioned that lightweight 4over6 is stateful, and it's not, because we are talking about state in the network, not in the CE, and in lightweight 4over6 the state is in the CE, in terms of performance. Okay, I just wanted to clarify that. Thank you.
C: Okay, I have one quick question, Gabor: have any of these test methods been socialized with BMWG?
E: I have just shared the method, and my minutes were over. I had already shared the draft maybe six weeks ago, and I got some answers on the BMWG mailing list, but at the meeting I didn't get any comments, maybe because they had run out of time. But maybe there were just no comments.
H: Recently we have seen a little bit more activity than is typical around the oversized packets and MTU problem. It's probably because we have developed, especially in the last years, a lot of new headers: IOAM, services, different extension headers, which, when applied in the network, are an additional challenge on top of typical tunneling, and which probably explains why we have seen a little bit more interest recently.
H: An additional problem, which may be old enough already but is not very well discussed, is the problem of equal-cost multipath. If you look at any current network, you will see that on many links you effectively have many links in parallel, up to 16; maybe four or eight is typical. I have personally seen, in one more or less big network, five hops
H: in a combination that gives 64 thousand different possible paths just between two points through the network: five hops, one hop with four parallel links, the next with eight parallel links, and additionally some ECMP between different nodes, and as a result, 64 thousand, a huge number, in such a situation.
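A back-of-the-envelope sketch of the path explosion just described: the number of distinct end-to-end paths is the product of the parallelism at each hop. The per-hop link counts below are assumed for illustration; the talk only says that five hops with 4 to 16 parallel links each can yield around 64 thousand paths.

```python
from math import prod

# Assumed parallel-link counts at each of the five hops:
parallel_links_per_hop = [4, 8, 8, 16, 16]

paths = prod(parallel_links_per_hop)
print(paths)  # 65536 distinct end-to-end paths between two points
```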
In such a situation the path MTU can change dynamically, and can change abruptly. It's an additional challenge, probably, which has appeared recently.
H: Maybe I need to mention that DC and cloud are more important now, and they typically generate much bigger packets than the normal subscriber; that's probably an additional challenge. And maybe I need to mention that on the receiver side, on the receiving host, we still have a default reassembly buffer of 1500 bytes, which may be limiting: between the minimum MTU for IPv6, 1280, and 1500, there are only 220 bytes overall, which again could be limiting in some particular situations. Let's go to the next slide.
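The 220-byte figure above is just the gap between two constants; a minimal check of that arithmetic:

```python
ipv6_minimum_mtu = 1280     # RFC 8200 minimum link MTU
default_reassembly = 1500   # default reassembly buffer size discussed above

# Headroom left for encapsulation / extension headers between the two:
headroom = default_reassembly - ipv6_minimum_mtu
print(headroom)  # 220 bytes
```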
H: We did an investigation into what the solution could be. Okay, it's a once-again investigation, because the discussion of what to do about oversized packets has probably happened many times already. But okay, let me go through what we have here. Of course, the best choice is to increase the MTU on your own network links and not have the problem of oversized packets at all. Okay, it's an excellent solution.
H: Unfortunately, it's not always possible, and the problem is not just something like, for example, outdated hardware; additionally, middleboxes, for example, typically don't support a big enough MTU. And it has some fundamental problems. For example, one big fundamental problem is buffer inefficiency.
H: If you have a cheap platform, something like a low-end router or almost any switch, okay, most of the switches: if it's low-end, cheap hardware, then the buffers are typically constructed in such a way that it's one packet, one buffer. There is no splitting, like ATM; there is no split and reassembly inside the memory.
H: Hence the memory can be occupied extremely inefficiently on the majority of low-cost platforms, and if, in this situation, you configure a big MTU, it means a 9K buffer will be occupied by, okay, maybe not 64, but even sometimes a 64-byte packet will occupy a 9,000-byte buffer. It's a fundamental problem which is not easy to solve, because it's a cost issue.
H: Of course, a high-end platform is capable of splitting a big packet into smaller chunks and putting those into buffers, and then memory use is efficient; but on low-cost platforms, if you increase the maximum packet size six times, you decrease the buffer capacity six times, and if the platform in general has 15 milliseconds of buffer at wire speed, then after a six-fold reduction, with about three milliseconds, any burst, from video, for example, will immediately get packets dropped.
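A small sketch of the "one packet, one buffer" arithmetic just described. The 9000-byte buffer, 64-byte packet and 15 ms figures come from the talk; treating the buffer-time reduction as exactly proportional is an assumption of this sketch.

```python
# A minimal packet still consumes one whole fixed-size buffer:
buffer_size = 9000   # bytes per buffer once the MTU is raised to ~9K
small_packet = 64    # bytes actually carried

utilization = small_packet / buffer_size
print(f"{utilization:.1%}")  # 0.7% of the buffer memory is actually used

# Buffering-time effect: six-times-bigger buffers means six times fewer
# of them, so 15 ms of buffering at wire speed shrinks proportionally:
print(15 / (9000 / 1500))    # 2.5 ms, roughly the "three ms" cited
```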
H: Hence it's difficult to solve this problem, and maybe the problem is organizational: for example, the link could be a third-party link, a link one telco has rented from another telco; then it's not possible to change the MTU. Or maybe the situation is that we have a non-Ethernet technology, wireless, for example, where the MTU in principle is not so big.
H: Next slide, please. Okay, an additional option which we potentially have: we could be careful about the attachment of additional information to the packet, be frugal. I see a lot of activity; in reality, most of the activity which I currently see in the IETF is about this particular solution, because I have already seen many publications on transferring MTU information: by IS-IS, by BGP-LS, by PCEP, by static policy.
H: If it's fully controller-managed, something like SR-TE, for example, then the whole path is well known to the controller, and then, of course, it's possible to predict the MTU. But if it's a distributed architecture, like we have right now in the majority of cases, where every node makes the next-hop forwarding decision by itself, then predicting the MTU across the ECMP calculation is extremely difficult, if possible at all.
H: Hence it's not a universal solution; it needs, okay, an upgrade of everything, but in some situations it could be useful. Maybe one additional problem which exists is that sometimes we attach headers not at a tunnel endpoint; there is no virtual tunnel interface where I attach the headers. It could be the case for BIER, no matter whether IPv6 or IPv4; it could be for IOAM, for example. It's not always a virtual interface.
H: If it's not a virtual interface, then such a mechanism would be much more difficult to implement. Hence, okay, maybe it's possible, but again it's not a universal solution. Go to the next slide, please.
H: It has been discussed many times already why fragmentation is very bad, but here in the draft we did an analysis of all the current tunneling specifications; it's about ten different types of tunneling: MPLS, VXLAN, and so on.
H: We have analyzed them. Unfortunately, fragmentation is mandatory for one corner case: if your packet is already the smallest one, already 1280, you cannot send a PTB message to the source. It does not make sense, because the source will not decrease the packet size anyway; it's already the minimum, it's not possible.
H: Hence, unfortunately, we need fragmentation for this corner case, and this situation is, unfortunately, properly explained only in the oldest tunneling RFC, RFC 2473.
H: That's IPv6 inside IPv6, the basic tunneling specification, which, by the way, is referenced by SRH; SRH just has a small comment that it is based on this particular RFC. In this RFC we have, in reality, a good explanation of what to do about this situation.
If
we
look
to
all
other
tunneling
specification,
then
in
majority
of
them
we
have
just
discussion
that
fragmentation
is
prohibited,
no
any
discussion,
what
to
do
if
it's
already
1280
smallest
pocket.
What
to
do
in
this
situation?
No
discussion
in
some
there
is
a
discussion.
H: There is a note that, okay, we have this situation. Of course, it does not make sense to discuss it too much here. I have references for why fragmentation is bad, because there are a number of good recent drafts and RFCs which explain why fragmentation is very bad. Please go to the next slide.
H: Path MTU discovery still looks like the best solution, the most universal one. Hence it's probably still the primary one against oversized packets.
H: It is, again, best described only in RFC 2473, the 22-year-old RFC, because only this RFC has a deep explanation of how to do, effectively, proxying of PTB. Okay, this particular RFC does not use the name "proxy"; that terminology is not in this RFC, but effectively it's a proxy, because what is explained in this RFC is: if you receive a PTB, a Packet Too Big message, from inside the tunnel, then you should not just change your virtual interface MTU. Additionally, you should recreate the PTB message and send it to the source. This way the real source, the real host, has a chance to change the size of the packet within one round-trip time. Because in many other cases, for the other tunneling technologies which I cover in this draft, unfortunately we have only the explanation that the MTU of the virtual interface itself should be changed.
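A minimal sketch (not the RFC text) of the tunnel-ingress "proxy" behaviour described above: on a Packet Too Big arriving from inside the tunnel, lower the tunnel MTU and also immediately re-originate a PTB toward the original source, reduced by the encapsulation overhead. The class, method names and the 40-byte overhead are assumptions of this sketch.

```python
ENCAP_OVERHEAD = 40  # outer IPv6 header; assumed encapsulation cost

class TunnelIngress:
    def __init__(self, tunnel_mtu: int):
        self.tunnel_mtu = tunnel_mtu

    def on_ptb_from_tunnel(self, reported_mtu: int, inner_src: str) -> tuple:
        # 1. Shrink the virtual interface MTU (where most tunnel specs stop).
        self.tunnel_mtu = min(self.tunnel_mtu, reported_mtu)
        # 2. The extra step described above: tell the real source right
        #    away, so it can react within one round-trip time.
        inner_mtu = reported_mtu - ENCAP_OVERHEAD
        return ("send_ptb", inner_src, inner_mtu)

ingress = TunnelIngress(tunnel_mtu=1500)
result = ingress.on_ptb_from_tunnel(1400, "2001:db8::1")
print(result)  # ('send_ptb', '2001:db8::1', 1360); tunnel MTU is now 1400
```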
H
No
any
immediate
feedback
to
the
to
the
real
source
should
be
sent
according
to
many
other
standards,
and
then
it
means
that
we
need
to
wait
next
packet
from
the
source
and
the
next
packet
would
be
dropped,
or
this
time
it
would
be
dropped
not
inside
the
tunnel.
This
time
it
would
be
dropped
in
front
of
virtual
interface,
and
then
we
will
have
chance
to
report
ptb
by
the
next
round.
Trip
report
ptb
to
the
source
and
source
could
could
shut
down.
This
ptu
discover
python.
H: Path MTU discovery is friendly to the massive equal-cost multipath which we have in networks right now, because even if the path MTU changes dynamically in some situations, it would still operate; it would still be possible to use it. Probably it's still the best solution. Please go to the next slide.
H: Additionally, some of you, of course, could point to packetization layer path MTU discovery, but packetization layer path MTU discovery itself claims that it's not a replacement for path MTU discovery. It claims that it's a good additional tool, but unfortunately it's scoped to one application; it's difficult to share between applications, and it may not work if we have massive drops from, for example, just an overloaded link. It has some other problems. It's not a replacement, but an augmentation for path MTU discovery.
H: The conclusion is just evident: if you have the capability to increase the MTU, if it's really possible, then just increase the MTU, and then it's fine, it's excellent. But if not, then probably path MTU discovery is your next choice as a solution. Fragmentation is probably not the solution at all, and packetization layer path MTU discovery is probably a good augmentation for this whole story. Of course, the document has much more discussion, I mean much more detail, but for an IETF presentation, that's probably it. Now, I see Gyan is asking questions.
J: Yes, yes, sure. You mentioned in your presentation various overlay technologies; there are, you know, quite a lot out there, as you mentioned, VXLAN, MPLS and others. There are so many overlay technologies that do require a higher MTU, so you end up, you know, having a larger overhead, at least on the transport side, so that the core and even the aggregation layers really need a higher MTU. Have you seen, maybe, with new overlay technologies, that that has really become the paradigm?
J
This
really
the
change
in
the
market
that
really
just
about
anything
you
you
have
out
there.
It
has
some
kind
of
overlay,
so
you're
always
having
some
kind
of
overhead
that
has
jumbo
frames,
at
least
from
maybe
from
on
the
internet,
and
because
I
guess
you
have
the
different
types
of
networks,
let's
say:
enterprise,
internet
or
even
let's
say
private,
mpls
or
segment
routing.
Have
you
seen
or
noticed
that
maybe
there's
less,
maybe
less
fragmentation
and
also
maybe
less
issues
related
to
with
I
guess,
dropped
and
yeah?
J
I
guess
drop
packets
or
issues
related
to
oversized
packets.
H: It looks like, no, the question is... it looks like a comment, right?
H: Gyan, thanks for your comment, because you have reminded me of something important which I had forgotten to mention, and which supports your idea even more. I could say that right now, because the data plane, the forwarding plane in general, has become much stronger compared to the control plane, we see a lot of movement of tasks and functionality from the control plane to the data plane in general. It's not just related to tunnels, for example; it's related to everything: to quality of service, to many things.
H: Typically this movement does not create a huge challenge for the data plane, because the data plane has become much stronger over the last 20 years, but the movement away from the control plane does two good things. One good thing is that the control plane becomes much, much simpler. There is a synergy here, because the data plane tolerates it while the control plane becomes much simpler. We see this story, and additionally, we can organize a number of new services which were not available before.
H: Hence, yes, I could even expand on your comment: it's not just about tunneling, it's about everything. We have, right now, the trend on the market to move many things from the control plane to the data plane, and that means we will need more room in the headers.
J: Thank you, thank you. I just had one last comment. With IPv6, you know, because you do have, comparing, I guess, apples to oranges, IPv4 versus IPv6: you're doubling the IP header, and then your maximum segment size actually shrinks. It's almost like you're doing GRE tunneling; you're adding an extra 20 bytes.
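The arithmetic behind that comment, as a quick check: the IPv6 base header is 40 bytes versus 20 for IPv4, so for the same link MTU the TCP MSS shrinks by 20 bytes. The 1500-byte MTU and 20-byte TCP header are standard baseline assumptions, without options.

```python
LINK_MTU = 1500
TCP_HEADER = 20   # TCP header without options

mss_v4 = LINK_MTU - 20 - TCP_HEADER   # IPv4 header = 20 bytes
mss_v6 = LINK_MTU - 40 - TCP_HEADER   # IPv6 base header = 40 bytes
print(mss_v4, mss_v6, mss_v4 - mss_v6)  # 1460 1440 20
```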
J
So
that's
that's
quite
a
bit
big
change,
and
now
you
have
all
these
extension
headers
so
that
that
overlay,
I
guess,
if
you,
if
you
look
at
it
from
you,
know,
comparing
to
let's
say
overlay:
ipv
sits
in
itself
just
by
itself
natively.
It
adds
a
tremendous
overhead
in
by
itself
and
you're,
not
even
taking
into
account
overlays.
Have
you
seen
it?
J
Maybe
a
trend
in
the
market
that
or
industry
that
has,
I
guess,
let's
say:
internet
service
providers
and
whatnot
start
migrating
and
pushing
towards
ipv6
everyone's
going
towards
super
jumbo
like
at
least
nine
thousand
or
more
or
higher
mg.
I
mean,
I
know,
let's
say
verizon,
I
guess.
For
example,
I
mean
we
we
ride
in
super
jumbo
like
throughout
verizon,
I
mean
I'm
sure
other
providers
as
well
like
lumens
and
some
of
the
major
tier
one
tier
two
tier
three
are
running
at
least
from
the
transports
perspective.
C: Okay, Jen.
K: The reassembly buffer is 1500, and I'm not sure if that's just a theoretical statement or whether you've seen any practical, real-life evidence of this, because, to be honest, in IPv4 this value is almost three times smaller, it's 576, right? But it only means the minimal number we can expect from hosts, and from what I saw, most modern operating systems easily reassemble packets up to 65K.
K
So
I
agree
that
rfc
2200
may
be
not
so
clear
about
like
yeah.
Maybe
a
bit
pessimistic,
I
should
say
about
horse
should
not
be
sending
more
than
1500,
but
I
don't
think
it's
what's
happening
in
canada
in
real
life.
So
I'm
just
curious
what
how
big
would
be
problem
you
describing
if
we
assume
that
hosts
are
capable
of
reassembling
big
packets?
K
It's
my
first
question
and
second,
I'm
just
curious,
my
again,
my
experience
being
that
you
assume
that
internet
mtu
is
1500
or
maybe
even
1280,
but
most
of
the
operators
I've
talked
to
which
have
overlays
in
their
backbone,
are
capable
of
running
much
higher
mtu.
So
mostly
all
these
small
mtu
links
are
interconnects.
H: Let me start with the second question, because it's much easier. For a long time there was a restriction at the Ethernet level, because any value of the type/length field of, if I remember correctly, 1536 or above — I have stated it in the document — is treated as a type, not as a length. For that reason, for a long, long time, 1536 was really a restriction, and for that reason it's still the case that many links, not just in the internet, everywhere, are provisioned as something a little bit smaller than 1536. Therefore the answer to the second question is definitely yes.
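The Ethernet rule just mentioned, as a one-liner: in the Ethernet II / IEEE 802.3 framing convention, a type/length field value of 1536 (0x0600) or greater is interpreted as an EtherType, and smaller values as a payload length.

```python
def interpret_type_length(value: int) -> str:
    """Classify an Ethernet type/length field per the 1536 boundary."""
    return "EtherType" if value >= 0x0600 else "length"

print(interpret_type_length(1500))    # length    (max classic payload)
print(interpret_type_length(0x86DD))  # EtherType (0x86DD is IPv6)
```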
H
It
still
is
still
the
case
in
many
cases,
because
it's
enough
for
mpls,
definitely
not
for
a
few
labels
for
many
labels
it's
enough,
but
as
we
start
instead
of
labels
to
put
to
put
in
enhanced
headers,
especially
such
long
as
between
beating
this
replication
beer,
it's
it's
already
not
enough
for
second
question
is
positive.
H
For
the
first
question,
I
have
two
answers
for
the
first
question:
the
first
answer:
it's
not
a
big
deal
that
we
have
the
just
220
bytes
between
reassembly,
buffer
and
minimum
packet
size,
because
it's
related
to
the
cost
and
majority
of
features
which
we
have
started
right
now:
a
service
xba,
whatever
it's
all
features
which
is
related
to
intermediate
nodes.
It's
not
for
cost.
Therefore,
this
particular
assembly
buffer.
H
It's
it's
stated
by
the
way
in
the
draft
that
it's
not
a
problem,
yet
it
probably
would
not
be
a
problem
for
many
years
to
come.
It's
it's
stated
in
the
draft.
It's
it's
a
positive
answer
that
it's
probably
not
not
yet
a
problem,
but
for
the
for
the
second
answer
to
your
first
question
would
be:
I
need
to
search
more.
I
need
to
investigate
more
because
I
am
not
sure
about
the
majority
operation
system,
linux.
What
we
have
on
the
market.
We
have
linux,
android,
microsoft,
windows.
H
I
need
to
search
more
because
I'm
not
capable
to
answer
the
first
question
properly.
I
will
answer
your
your
question
later.
I'm
not
ready
right.
K
I can tell you that I did a test yesterday: Linux, macOS 11 and Windows 10 were perfectly fine with 65k packets arriving; not under load, but theoretically. And actually, the funny thing is, I agree that RFC 8200 is a bit less clear than the IPv4 fragmentation RFC, because for IPv4 it clearly states that it shouldn't be a constant for the operating system, and the host might have a kind of contiguous buffer. So I would be surprised if modern operating systems do IPv6 reassembly differently than they did for v4.
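For context on what such a reassembly test exercises, the fragment arithmetic from RFC 8200 can be sketched; this is an editorial illustration, not the test the speaker ran. The 220-byte gap mentioned earlier is the difference between the 1500-byte minimum reassembly buffer a host must support and the 1280-byte minimum link MTU.

```python
IPV6_HDR = 40     # fixed IPv6 header
FRAG_HDR = 8      # Fragment extension header
MIN_MTU = 1280    # IPv6 minimum link MTU (RFC 8200)
MIN_REASM = 1500  # minimum reassembly buffer a host must support

def fragment_count(payload_len: int, mtu: int = MIN_MTU) -> int:
    """Fragments needed for payload_len bytes of IPv6 payload at mtu.
    Every fragment except the last must carry a multiple of 8 bytes."""
    per_frag = mtu - IPV6_HDR - FRAG_HDR  # room for fragment data
    per_frag -= per_frag % 8              # keep it 8-octet aligned
    return -(-payload_len // per_frag)    # ceiling division

print(MIN_REASM - MIN_MTU)       # 220, the gap discussed above
print(fragment_count(65535))     # 54 fragments for a maximum-size payload
```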
B
Okay, so basically this draft is to address an operational issue that some of us who are prototyping and deploying very, very large networks are running into, and it essentially comes down to adding a dedicated prefix for use in labs. Now, there are actually two drafts; I think we're only going to talk about the lab prefix here. There are two slides if we want to, but I don't think we'll have time, so could you go to the next slide?
B
Okay, so folks can go and take a look at the draft. The TL;DR is that we are asking, excuse me, to dedicate GUA space out of a deprecated prefix (it's a /7) that would be dedicated to prototyping and lab utilization, so that we can have GUA space that operates just like a normal production
B
Gua
based
network
would
but
allow
us
some
padding
and
some
other
niceties
for
building
it
in
a
safe
environment
and
also
allow
the
sharing
of
these
labs,
because
that's
becoming
more
and
more
common
between
folks
with
no.
There
needs
to
be
no
changes
to
the
lab
files
configurations
and
things
like
that.
B
So
in
the
past
it's
been
typically
done
with
you
know,
whatever
you
can
scrounge
up
to
cobble
together
what
space
you
need,
but
now
that
there's
been
more
frequently,
institutions
are
being
allocated
things
larger
than
slash
32s,
and
that
makes
it
very
hard
to
lab
things
up.
In
addition
to
that,
using
ula
space
doesn't
really
provide
a
terribly
useful
platform
because
of
rfc
6724
precedence
values.
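The precedence issue can be illustrated with a small sketch of the RFC 6724 default policy table: a longest-prefix match gives IPv4-mapped destinations precedence 35 while ULA (fc00::/7) gets precedence 3, which is why IPv4 wins over ULA in a dual-stack lab. This is a simplified illustration of destination lookup against that table, not a full RFC 6724 implementation.

```python
import ipaddress

# Default policy table from RFC 6724 section 2.1: (prefix, precedence, label)
POLICY_TABLE = [
    ("::1/128", 50, 0),
    ("::/0", 40, 1),
    ("::ffff:0:0/96", 35, 4),   # IPv4-mapped addresses
    ("2002::/16", 30, 2),       # 6to4
    ("2001::/32", 5, 5),        # Teredo
    ("fc00::/7", 3, 13),        # ULA
    ("::/96", 1, 3),
    ("fec0::/10", 1, 11),
    ("3ffe::/16", 1, 12),
]

def precedence(addr: str) -> int:
    """Longest-prefix match against the policy table; return precedence."""
    a = ipaddress.IPv6Address(addr)
    best_len, best_prec = -1, 0
    for prefix, prec, _label in POLICY_TABLE:
        net = ipaddress.IPv6Network(prefix)
        if a in net and net.prefixlen > best_len:
            best_len, best_prec = net.prefixlen, prec
    return best_prec

print(precedence("2001:db8::1"))       # 40: GUA falls through to ::/0
print(precedence("::ffff:192.0.2.1"))  # 35: IPv4-mapped
print(precedence("fd00::1"))           # 3:  ULA, so IPv4 is preferred over it
```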
B
So
if
you're,
if
you're
planning
to
do
v6
only
you
can
maybe
get
away
with
that,
if
you're
planning
to
do
dual
stack-
which
I
think
is
what
most
environments
are
looking
at,
it
becomes
an
impediment
because
you
can't
properly
test
a
dual
stack
environment
in
a
way
that
it
would
operate
in
real
life
with
gua
space.
So
we've
gone
through
and
addressed
a
bunch
of
the
the
things
that
have
been
come
up
on
the
list.
We
think
this
is
a
you
know:
a
dash
01..
B
So if you're building a dual-stack network, ULA does not function in a usable way. If you're building, say, a prototype network for a service provider, and you're dual-stacking everything and building it properly, like you would build a production network, you're going to use DNS. And when you use DNS, you always use the IPv4 addresses, which does not mimic reality when you're trying to lab things up. So ULA is de-preferenced.
C
I think this is the significant question. You're up.
J
So the motivation for this draft is, I guess, as we all know: we've had a chat, you know, on the mailing list related to HBH, the issues related to HBH, and being able to leverage HBH and use HBH for new services, but also the problems related to HBH and making it actually viable. So what we've done is we've repositioned this draft: I've removed anything related to solutions in this draft, and so it's really just a problem statement.
J
So that's really kind of the major update with this draft. So, with that: the HBH options header.
J
So
really,
as
I
mentioned,
the
big
thing
is
being
able
to
use,
make
making
it
viable,
making
hph
options
viable
so
and
it
is
a
valuable
container
for
hot
by
how
processing
of
new
services
ioam
baltimore
pmtvd
and
there
there
are
probably
many
many
others
that
would
be
coming,
and
would
you
know
more
to
come
as,
as
you
know,
hbh
really
becomes
a
viable
option
to
use
for
our
developers
the
hph
options-
header,
it's
really
rarely
utilized
in
current
operator
networks
and
really
the
big
issue.
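The container being discussed has the structure defined in RFC 8200 section 4.3: a Next Header byte, a length byte counted in 8-octet units (minus the first), and a list of TLV options padded out with Pad1/PadN. A minimal sketch of building one; the Router Alert option used in the example is from RFC 2711, and this is an illustration, not any vendor's implementation.

```python
def hbh_header(next_header: int, options: bytes) -> bytes:
    """Build an IPv6 Hop-by-Hop Options extension header (RFC 8200 s4.3).
    Pads the options area with Pad1/PadN so the total is a multiple of 8."""
    total = 2 + len(options)               # NextHdr + HdrExtLen + options
    pad = (8 - total % 8) % 8
    if pad == 1:
        options += b"\x00"                 # Pad1 option (single zero byte)
    elif pad > 1:
        options += bytes([1, pad - 2]) + b"\x00" * (pad - 2)  # PadN
    hdr_ext_len = (2 + len(options)) // 8 - 1
    return bytes([next_header, hdr_ext_len]) + options

# Example: a Router Alert option (type 0x05, length 2, value 0 = MLD),
# which PadN brings up to exactly one 8-octet unit; 59 = No Next Header.
hdr = hbh_header(59, bytes([0x05, 0x02, 0x00, 0x00]))
print(len(hdr), hdr[1])   # 8 0
```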
J
That's
kind
of
the
long
standing
of
the
problem
with
with
the
control
plane,
and
you
know
the
denial
service.
You
know
with
the
control
plan
and
the
punch
from
the
forwarding
plane
to
the
control
plane
so
that
that
is
the
big
and
undesired
impact
of
hbh
options,
and
I
guess
I
guess
in
the
industry
where,
where
a
lot
of
providers
even
on
the
internet,
just
you
know
just
due
to
fear
of
having
a
possible
ddos
attack,
block
hph,
so
even
valid
legitimate
hph,
really
can't
make
it
across
the
internet.
J
So
our
our
main
goals
and
purposes
basically
to
break
that
endless
cycle
that
results
in
hph
becoming
a
dos
vector,
enable
hbh
options
to
be
utilized
in
a
safe
and
secure
way
without
impacting
the
management,
plane
and
easy
deployments
of
the
new
hph-based
network
services
and
a
multi-vendor
or
operating
environment
that
can
now
be
deployed
without
operational
impact
next
slide.
J
So
we've
got
a
few
revisions,
so
we'll
just
go
through
some
history,
so
in
revision
two,
so
that
was
at
the
end
of
last
year.
We
removed
the
solutions
from
section
seven,
so
the
draft
is
focused
on
the
problem
statement
only
for
v6
ops
in
revision,
four,
which
was
june
30.
So
not
too
long
ago,
we
removed
the
proposed
processing
behavior
that
we
had
so
we
had
so
initially
in
december.
J
We
removed
the
solutions,
but
we
still
had
a
proposal
that
we
had
in
six
man
so
that
we've
removed
and
really
the
goal
of
this
draft.
Now
it's
it's
for
any
any
solutions
that
are
out
there.
We
want
to
have
the
work
group
kind
of
come
together
and
that
they,
everyone
is
in
agreement,
that
this
is
a
problem
that
we
want
to
solve
and
we
want
to
make
hvh
viable
and
so
this
this
draft,
the
goal
is
really
to
help
pave
that
way.
J
You
know,
for
everyone
to
you
know,
come
to
agreement,
make
this
a
work
group
draft
document
that
the
worker
can
work
on
and
and
then,
and
that
would
actually
help
pave
the
way
to
actually
any
solution,
not
just
the
solution
that
we
have,
that
you
know
possibly,
but
really
any
other
solutions
that
are
out
there,
including
like
bob's
solution
that
he
has
with
with
a
single.
You
know
a
single
tlv
options:
multi,
yes
or
limiting
the
number
of
options,
and
there
there
are.
J
You
know
other
viable
options
that
are
out
there,
but
we
want
to
make
it
open
but
really
come
to
a
consensus
that
this
is
a
problem,
and
it
is
a
problem
that
we
want
to
solve
in
the
last
revision
that
we
had.
It
was
just
recently
just
changed
the
section
seven
from
desired
processing
behavior
to
requirement
so
we're
just
kind
of
making
it
official.
So
this
is
an
official
document
and
we're
we're
really
ready.
I
think
you
know
with
the
changes
that
we've
made
in
the
last
few.
We
really
feel
this
right.
C
Well, I think the first action item is for everybody to read the draft that was posted on Monday, because that really makes a significant difference.
C
Okay, thank you all for attending. Barbara, thank you for helping with the minutes, and see you all on the list. All right.