From YouTube: IETF110-6MAN-20210309-1200
Description
6MAN meeting session at IETF110
2021/03/09 1200
https://datatracker.ietf.org/meeting/110/proceedings/
A: Next slide. Obviously, we're in the MeetEcho session; I hope Barbara and Suping are taking minutes, and our thanks to them. Minutes will be available, and there's the link to where the presentations are. I think MeetEcho also has a button that gets you to that. Next slide.
A: So we did something a little different for these sessions. We had two 120-minute sessions and a variety of session requests, so today we're doing the new Internet-Drafts, and drafts that have had some discussion but don't seem to have a lot of working group support. The goal is to give the speakers time to make the case for what they are proposing and to see whether the working group is interested. The second session, on Thursday morning (at least Thursday morning my time), will include the usual introduction and document status, a report from the SPRING compression design team, and then the active working group drafts. Next slide. So this is today's agenda.
C: Great, that's great. Okay, so this is about an IPv6 solution for the 5G edge computing sticky service. We presented this at IETF 109 and got quite a few comments, so we addressed those comments and came back with an update. Next slide, please. Okay, so this is just a brief background on the 5G edge computing project; this is 3GPP TR 23.748.

C: That project identified many mission-critical use cases for what they call the local data network (LDN), for the 5G core to achieve edge computing.
C: One of the requirements there is that those edge computing services are controlled by their application function controller, and they use anycast addresses. You can see here that, from the 5G core, user traffic is anchored to a PSA (PDU Session Anchor), which is part of the User Plane Function, and is always routed to an ingress router into the LDN, the local data network. The edge computing services, the anycast servers, are located in mini data centers in very close proximity to the UPF.

C: There are many of them, to achieve the mission-critical services. The assumption is that those servers are not directly attached but are not very far away, and anycast is used to address multiple servers. Actually, these may be application-layer load balancers; there may be multiple load balancers with multiple servers behind them, but from the network perspective only the load balancers are visible. Next slide, please.
C
Okay,
so
in
that
document
it
identifies
many
benefits
of
the
anycast
right,
basically
leverage
a
network
layer
to
provide
to
optimize
the
balancing
among
different
load
balancer
and
also
eliminate
the
single
point
of
failure
at
a
particular
load
balancer
and
avoid
the
stealth
entry
in
the
user
device,
because
some
of
them
don't
always
query
dns
when
they
change
from
one
location
to
another
location,
but
they
also
introduce
problems
as
well,
because
they
are
so
close
in
proximity
and
so
that
the
routing
distance
are
very
small.
C: The differences in distance to the different egress routers are not very large. The benefit is that any of them can serve the service, but you need to stick to a specific one if you move, and you can get an unbalanced distribution. Because UEs move frequently, even if you plan ahead and put three load balancers attached to three routers, UE movement can anchor everyone towards one particular site, causing that site to be over-utilized and the others under-utilized.
C: The sticky service rests on the assumption that not all services need stickiness, only the ones which need the network to optimize. Those register with the 3GPP core, the core recognizes those services, and then the network provides additional optimization to distribute them among the different egress routers.
C: So, using tunneling to achieve the sticky service: the ingress router Ra, based on the routing status distributed from the egress routers, creates a tunnel to the specific egress router that was selected, say R1. When the UE moves to another location, the router itself is able to distribute the sticky service ID, the flow ID, and the sticky egress address to the neighboring ingress routers. Those are statically configured.
C: Each router is configured with a set of nearby routers, so that when UEs move and anchor to their corresponding PSA, the user traffic stays with the sticky egress. Bear in mind that many of those applications are capable of handling different locations; at the application layer they can coordinate and move the session ID, but that takes time, and for some mission-critical services they would like the network to help them stick to the original egress to finish the session. Next page, please.
C: Another option is that the 5G core could coordinate with the ingress router. Because the session control in the 5G core handles session movement, it knows when a UE moves from site A to site B; the session control keeps a grace period for the transition and knows where the next UPF is going to be. So it can notify the router ahead of time that this particular UE is moving from PSA1 to PSA2, with the corresponding ingress router Rb. With this, router Ra doesn't have to send to multiple ingress routers anymore; it only sends the needed information to one. Next page.
C: Here we can use either the Destination Options extension header or the Hop-by-Hop extension header in the tunnel, so that when the traffic comes back, the nodes in the middle of the network know where to send it within the tunnel. They keep track of the information so that all the intermediate nodes are aware that this particular service has to be sent to the ingress router R1. Next page, please.
C: In the document we propose a sub-TLV, which we call the sticky distance sub-TLV. For the sticky service we have a sticky type, which indicates how strongly this particular service needs to stick to the original egress: some services strictly need to stick, some can stick loosely. And then the most important piece of information is the destination: basically, the egress address for the service to be stuck to. Next page, please.
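As a rough illustration of the sub-TLV just described (the session does not give the exact field widths or code points, so the layout below, including the TLV type value, is an assumption):

```python
import struct

def pack_sticky_sub_tlv(sticky_type: int, service_id: int, flow_id: int,
                        egress_addr: bytes) -> bytes:
    """Hypothetical sticky distance sub-TLV: a type/length header followed
    by the sticky type (strict vs. loose), the sticky service ID, the flow
    ID, and the sticky egress IPv6 address. Field widths are illustrative."""
    assert len(egress_addr) == 16  # IPv6 egress address
    value = struct.pack("!BII", sticky_type, service_id, flow_id) + egress_addr
    tlv_type = 1  # placeholder code point, not assigned by the draft
    return struct.pack("!BB", tlv_type, len(value)) + value

tlv = pack_sticky_sub_tlv(sticky_type=1, service_id=0x1234, flow_id=7,
                          egress_addr=bytes(16))
print(len(tlv))  # 2-byte header + 9 bytes of fixed fields + 16-byte address = 27
```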
C: They have an SLA level; here our sticky level is mapped to that SLA level. They have an application ID; here the sticky service ID is mapped into that particular field. Another one is their user ID, which can be the UE information, and then there's a flow ID. The sticky service sub-TLV is mapped into the service parameter sub-TLV; this just shows how those can be utilized with APN6. That document mentions using the Hop-by-Hop header to carry the information, or, if the local data network is SRv6, how to carry the information in the SRv6 header. All of that is applicable when we map into those application-aware IDs. Next page.
C: Okay, so we have proposed this in several working groups for different aspects. For 6MAN it is only to identify the Hop-by-Hop option header or Destination Options header to carry the sticky information; that's the work to be done at 6MAN. At the IDR working group it is to carry the information from the egress router: the egress router keeps track of all the packets to and from those particular anycast servers, the load balancers.
C: They also have information about the site capacity, because some mini data centers may have higher capacity than others, and you could also carry information about preference; for example, one mini data center may have higher network capacity or a higher preference than others.
C: That information needs to be propagated to the ingress router, so that the path computation can include more than just the round-trip delay. Typically in networking today, when we have multiple paths to a destination, we use a network parameter like the routing distance to determine which one is optimal. With this additional information, the ingress router will be able to combine it with the routing distance.
C: It can add the capacity and the site preference together and apply the combined weight to choose the optimal path. There's also a proposal for an OSPF extension that allows OSPF to carry that information to the nodes which care about choosing the path based not only on the routing distance but also on the weight of the egress. Yep.
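A minimal sketch of the path-selection idea (the talk does not specify the weighting formula, so the scoring function below, and the candidate values, are assumptions):

```python
def select_egress(candidates):
    """Pick an egress by combining routing distance with an advertised
    site weight folding in capacity and preference (higher is better),
    instead of using routing distance alone.
    Each candidate is (egress_address, routing_distance, site_weight).
    The score distance/weight is illustrative, not from the draft."""
    # Lower distance and higher site weight are both better.
    return min(candidates, key=lambda c: c[1] / c[2])

candidates = [
    ("2001:db8::1", 10, 1.0),  # closer, but a low-capacity site
    ("2001:db8::2", 12, 2.0),  # slightly farther, twice the capacity
]
print(select_egress(candidates)[0])  # picks 2001:db8::2
```

With distance alone, the first egress would win; folding in the site weight flips the choice, which is the behavior the speaker describes.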
D: Hi there. Yeah, so they have a draft in v6ops as well. They will be asking for working group adoption today, and, as we discussed in email, I don't expect to see the working group adopt it unless it conforms to the v6ops charter.
B: Well, the judgment so far has been that we haven't seen much discussion on the mailing list, so we are, you know, unsure of the interest in this work. To see whether there is sufficient interest in the working group, I suppose I could use the show-of-hands tool and see.
A: For Linda: has the work that this is related to, like the APN work, been adopted in another working group?
A: My thinking is that it would be more appropriate to ask 6MAN to adopt this once it's actually a real work item somewhere else; then it would be reasonable for us to evaluate, you know, the IPv6 extension headers you're proposing. Until that happens, I think it's a little early.
B: I did start a raise-your-hand session, so raise your hand or not, as you like; the poll is ongoing while we move on to the next presentation. Thanks a lot, Linda.
B: So, Alexander, I think you're presenting the next session. Please go ahead, and feel free to enable video as well, if you want to.
E: Slide, please. The contents of the presentation: I will mention only two points that are described in this draft, in the introduction a little bit of the problem statement for variable SLAAC, and one slide on the implementation of variable SLAAC. But in the draft there are more topics that have been given equal importance.
E: Then: what are the reasons for prefix lengths longer than 64 bits; then that the use of longer-than-/64 prefixes by ISPs is normally strictly prohibited, so we don't have a race-to-the-bottom problem; a brief comparison between static addressing, SLAAC, DHCPv6, and variable SLAAC; and then again some variable SLAAC use cases. I will now describe some points of the problem statement. Next slide, please.
E: Well, there are very many aspects of this problem. We selected only a few of them, which I try to illustrate with the figure in the center of the slide and the bullet points on the right. In the upper part of the figure you see the Internet and a GGSN, which is part of the 3GPP network and which advertises a /64 prefix to the UE, the user equipment.
E: Then this user equipment needs to extend the network beyond this single /64, so it could try to make a /65 prefix, or even a /66, for the subnets behind the user equipment. Such use cases are, for example, a mobile hotspot, or a subnet in an automobile or on other mobile platforms.
E: It is also impossible in implementations to use SLAAC with a prefix of length 65 in the Router Advertisement; in Linux, for example, this simply does not work. So the problem is that it is impossible to extend the network to multiple subnets beyond the user equipment. For a further description of the problem, we refer to the variable SLAAC problem statement draft by Mishra et al. that is pasted at the bottom of the slide.
E: The implementation that we wrote: actually, it was our co-author who took several months to make it and to progress it in Linux. It is just a local parameter, a sysctl.
E
The
default
value
of
this
parameter
is
zero,
which
means
that
the
slack
behaves
acts
as
before
uses
64-bit,
ids
and
prefix
themes.
But
if
an
operator
sets
this
value
to
1,
it
makes
that
prefixes
of
length
other
than
64
are
accepted
for
slack.
For
example,
a
host
that
receives
a
63
in
an
array
will
form
an
ia
in
interface,
id
of
length,
65
and
subsequently
an
address
of
length
128
bit.
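To illustrate the arithmetic just stated (a sketch of the general rule, not the kernel code): with a prefix of length P, variable SLAAC forms an interface ID of 128 - P bits, so a /63 leaves a 65-bit interface ID:

```python
import ipaddress

def slaac_address(prefix: str, iid: int) -> ipaddress.IPv6Address:
    """Form a 128-bit address from a variable-length prefix and an
    interface ID of length 128 - prefix_len (the variable-SLAAC idea;
    classic SLAAC fixes the interface ID at 64 bits)."""
    net = ipaddress.IPv6Network(prefix)
    iid_bits = 128 - net.prefixlen
    assert iid < (1 << iid_bits), "interface ID must fit the remaining bits"
    return ipaddress.IPv6Address(int(net.network_address) | iid)

# A /63 prefix leaves a 65-bit interface ID.
addr = slaac_address("2001:db8::/63", 0x1)
print(addr)  # 2001:db8::1
```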
E: We have also submitted it for consideration to the Linux maintainers, and we have some feedback from them asking about the status of this proposal here at the IETF. In OpenBSD, we have learned, there is an implementation of RFC 7217 that works fine with variable prefix lengths in SLAAC. Elsewhere variable SLAAC is not working; only 64-bit interface IDs are implemented, following the standards.
E: So that is the implementation that I was trying to present, and at this point I wonder whether there are any comments. Otherwise, I have an additional slide on what we discussed, not publicly but privately, with a few people about the potential next steps. But if there are any comments here, we authors are interested in hearing them.
G: I'm having a sort of deja vu, right; I think we've had this discussion a couple of times. But I have two questions. First of all, why are you calling the current OS behavior a bug? My understanding is that we are all here to discuss how operating systems should behave, and after we agree on something, that should be implemented.
G
If
operating
system
started
to
do
something
which
is
currently
prohibited
by
rfcs,
I
would
not
call
it
a
bug
right
if
they
start
doing
this
like
we
all
can
go
home
and
just
forget
about
making
any
standards,
because
operating
system
would
do
whatever
they
want,
and
I'm
also
a
bit
confused
about
the
draft.
I
read
it
and
in
the
beginning
of
it
the
draft
it
said
we
want
to
do
prefixes,
which
are
shorter
than
plus
64,
and
we
go
and
explain
why
and
yes,
we're
also
allowing
longer
prefixes.
G
But
operators
should
not
do
this
no
and
then
the
earliest
cases.
I
found
in
the
document
talking
about
longer
prefixes
and
I
could
not
find
any
compelling
use
case
for
shorter
one.
So
I'm
not
sure
why
we
need
this,
especially
providing
that
I'm
quite
sure
that
operators
would
not
listen
right,
they're
only
doing
360
for
now,
because
they
could
not
do
longer
as
long
as
you
allow
them
to
do
longer,
it
would
be
the
race
for
the
bottom
and
our
way
you
should
not
do
this
would
not
work.
E
On
the
calling
it
a
bug
report,
I
think
it
is
because
the
tool
of
submitting
proposals
of
improvements
is
called
a
bug.
Reporting
too
like,
if
I
remember
correctly,
but
okay,
probably
buggies-
should
not
be
used,
but
we,
we
could
also
see
it
as
a
suggestion
for
a
new
functionality
that
that
could
also
be
a
experimental
functionality
if
you
wish,
into
which
some
things
are
tried,
but
by
default
they
should
be
off
of.
E
With
respect
to
the
question
of
these
shorter
and
longer
prefixes,
okay,
now,
basically
I
agree
with
you
that
if
this
luck
allows
for
longer
prefixes,
then
operators
might
be
tempted
to
allocate
longer
prefixes
to
the
end
users,
and
that
could
create
the
raise
to
the
bottom
problem,
and
that
is
not
good
and
should
be
avoided.
There
should
be
something
some
tool
some
may
can.
I
don't
know
that
should
not
allow
for
this
to
happen.
I
don't
know
what,
but
now
for
shorter
than
64
shorter
than
64
prefixes,
then
that
is.
E: Maybe we could clarify that. And then the third point, to answer this, is that if this document does not progress or is outright rejected, not even accepted as experimental, if it is completely stopped, then in the implementation, I must say, I will again be tempted to use /66 prefixes inside the extended network. Because the operators will always allocate a /64, the only thing one can be tempted to do is to create /66s out of it. So that is conditional.
E: A few suggestions were made, and there's nothing private about this, but I really try to push this forward and I'm not clear how to proceed. One of the points that was listed is to make a liaison between the IETF and 3GPP, and maybe do most of this work at 3GPP, such that there is a requirement at 3GPP to advertise shorter-than-64 prefixes to the user equipment. That's one possibility.
E
Generic
tunnel
encapsulation
protocol-
I
think
that
is
a
3gpp
protocol,
so
these
two
concepts
could
work
together.
A
third
bullet
suggests
that
maybe
we
could
ask
ayana
for
a
sub-range
of
this
one
ffe
slash
three
space.
All
these
addresses
that
start
with
zero,
zero,
zero
binary,
which
is
not
subject
to
the
64-bit
bin
boundary,
and
this
this
could
be
performed
on
a
experimental
kind
of
activity
into
which
not
only
v-slug
would
could
be
used,
but
maybe
other
drafts,
and
maybe
a
little
bit
of
small,
longer
prefixes,
but
smaller
space
in
this
that
could
be
allocated.
E
Fifth
bullet
is
an
activity
that
was
started
at
the
earlier
iatf
on
a
64
share,
v2
cameron,
barn,
I
think,
propose
it,
but
it's
not
an
internet
draft,
and
maybe
it
could
be
possible
to
take
that
text,
put
it
in
a
real
internet
draft
and
submit
it
to
ietf,
then
the
next
bullet
make
a
lightweight
prefix
delegation
mechanism
for
nd
and
ndpd.
E: The next bullet is to use a method like the one in the Naveen draft, in which a host puts a specific request in an RS, a Router Solicitation, to request multiple /64 prefixes or a non-/64 prefix; maybe that's another protocol proposal. Then I know that Pascal Thubert has an activity proposal for IP over wireless, and since this happens on a wireless link, maybe bring this there. Also, Eduard proposes an activity, or a sort of concept, on next-generation SLAAC, and maybe variable SLAAC could be part of that next generation.
E
Slack,
that's
a
another
potential
and
then
again,
somebody
also
proposes
not
is
is
always.
Is
there
another
way
possible
for
this
to
to
proceed?
Is
there
another,
because
the
problem
is
still
there?
All
mobile
operators
allocate
a
64
to
a
user
equipment
and
many
user
equipments
need
to
extend
beyond
so
that's
the
that
is.
That
is
my
last
slide
yeah.
I.
B: I'm all ears now; I listened. You seem to be missing that, at least in HOMENET, this was also discussed, you know, many years ago, and the solution they at least implemented was just to use DHCPv6-PD, which supports other prefix lengths just fine. At least there you have a solution that is widely implemented and supported in all equipment.
H: Go ahead, yeah. So I actually had a question about the modem vendors blocking DHCPv6-PD. Do you have data on why they do this? Because it's not written in any standard that they should do this, and in fact the 3GPP standards, release 10 and above (and now we're on release 15 or 16), do support DHCPv6-PD; it's a supported part of 3GPP.
H
So
the
other
thing
is,
you
know,
given
that.
I'm
not
sure
that
you
know
there's
actually
a
problem
solved
here
if
your
primary
use
case
is
to
be
able
to
assign
a
larger
than
a
shorter
than
64
prefix
to
a
mobile
node,
that's
already
supported
by
this
gpp
standard.
This
is
a
deployment
problem
and
writing.
A
new
draft
is
unlikely
to
affect
the
the
reasons
why
it
is
or
is
not
deployed.
E
Yes,
so
with
respect
to
why
some
or
most
mobile
model
manufacturers
block
dhcp
v6,
because
that
is
what
they
block,
they
don't
block,
in
particular
the
pd
part
of
the,
but
the
http
v6
blocking
the
http
v6
means
blocking
the
port
numbers
of
the
http
v6
and
or
blocking
the
multicast
part
of
the
http
v6.
These
are
the
two
things
that
are
blocked
by
various
ones.
Now
why
they
do
that,
we
have
a
list
of
reasons
why
they
do
I
mean
we
speculate.
Why
they
do
we
don't
know
exactly
why,
and
even
some
people
are.
E: One reason is that any open port constitutes a security hole, and multiplied by the number of smartphone devices sold, that amplifies the security exposure a lot.
E
Another
is
that
these
model
manufacturers
have
heard
from
itfm
from
others,
which
is
a
old
wisdom
at
idea
that
in
ipv6,
the
way
to
configure
addresses
is
slack
and
not
dhcp.
E
So
that's
all
of
the
wisdom,
but
it
still
persists
in
many
circles,
and
I
think
that
these
are
the
main
reasons
that
I
have
heard
about,
but
that
that
is
the
way
it
is
that
that
is
the
way
it
is
in
practice,
and
probably
new
releases
will
change
things,
but
it
is
now
several
years
that
operators
allocate
only
64
to
end
users.
H: This is just a deployment problem, right? If the operators want to use DHCPv6-PD or anything else, that's there for them to use right now. Writing another draft or another standard is not going to affect that, right? I think, yeah.
E
Yeah
yeah
yeah-
it
is
true.
It
is
also
suggested
to
do
to
make
this
more
of
a
3gpp
document
or
a
requirement
at
3gbp,
and
my
reply
to
that
is
that
sometimes
3gpp
does
refer
to
internet
drafts
when
writing
their
own
requirements,
and
so
that
is
a
probably
a
possibility.
I
mean
yeah
we
could.
We
could
try
to
to
do
that,
rather
than
then
pushing
this
forever
at
itf
and
yeah.
It's
a
good
suggestion
that
we
take
into
account
we.
We
will
also
look
into
that
3gpp.
I: Okay, good, because it shows there is some error there. Okay, thank you, chairs. This is a very new draft, a version-00 draft, and in it we introduce an associated channel over IPv6; for short, we name this associated channel ACH, and the current scope is IPv6 networks.
I
Is
we
see
that
from
from
ipv4
ipv6
ipv4
to
mprs
and
from
ampers
to
fpv6
and
currently
nowadays
the
ipv6
provides
connectivity
in
many
new
emerging
and
also
the
lexi
net
networks
and
in
all
these
scenarios,
actually
the
ip
ip
services
required
higher
quality
of
the
rsa
guarantee
and
rather
than
the
best
effort,
and
we
see
the
segment
routing
over
ipv6
provides
optimized
route
for
service
forwarding,
while
the
routine
programming
on
srh-
and
we
introduced
this
ach
and
to
provide
the
control
and
management
program,
programming
capabilities
and
to
the
to
the
service
forwarding
and
next
please
and
later
on.
I
We
will
have
some
examples
to
show
the
applique
applicability
yes
and
the
ach
architecture
and
and
the
the
cloudiness
middle
is
the
represent.
The
it
is
ip
network
and
then
the
two.
I
The
there
is
a
black
link,
black
line
between
the
first
node
and
the
fourth
node,
and
it
represents
the
it
is
ip
pass
and
user
data
is
transmitted
in
the
along
the
ipads
in
the
in
the
yellow
arrow,
and
this
blue
arrows
are
the
associate
channel
created
to
the
ip
ip
pass,
and
it
is
if
you,
if
you
look
at
one
error,
it
is
a
control
channel
and
it,
and
this
associate
channel
is
connected,
is
associated
to
ip
forwarding,
pass
and
carrying
the
messages
of
the
control
and
management
protocols
and
aiming
to
provide
the
control
management
management
functions.
I
And
if
you
see,
if
you
look
at
the
the
right
bottom
graph
there
and
it
shows
where
the
sh
in
the
in
one
network
node
and
the
control
and
management
planes
can
control
element,
planes
generate
the
control
and
management
messages
and
carried
it
in
the
associated
in
associated
channel
and
transmitted
in
the
data
plane
in
next
page.
Please.
I
Yes,
in
the
drafted,
we
defined
the
ach
as
a
t
as
a
trv
format
and
the
first
type
specified
it
is
a
control
channel
for
one
specific
ip
pass,
and
this
the
channel
type
specifies
the
type
of
the
control
or
management
protocols
to
identify
the
the
ip
pass
and
also
associated
the
ip
pass
to
the
ach.
I
The
associated
channel
id
is
identified
there.
It
is
defined
there
and
the
control
and
management
messages
can
be
carried
in
the
in
the
fixed
messages
in
the
in
the
value
field,
and
this
ach
flat.
This
is,
it
is
a
htrv,
so
this
drv
can
be
flexible,
encapsulated
in
the
ipv6
extension
headers,
including
the
doh
hubbar
hub
or
srh.
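As a hedged sketch of the shape just described (the session gives the fields but not their widths or code points, so the sizes and the TLV type value below are assumptions):

```python
import struct

def pack_ach_tlv(channel_type: int, channel_id: int, message: bytes) -> bytes:
    """Hypothetical ACH TLV: a type marking it as an associated-channel
    TLV, a length, the channel type (which control/management protocol),
    the associated channel ID, and the protocol message in the value
    field. Field widths are illustrative only."""
    value = struct.pack("!HI", channel_type, channel_id) + message
    ach_type = 0x42  # placeholder TLV type, not an assigned code point
    return struct.pack("!BB", ach_type, len(value)) + value

# A delay-measurement message on channel 100 (made-up values).
tlv = pack_ach_tlv(channel_type=1, channel_id=100, message=b"delay-probe")
print(len(tlv))  # 2-byte header + 6 bytes of fixed fields + 11-byte message = 19
```

The same TLV bytes could then be placed in a Destination Options, Hop-by-Hop, or SRH container, which is what makes the encoding reusable across the examples that follow.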
I: It can also be encapsulated as payload in a synthetic packet. I will use the following examples to show the end-to-end application and the hop-by-hop applications. Next page, please.
I
Yeah
we
identified
our,
we
give
just
two
expand
examples.
One
is
the
we
we
call
it.
We
use
it
for
for
the
unified
oem.
Here
we
give
three
protocols.
I
We
list
three
particles
that
used
for
the
oem
functions,
and
we
also
identifies
these
several
problems
that
these
protocols
with
this
protocols,
for
example,
these
protocols
are
designed
for
to
perform
different
functions,
different
oem
functions,
but
they
also
have
overlapped
functions
and
they
use
different
session
identifiers
and
also
deep,
dating
capitals
lead
in
the
ipip
packet
and
if,
if
there's
and
and
because
they
are
defined
for
this
defined
as
an
end-to-end
session.
So
the
intermediate
node
is
not
aware
by
the
end
and
end-to-end
session.
I
So
we
try
to
come
up
with
a
simple
solution
to
carry
these
oem
messages
in
the
ach,
because
the
ach
is
a
tlb
encapsulated
in
ip
layer.
So
this
so
this
info
this
all
these
oem
messages
are
encapsulated
in
in
pure
a
epilator
and
it
to
reduce
the
the
number
of
particles
sessions
and
also
unified
the
session
identifiers
yeah.
The
figure
below
it
shows
that
there,
the
example
is
that
if
we
encapsulated
delay
management
measurement,
the
ach
tlv
inside
the
ipv6.
I
Actually
I
actually
used
a
doh
header,
and
this
and
this
ach
this
d,
s
h,
t
always
is
encapsulated
in
the
ip
layer
and
transmitted
from
r1
to
r4
when
r4
receive
it
and
it
will
process
the
htrv
as
it.
Since
it
is
the
last
it
is
the
destination
of
the
ap
pass
and
we
will
need
to
receive
the
htrv
it
processed,
the
the
the
tre
and
measure
the
delay
next
page.
Please.
I
Yeah
the
second
case
it
it
it
is
more
complicated,
but
it's
very
easy
to
understand
from.
Actually
there
are
two
associate
channels
used
here,
and
the
first
one
is
are
what
we
generated
this:
this
fault
management
probe
and
to
r4,
and
this
this
is
this.
I
This
photo
management
ach
trv
is
encapsulated
in
the
ipv6
helper,
help
extension
header
so
that
each
head
each
note
will
process
this
trv
and
if,
if,
for
example,
if
r3
detect,
there
is
a
signal
degradation
on
the
link
and
it
can
simply
set
the
flag
when
it
processes
the
trv
and
indicate
that
indicate
the
the
the
arrow
to
the
to
the
to
the
following
and
when
r4
received
this
indication
and
it
it
generates.
I
Another
protection
switch
request
to
r1
to
ask
ask
around
to
switch
the
the
forwarding
pass,
but
this
message
is
is
using
another.
This
message
is
using
another
associate
channel
and
this
in
on
this
associate
channel.
It
is
end
to
end
the
the
t.
The
s
hdrv
is
encapsulated
in
the
doh
header,
because
it
is
a
message
sent
from
end
to
end
to
tell
r1.
I
There
is
not
to
tell
I
want
the
the
request
of
the
switch
and
you
you
can
see
they
use
different
associate
channel
and
with
different
associate
channel
id
and
also
specify
the
different
channel
type.
I
And-
and
we
yes
actually
so
far,
we
have
already
received
a
lot
of
comments
and
suggestions
to
about
this
draft.
So
we
would
like
to
have
more
discussion
on
this
topic
and
to
refine
this
sh,
how
it's
used
in
ipv6
network
and
expect.
Maybe
since
ip
since
segment
routing
srv6
is
a
specific
type
of
ipv6.
I
We
may
also
want
to
specify,
may
maybe
specify
the
ac
how
ach
used
in
on
how
a
sh
used
over
srv6,
maybe
in
another
draft-
and
we
also
see
there-
are
different,
depending
on
the
applications
that
the
the
ach
can
be
used
in
different
ways.
So
we
better
would
better
to
to
separate
the
draft
to
specify
the
application
used
in
ach
yeah,
and
I-
and
I
also
want
to
say
that
actually
sh
is
not.
Maybe
I
have
two
examples
here,
but
ash
is
not
designed
for
only
for
oam.
I: We would like to have the ACH carry the control and management messages for different user applications. Yes.
A: Sorry, are there any questions?
I: Yeah, actually, we want to present this. Go ahead.
K: Well, yeah, I wanted to ask about the relationship to other IOAM work. I know there's some stuff in IPPM and other things.
I: Yeah, actually, if you ask me about the relation between the ACH and IOAM: I think IOAM is one of the cases that can make use of the ACH; it can be one of the channel types of the ACH. I think the big difference is that IOAM is designed only for OAM.
I
But
if
you,
if
you
just
look
at
the
encapsulation,
they
are
maybe
they
are
similar,
but
the
the
the
the
the
design
background.
The
I
mean
the
design
behind
this
two
technology
are
different
because
we
see
there
are,
there
are
so
many
control
and
management
messages.
I
There
are
requirements
to
carry
these
control
and
management
messages
on
the
ip
layer.
If
we
don't
have
a
signal
routing,
but
we,
but
we
we
don't,
have
a
we
don't
we
we
don't
have
some
some
of
them.
They
don't
have
any
signaling
protocol
to
carry
them
so.
I: I mean, the ACH is designed to be open to any application that you think should be used at the IP layer, yeah.
I
Iom
is
one
one
of
the
type
of
sh.
Currently
I
I
I
don't
have
it
in
the
draft,
but
I
have
some.
I
have
something
in
mind
to
to
use
it
to
use
sh
in
other
other
application,
but
but
yeah
just
under
the
discussion.
So
maybe
maybe
in
future
that
we
we
can
have
more
application
use
cases
there.
I
I'm
I'm
not
sure
that
I
I
don't
have
this
a
mind.
So
currently
I
mean
from
my
site:
it's
it's
not
so
related
to
apn.
I: Currently, we don't want to have this APN capability between the hosts.
A: Okay, thank you. All right, I think we are about out of time, so thank you. I guess you have another talk.
I: Yeah, actually, this topic includes two drafts. One is the draft named segment routing for redundancy protection, in the SPRING working group, and the second, this one, is the SRH extension for redundancy protection. I will give the introduction for both of them.
I
Next,
please
and
just
a
short
introduction
to
to
this
redundancy
protection,
what
it
is
so
actually
the
the
service
per
protection
comes
from
one
of
the
three
technology
technologies,
defined
techniques
defined
in
deterministic
networking
in
the
dead
networking
group,
and
also
we
see
there
are
also
requirements
for
providing
strict,
end-to-end
reliability
to
the
services.
I: It uses the replication, elimination, and ordering functions. We have a very simple example scenario there: we name the RED node the redundancy node and the MGR node the merging node. When a flow comes to the redundancy node, the flow is replicated into two copies, and these two copies go through different paths to the merging node.
I
The
first
one
will
go
to
the
e
will
go
from
redundancy
node
to
r3,
to
margin
node,
and
the
second
is
to
r4
to
match
node
and
the
first
received
packet
with
this,
with
the
sequence
number,
with
the
the
same
sequence,
number
and
packet,
the
first
package,
with
the
same
sequence,
number
will
be
transmitted
from
merging
note
to
r2
and
the
other
one.
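The merging node's elimination step can be sketched as follows (flow-ID and sequence-number semantics as described above; keeping the seen-set unbounded, rather than using a sequence window as a real implementation would, is a simplification):

```python
def make_merger():
    """Duplicate elimination at the merging node: forward the first packet
    seen for each (flow_id, sequence_number) pair and drop later copies.
    Real implementations bound this state with a sequence window; this
    sketch keeps it unbounded for clarity."""
    seen = set()
    def deliver(flow_id, seq, payload):
        if (flow_id, seq) in seen:
            return None          # replica already forwarded: eliminate
        seen.add((flow_id, seq))
        return payload           # first copy: forward on towards R2
    return deliver

deliver = make_merger()
print(deliver(1, 10, "pkt-a"))  # first copy arrives via R3 -> "pkt-a"
print(deliver(1, 10, "pkt-a"))  # replica arrives via R4 -> None (dropped)
```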
I: Yeah, to support the redundancy protection, we actually define four pieces of information. The first is the redundancy segment: it performs the packet replication function on the redundancy node and is associated with a redundancy policy, actually a variant of the SR policy; in the case of SRv6 we define a new behavior for it. The second piece of information is the merging segment, which is similar to the redundancy segment.
I
The
map
package
of
amni
alignment,
elimination
on
the
merging
node
and
new
behavior
and
m
is
defined
and
to
identify
the
unique
flow
and
and
identify
the
packet
sequence
with
within
one
flow
that
we
define
the
flow
identification.
The
sub
and
sequence
number
there,
and
in
this
drafted
we
extend
the
srh
option,
optional,
tre
to
encapsulate
encapsulate
them,
and
the
last
information
is
that
we
define
this
redundancy
policy.
Actually,
it's
a
variant
of
sr
policy.
I
The difference is that we have more than one ordered list of segments, and all these lists of segments are used at the same time. That's the difference from a normal SR policy. The text in blue is what we have in the draft in this working group.
I
Here we take an example to show the redundancy protection process. We have two different choices for deploying this process, and the difference depends on where you want to assign the flow ID and generate the sequence number.
I
The first choice is to carry them over the whole SRv6 forwarding path, and the second choice is to carry the two pieces of information only between the redundancy node and the merging node. I won't go through all of these headers; let's just focus on the TLV shown in orange, which carries the flow identification.
I
The flow identification and the sequence number are encapsulated in the TLV in the IPv6 SRH, and this information is carried to the merging node.
I
Let me think whether I missed anything. By using this information, the merging node forwards the first packet and drops the redundant packet, and that behavior is defined.
I
That is the behavior defined for the merging segment on the merging node. What I want to explain here is that there are two different choices, but no matter which choice, this information is encapsulated in the IPv6 header that the merging node processes for the merging segment, so the TLV and the merging segment are always encapsulated together in one IPv6 header.
I
Here is the very simple TLV we defined to carry these two pieces of information. I also see discussions on the mailing list suggesting that the IPv6 flow label could be used to identify the flow ID, so I would like to have more discussion there.
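As a rough illustration of what carrying these two values in an SRH TLV could look like: the transcript only says the TLV carries a flow identification and a sequence number, so the field widths and the type value below are assumptions for illustration, not the draft's actual encoding.

```python
import struct

# Hypothetical layout: Type (1 byte), Length (1 byte),
# Flow ID (4 bytes), Sequence Number (4 bytes).
TLV_TYPE = 0x80  # placeholder value, not an allocated code point

def build_redundancy_tlv(flow_id, seq):
    """Pack the assumed flow-ID/sequence TLV in network byte order."""
    return struct.pack("!BBII", TLV_TYPE, 8, flow_id, seq)

def parse_redundancy_tlv(data):
    """Unpack the assumed TLV and return (flow_id, seq)."""
    t, length, flow_id, seq = struct.unpack("!BBII", data[:10])
    assert t == TLV_TYPE and length == 8
    return flow_id, seq

tlv = build_redundancy_tlv(0x1234, 42)
assert parse_redundancy_tlv(tlv) == (0x1234, 42)
```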
I
Currently I don't have any strong preference between having it in the TLV or in the IPv6 flow label, but I think it is also affected by the choice we made on the last slide.
I
If we use it only between the redundancy node and the merging node, I think it's okay to use the IPv6 flow label. But if you want to have the flow identification from the ingress of the SRv6 domain, that means the flow label will be used both for ECMP and for this redundancy protection, so there would be some conflict there, I think. Next, please.
I
I
I mentioned the IPv6 header flow label because there is some discussion suggesting the flow identification can be carried in the flow label of the IPv6 header.
I
If I have a preference, I prefer to use the SRH optional TLV, because, as I have already mentioned, if you use the flow label in the IPv6 header and this information is encapsulated at the ingress node, then the IPv6 flow label can only be used for this function and not for ECMP in other parts of the forwarding path.
K
Thank you. I have a multi-part question. My scan of the document is that you want to allocate some new TLVs from within the segment routing header.
I
K
And the IANA registry for that looks like it's IETF Review, which means this doesn't have to be done in 6MAN; it could be done by SPRING or somewhere else. I'll leave it to you, the chairs and the group to decide whether it belongs in 6MAN, but I'll just observe that I think it doesn't have to be done here in order to get your allocation.
I
E
L
I'd make the same comment as Eric on where this could live, but since that's already been made, I just want to make a comment on the flow label. If you do want to move this flow ID into the flow label: load-balancing nodes are supposed to take into account more than just the flow label when they determine how they're going to perform ECMP or UCMP load balancing.
I
I think I agree with you. ECMP and UCMP nodes should have the capability to use more than just the flow label.
A
F
I
Actually, we have some history here: we first proposed this in DetNet, and after a few meetings they proposed that we should move it, because the work now includes the segment definitions, how to encapsulate this metadata, and also the SR policy definition.
I
So they shifted this work from DetNet to SPRING, and if the chairs agree that we can have the IANA allocation in the SPRING working group, I think we can just focus on SPRING, and then after it's all done we can come back to DetNet if necessary. Actually, this work was proposed about two years ago, to DetNet first.
I
A
B
B
M
Can you hear me? Okay, we can hear you. Okay, this presentation is on the OMNI adaptation layer. Go to the next chart, please.
M
What it is is an overlay interface configured over multiple underlying interfaces. If you look at the diagram on the left, from RFC 5558 back in 2010, you can see the thing called the VET interface, which sits over the underlying interfaces. Then later, in 2016, RFC 7847 drew a better diagram, and you can see that the OMNI interface there is at a layer below IP but above the underlying data-link interfaces.
M
So what about the OMNI interface characteristics? It's an ordinary IP interface with a 9180-byte MTU, which means the IP layer expects the interface to deliver packets or fragments of up to 9180 bytes. Internally, the interface performs IP encapsulation to convey original IP packets of up to 9180 bytes over the diverse underlying interfaces.
M
M
So the OMNI Adaptation Layer (OAL) is an OMNI interface sub-layer below the IP layer but above the underlying interfaces, and it's based on RFC 2473 IPv6 encapsulation. In other words, when the IP layer delivers a packet to the OMNI interface (remember, it can be up to 9180 bytes), the OAL inserts an RFC 2473 encapsulation header and appends a two-byte trailing Fletcher checksum to form the OAL packet.
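The two-byte trailing checksum mentioned above can be sketched with a common Fletcher-16 construction: two running sums taken modulo 255, packed into two octets. This is one well-known Fletcher variant for illustration; the exact variant the OAL specifies may differ in detail.

```python
def fletcher16(data):
    """Common Fletcher-16 checksum: two running sums modulo 255,
    combined into a single 16-bit (2-byte) check value."""
    sum1, sum2 = 0, 0
    for byte in data:
        sum1 = (sum1 + byte) % 255
        sum2 = (sum2 + sum1) % 255
    return (sum2 << 8) | sum1

packet = b"original IP packet bytes"
trailer = fletcher16(packet).to_bytes(2, "big")  # 2-byte trailing checksum
oal_payload = packet + trailer                   # trailer counts as payload
assert len(trailer) == 2
```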
M
We count the trailer as part of the payload at the point where we put the payload length in the OAL header. The OAL then uses IPv6 fragmentation to break the OAL packet into fragments containing no more than the maximum payload size. There you see the fragments: each one has a payload, in blue, that is a portion of the original IP packet no larger than the MPS, and the final fragment has that little trailing checksum attached to it. Next chart, please.
M
So how do we find this maximum payload size? Some hops in the path to the OAL destination could be over tunnels over IPv4, through IPv6-over-IPv4 translators, etc., and the packets could also be asked to traverse multiple concatenated internetworks with diverse IP protocol versions; I'll talk more about that later. The IPv4 minimum path MTU of 576 is therefore assumed, unless there is better knowledge.
M
So now let's look at some worst-case analysis for the OAL encapsulation. We have 40 bytes for the RFC 2473 header, plus 40 bytes for a single OMNI routing header, plus eight bytes for the fragment header, so 88 bytes of OAL encapsulation, and then each underlying network adds its own overhead.
M
M
For example, a 1500-byte original IP packet would take up four OAL fragments: three fragments with 400-byte payloads and a final fragment with a 302-byte payload, which includes the two-octet trailer. But fortunately, larger per-path maximum payload size values can often be determined, so that we don't have to have all of this overhead and all of these fragments. Next chart, please.
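The fragment arithmetic of that example can be checked with a small sketch. The talk's numbers (three 400-byte payloads plus a final 302-byte payload including the 2-octet trailer) imply a maximum payload size of 400 bytes, which is assumed here.

```python
def oal_fragment_sizes(original_len, mps, trailer_len=2):
    """Split an original packet plus its checksum trailer into fragment
    payload sizes of at most `mps` bytes each."""
    total = original_len + trailer_len  # trailer counts as payload
    sizes = []
    while total > 0:
        take = min(mps, total)
        sizes.append(take)
        total -= take
    return sizes

# The 1500-byte example from the talk, with an assumed MPS of 400:
sizes = oal_fragment_sizes(1500, 400)
assert sizes == [400, 400, 400, 302]  # three full fragments + final 302
assert sum(sizes) == 1500 + 2         # original packet + 2-octet trailer
```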
M
M
M
M
M
The only OAL extension headers that can be included are one fragment header and one ORH, but no other IPv6 extension headers. That allows OAL destinations to drop any non-final fragments with less than the minimum MPS worth of payload, which defeats tiny-fragment attacks, and OAL destinations drop OAL packets and fragments with extension headers other than a single fragment header and a single OMNI routing header. Next chart.
M
So here's what it looks like in a single network traversal. We have an original source that sends an original IP packet, which traverses some edge network until it gets to a node that has one of these OMNI interfaces. You can see the diagram on the left there with the OMNI interface; that OMNI interface performs OAL packetization and then breaks the OAL packet into encapsulated fragments, which you see down by the blue cloud.
M
M
So what happens if we have multiple networks that we want to traverse? The original source sends into the first, blue network, and there's an intermediate node between the blue and the red networks that has an OMNI interface.
M
I'm sorry, is my audio coming through? I'm getting a signal that it dropped.
B
Yeah, the same thing happened to me. It seems to be back now; we can hear you, Fred.
M
Okay. So then the second intermediate node concatenates the red and the yellow networks together at a layer below IP, and then the packet finally pops out at the final destination, where the OMNI interface nearest the final destination removes the encapsulations and forwards the original IP packet to the final destination. Next chart, please.
Please.
M
M
Okay, I'll try to move faster then. It may be more efficient to pack multiple original packets into a single OAL super-packet. As you can see here, we've got multiple IP packets under a single OAL header and checksum trailer. Next chart, please.
Please.
M
M
It's a new capability for hosts to dynamically tune packet sizes for optimal performance without loss. Next chart. The OAL is a new sub-layer, so it has to include its own integrity check. It uses Fletcher because it is dissimilar from both the underlying interface CRC32 and the upper-layer Internet checksum. Underlying networks can disable UDP checksums if possible, because we've got the OAL checksum, and some underlying network hops might not include integrity checks at all.
M
M
On bridging of multiple network segments: the OMNI link consists of segments joined by OAL intermediate nodes acting as bridges, as I showed in that earlier diagram. As examples of what these blue, red and yellow networks could be: in civil aviation we have multiple providers, including ARINC, SITA, Inmarsat and others.
M
Another example might be bridging network segments within an enterprise network. Another might be bridging multiple enterprise networks, like Boeing, Airbus and Lockheed, just to name a few. But an even more relevant example for this group is that this can be used to bridge the IPv4 and IPv6 internets.
M
B
So at least someone had echo; don't worry about it, because you are on the next presentation as well, right? And you can take more questions on this one at the end as well. Okay, super, go ahead: OMNI IPv6 ND message sizing.
M
IPv6 neighbor discovery messages such as RS/RA and NS/NA include options, and the IPv6 neighbor discovery message option in TLV format is defined in RFC 4861: an 8-bit type and an 8-bit length field in front of the value. Next chart, please.
M
M
M
So the OMNI option is an IPv6 neighbor discovery option with one or more sub-options. You see the OMNI option there on the left: it has essentially a blank slate that you write sub-options into. The sub-option format is on the right, where you have a 5-bit sub-type, an 11-bit sub-length, and then the sub-option data. Sub-option types include Pad1, PadN and several others; they're in the document.
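The 5-bit sub-type plus 11-bit sub-length layout described above can be sketched with a little bit-twiddling. One assumption for illustration: the sub-length here is taken to count only the sub-option data, not the two header octets; the document should be consulted for the actual rule.

```python
def parse_omni_suboption(data):
    """Parse one sub-option: a 16-bit header whose top 5 bits are the
    sub-type and low 11 bits the sub-length (data bytes), then the data."""
    header = int.from_bytes(data[:2], "big")
    sub_type = header >> 11        # top 5 bits: 0..31
    sub_len = header & 0x07FF      # low 11 bits: 0..2047
    value = data[2:2 + sub_len]
    rest = data[2 + sub_len:]
    return sub_type, value, rest

# Example: sub-type 3 with a 4-byte value, followed by trailing bytes.
raw = ((3 << 11) | 4).to_bytes(2, "big") + b"\x01\x02\x03\x04" + b"rest"
sub_type, value, rest = parse_omni_suboption(raw)
assert (sub_type, value, rest) == (3, b"\x01\x02\x03\x04", b"rest")
```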
M
Large-object fragmentation across multiple OMNI options is not currently supported; it could be specified in the future if necessary, but at this point it doesn't look like it's needed. IPv6 neighbor discovery messages as large as the OMNI interface MTU are permitted: 9180 bytes for an IPv6 neighbor discovery message, with no IPv6 fragmentation per RFC 6980.
M
M
This allows us to test the one-way path from the OAL source to the OAL destination across any concatenated underlying networks in the path. Individual probes are expendable and don't interfere with data traffic. A single probe success may indicate the opportunity to increase the path maximum payload size, but we still need continuous probing to detect path MPS changes in case there's a change in the path. Next chart; this is what it looks like.
M
So the OAL source sends a large NS message on the left-hand side; it may or may not make it to the final destination. If it makes it, the final destination sends a small neighbor advertisement message in the reverse direction, and that is assured to traverse the path back to the original source. That's how we do the one-way probing to get a larger maximum payload size for the OAL fragmentation. Next chart.
M
I've made some pretty bold claims in the original presentation, including that we can consider the IPv4-to-IPv6 transition done if we adopt OMNI, because we're already where we need to be. I would think some of those things might warrant further discussion.
A
B
Well, if there are no more people in the queue, I think we'll see each other again on Thursday with the more regular working group session.
B
So it's going to be 1600 to 1800 UTC, which is morning on the US west coast. In the meantime, follow up on the mailing list if there's any interest in these drafts, which have a chance of either finding other working groups to continue their work or being adopted here. But we need to see some active interest in the working group to pick any of them up. Alexandre, do you want to say something?
E
This is related to the polls that have been run in Meetecho. I see there are two polls, but I don't know what the questions are; I see the results in the polls, but...
B
Yeah, I forgot to put the draft name in the poll. The first poll was just a test, and the second one was for the first draft we had today. I didn't run polls for the other drafts, so we'll have to take that to the mailing list, if there is interest to continue the work, either in this working group or in other places.
A
Well, good. Unless there are other topics, I think we are done for today.