From YouTube: IETF110-SPRING-20210311-1200
Description
SPRING meeting session at IETF110
2021/03/11 1200
https://datatracker.ietf.org/meeting/110/proceedings/
A: Okay, my various clocks say it is the top of the hour. This is Joel Halpern, with Bruno Decraene and Jim Guichard, your three SPRING chairs. We are starting the second SPRING session, using the agenda as posted. I hope we can manage to stay on time; we did a good job the first time, but we still want conversation and discussion.

A: Okay, Bruno, shall we go to the replication segment slides? Daniel, you're up.
B: So, the last time we presented this work was in March 2019, I believe. Since then the author list has churned a little bit, as you can see, but behind it there are a lot of heavy contributors. I just want to call them out; I think I saw Andrew Stone and others this morning. Those are active contributors behind this work. Okay, we can go to the next slide.
B: There are a lot of different pieces of this work progressing in other working groups, and since we were able to get working group adoption here, it unlocked the working group adoption of the other pieces of the work in the different working groups shown here. Last year I also brought the MVPN one that you can see there, which is adopted in BESS; that is kind of an IPTV type of use case for the people that run MVPN.
B
We
recently
also
added
the
last
one.
There
is
the
ping
like
a
way
to
validate
the
tree,
that,
where
it's
fairly
new
and
it's
it's
been
pushed
into
the
paint
working
group,
the
pce
is
under
conversations
we're
progressing
there
too,
and
the
idr
is
also
something
that
we're
pressing
progressing
as
well.
B: We can move to the next slide. Okay, so really the update here is very simple. We just decided to add SRv6 to the replication segment and, more precisely, we added the End.Replicate function that you can see in the first paragraph, essentially in compliance with the network programming framework that is now an RFC.
B: But really what it means is nothing different from the mechanics brought up by the SR-MPLS version of the draft. The first three versions covered SR-MPLS, and really what's new here, in the context of replication, is an encapsulation function that is done at the root node.
B: It encapsulates with the SRv6 replication segment, so that is the End.Replicate function. It works exactly the same as it does in SR-MPLS, where you have the replication SID label that is associated to a replication segment: somewhere along the path in the network you will hit the replication node, which will decap and then process the packet. So there's nothing really different from a mechanics perspective.
B: We can move to slide four. All right, so more specifically about the End.Replicate function here: it's a function that is local to the replication node. As you can see, it is associated to the replication SID that will serve in replicating the data towards the leaves; I have an example coming. It is enabled on the node, where it will replicate the incoming traffic towards the leaves, and onward to the destination IPv6 hosts.
B: That's pretty much what it is; it's exactly the same as SR-MPLS. As I was saying, the leaf will perform the decapsulation and forward the traffic directly to the end host. Now we can move to slide five. The text at the top left is the semantics of what's going on in the drawing, and the bottom right is really what's happening in this example. This example is the one using a tree topology, just to stay consistent with what was previously presented to the working group.
B
It's
the
it's
the
example
using
the
three
topology
to
explain
this
example,
but
there's
also
another
way.
We
can
do
the
same
thing,
which
is
the
spray
model.
Now
we
just
chose
to
use
the
tree
version
and
it's
pretty
easy
to
follow
with
the
text
that
we've
put
there.
So
basically
the
node
one
here
is
the
root,
and
then
the
node
2
is
the
replication
point.
Node
5,
6
and
7
are
the
leaves
and
then
node,
3
and
node.
4
are
just
transit
node
in
the
context
of
srv6.
B
They
could
be
just
ipv6
node
and
then
it
will
go
through
the
routing
extension,
the
routing
header,
so
they
don't
do
anything
with
node.
Three
and
four
in
this
context
at
node
two,
it
will
replicate
towards
the
leaf
five,
six
and
seven,
and
at
six
and
five
and
seven
it
will
decap
and
remove
the
srh
and
then
forward
the
traffic
directly
to
the
destination
according
to
the
ipv6
destination
address,
and
that's
pretty
much
it.
We
can
go
to
slide
six
and
open
for.
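The tree example on slide five can be sketched in a few lines. This is only an illustrative model of the behavior described in the talk: the node numbers, branch table, packet fields and addresses are assumptions for this example, not text from the draft.

```python
# Hypothetical model of the slide-5 tree: node 1 is the root, node 2 the
# replication point, nodes 5/6/7 the leaves, nodes 3/4 plain transit.
REPLICATION_BRANCHES = {2: [5, 6, 7]}   # replication node -> downstream leaves
LEAVES = (5, 6, 7)

def forward(node, packet):
    """Return the (next_hop, packet) pairs emitted by `node`."""
    if node in REPLICATION_BRANCHES:
        # End.Replicate-like behavior: emit one copy per leaf.
        return [(leaf, dict(packet)) for leaf in REPLICATION_BRANCHES[node]]
    if node in LEAVES:
        # Leaf behavior: decap, drop the SRH, forward on the inner destination.
        inner = dict(packet)
        inner.pop("srh", None)
        return [(inner["inner_dst"], inner)]
    # Nodes 3 and 4: ordinary IPv6 transit, packet unchanged.
    return [(node, packet)]

pkt = {"srh": ["replication-sid"], "inner_dst": "2001:db8::55"}
print(len(forward(2, pkt)))   # prints 3: one copy per leaf
```

The point of the sketch is the division of labor the speaker describes: only node 2 and the leaves touch the replication state; transit nodes need no knowledge of it.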
B: Well, I hope you were able to hear that beautiful speech and that I was not on mute or something.
C
So basically our solution provides more coverage and, in addition, it also provides protection for the binding segments on a node when that node fails. First, for each node, if that node can support proxy forwarding, then that node will advertise a capability for that function.
C: So all the other nodes receive the advertisement from those nodes, and then a node will send the packets to the capable nodes when the node next to it fails. So this may provide more protection, even though some of the nodes don't support the function.
D: Yeah, so my question is: in the previous slide you mentioned that draft-ietf-spring-segment-protection-sr-te-paths has provided a solution, but you are pointing out that, because that draft is informational, somebody may not implement it. So are you suggesting that that draft should be made standard?
C: I mean, the problem is that some of the nodes may not support your draft, right? So even if you make it a standard, maybe some nodes still don't support your function. Right?
D: Yes, so if some node does not support this function, the behavior of the network won't be any worse than not having supported it at all, right? So the solution is to make that node support this draft, is it? I see that you are proposing an alternative where there is a proxy for every node. So to me it seems like it's...
C: No, not that way. As soon as a node has the capability, it advertises that capability in one bit to all adjacent nodes. You don't need to configure, for example for P, that P is capable for N or for N1; P will just advertise one flag indicating that P is capable to do the proxy work.
D: The problem you pointed out is: if B doesn't support draft-ietf-spring-segment-protection-sr-te-paths, then it's a problem. But if B doesn't support this midpoint-failure draft, it's the same problem, right?
C: And this can do more because, for example, we know which nodes are capable, and then we can send the traffic along those nodes which are capable. Even if some nodes don't support this capability, we can provide more coverage compared to just using all the routes toward the node when a node fails.
E: Whereas I understand that your proposition is behaving after, or during, the IGP convergence, in which case, in general, we call that restoration.
C: Yeah, I think for the existing draft one node doesn't know whether the other nodes have this capability, so we just use all the routes toward the failed node for some time. But our nodes have more information about which nodes are capable, and then we can send the traffic along those capable nodes, and we have more chance to get the traffic to the node next to the failed node.
A: Okay, thank you. Let's move on to the next presentation, on segment routing for redundancy protection. I'm not sure whether Fan or Xuesong is going to present, but whichever of you is planning to present, please go ahead.
F: Go ahead, Fan. Hi, thank you. This topic actually includes two drafts: one is the segment routing for redundancy protection, and the other is the SRH extension for redundancy protection. So I will cover both drafts.
F: Next, please. Yeah, just a very brief introduction to redundancy protection: it is one of the mechanisms to achieve service protection, and it follows the PREOF (packet replication, elimination and ordering functions) principle.
F: So there is an example scenario there. There are two nodes: one is the redundancy node and one is the merge node. When the flow is sent to the redundancy node, the redundancy node replicates the flow into two copies and sends the two copies over different paths respectively, and the merge node accepts the first received packet of each sequence number.
F: I forgot: on R1, the redundancy node, a flow ID and a sequence number are marked on the packets, and when the first packet of each sequence number is received on the merge node, it will be sent on to R2, and the other packets, meaning the redundant packets, will be dropped.
F: I will give more explanation about this procedure later. Next, please.
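The replicate-and-mark step at the redundancy node just described can be sketched roughly as follows. The field names, the two-path layout and the per-flow counter are assumptions for illustration, not the draft's encoding.

```python
import itertools

# Hypothetical per-flow sequence counters kept on the redundancy node (R1).
_seq = {}

def redundancy_replicate(flow_id, payload, paths=("path-a", "path-b")):
    """Mark the packet with (flow_id, seq) and emit one copy per disjoint path."""
    counter = _seq.setdefault(flow_id, itertools.count())
    n = next(counter)
    return [{"path": p, "flow_id": flow_id, "seq": n, "payload": payload}
            for p in paths]

copies = redundancy_replicate(7, "hello")
# Two copies carrying the same flow ID and sequence number, on different paths.
```

Both copies carry identical (flow ID, sequence number) marking, which is what lets the merge node later recognize the second arrival as redundant.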
F: And to support redundancy protection we define several new things. First is the redundancy segment: it is a new segment defined for the redundancy node, and the new behavior of this segment is defined. The second is the merging segment.
F: It is the segment used on the merging node, and the behavior of this segment is defined. To identify the unique flow, or to identify the packet sequence within one flow, a flow identification and a sequence number are defined. And last, there is a redundancy policy: it is a variant of the SR policy.
F: It is associated with the redundancy segment and is used when the redundancy segment is executed. The difference between the redundancy policy and a normal SR policy is that it includes more than one segment list, and all of these segment lists will be used at the same time. Next.
F: And we have updated the draft since then. Actually, I think this is not the latest version of my presentation, but I will try to explain. We redesigned the process and, of course, updated the specification of the redundancy segment and merging segment and the encapsulation of the flow identification and sequence number, and also added some description of the redundancy policy. We also split the segment description into two parts.
F: Yeah, as I mentioned, the redundancy segment is updated: it is now a variant of the binding SID. If you use the binding SID there, you can set the whole list for the service from the ingress node of the SR domain. So it is defined as a variant of the binding SID.
F: So there is a change in the pseudocode, and another big change is that we decoupled the replication behavior from the marking of the flow identification and the sequence number. We think that these two behaviors, one being the replication and the other the marking, are not necessarily bound together; it depends on the deployment choice. I give examples of the two choices in the later slides. Actually, after we updated this draft we received comments from others, so we will refine the pseudocode later on; I will not explain the pseudocode here because it will be refined, and maybe we can just go to the next slide.
F: Also, for the merging node, the update is that we specified how to determine whether a packet is a redundant packet or not, because the sequence number is what is used to determine whether the packet is redundant. There is a suggestion to use it to look up a local table there, but that is just a suggestion.
F: It just gives an example that the sequencing can be done in this way, to determine whether the packet is redundant or not.
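The sequence-number check at the merging node can be sketched with exactly such a local table, keyed by flow. The bounded window here is just one illustrative aging policy (the draft leaves the exact mechanism open), and it also illustrates how the state can be kept finite, which comes up later in the discussion.

```python
from collections import deque

class MergeNode:
    """Accept the first copy of each (flow_id, seq); drop redundant copies.

    A bounded per-flow window (an assumption for this sketch) ages out old
    sequence numbers so the local table cannot grow without limit.
    """
    def __init__(self, window=1024):
        self.window = window
        self.seen = {}   # flow_id -> (set of seqs, FIFO of seqs in arrival order)

    def accept(self, flow_id, seq):
        seqs, order = self.seen.setdefault(flow_id, (set(), deque()))
        if seq in seqs:
            return False                       # redundant copy: eliminate
        seqs.add(seq)
        order.append(seq)
        if len(order) > self.window:
            seqs.discard(order.popleft())      # age out the oldest entry
        return True                            # first copy: forward onward

m = MergeNode()
print([m.accept(1, s) for s in (0, 0, 1, 1)])  # prints [True, False, True, False]
```

With two disjoint feeds, each sequence number arrives twice; the first arrival is forwarded and the second is dropped, regardless of which path delivered it first.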
F: Yes. Next, please. Here we take SRv6 as an example to explain the process of the redundancy protection, and I give two choices, two options. The difference between the two options is whether we assign the flow ID and generate the sequence number at the ingress node of the SRv6 domain, or at a specific redundancy node, when the redundancy node is not the first node, not the ingress node, of the SRv6 domain.
F: Here, because the sequence number and flow ID are used on the merging node, it will need the flow identification and sequence number in the inner IPv6 header; or maybe there's an alternative, where you can also copy the flow identification, this optional TLV, from the inner IPv6 header to the outer IPv6 header.
F: When the redundancy node performs the redundancy segment, just to say, there are actually different choices, different options, to define this and to make this redundancy protection work.
G: So, if I understand correctly, you're proposing to support the packet replication and duplicate elimination functions as part of this. In the DetNet group there is also an order preservation function.
G: So is it something as well that you expect the merging node will do: order preservation based on the sequence number?
A: Greg, if I may, for a moment: that does raise the more general point that, given the overlap in functionality between this and DetNet, we need to make sure this is coordinated. I'm not saying it has to be done there; I'm saying it has to be coordinated, though.
G: So why do you think the complexity of a PREOF function has a benefit compared to 1+1 protection? Because 1+1 protection is much simpler: the merging node selects the working path and one path is the backup. The redundancy node sends data on both paths, and the merging node picks up data only from the working path; if the working path fails, it switches to the other feed.
G: Here you do packet-by-packet selection from two or more paths, but it comes with a lot of complexity.
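The 1+1 scheme Greg describes contrasts with the packet-by-packet elimination: the receiver holds one piece of state, the identity of the working feed, rather than per-packet history. A minimal sketch (path names and the switch logic are assumed for illustration):

```python
class OnePlusOne:
    """1+1 protection sketch: the sender feeds both paths; the receiver
    listens to one working feed and switches only when that feed fails."""
    def __init__(self, working="path-a", backup="path-b"):
        self.working, self.backup = working, backup

    def receive(self, path, packet):
        # Only the working feed is delivered; the backup copy is discarded.
        return packet if path == self.working else None

    def on_failure(self, failed_path):
        if failed_path == self.working:
            self.working, self.backup = self.backup, self.working

sel = OnePlusOne()
sel.receive("path-b", "pkt")   # ignored: it arrived on the backup feed
sel.on_failure("path-a")       # working feed fails -> switch to the other feed
```

The trade-off raised in the discussion is visible here: this selector can lose the packets in flight at the moment of the switch, which is exactly the gap the per-packet PREOF approach tries to close, at the cost of per-packet state.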
F: Yeah, but actually I have never compared these two together, because they are actually different mechanisms. I cannot give you a comparison result, because I've never considered whether they are the same or have the same effect. Maybe we can discuss this on the mailing list; currently I have no answer there.
H: Sorry, I was on mute. This proposal makes the merging node stateful. The merging node has a finite amount of memory to maintain that state. What happens when that memory gets overrun?
F: What specific state do you mean? The state of the packets?
F: I'm not sure, but if you mean that, it is really related to the sliding window: how you compare the sequence number within the window of packets, right?
J: Mike, I think there is supposed to be a controller to configure this as a policy on the merging node. So you should know what the capability of the merging node is, and it will not let the memory be overrun. If the memory cannot support so many flows, I think there needs to be a way for the controller to configure how many flows can be supported on the merging node.
F: Yeah, I agree. I think that is something to address, because this draft currently only covers the data plane part. I think there must be some extra specification on the part about the issue John raised. Yes, next, please.
F: So this is the updated one. Okay, yes, the second option: the difference is just where you assign the flow ID and the sequence number, because in this option the flow identification and the sequence number are only used between the R node and the M node. So those are the two choices: you can use this redundancy protection end to end, or you can use it as a kind of service.
F: Yes, and second, there's another draft where we specified how the flow identification and sequence number are encapsulated in an SRH optional TLV, so here we give this proposal. And the feedback from 6man is that they think we should bring this draft back to SPRING, because SPRING has the authority to give us the allocation of this SRH optional TLV.
F: Yes, here actually we define this redundancy policy through the tuple of redundancy node, redundancy ID and merge node. There are some more details about the specification of the redundancy policy.
F: Next, please. Yeah, actually since the last IETF we have already refined this solution, and we came back with the options for the solution, and we tried to refine the solution and also specify the SRH encapsulation. I think the next step is that we'll specify the redundancy policy in another draft with more details.
F: We seek collaboration, like focusing on the segment specification in the SR-MPLS data plane, and also some discussion has already been raised on the mailing list about the flow identification and the sequence number; I would like to continue with the discussion there.
G: Go ahead. In regard to your plans to do this in SR-MPLS, I can point out that DetNet in the MPLS data plane already supports the packet replication, duplicate elimination and order preservation functions. So I think it's probably better to have the discussion in the DetNet working group. Thank you.
K: On the first question, about the ordering function: actually we have already considered that, because it has already been defined in DetNet. In this version we didn't include it because, for the simplest case, ordering is not necessary on the merge node. So perhaps in the following versions we can include the ordering functions to cover more scenarios. And the second question was about the 1+1 protection.
K: Actually, this redundancy protection can provide service for applications with ultra-high requirements, because there is no packet loss when the flow is switched from one path to the standby path, so it can satisfy the requirements of stricter applications. Okay, thank you.
G: I agree that the packet replication functionality minimizes packet loss; it might not eliminate it completely, and that's basically the idea of why DetNet supports the packet replication function. But it comes at a cost.
G: So that's why I brought it up in my earlier comment: compare the cost versus the benefit. And, as was correctly pointed out, it does create state in the merging node, especially if you are doing order preservation and packets can be reordered.
A: Okay, Xuesong, I don't think we can continue this conversation; we're at twice the allocated time for this, so we need to move on to the next item on the agenda. Yes, Shraddha is coming forward, and Bruno, if we can switch to the seamless SR presentation.
D: So we got a comment that the use cases and requirements should be a separate document, so in the -05 version we split the document into two: one document covering use cases and requirements, and another document covering solutions. Next slide, please. Yeah, so the first one is talking about the requirements and use cases; the second one is about the solution.
D: So, just a recap of the basic motivation for this seamless SR architecture.
D: Large networks are organized into multiple domains, and the seamless MPLS kind of architecture gives an end-to-end path across these multiple domains using BGP-LU. Evolving requirements need multiple end-to-end paths between two endpoints in these multi-domain networks, and seamless SR is trying to address that. The solution that seamless SR is focusing on is a completely distributed one: there is no single entity which has the database of all these domains and computes the path; it's more of a distributed solution where the paths across these domains get stitched together.
D: In terms of requirements, we have reorganized the use cases and requirements section and clearly laid out the requirements, categorizing them into multiple categories such as AS and domain requirements, SLA requirements, merger and migration requirements, scalability and availability requirements, and so on. So, to quickly cover the AS/IGP domain requirements: the slide shows the different ways the domains can be organized. We have covered the most common use cases; it's not an exhaustive list of all the different ways operators organize their networks.
D: Another category is multiple IGP domains where the end-to-end connection is via iBGP sessions. And there's a third category: if you see the bottom-right diagram, AS1 is the AS, and there is domain one, domain two and domain three; there's no common border node, and the end-to-end connectivity is all via iBGP sessions. Next slide, please.
D: Yeah, so this is tunneling requirements, like underlay tunnels: what different categories of tunnels need to be supported. I have talked about it multiple times already, so next slide, please. So, various SLA requirements: latency, bandwidth, link, node and domain inclusion and exclusion, end-to-end diverse paths, constraint applicability to a subset of domains, and service function chaining is something that we cover in this section. Next slide, please. Mergers and migration requirements mainly focus on changing networks.
D: You know, evolving networks and requirements arising out of that network evolution. For example, during migrations you will have to have interoperability with BGP-LU, and if we come up with some new extension, it should also have native support for best-effort paths end to end. As the network evolves, we should not have the requirement of running two different families, option A and option B use cases. And also, in the case of mergers...
D: The color mapping may vary from one domain to another domain, so there is a need for the ability to translate that intent from one domain to another. And then there is also a requirement for interop with other tunneling technologies, like the ability to do SRv6 in one domain and MPLS in another domain, as an example. Next slide, please.
D: So this slide lists the scalability aspects: support up to one million nodes; access devices having low capabilities; scalable response to network events; automatic filtering of routes on access nodes; and the ability to reduce FIB scale on the border nodes. Next slide, please. Yeah, we also cover other requirements like end-to-end protection, including intra-domain as well as border node and egress protection, and operations and automation, and I would like more input from the operator community on operations.
D: Right now we have covered things like counters and the ability to ping and traceroute, and it would be useful to see what requirements people come up with here. I mean, if operators have to deploy this and have to debug this at 3 a.m. in the morning, what would they like to see? Give that input to the requirements document. We also cover various traffic steering mechanisms and the interaction with other approaches, like existing centralized approaches.
D: Yes, so we request more review and comments. Recently there is also another draft which is talking about similar use cases, the BGP CAR problem statement, and we are in discussion with the authors of the BGP CAR problem statement.
B: Thank you, good presentation. There is another SR-MPLS to SRv6 services document as well, which I happen to also author, and I wonder if you could have a conversation, also on the point that you're making in the next steps, to try and merge, and hopefully not have another design team get together to define which one we're going to pick in the end.
B: Would you be interested in having this conversation on the other draft? It's SR-MPLS, and it isn't a working group draft.
D: Yeah, so, Dan, this particular draft is mostly covering requirements, so I would first like to cover what the requirements are in terms of SRv6 and SR-MPLS interop. Discussion on the solution definitely should happen, but let's at least close on the requirements. At a high level you can say SRv6 and MPLS interop, but it would be useful to cover the details as well.
D: Yeah, so what I talked about is the requirements, but we do have a solution document that covers how the requirements stated in this document can be achieved using a BGP protocol extension with BGP CT, and yes, we do have an implementation and some testing going on in the lab, and this was presented in IDR the day before yesterday.
M: Can you hear me? (Yes, go ahead.) Yeah, my question is: do you plan to support SRv6 in this architecture, in this problem space?
D: Yes, definitely. We have already listed SRv6, both SRv6 end to end as well as SRv6-to-MPLS interop use cases. Definitely.
D: The draft is seamless SR, so it covers SR-MPLS as well as SRv6.
N: Can you hear me, folks? (Yes, we can hear you.) Oh okay, great, thank you. Hi everyone, I'm Dhananjay Rao from Cisco, and I will be presenting on behalf of my co-authors and contributors on the BGP color-aware routing problem statement draft. This draft was also presented at IDR and BESS. Next slide, please. The objective of BGP color-aware routing is to use BGP to establish an end-to-end intent-aware path across multiple domains, both IGP and BGP.
N: In the SR policy solution, the ingress PE1 may request an SR PCE to compute the inter-domain path. Since the SR PCE is the one aware of the inter-domain topology, it will compute a path for that particular intent and return a label or SID stack that the ingress PE will then install and use in the data plane. The rest of the network nodes are stateless for this path. Next slide, please.
N: Now, here, with BGP color-aware routing, we have the same automated steering requirement as in the SR PCE case; only here the color-aware path is set up using BGP hop-by-hop route distribution and best-path computation. In this example, a route to E3 for color C1 is originated, say, from a border node, maybe redistributed from an IGP flex-algo; it's propagated hop by hop across the domains until it reaches E1. At each hop...
N: A best path is computed for that intent, recursing over the appropriate intra-domain colored path. A VPN route that is colored with C1 will then get steered via this BGP (E3, C1) route. Next slide, please. Here we see an example of multiple intents. So, for the same egress PE, E3, we have two color-aware routes, (E3, C1) and (E3, C2). They are independent routes.
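Automated steering over independent (endpoint, color) routes, as in this example, amounts to a resolution lookup keyed on both fields. A minimal sketch; the route table contents and border-node names are invented for illustration, not taken from the draft.

```python
# Hypothetical color-aware transport routes learned hop by hop:
# (endpoint, color) -> resolved path through the inter-domain network.
car_routes = {
    ("E3", "C1"): ["BN11", "BN21", "E3"],   # e.g. a low-latency path (assumed)
    ("E3", "C2"): ["BN12", "BN22", "E3"],   # e.g. a disjoint-plane path (assumed)
}

def resolve_vpn_route(next_hop, color):
    """Steer a VPN route colored `color` via the matching (endpoint, color) route."""
    return car_routes[(next_hop, color)]

print(resolve_vpn_route("E3", "C1"))   # prints ['BN11', 'BN21', 'E3']
```

The key point the example makes is that the two (E3, C*) entries are resolved independently, so two VPN routes toward the same egress PE can take entirely different inter-domain paths.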
N: They may end up getting set up via completely different paths through the inter-domain network. To complete the example, we have two VPN routes, one colored with C1 and the other with C2; each route will respectively get resolved and steered via the appropriate (E3, C*) route. Next slide, please. When we come to looking at the deployment requirements, the reference topologies are well known: you can have multi-IGP or multi-AS designs, but a significant difference is the increase in scale.
N: There are some networks where the scale can go into the order of hundreds of thousands of PEs, and then we also have the case where multiple intents may need to be supported in the network. Some common ones are listed here: we have best effort, but then there's low latency, two disjoint planes, and then an avoidance use case, perhaps with links, nodes or domains.
N: So, just with this reference example, we see the number of routes in the network can go up to 1.5 million routes. Next slide, please. Here we just list a number of other intent use cases; the draft goes into detail in terms of illustrating the key ones with reference topologies. Next slide, please.
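The 1.5-million figure follows from multiplying endpoints by intents. A back-of-the-envelope check with assumed numbers: the 100,000 PEs and 15 colors below are illustrative values consistent with "hundreds of thousands of PEs", not the draft's actual breakdown.

```python
# Hypothetical breakdown reproducing the order of magnitude quoted in the talk.
pes = 100_000     # endpoints (the talk mentions hundreds of thousands of PEs)
colors = 15       # intents (colors) advertised per endpoint, assumed
routes = pes * colors
print(routes)     # prints 1500000 color-aware transport routes
```

This is the core scaling concern of the problem statement: every additional intent multiplies the route count across both the BGP control plane and, where MPLS is used, the data plane.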
N: The focus of this problem statement draft has been to have a concise technical analysis of the use cases and deployment constraints, and then enumerate the resulting protocol and design requirements. Some key ones to call out: of course, consistency with the deployed SR PCE and policy-based solution, as well as coexistence and interworking.
N: A key here is the use of color to drive automated steering. We also extend the problem space: intent-aware paths need to be supported in the VPN service layer as well, and this needs to take into account NFV service chain integration.
N: Next slide, please. There are a number of deployment requirements, but a key one is the setting up of paths across heterogeneous domains, where there are different technologies and encapsulations being used. And the most significant one is the impact of scale on the solution, both on the data plane, especially where MPLS is used, as well as on the BGP control plane. So we clarify these aspects in the draft. Next slide.
N: Please. This work is the result of a collaboration with many people, both among the lead operators and vendors, so we acknowledge their contributions. We also recognize that there has been work in this area, specifically by the seamless SR co-authors. As Shraddha mentioned, we reached out to them back in November and December, and there is an ongoing...
N: Effort to come up with a joint problem statement and subsequent work. Next slide, please. Yeah, so of course this is an ongoing effort, but we request review from the working group.
O: Hi, thanks for the presentation; this is Tarek with Juniper. My question is: there seems to be an assumption that the color has the same meaning when it's crossing multiple domains, that it has a uniform meaning. What is the guarantee that the same color means the same thing in multiple domains? Is there a need for translation, for example, do you think?
N
P
Tarek, this is Jim. Just to add to that — and I don't want to go down a rat hole — but that question is actually covered in the solution document, which is not part of these slides, so we can't discuss that. But if you want to look at that, go ahead and look at that solution document; I believe it's covered there.
Q
Yes, I have a question: what's the difference between your color and the route target?
N
So, firstly, a color is a construct that is used to represent the intent, and depending on the protocol being used, it would be signaled using different mechanisms. In the SR Policy solution, to signal the request for that intent, the BGP color extended community is used to carry this color.
N
Now, when it comes to BGP color-aware routing, depending on the solution, a particular BGP construct would be used to signal that color awareness.
Q
So you will have another update, or another instruction, for how to forward the color — how to steer the color. That's separate; is that correct?
N
I don't want to go into the solution detail, since this was a presentation on the problem statement. But as Jim pointed out, there is a solution document — a proposal — that defines how color is used and signaled.
R
Just a reminder that we have a draft that solves the same problem. The draft name is draft-idr-[unclear]-lu, and it describes the colored BGP-LU LSP, in which the routing prefix not only carries a label but also carries a unique color attribute, which helps to select the underlay path. Please review and give us some comments.
A
Type that draft name into the chat, because I doubt very many people were able to figure out what it was. So please type it, and I imagine the authors will go look — we're getting it. Move on to the next, actually.
S
So this draft describes a solution for SRv6 and MPLS interworking. As you know, SRv6 is getting deployed in customer networks, and we have brownfield MPLS, so such interworking is a requirement, and this draft provides the simple building blocks to make this interworking happen. The initial version was posted in October 2018, and this is rev 5, which provides additional details. The draft describes both the data plane as well as the associated control plane procedures to achieve interworking.
S
For the data plane, this draft has introduced a new End.DTM behavior, and it also updates the existing End.BM behavior, which is defined in network programming and which is used in this draft. Now, for the control plane, we go both into an SR PCE-based solution as well as BGP-based solutions to provide such interworking. Next slide, please.
S
So in this slide, what we'll try to do is generalize the interworking problem into a simple problem, in which I'm showing a central domain, which could be different in each example. In this case, the green represents the SRv6 domain, the orange represents an MPLS domain, and the dashed oval represents an interworking function that happens on some border router. So this draft proposes two main scenarios: one is transport interworking,
S
and the other is service interworking. Transport interworking means our L3 or L2 services have the same control plane continuity. So if you look at the first diagram on the right side, it shows SRv6 on the leaf, on the edges, and MPLS in the central domain. For simplicity we have shown this, but it could be any cascading of domains.
S
The second use case for transport interworking is where the edge is MPLS, which runs the MPLS VPNs, and we have to transport this LSP through the SRv6 domain. And the third interworking that we look at is service interworking, where the L2/L3 services themselves are SRv6 VPN services or BGP MPLS VPN services — shown on the edges as an SRv6 VPN and an MPLS VPN.
S
So this draft defines a new End.DTM behavior. What this behavior does is decapsulate the IPv6 header and its extension headers, then look up the top label of the following MPLS packet in the MPLS table and forward the packet accordingly. This function is executed on the interworking box between the SRv6 and MPLS domains. There's pseudocode listed in the draft; I will not go through it, but it's just that, after doing the decap, we look up the top label carried in the packet. Next slide, please.
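The End.DTM processing just described — decapsulate the outer IPv6 header, then forward on the exposed MPLS top label — can be sketched as follows. This is a toy model with assumed packet and table shapes, not the draft's actual pseudocode:

```python
def end_dtm(packet, mpls_table):
    """Sketch of End.DTM: remove the outer IPv6 header (and its extension
    headers), then forward on the exposed MPLS top label."""
    assert packet["outer"] == "IPv6", "End.DTM expects an IPv6-encapsulated packet"
    inner = packet["payload"]          # the MPLS label stack plus payload
    top_label = inner["labels"][0]     # top label of the exposed stack
    next_hop = mpls_table[top_label]   # ordinary MPLS table lookup
    return next_hop, inner             # forward the decapsulated packet


# Illustrative values only.
table = {24001: "node7"}
pkt = {"outer": "IPv6", "payload": {"labels": [24001, 24005], "data": b"ip"}}
nh, fwd = end_dtm(pkt, table)
```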
S
Also, this draft introduces the SRv6 H.Encaps behaviors. We have introduced two behaviors; one is H.Encaps, which is applied to an MPLS label stack: when we receive an MPLS label stack, we push an IPv6 header along with an SRH. Together, the MPLS label stack and its payload become the payload of an IPv6 packet, and the next-header field of the SRH must be set to 137.
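A minimal sketch of this encapsulation, with illustrative field names (137 is the IANA protocol number for MPLS-in-IP). A `reduced` flag models the variant in which the first segment is carried only in the destination address:

```python
MPLS_IN_IP = 137  # IANA protocol number carried in the SRH next-header field

def h_encaps_mpls(mpls_packet, segment_list, reduced=False):
    """Sketch: push an outer IPv6 header (+SRH) over an MPLS label stack.

    The MPLS stack and its payload become the IPv6 payload; the first
    segment goes into the destination address, and in the reduced
    variant it is omitted from the SRH segment list."""
    srh_segments = segment_list[1:] if reduced else list(segment_list)
    return {
        "outer": "IPv6",
        "dst": segment_list[0],
        "srh": {"segments": srh_segments, "next_header": MPLS_IN_IP},
        "payload": mpls_packet,
    }


# Hypothetical segment names, for illustration only.
pkt = h_encaps_mpls({"labels": [16005]}, ["B:5::", "B:7::DTM"], reduced=True)
```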
S
Also, there's a reduced version, in which the first segment is not put in the SRH but is just put in the destination address. Next slide, please. Yeah — as I described earlier, we have broken down the interworking scenarios. Transport interworking means we need to provide reachability to a locator, if it is SRv6 VPN services, or to an LSP to a PE, if it is an MPLS VPN. So this draft provides two mechanisms to provide such a thing.
S
One is the existing SR PCE, which satisfies the intent across the multiple domains. Also, since the SR PCE learns the topology through BGP-LS from each of the domains, it is aware of the data plane discontinuity at a certain interworking box, or border box. This draft also provides best-effort connectivity through BGP, where we advertise the PE locator or the MPLS LSP through such a discontinuity in the network. Next slide, please. So, for my examples:
S
So here I'm taking an example of an SR PCE solution to provide the transport interworking. Node 1 and node 10 are running SRv6 VPNs, because both edges are SRv6 in this case. So now, node 10, which is running an SRv6 VPN service, will advertise a VPN prefix with an SRv6 service SID — shown here as B:10::DT4, the VPN service SID — as well as color red, because it's an SR Policy-based solution and it has a certain intent; in this example, red indicates a low-latency intent.
S
So if you look at the segment list on node 1, it takes the low-latency path through node 2, then there's a binding SID for node 4, and then it goes through nodes 5 and 8. And if you look at the packet that is received on node 4, it carries the End.BM function for node 4, and what that End.BM pseudocode does is:
S
It will update the next segment in the SRH as the destination, then it will push the MPLS label stack that is bound to that policy and send the packet out into the MPLS domain. On node 7, it will do a lookup on the IPv6 packet that follows the MPLS label stack, and the packet will flow back to the SRv6 network. Next slide, please. Exactly the same procedures can be achieved with the SR PCE for MPLS over SRv6.
S
And then the important part is that the last segment of this policy is the new behavior that we already described — the End.DTM function — which causes us to look up the following MPLS header and send the packet further into the MPLS domain on the right. And in this case, the SRv6 policy's BSID is represented by an MPLS BSID, which the SR PCE will install on node 1 as one of the segments, and which will cause the stitching, or interworking, function on node 4. From the packet path point of view:
S
Node 4 will receive the packet with an MPLS BSID, shown as 3007 here, which will push the IPv6 header with the destination set to the first segment, node 5, and then the next segment going into the SRH would be the End.DTM function of node 7. And on node 7, once we hit the End.DTM function, it will see that it is its own—
N
Yeah, thanks Joel. Swades here, with my PCE chair hat on. I would request you to talk to the binding SID authors — it's a working group document, and we think it's going to be ready for working group last call as well. So what you are describing seems to be supported; there is no limitation there. For an SR MPLS path, you can have a binding segment which is either an SRv6 binding segment or an MPLS binding segment. So it seems to be allowed, but I would prefer that it be made explicit.
N
So it is that you have a use case for this, and I think it's a very valid use case. So maybe just talk to the binding SID authors and add a statement there so that it is very clear. The second comment I would have is: we also have the multi-domain case, which we handle in PCE quite well, and most of your document is focusing on a single PCE that is taking care of all the domains — so just think about handling the inter-domain aspects as well.
S
So here we have described the SR PCE solution for both SRv6-over-MPLS and MPLS-over-SRv6; something similar we can do for BGP-based transport interworking. Here, both edge domains are SRv6, so our services would be SRv6 VPN services. What we need is: when node 10 advertises such SRv6 VPN services and node 1 receives them, node 1 needs reachability over the central domain for the PE locator, because the SRv6 services would be advertised with an SRv6 service SID, which would be from the locator of node 10.
S
Now, this reachability is like IPv6 reachability over an MPLS domain, which is already supported — like the 6PE functionality — and so therefore we can use exactly the same functionality. When an IPv6 prefix is advertised through node 7 to node 4, it would be with a label — the BGP IPv6 labeled-unicast (BGP-LU) address family — and it would be transported over an SR-MPLS intra-domain tunnel, whatever it is. Next slide, please.
S
The existing mechanism today is BGP-LU (RFC 3107), which creates label cross-connects wherever we do next-hop-self. So in this case, on node 7 and on node 4 there would be a cross-connect for the loopback of node 10, so exactly the same mechanism would be used. The extra step that would be required would be: instead of tunneling that BGP LSP from node 4 to node 7 over an MPLS LSP,
S
we will tunnel it into an IPv6 tunnel whose destination would be of the End.DTM behavior, which is already described. So what node 4 is supposed to do is a label swap, as it does today on node 4.
S
In addition to that, it will push an IPv6 header whose destination would be the End.DTM SID, and send the packet with an IPv6 encapsulation to node 7; the behavior of that destination would be of the End.DTM type. And on node 7, what will happen is, since it's an End.DTM type, we will decapsulate the IPv6 header and its extensions and do a lookup on the next label, which would again be
S
the BGP-LU LSP label, and the packet would reach node 10. So this draft also proposes carrying this End.DTM function as an SRv6 SID in the BGP-LU update, and we have added a new TLV for carrying that in the update. Next slide, please.
S
So until now we were talking about transport interworking. There is also a service interworking use case, in which we have to stitch the control planes between the SRv6 VPN and the MPLS VPN. For that, we propose — there's already a gateway solution. A gateway is a box which supports both the SRv6 VPN as well as the MPLS VPN, and what happens on this box—
B
Just a last-minute question — but that was the point I was trying to make with Shraddha on her presentation, because I see some similarities. Although you seem to be very focused on defining the problem, there's a piece in what was just presented here that is also referring to defining a problem.
H
Okay, this draft introduces a new SRv6 endpoint behavior called End.DTM. It's for interworking between SRv6 and SR-MPLS, and like any endpoint behavior, it contains a function plus arguments.
H
The function causes the processing node to de-encapsulate a packet — that is, to remove the IPv6 header and all its extensions — to impose an SR-MPLS label stack, and to forward the packet as per the MPLS label stack. The arguments determine the label stack contents and anything that might be encoded in the label stack. Next slide.
H
Next slide. Okay, here's a typical use case: we have an SR path that goes from node 1 to node 5. There's an SRv6 part that has nodes 1, 2 and 3 in it, and an SR-MPLS part that has nodes 4 and 5 in it. Node 3 is the one saddled with the task of translation, so node 3 has to be both SRv6- and SR-MPLS-aware; nodes 1 and 2 only need to be SRv6.
H
So how do we process this? And here's where you'll see the difference between the End.DTM you just saw and this End.DTM.
H
If segments-left is greater than zero, we discard the packet and send an ICMP message to the source. If it is zero, we do nothing until we're processing the upper-layer header. When we process the upper-layer header, we de-encapsulate the packet and we push an MPLS label stack that is associated with the End.DTM arguments.
H
We set the MPLS traffic class and TTL to reflect the traffic class and hop count that were in the IPv6 header, and we submit the packet to the MPLS lookup for transmission to the new destination. So there are two big differences between this End.DTM and the one you just saw: one is that we allow any payload, not just MPLS, and the other is that we make a point of setting the MPLS traffic class and TTL values.
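The processing steps of this End.DTM variant might look roughly like the sketch below. The packet shape, field names and error handling are assumptions for illustration, not the draft's actual pseudocode:

```python
def end_dtm_variant(ipv6_packet, arg_label_stack):
    """Sketch of the End.DTM variant above: check Segments Left, then
    de-encapsulate, push the label stack bound to the SID arguments, and
    copy the IPv6 traffic class / hop limit into the MPLS TC / TTL."""
    if ipv6_packet["segments_left"] > 0:
        return ("discard+icmp", None)        # error signaled back to the source
    mpls_packet = {
        "labels": list(arg_label_stack),     # stack derived from the arguments
        "tc": ipv6_packet["traffic_class"],  # reflect IPv6 traffic class
        "ttl": ipv6_packet["hop_limit"],     # reflect IPv6 hop count
        "payload": ipv6_packet["payload"],   # any payload, not just MPLS
    }
    return ("mpls_lookup", mpls_packet)


# Illustrative packet.
pkt = {"segments_left": 0, "traffic_class": 2, "hop_limit": 63, "payload": b"x"}
action, out = end_dtm_variant(pkt, [17005])
```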
H
Next slide. For next steps, we're asking for working group review and a call for adoption. We've heard a request from Dan to talk about merging solutions, so that probably has to happen too — and that's it.
S
Yeah, hi Ron. So, isn't what you are proposing that, when we receive that function and its arguments, you have to push an MPLS stack based on the arguments — correct? Yes. So isn't that the End.BM that is already defined in network programming? It's just a decap variant of it. I'm just saying, because today it's very—
S
Yeah, that's what I'm saying: your lookup of the function and arguments already gives you what stack to push, and that's the difference from our End.DTM, where we are proposing to do a lookup on whatever is following in the MPLS header, and that lookup will give you the MPLS stack there. Okay, yes.
H
It's very close to End.BM. Also, the MPLS stack may not fit in the arguments, so it may be that some implementations have to do a lookup. The draft is silent on that.
L
See, these two documents define the same thing, and I'd like to see the end of this story. And by the way, G-SRv6 for MPLS also defined a similar mechanism. So maybe we can combine these kinds of things and discuss them on the mailing list. Thank you.
A
T
It basically explains how these building blocks work together as independent pieces, and work seamlessly to provide a scalable slicing solution — a solution that is defined with incremental deployment in mind. Next slide, please. A very brief history of the draft: we started the work in July 2018; rev 2 was presented at IETF 106 in SPRING. Next slide, please.
T
Then we talked about flexible algorithms, which complement the SR-TE policy solution by adding new prefix segments with a specific optimization objective and constraints. This is an example of why we call them independent, cooperating building blocks: SR policies are supported with or without flex algo, while flex algo, on the other hand, leverages the ODN and automated steering constructs of SR policy. Then we talked about TI-LFA, which plays a role in providing on the order of 50-millisecond protection in the underlay. These are the building blocks that work together. The way that, say, TI-LFA and flex algo work together is that the backup path for TI-LFA is expressed with the prefix SID of a specific flex algo; the backup path is optimized per flex algo, so we don't have cases where traffic for a flex algo which is low-delay gets backed up on a path that is not a delay-optimized path. Then we have VPNs, which provide the means for creating logically separated networks for different sets of users to access the common network.
T
Here we would like to draw attention to the fact that QoS works independent of topology or routing, and then the slice orchestrator puts things together at the management plane. Since that revision, we have also now added references for the stateless slice ID. Later on, we're going to go through how the stateless slice ID fits in as a building block that does not disrupt the picture from a routing or topological point of view, but works to provide additional differentiated treatment. So, next slide, please.
T
Maybe I can finish this and then we can take the question; I think it's not so important.
G
Which definition of network slice does this refer to? Is it the 3GPP network slice or the IETF network slice?
T
Could we wait, so that you're clear later on? Can we come back to this one once I'm done with the presentation?
G
Well, I think it's a basic, very fundamental question, because we need to understand the context of which network slice we're talking about.
T
So now let's take a look at an example of why the slice ID construct needs to be independent of the routing topology. I'd like to draw your attention to the picture on the right — ignore the block, the big box, in that picture for now. Within it, you have a network with two flex algos: a low-latency flex algo, orange, 128, and a low-cost flex algo, red, 129.
T
I did mention earlier that if you have a failure — let's say a failure on the link between nodes 1 and 2 — then you would like the low-latency slice traffic to use backup resources, a backup path, that is also low-latency-optimized. And indeed, this is what happens in the flex algo case.
T
So if there's a failure on the link between nodes 1 and 2, then the traffic is diverted to a low-latency path, which is shown in orange.
T
Now, it makes sense to extend this to the differentiated treatment that the slices provide. In this case, take this picture, which is half blue and half green; it is like that because it's a common infrastructure, a common flex algo. So we are not replicating a flex algo per slice ID, but reusing the same flex algo. Again, topology and routing independence is important — using the same instance to create differentiated treatment.
T
You redirect traffic through nodes 4, 3 and 2, so we are going to focus on node 4. When node 4 receives the packet, there is a stateless instance of the slice ID in the packet. So when node 4 receives the packet with slice ID 1, it applies the differentiated treatment that is reserved, or programmed, for slice ID 1. When the traffic comes for slice ID 2, it gives the differentiated treatment that is programmed at node 4 for slice ID 2.
T
So in this fashion, you keep TI-LFA, flex algo and the other building blocks completely transparent to the differentiated treatment that is required for slicing; the slice ID really acts like QoS, which is independent of routing topology. Next slide, please. Okay, so regarding the seamless working of these building blocks, we saw how flex algo works.
T
It's the same thing: you have a construct which is a differentiated treatment on a node, but you don't need too many instances of the differentiated treatment, because you have other constructs — VPNs, policies, QoS differentiation — that make you scale, and on a device you're not going to have too many behaviors for differentiated treatment. So this is how the slicing scales. Next slide, please. So now we're going to take an example, with reference to how the stateless slice
T
ID information is carried for an SRv6 network. In this picture we have an SR domain, and we show that a green payload is supposed to be classified into the green slice, and a blue payload is supposed to be classified into the blue slice. So we're going to focus on SR ingress node 1, which always encapsulates the packet in an outer IPv6 header and an optional SRH.
T
Sure. So, basically, then what happens is that node 2, which is a transit node: if it is slice-aware, then it looks at that field that is in the packet and applies the differentiated treatment that is programmed at node 2.
T
If it is not aware of the slice ID treatment, then it just forwards the packet without any further differentiated treatment, using the other building blocks — or, we can say, it forwards the traffic using the default slice. So that's what enables the incremental deployment. Next slide, please.
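The transit behavior just described — apply the programmed per-slice treatment when the node is slice-aware, otherwise fall back to default forwarding — can be sketched as follows, with hypothetical node and packet shapes:

```python
def transit_treatment(packet, node):
    """Return the forwarding treatment a transit node applies to a packet
    carrying an optional slice ID (sketch; shapes are assumptions)."""
    slice_id = packet.get("slice_id")
    if node["slice_aware"] and slice_id in node["treatments"]:
        return node["treatments"][slice_id]   # programmed differentiated treatment
    return "default"                          # legacy nodes just forward


# Illustrative nodes: one upgraded, one legacy.
aware = {"slice_aware": True, "treatments": {1: "slice1-queue", 2: "slice2-queue"}}
legacy = {"slice_aware": False, "treatments": {}}
```

The legacy node never drops the packet; it simply ignores the slice ID, which is the incremental-deployment property claimed above.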
T
So now, for the MPLS draft: there is a draft that Bruno presented to the MPLS working group. It works for both segment-routing and non-segment-routing MPLS, and it's similar to the other reference I just went over.
T
The exception is that the slice ID is carried in the entropy label, and the TTL of the entropy label is where the slice-ID presence indication is provided. A node that is capable of doing the differentiated treatment applies the differentiated treatment; other nodes work seamlessly and provide load balancing through the entropy label.
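A toy decoder for the encoding just described. The exact field layout is defined in the MPLS draft; the marker TTL value used here is purely an assumption for illustration:

```python
SLICE_PRESENT_TTL = 1  # assumed marker value; the draft defines the real indication

def decode_entropy_label(entry):
    """Sketch: the entropy label's TTL indicates whether its value also
    carries a slice ID; legacy nodes still use the value for load balancing."""
    info = {"entropy": entry["value"], "slice_id": None}
    if entry["ttl"] == SLICE_PRESENT_TTL:
        info["slice_id"] = entry["value"]
    return info
```

Either way the `entropy` value stays usable for hashing, which is how nodes that are unaware of the slice ID keep load-balancing seamlessly.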
T
With that, I would stop here. Next slide, please.
T
So I'll stop here, and basically I would like to get feedback from the working group, and would like the working group to adopt the draft.
M
Yeah, this is Jie Dong. My question is — first, the first part of this presentation is a good summary of the SR mechanisms which have been defined in this working group. To make it more network-slicing-specific, perhaps I'd suggest referencing the resource-aware segments and SR-for-enhanced-VPN (VPN+) drafts, which are also working group documents in the SPRING working group.
T
All right. So, many thanks for the comments. I agree; we can work offline on the suggestions that you made. And this is an informational document; this document does not—
P
So far — I'm going to cut in here. We've got literally not much time left: we've got one presentation left on the agenda, plus two that are there if we have time, and we're going to run out of time. So I think we need to cut the queue with the people on here. Let's get these questions through quickly and move on, please — and quickly.
P
Yeah — I will follow up.
A
G
I listened to the whole presentation, and still I don't understand which definition of network slicing this proposal is applicable to. And another one — just a minor nit — the EXP field in the MPLS label has been renamed Traffic Class for 10-plus years, so please use the proper identification. Thank you.
O
Thank you. Hi, this is Tarek. You didn't number your slides — I can't tell you which slide to go to. I'll tell you what I remember: you flashed a slide that said a slice is independent of the topology.
O
In fact, there are requirements that a slice can be steered, or should be steered, onto certain topological elements. I'll give an example: you want secure links, or you have diversity requirements, which are under discussion in these working groups. A slice has topological dependency; it's not independent. I don't know which slide you were on — maybe the previous one or this one — where you're saying that. Yeah.
O
Others also had the same comments.
B
So, Greg, about your question about the definition of slicing: I know that someone from Huawei — Robin — sent an email, in this working group or the TEAS one, noting that there are about eight different documents attempting to come up with a definition of slicing, and I read them all. In this document here that Tarek is presenting, if we zoom out a little bit, especially with that slide or the previous one, we're talking about the fundamental building blocks.
B
I don't think it violates that. Even though the meaning of the slices is not really that clear, I think those building blocks are still required, and that's pretty much how I see it myself, as a co-author, when I'm thinking of designing a network that requires end-to-end transport slices across the backbone.
B
We could have a summary of the definition of what a slice is added to this one, or a reference, once we make up our mind about what the real definition of a slice is. And maybe — Jeffy, you were behind me, but I'm pretty sure you wanted to highlight the definition of slices — do we still have the time, chairs?
A
No, we're out of time. There is a lot of work going on on defining slicing. To the degree this is a definition-independent building block, that's great; to the degree it's definition-dependent, it needs to be clear about the definition — the relationship needs to be clarified in the draft and discussed on the list. And with that, we move on to Tarek's.
O
I don't see the slides yet. Hello, this is Tarek, and today I'm going to give you an update about a solution to realize network slicing over an SR network. I am presenting on behalf of the co-authors, and I would like to thank all of them. Next slide, please. So, I'll give a timeline, or a history, of the updates on this draft, then I'll dig into maybe a review, or recap, of the solution we're presenting, and then close off with the next steps. Next slide, please. So, the timeline: revision
O
zero was introduced as a companion document to the solution document we presented in TEAS, and it describes how we can realize the solution in an SR network. Rev 1 included updates to align with certain terminology that we added in the base document, and we addressed feedback from working group members — we thank them for that — and we have new co-authors who have joined our work. Next slide, please.
O
So these are the key terminologies. I'm not going to go word by word into the details; I'll just mention at a high level that the slice policy is a means to install a set of rules on a network element so that we can realize a slice aggregate. I'll let you read through these terms offline, and I'll be using the slice policy in subsequent slides. So, next slide, please. So, a very high-level overview: a router that is going to process a slice-
O
aggregate packet will be required to do a number of things. The first is to identify that the packet belongs to a slice aggregate — that's number one. Number two: we need a way to select the next hop to forward the packet on, and this is determined by the topology of the slice and the optimization criteria that we are seeking. And the third rule that we need to enforce is the forwarding treatment — this is, you know, maybe a certain percentage of the bandwidth on a shared link that you want to assign to a slice aggregate.
O
So all these rules are part of a slice policy definition that we are introducing in the base document. On the right-hand side, I'm just showing the high-level structure of that slice policy definition. Next slide, please.
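The three groups of rules above suggest a structure along these lines. The field names are invented for illustration and do not mirror the draft's actual model:

```python
from dataclasses import dataclass


@dataclass
class SlicePolicy:
    """Sketch of a slice policy: the rules installed on a network element
    to realize a slice aggregate (field names are hypothetical)."""
    name: str
    selector: dict   # rule 1: how packets are identified as belonging to the aggregate
    topology: dict   # rule 2: topology + optimization criteria for next-hop selection
    treatment: dict  # rule 3: per-hop forwarding treatment, e.g. share of a link


# Illustrative instance.
policy = SlicePolicy(
    name="sa1",
    selector={"sids": [1605]},
    topology={"algo": 128, "metric": "delay"},
    treatment={"bandwidth_pct": 20},
)
```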
O
So we are presenting two approaches to realize the solution in an SR-capable network. The first one is by associating a slice aggregate with a SID, so that packets using those specific SIDs will get the identification, or association, to the slice aggregate, as well as the forwarding action and the next-hop selection.
O
So the extensions to advertise the slice-aggregate prefix and adjacency SIDs were presented on Tuesday; there's a pointer to the draft. The second option we are proposing is carrying a separate ID in the packet. This is very similar to what you've seen — Zafar talked about it; we are part of that work and we have proposals there. So the idea is to carry an identifier that will associate the packet with a slice aggregate. I'm putting three pointers here just for reference.
O
The respective working groups are still working out the details of which proposal is best, but we are advocating this proposal as well. Next slide, please.
O
I'll go with the first option that we are presenting to realize the solution. As I said, we will advertise SIDs associated with a specific slice aggregate, so the forwarding action in that case is dictated by the top, or active, segment — be it prefix or adjacency. The topology is defined by — if it's a prefix SID, it carries the MT-ID and a flex algorithm number, and the FAD will dictate the topology that is used to select the next hop.
O
The forwarding treatment is a rule that is part of the slice policy definition, as I mentioned in the earlier slide. Now, some scale impact of this proposal: we are not proposing a flex algo definition per slice aggregate, so the same flex algo definition can be used by multiple slice aggregates. This is a good point — other proposals are advocating a flex algo definition per slice, or per slice aggregate, whatever the term would be.
O
The second point is that the IGP topology can be shared among multiple slice aggregates — it can be, and that will allow better scale. The computed path — you know, the result of the optimization criteria — can also be shared by multiple slice aggregates. Next slide, please.
O
So this is an example where a packet is carried with a top SID that is associated with a slice aggregate. I have an SA1 packet with a top SID of 1605, and an SA2 packet with 1705; both of them are destined to node 5. On any transit node — let's assume it's node 3 —
O
we have a slice selector, which will put the packet, based on the top active segment, into the respective queue, and then there's a hierarchical queue there where you can apply differentiated treatment based on the EXP as well. But at a high level, what I want to share here is that the top active segment is the one that is used to identify the packet, or associate it with the slice aggregate. Next slide, please.
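A sketch of the selector just described: the top active segment picks the slice-aggregate queue, and the packet's traffic-class bits can then pick a child queue inside the hierarchy. Names and shapes are assumed for illustration:

```python
def select_queue(packet, sid_to_aggregate):
    """Map the top active SID to a slice-aggregate queue, then use the
    traffic class to pick the child queue inside the hierarchy."""
    aggregate = sid_to_aggregate.get(packet["labels"][0], "default")
    return f"{aggregate}/tc{packet.get('tc', 0)}"


# Illustrative mapping: SIDs instantiated per slice aggregate.
mapping = {1605: "sa1", 1705: "sa2"}
q = select_queue({"labels": [1605, 1600], "tc": 3}, mapping)
```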
O
Next hops are computed as usual by the IGP, based on the topology and the optimization metric. The forwarding treatment, as I mentioned, is part of the slice policy definition that we install. And in terms of scale impact here, we are reusing the SIDs, so multiple slice aggregates can use SIDs that are already defined, or instantiated, in the network; they are not specific to a particular slice aggregate.
O
The FADs can be shared by multiple slice aggregates, based on the topology requirement. Again, the second bullet is about the topology being shareable, and the last point is that the computed path, as a result, can be shared by multiple slice-aggregate packets, depending on the top active segment. Next slide, please.
O
Here's an example where we have a packet destined from, let's say, node 1 to node 5, and you'll notice the top active segment for the SA1 packet and the SA2 packet both have the same segment routing SID, 1705. The selector in this case is carried in a different field in the packet; I did not put in the details, because there are multiple proposals there.
V
Hi, this is Bo from Huawei. On multiple SIDs for the same topology: using these SIDs as an ID is also a slice extension in the data plane. This solution was proposed in the SR for VPN+ and VTN drafts in 2019 and early 2020, and practically the slice aggregate ID is the same as the VTN ID. I think there's no need to support—
O
Yeah, my feedback on this: we are aware of this VTN proposal. We did talk to the authors and we had a chance to talk about it; there is an email thread that was started. We think the slice aggregate has some key differences from a VTN, and we would like to take that discussion further. If there is any alignment that could happen, we would welcome it — we're happy to do it.
O
Okay, so — I acknowledge the first part; we will follow up on the second part. There are some key differences between an SA SID and the resource-aware SIDs, in that at one time you were not proposing extensions to associate a SID with a slice aggregate, or a slice.
T
Okay, so my question is: when you use an MPLS label to carry the slice ID, can you comment on the backward compatibility? Like, if you have a device in between the two nodes that are participating, what happens? Do you need all nodes to be upgraded?
O
Depending on the solution — there are multiple solutions that we are presenting, depending on which working group you mean — we would like the solution to be backward-compatible, such that for a slice-aggregate packet traversing a node that doesn't support that capability, the packet will still be forwarded; it might not get the forwarding treatment that it's after, but it will not be dropped.
E
Just one comment: we had two presentations, if time allows — for your information, they are scheduled in other working groups, so you can have a look; for example, the associated channel over IPv6.