From YouTube: IETF115-PALS-20221109-0930
Description
PALS meeting session at IETF115
2022/11/09 0930
https://datatracker.ietf.org/meeting/115/proceedings/
A
So why is this not... it's not sharing on this end here. Right, so the usual note. Well, you have seen this many, many times by the time you've been here. Just remember that... yeah, I can see that; I can't see it on here.
A
Just remember that anything you say... why has that disappeared?
A
Meeting tips: I'm sure you've read this. The most important one seems to be to keep your mask on, and don't forget to get into the active queue rather than just line up at the microphone. So, the purpose of this meeting: this is a joint session of PALS, MPLS and DetNet, and it's called to discuss the basic architectural issues and solution proposals arising from the need to improve MPLS support for new applications and uses.
A
So the agenda is: there will be a report on the open design team work, and then we will look at some updates on the requirements, the MNA header, and the IOAM encapsulation.
A
And then there are two DetNet drafts that we've been requested to include on the agenda, because it is felt that they need a wider community review, and so we're going to put them here in this joint meeting. And then there is some open microphone time to discuss issues of interest.
A
However, breaking news, because we only realized this last week, and many of us are embarrassed by it: we don't believe that the solution properly describes how to do non-IP payloads and how they will be carried over an MNA-enabled LSP. So we're proposing to keep the open design team Thursday sessions running for, hopefully, a short while longer, until we have a consensus position on how we're going to do payloads other than IP.
A
So we've done the chairs' introduction. Tariq, you're on next with an open design team report.
A
Then we have a bunch of MNA-specific drafts, and then we'll go to the two DetNet drafts, and there is a bunch of useful resources that I suggest you consult at your leisure. Oh well, let me change the deck. So, Tariq, over to you, please.
D
Okay, great. Hi, my name is Tariq, and I'm going to report to you about the MPLS Network Actions open design team activities.
D
This is report number four that we give, and obviously the work is the product of the MPLS MNA open design team. I don't have the controls, so you need to help me a little bit with the slides.
D
Move to the next one: yes, please. So, a bit about the design team itself: it's a joint activity between three working groups, MPLS, PALS and DetNet.
D
We meet on Thursdays at 11 A.M. Eastern time. As of now, the open design team chairs also meet weekly, on Tuesdays.
D
We have good participation, around 15 to 20 people on a good day. A compilation of all the MNA documents is given at that link; there are quite a few documents that were produced. Next slide, please.
D
Yes, this is good. So I'll go over the open design team working group documents and give an update on each very quickly. Some of these documents are on the agenda today, so the authors will give a detailed update on them. Let me start with the first one: the use cases for MPLS Network Action Indicators and Ancillary Data. The status of this document:
D
It was adopted back in May, and we added new use cases recently: IOAM direct export and generic function delivery in MPLS. The working group and the design team continue to refine this document, adding or updating the existing use cases.
D
The authors have addressed another round of comments, which this time came specifically from Adrian, and as of now the document is in a stable condition and is a candidate for progressing further. Next slide, please.
D
The state of this document: again, it addressed discussion points specifically raised in the weekly open design team meeting, and we will go over those in more detail in subsequent slides. There are currently no outstanding issues reported by the authors, and this document can be a candidate for working group last call and can progress further. Next slide, please.
D
So the first update we want to report since last time is about the competing MNA solution proposals. There were multiple proposals that the design team reviewed, specifically for the packet encodings for the MNA solution; again, I'm giving the link where all the proposals are compiled. The open design team chairs encouraged the authors to meet, discuss, and bring forward a converged solution.
D
That was one option. The other option is to bring about a new unified solution; maybe this unified solution can cherry-pick pieces from the different proposals.
D
Now, the authors of the competing MNA solutions reported back to the open design team chairs that progress has been made on a converged set of MNA solution documents. That's the status we have, so we're going ahead with the assumption that the authors are collaborating and bringing about a converged set of MNA solutions.
D
This need for an order of evaluation needs to be articulated somewhere, and the conclusion was to add it to the framework document; revision one reflected this agreement. Section 4.1 gives the details, which I will not go through word by word, but I'll leave it for people to read offline at their leisure. Next slide, please.
D
The third update was about the MPLS network action scope. Again, this topic was triggered in the design team weekly meeting. Additional text for the MNA framework document was proposed to generalize the scope. We ran a poll on this text to solicit support for it, and we concluded the poll with good support.
D
The next update was a poll on the different implementations of MPLS forwarding characteristics.
D
The poll was run for four weeks, and the responses were anonymously collected and reported in a draft that Adrian compiled; I'm leaving that link for reference. Next slide, please. So, about the next steps that we have on the table for the design team, which Stewart had mentioned:
D
There was an intention of continuing forward as normal, but then it's a question for the design team, as well as the working group, of how frequently, and whether as a recurring meeting, the design team needs to meet going forward.
D
So that's something we have to close on. The next thing, also touched upon by Stewart earlier, is the discussion of non-IP payloads in an MNA packet.
D
So that's another thing we have to discuss in the design team, and then progress the solution documents that we have converged on; that would be the last bullet that I'm now showing. This is it; this was the report from the design team. I'm happy to answer any questions that I can have an answer for.
E
Thank you. So, this draft is co-edited by myself, Stuart, and John Drake. Next slide.
E
So, just a brief update on where we are with the MNA requirements document. This document captures the key requirements for MPLS Network Actions that affect forwarding or other processing of MPLS packets. It's broadly structured as general requirements, requirements on the sub-stack indicators, requirements on the network action indicators themselves, and requirements on the ancillary data. These have mostly been derived by looking at some of the solution proposals for these additions to the MPLS label stack, and also based on the use cases, some of the other discussions that have gone on, and the feedback we've had through these regular open design team meetings. And just to reiterate: these are requirements on the protocol design, not on implementations. Next slide.
E
So we had a last call sometime around the last IETF, which we went through. We addressed the comments following the working group adoption; there were so many comments that we put them into an appendix in the draft, and then we worked our way through them in subsequent revisions. That appendix has now been removed. We also had some very detailed comments from Adrian Farrel, which we very much appreciated. Thank you; I don't see him in the room, but thank you for those.
E
So we think, or I hope, we've addressed those. We also renamed the draft to "Requirements for MPLS Network Actions" to be more concise, and we've been doing some work to align the terminology and concepts as they've evolved in the MNA framework draft. Next slide.
E
So, the next steps: we've reviewed several versions of the draft line by line in the MPLS open DT meetings. We think we may need to refine or add new requirements to this draft, depending on the outcome of the discussion on support for non-IP payloads with MNA; that's things like pseudowires, EVPN, and so on, and DetNet.
E
So please review the draft and post comments to the MPLS list. I know Tariq mentioned this may be a candidate for working group last call, but as editors we think maybe not until we've actually had some discussion on the situation with non-IP payloads and whether there are any significant updates needed to this document.
F
Hi, Chidin from Huawei. Actually, I also made some comments on the previous version. I need to check the update, but I'm not sure whether all of them have been resolved or not, and since the issue list has been removed, do we have some other way to track the open issues with this document?
A
Well, sure, but I think the question is whether we mistakenly missed some of his comments.
A
But it would be nice if she could also take a look, in case there is some sort of, you know, blindness going on on our part. Yeah, okay, okay. Is that it? Thank you very much.
H
Okay, hello everyone. I'm from Cisco, and today I'm going to present the solution described in our latest Jags draft, on behalf of the authors and co-authors. Next slide, please.
H
So we had substantial contributions from a lot of people, so I would like to flash their names. Next slide, please. These are the abbreviations, for your reference, that are frequently used in our presentation and talk; those are being displayed.
H
Today we are going to discuss the scope and the high-level view of our solution, some of the reserved network action opcodes used to build our solution, and then discuss network action ordering, backward compatibility, and the advantages. Next slide, please. The scope of this document is to provide a solution for the MNA encoding format carried in the MPLS label stack, complying with the MNA framework.
H
Okay, so the Network Action Sub-Stack mainly consists of the MNA label and the network action sub-stack parameters, which are common and applicable to all the network actions encoded under that specific Network Action Sub-Stack. A single Network Action Sub-Stack could encode multiple network actions in it.
H
The first one is the MNA label, and the next one is the network action sub-stack parameters. The MNA label is a new bSPL value that indicates the presence of the MPLS Network Action Sub-Stack. The network action sub-stack parameters are the common parameters applicable to the network actions encoded in the sub-stack. Let's take a quick look at the parameters. The P-bit indicates the presence of post-stack network action data. And the IHS:
H
This is a two-bit value that indicates the scope of the Network Action Sub-Stack. The scope could be ingress-to-egress, hop-by-hop, or select. The NASL is nothing but the network action sub-stack length: a four-bit value that indicates the length of the Network Action Sub-Stack in number of LSEs. We have some reserved bits to be used in the future, and the last bit is the O-bit, the ordering bit: in some cases, network actions may be required to be processed in order.
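The parameter fields described above (P-bit, two-bit IHS scope, four-bit NASL, O-bit) can be sketched as a simple bit-packing exercise. This is only an illustration of the field widths mentioned in the talk; the relative bit positions and the function name are assumptions, not the draft's actual LSE layout.

```python
def pack_nas_params(p_bit: int, ihs: int, nasl: int, o_bit: int) -> int:
    """Pack the sub-stack parameter fields into one byte-sized integer.

    Widths follow the talk: P is 1 bit (post-stack data present), IHS is
    2 bits (scope: 0 = I2E, 1 = HBH, 2 = Select), NASL is 4 bits (sub-stack
    length in LSEs), O is 1 bit (ordered processing required).
    Bit positions here are illustrative only.
    """
    assert p_bit in (0, 1) and o_bit in (0, 1)
    assert 0 <= ihs < 4 and 0 <= nasl < 16
    return (p_bit << 7) | (ihs << 5) | (nasl << 1) | o_bit

# Example: post-stack data present, hop-by-hop scope, 3 LSEs, ordering required.
params = pack_nas_params(p_bit=1, ihs=1, nasl=3, o_bit=1)
```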
H
So network actions are encoded in a TLV format, as I described before. The NA opcode is the type of the network action. Optionally, a network action could carry ancillary data: the ancillary data acts as the value, and the network action length acts as the total length of the network action encoded, which includes the ancillary data. Some of the opcodes are reserved to create the basic building blocks of the in-stack MNA solution, and the rest are available for application use.
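The TLV shape just described (opcode as type, length covering optional ancillary data as the value) can be sketched as follows. One-byte type and length fields are an assumption for readability; the draft actually packs these fields into label stack entries.

```python
def encode_network_action(opcode: int, ancillary: bytes = b"") -> bytes:
    """Encode one network action as type/length/value, as in the talk:
    the NA opcode is the type, the length counts the ancillary data,
    and the (optional) ancillary data is the value.
    """
    assert 0 <= opcode <= 127  # 7-bit opcode space: maximum value is 127
    assert len(ancillary) <= 255
    return bytes([opcode, len(ancillary)]) + ancillary

# A flag-only action carries no ancillary data; others carry a value.
wire = encode_network_action(5, b"\x01\x02")
```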
H
There is an in-stack opcode IANA registry by which applications can allocate an opcode for themselves. In case an application needs to carry ancillary data, it needs to define the data format and length that will be carried in the ancillary data. And we have UOH: unknown opcode handling. Not all opcodes can be implemented on all nodes, so a node may find an NA opcode that it cannot understand.
H
So here I actually want to describe a little bit more about the scoping. As we described before, each sub-stack could belong to one of the scopes: I2E, HBH, or select. A separate Network Action Sub-Stack for each scope makes it easier for the intermediate nodes to process the hop-by-hop or select options. So a packet could carry all three scopes simultaneously.
H
The P-bit in each sub-stack will be set with respect to the scope of the post-stack data that is encoded. That is, if a Network Action Sub-Stack has a scope of hop-by-hop and the P-bit is set in the network action sub-stack parameters, then it means that the packet is carrying post-stack network actions with hop-by-hop scope.
H
Okay, so here are some of the reserved network action opcodes, which I'm going to describe; these are used as the building blocks of our solution. The NA opcode value 1 we have reserved for carrying the post-stack data offset.
H
So this will indicate the starting offset of the post-stack action header from the bottom of stack. In some cases there is G-ACh or L2 information, or other non-IP information, encoded between the MPLS stack and the data; even those kinds of scenarios could be accommodated by using this offset, so that post-stack network actions could be encoded not necessarily right after the bottom of stack, but even after some offset.
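The offset mechanism just described amounts to simple pointer arithmetic: start from the bottom of stack and skip any intervening non-IP data. A minimal sketch, where the function name, the byte units, and the parameter names are all illustrative assumptions rather than the draft's definitions:

```python
def psd_start(packet: bytes, bos_end: int, psd_offset: int) -> int:
    """Return the byte index where the post-stack data begins.

    bos_end is the index just past the bottom-of-stack LSE; psd_offset is
    the value carried by the reserved opcode 1. An offset of 0 models the
    default case where the post-stack action header immediately follows
    the label stack (nothing like a control word in between).
    """
    start = bos_end + psd_offset
    assert start <= len(packet), "offset points past the end of the packet"
    return start

# Example: label stack ends at byte 8; 4 bytes of other data sit in between.
where = psd_start(bytes(20), bos_end=8, psd_offset=4)
```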
H
So we have a separate IANA registry for applications to allocate their offset values. Next slide, please.
D
Sorry, I'm interrupting. I just want to raise attention that there are questions in the queue. Would you like to take them now or towards the end?
H
So it contains two fields: a bitmap, and the ancillary data corresponding to those opcodes. Opcode value 4 is reserved to help maintain the ordering between the in-stack and post-stack network actions.
H
In this example, we want the node to process the post-stack NAI 6 before processing the in-stack opcode 8. So when the ordering is mandated, the O-bit in the network action sub-stack parameters must be set, which indicates that we need to keep the ordering, and this new opcode 4 is going to say: execute this post-stack NAI 6 before executing my in-stack opcode 8. Next slide, please. Thank you.
H
So here we reserve the opcode value 126 to fill in the unused 20 bits. In some cases we won't carry in-stack data, only post-stack data; in those cases this acts as a filler for the 20 bits of the label field, and the P-bit will be set to 1.
H
So currently we have seven bits, so the maximum opcode we can have is 127. For future expansion we reserve the value 127, so that in the future, if we need to allocate more than 127 opcodes, 127 could be used for allocating extension opcodes. Next slide, please.

Let's talk a bit more about network action ordering. In some cases, the node that encapsulates the MNA expects the other nodes to process the network actions in a certain order. The example below provides the framework for maintaining the order of processing the network actions. The O-bit is one of the main building blocks of the ordering construct: it must be set to indicate that the mid nodes must maintain the order of processing the network actions.
H
Processing the network actions in order is a complex process, especially ordering between in-stack and post-stack network actions, so we can't expect that all the intermediate nodes can support this kind of complex ordering process. Based on the node's capability, the node could drop the packet if it does not support the ordering. There can be multiple types of ordering: ordering between only in-stack network actions, between only post-stack network actions, or between in-stack and post-stack network actions.
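The mid-node behavior described here (honor the O-bit or drop) can be sketched in a few lines. This is only an illustration of the decision logic from the talk, not the draft's specified procedure; the function and its return values are assumptions.

```python
def process_actions(actions: list[int], supports_ordering: bool, o_bit: int):
    """Sketch of a mid node handling a sub-stack's actions.

    If the O-bit mandates ordered processing and this node's ASIC cannot
    honor it, the node drops the packet rather than mis-order the actions.
    When O = 0, the node is free to process in any order (modeled here by
    sorting the opcodes, purely for illustration).
    """
    if o_bit and not supports_ordering:
        return "drop"
    order = list(actions) if o_bit else sorted(actions)
    return [f"exec:{op}" for op in order]

# A low-end ASIC with no ordering support must drop an ordered sub-stack.
result = process_actions([5, 2], supports_ordering=False, o_bit=1)
```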
H
I just want to describe a few examples. In the first example, we wanted the in-stack opcode 5 to be processed before the in-stack opcode 2. In the second example, we wanted the flag-based NAI opcode 1 to be processed before the in-stack opcode 5, and the flag-based NAI opcode 0x22 to be processed after the in-stack opcode 5. So by ordering this way, we can ask the intermediate nodes to maintain the processing order. Next slide.
H
Yeah, so this is an example where the post-stack opcode 6 needs to be processed before the in-stack opcode 5. Here, the reserved in-stack opcode 4 indicates that the post-stack opcode 6 needs to be executed before the in-stack opcode 5.
H
So, the solution's advantages: the solution is more flexible, to encode network actions in the desired order; the solution is also extensible in the future; and the basic constructs are hardware-parser friendly. Next slide, please. So we have some comments and feedback among our authors and co-authors, so we are trying to address those, and some of the sections need more clarity in our document, especially the select scope and ordering. These are the things we are working on currently. That's it.
H
Thank you. I can take questions now.
G
First, okay, so I have one question. You claim alignment with the framework, but if I look at the abbreviation for Network Action Sub-Stack, you use NASS, while the framework uses NAS. I haven't checked the others, but there might be more. Is this something that will be updated?
H
Yeah, sure, actually we can update that to NAS. Sure, I'll take a note of it.
I
Let me start by saying the overall approach here is fine with me: I don't have a conceptual problem or a fundamental objection. But if I have understood the draft right, and I'm not sure I have, there are a bunch of detailed concerns. One is sort of general.
I
There are so many ways to do anything in this draft that when we get drafts that say "here's how to solve problem A or B", they say: well, you can actually solve it one way, or another way, or another way, or another way. Look, if we have to solve every problem four ways, we're not going to get interoperability; or we're going to have to have every implementation put in four different mechanisms to do the same thing. I hope not.
I
If I have to be able to process the things in order, because I may get something with the O-bit set, then I might as well code it to always process them in order, because that will be simpler than writing code that sometimes does them in order and sometimes juggles them up in the air and does them in some random order. It's as if we don't have a bit that has two different kinds of processing.
H
Yeah, can I answer? Sure.
H
Sure, so, yeah: if you see a network, right, we have a mix of ASICs in the network, so some of them are capable of ordering and some of them aren't. So if the encapsulating node expects some ordering to be maintained, then it can set the O-bit and say that this must be ordered, so that the intermediate nodes which don't support ordering can drop the packet.
H
So that is the use of this O-bit, basically: to support, you know, low-end ASICs which don't support the complex things.
I
Either some nodes won't support the O-bit, which would create one class of "whoops, that doesn't work", or it seems simpler to just mandate that the O-bit isn't there and ordering is always assumed. I mean, I just don't see any advantage. In fact, in the earlier discussion on the list, a lot of us said: just specify what order things must be done in, and then you can implement it. But okay, that's for the working group; I just wanted to point it out.
I
It was one example of: we have more ways to do things than seem to be useful. There's another, more basic problem that worries me, and maybe I'm worrying about something that isn't an issue. But let us assume we have an MPLS label stack with one or more Network Action Sub-Stacks and post-stack data.
I
So if all of them get popped off, we still have the post-stack data, and nobody knows there's still post-stack data there, and when you go process the packet: whoops, what happens? And what happens if the last node doesn't understand this new extension? Can we only use post-stack data in the case where the last node understands this new service, in which case we'd better not be depending on post-stack data? So there seem to be some interactions between the Network Action Sub-Stacks and the post-stack data, and cleaning things up.
J
At the beginning, probably, in the very beginning, yeah. Next, next, next.
J
Or, yeah, where the option one is explained; so probably the next one too.
J
So it appears that there are two ways, yes, two ways of indicating the presence of the post-stack data object.
J
First, at a very high level, there is the P-bit, which tells you that there is something in the post stack.
H
So, Greg, actually I think there's a little bit of misunderstanding here. The PSD offset is not to indicate that there is a presence of post-stack data. In general, we think that the post-stack data will come just after the label stack. In case that is not the case, if there is some data between the label stack and the post-stack action header, then how do we indicate that the post-stack data starts at this particular offset?
H
That is where the opcode 1 is used. The P-bit says that there is post-stack data, and then the offset says at which point my post-stack data starts. This is an optional opcode: in case you are encoding the post-stack action header immediately after the MPLS label stack, then we don't need this PSD offset at all.
H
If you see this figure, right, I have some other data, like a G-ACh or some other data, sitting in between my MPLS label stack and the post-stack data. Then how do we represent or encode that information as part of our header? So this is how.
J
We do it... well, as I understand, the G-ACh is not to be used in data packets.
H
Okay, but control words can be used, right? Excuse me, the L2VPN control words can be used, right? So in those cases, how do we inform the nodes?
J
But the control word is for pseudowire.
H
Exactly, so that's what I'm saying: it doesn't matter, right? This has to work with everything.
J
No, but if there is a control word, then the interpretation is that the control word is followed by the payload. Are you proposing to change that interpretation?
H
Of the control word? No, no. What we are saying here is that if there is a possibility that you want to add any such data after the stack, then we still use this kind of basic opcode to say where the post-stack data lies. By default, if this is not specified, it is just immediately after the...
J
Yes, and I sense that, since you brought the control word into the discussion, it definitely falls into what was identified as a gap that we missed to discuss: non-IP payload. Okay, let's do it.
A
Let's discuss that. So, Matthew.
E
So I'd just like to kind of second Joel's comment about what happens if you orphan the post-stack data: if you strip the stack somewhere and then you terminate it, maybe in a PHP case or other cases, tunnel-termination kinds of cases.
E
You know, the terminating PE essentially doesn't know what to do with that PSD, or doesn't even know it's there. So I think we need a bit more thought about that. Yeah.
E
Because it could look like IP, potentially; you know, this is a problem we tried to get around with control words. So we need to think a bit more about that, because we'd have a general intention not to break anything that's in the network today: even if it's not necessarily standard behavior, it's common behavior.
H
Matthew, actually, when you talk about the post-stack data which needs to be executed at the egress point, right, that's what Joel was also pointing out. We have a slide, number 20; can you just go to slide number 20? That's it; 22, I think. Yeah.
H
Yeah, so this is the exact case you are talking about, right? I want my egress node to understand the post-stack data. So, obviously, we think the Network Action Sub-Stacks and the MNA stack should be there, and this makes the packet reach the egress, which sees the MNA header indicating that there is a presence of post-stack data, and that's going to be processed on the egress PE.
M
Networks. Can you go to slide 11, please?
M
So, in the first illustration here, if I have multiple flag-based NAIs with ancillary data, how do I specify the scope? Do I have to rely on the one on the top? How do you specify the scope for flag-based NAIs?
M
The registry seems to have disappeared. Is there any guidance on how to specify that?
H
So, this: is it the opcode 3 you're specifically talking about, or are you talking about the NA flags in general?
A
I think this is definitely... I think I'm learning two things here. Firstly, that we definitely need to continue this discussion; we really need to, you know, maintain our momentum of weekly meetings, because there have been so many discussions on this draft.
A
Thank you. So I guess we're on to the next slot, which is data plane encapsulation for in-situ OAM. Isn't that next?
A
Who's presenting this? You might want this; I only got one there.
N
Good morning, everyone. My name is Rakesh Gandhi, from Cisco Systems, and I'm presenting the MPLS encap for IOAM on behalf of the authors. Next slide, please. The next slide, please. So the agenda is to look at the requirements and the scope of this work, the summary of the procedures and extensions, and the next steps. Next slide, please.
N
So the requirement is simple. There is quite a bit of work that's been done in the IPPM working group: there's RFC 9197 now for the IOAM data fields, and there is also another draft for the direct export. They define trace points, and the requirement here is to carry them with MPLS encapsulation.
N
We definitely want to do edge-to-edge as well as hop-by-hop processing, and the scope is the MNA work: the framework, the Jags draft, and Song's draft. These are all the drafts that are normative for this work. Next slide, please. So, the history of this draft: we started this work a while ago, and it has gone through multiple iterations.
N
We had the MPLS-RT expert review as well. We had gone to a G-ACh-based approach as well, and now it's aligned with MNA, so it's using the MNA encoding as well as the post-stack extension header from Haoyu's draft. These are the solutions that we are discussing in the working group.
N
So this is using the Jags draft: there is an MNA label, the bSPL, and if you want to put IOAM in the post-stack data, then the P-bit is set. The IHS is the scope of it; it basically says whether we process hop-by-hop, or end-to-end, or on select nodes. And then there are other bits, like the length and the O-bit and whatnot, that come from that draft. Next slide, please.
N
So, regarding the scope: the P-bit says that there is IOAM, and the scope says where it should be processed. This is exactly how it is in the draft. Next slide, please.
N
So this is the encoding using Haoyu's draft for the post-stack extension header. There is a common header: it says how many extension headers there are, the length, and the next header. What this draft is defining is the IOAM next header; basically, the extension header would carry an IOAM option type, which is defined in RFC 9197 and other IPPM working group drafts.
N
So if you want to have multiple IOAM trace points: there are use cases where you may want to do some tracing hop-by-hop, but maybe only collect end-to-end latency. So we would put multiple extension headers, with their IOAM option types defined in IPPM. The hop-by-hop one should be higher, so it can be easily accessed, and the end-to-end one would be at the bottom. Next slide, please.
N
So this is just a big picture of how everything put together would look: you have the in-stack MNA and the post-stack IOAM data. So this is the big picture of how everything fits together. Next slide, please. So, in the case of end-to-end, there are option types defined for end-to-end in RFC 9197; if you're doing end-to-end, that's what you would use.
N
Needless to say, the decap node must support MNA in order for this to work. There was a question on what the decap node would do: would it just pop the MNA, and maybe not the post-stack? It would remove the post-stack as well, because the P-bit is set, right? So it will detect: I need to remove the MNA, but the P-bit is set, so I also need to remove the IOAM if I'm doing decap.
N
With the hop-by-hop case, there are trace points defined just for that, so we would use those, and again the procedure is the same, just the trace points are different, except that now it gets processed on each hop, and the scope is set accordingly as well, so that the MNA will tell the midpoint that it needs to process this. Next slide, please.
N
So we welcome your comments and suggestions, and I think we do have a few normative works that are happening, so once those are adopted, then we would request that this also be adopted.
J
Yes, Greg Mirsky, Ericsson. So, as I understand, the proposal is to use the pre-allocated and incremental IOAM trace options, collecting data in the post-stack.
N
So this one is about encapsulating it; those trace points are defined in IPPM, in RFC 9197. You're asking how you would carry all the trace points in MPLS, and this is a way to carry them. There is no recommendation on which one should or should not be used for MPLS.
J
I think that's actually my question. Okay, so according to the IOAM RFC, the pre-allocated and incremental trace options are to collect operational state and telemetry information in a data packet. So my question is: in an MPLS packet, if the incremental and pre-allocated trace options are used, where is the data collected?
N
So we expect that the pre-allocated option is easier to implement. So, wherever that post-stack extension header is, where the IOAM data fields are defined, this is where the pre-allocated data will be, and it will be updated to carry the timestamp, interface, and whatnot.
J
Okay, so relative to the payload of the MPLS packet, where will the collected data be?
J
So before the payload. Okay, so if, for example, the pre-allocated space is 1K, then how much of the payload will it be able to carry?
N
I mean, these are implementation and deployment details of what the hardware can do and how big a pre-allocated area it can support.
N
In theory you can say 64K, but hardware may be able to do only 64 bytes. So it's a question, when it gets implemented, of what trace points to implement and what you want to capture. It will be a hardware capability thing. Yes.
J
And what's the impact of these IOAM modes on a DetNet-over-MPLS data plane?
N
Yeah, so the draft doesn't discuss that, and it is something we need to discuss and cover. Okay.
J
And so for the incremental option: as I understand it, incremental mode means that each node that is to add operational state and telemetry information must add additional space to the packet. That means it needs to push the payload and rewrite the packet. So what's the performance impact?
N
So this is coming from IPPM RFC 9197. It is definitely understood that not much hardware can implement that. So if there is a need for it, we can explain that there are implications if you're doing incremental, because the hardware needs to move things around and must be capable of it. But yes, when you implement it, those are the challenges you're going to face.
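The performance concern being raised can be seen in a minimal sketch of the incremental mode: every hop inserts its own data, so the packet grows and the hardware must shift the payload on each hop. This is an illustration of the behavior under discussion, not the normative encoding.

```python
def incremental_add_hop(packet, node_data):
    # Incremental trace: each node *inserts* its data after a (simplified)
    # 4-byte option header, growing the packet. Shifting the payload on
    # every hop is the hardware cost discussed above.
    header, rest = packet[:4], packet[4:]
    return header + node_data + rest
```

Contrast with the pre-allocated mode, where the slot space is reserved once at encapsulation and the packet length stays constant end to end.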
J
It appears that, even though the base RFC out of the IPPM working group and IOAM defines a number of different modes, we need to decide which modes are most applicable to the MPLS data plane, because not all of them are necessarily equally applicable. For example, we have BFD, which has been defined with several modes, and only one of them has been applied to BFD over MPLS LSPs.
J
A
J
We can do it. You know, I agree with Rakesh. Once we have a settled encapsulation solution, then we can have this discussion in the course of adopting this draft.
G
Yes, yes, addressing the request for a working group adoption: I think the document is pretty mature. However, it relies pretty heavily on the Jags draft, so I would actually like to go ahead with the Jags draft first, before pushing this for a working group adoption poll.
N
Yeah, it also relies heavily on Haoyu's draft as well for the post-stack extension headers, so yeah, we need to stabilize the in-stack and post-stack first, get those adopted, and then come to the use cases.
A
N
G
B
Kireeti Kompella. So I agree; I think we finally came to the same conclusion. There's a separate discussion on, you know, what one can do with IOAM, what the implication is on forwarding performance and so on. I think what you're talking about is how we encode it if you have an MPLS stack, and Greg, you have a good point that you might want certain modes but not all modes; that part is relevant to MPLS, but the actual IOAM stuff is elsewhere. So.
I
A
A
O
Haoyu Song from Futurewei. Yeah, actually I'd like to make a similar comment to the one Kireeti made. I think the solutions are not just applicable to the IOAM trace; there are many other different options, and they can all be supported by the same framework. But as for the real implementation, I think it's up to the implementer to decide which option they actually use in their design. Right.
A
Now for a few minutes we move on to the BFD RDI work. So who's presenting this, please? Yeah.
Q
Okay, this is Tony Juan. Can you hear me? Yes? Okay, let me begin. I'm from Huawei, and here in the joint meeting I want to introduce our new draft on DetNet OAM, namely a BFD extension for the DetNet remote defect indication. This is joint work with Geng and Tianran, and it is submitted to the IETF for the first time. Next slide, please.
Q
DetNet provides reliable service for data flows, with extremely low packet loss rates and bounded end-to-end delivery, by dedicating network resources such as link bandwidth and buffer space to DetNet flows within a network domain. As listed in RFC 8655, DetNet has three strict QoS requirements. First, compared to a traditional IP network, which neglects latency and leaves it to the transport or a higher layer, IP adopts best-effort delivery, and that aggravates the situation for traffic that requires deterministic bounds on end-to-end latency. Second, DetNet could operate over an underlay network.
Q
It applies service protection to eliminate loss, since it requires a strict packet loss ratio. Third, a packet ordering function is applied to preserve order in DetNet, as it doesn't tolerate much out-of-order packet delivery. Next slide, please. Any violation of quality of service should be quickly reported, and in detail, so DetNet OAM requires quick defect detection and remote defect indication, which we call RDI for short, and Bidirectional Forwarding Detection.
Q
Q
So let's have a closer look at the DetNet-specific defects. We need detection technology, but that is out of scope of this draft, and we need methods for reporting RDI. For latency and out-of-order delivery, they are not well defined, and for packet loss, BFD provides insensitive methods, which is not suitable for DetNet OAM. So.
Q
Q
So the next step is how to achieve RDI, and we extend the current BFD protocol, as it provides a Diagnostic code field in its control packet. Other methods to carry this information are also welcome, and we can discuss further. In the Diagnostic field, values 0 to 8 are assigned in RFC 5880, 9 in another RFC, and the others are reserved for future use. Next slide, please.
Q
So we follow a similar method, appending DetNet-specific error codes to indicate three main violations against the DetNet QoS guarantees for the DetNet service, and further to help failure localization. They are: packet disorder ratio limit reached, packet latency limit reached, and packet loss threshold limit reached.
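The Diagnostic code space being discussed can be sketched as follows. Values 0 through 8 are assigned by RFC 5880 and value 9 by a later RFC; the three DetNet codes at the end are the draft's proposal and are shown here at hypothetical, unassigned values purely for illustration — they are not allocated by IANA.

```python
from enum import IntEnum

class BfdDiag(IntEnum):
    # Assigned values (RFC 5880, plus 9 from a later RFC)
    NO_DIAGNOSTIC = 0
    CONTROL_DETECTION_TIME_EXPIRED = 1
    ECHO_FUNCTION_FAILED = 2
    NEIGHBOR_SIGNALED_SESSION_DOWN = 3
    FORWARDING_PLANE_RESET = 4
    PATH_DOWN = 5
    CONCATENATED_PATH_DOWN = 6
    ADMINISTRATIVELY_DOWN = 7
    REVERSE_CONCATENATED_PATH_DOWN = 8
    MIS_CONNECTIVITY_DEFECT = 9
    # Hypothetical values sketching the draft's DetNet RDI proposal;
    # these code points are NOT assigned and are placeholders only.
    DETNET_DISORDER_LIMIT_REACHED = 26
    DETNET_LATENCY_LIMIT_REACHED = 27
    DETNET_LOSS_THRESHOLD_REACHED = 28
```

The Diagnostic field is 5 bits wide, so any new codes would have to fit in the remaining 10-31 range, subject to the IANA allocation concerns raised in the discussion that follows.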
I
Q
Q
K
K
K
Comment number one is that the things you're trying to detect for DetNet purposes, packet order issues and latency, can themselves disturb the BFD session, and your timers would have to be much longer, you know, to survive that. So this probably makes BFD not a good fit for your solution.
P
K
K
Now, we've had many discussions about this, but I also wish to point you to the additional work Greg Mirsky has, taking the BFD state machinery and proposing to carry very similar state that is explicitly intended to carry something like RDI. So my recommendation, one, is that this probably would not succeed in getting code points from BFD, but you could probably work with Greg to actually advance your work.
R
Actually, I have nothing to add to the previous speaker; I was going to ask the same question and make the same comment. In any case, there is a recent erratum on RFC 5880 that says the Diagnostic code must be set to zero every time the session reaches the Up state. So I presume this means that, with the current state machine, the Diagnostic code should either always be zero or simply be ignored when the session is up.
C
C
The agenda includes the requirements, a solution overview, the DetNet echo request and reply extensions, the definition of the DetNet capability discovery objects, and an encapsulation example for the data plane. Next slide, please.
C
As per the DetNet OAM framework draft, DetNet OAM must support the discovery of DetNet relay nodes in the DetNet network, must support the discovery of the locations of the packet replication, elimination, and other PREOF sub-functions, and must also support the collection of DetNet-service-specific information from DetNet relay nodes. Next slide, please.
C
C
In our new draft, we introduce DetNet capability discovery objects to discover the DetNet capabilities of relay nodes. These objects should be included in the DetNet echo request and reply packets, and they comprise the DetNet capability metadata. An abstract object header has the corresponding format depending on the specific type of DetNet data plane. The format of the capability discovery objects is shown below. Currently, four kinds of objects are defined, including the DetNet relay node identification object, the DetNet service protection function object, and the DetNet service flow information object. Next slide, please.
C
When initiated, the node starts a standard ping to discover the DetNet capabilities of each DetNet relay node. The initiator node could send a DetNet echo request that includes the DetNet capability objects, indicating that a set of DetNet capability information is requested, like the service and forwarding sub-layer capability, as well as the incoming and outgoing flow configurations. The initiator node will send the DetNet echo request repeatedly, increasing the TTL by one each time, so all the relay nodes along the path of the DetNet flow will be reached.
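The TTL-incrementing procedure described here works like a traceroute-style sweep and can be sketched as follows. The `send_request` callable is a hypothetical transport hook standing in for sending a DetNet echo request with the given TTL and returning the replying node's capability object (or nothing past the egress); it is not part of the draft.

```python
def discover_relay_capabilities(send_request, max_hops=8):
    # Sweep the path: send the echo request with the capability-discovery
    # object at TTL 1, 2, 3, ... so each successive relay node along the
    # DetNet flow's path expires the packet and replies with its
    # capabilities. Stop when no node answers (past the egress).
    capabilities = {}
    for ttl in range(1, max_hops + 1):
        reply = send_request(ttl)
        if reply is None:
            break
        capabilities[ttl] = reply
    return capabilities
```

Each collected reply would carry the capability objects described above (service protection support, flow configuration, and the node identification object).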
C
When a relay node receives a DetNet echo request with the DetNet capability objects and can parse the detail, it replies to the initiator node with a DetNet echo reply including a DetNet capability object indicating its correlative DetNet capabilities, and the DetNet relay node identification object carries its identification. Next slide, please.
C
Three types of DetNet relay node identification objects are defined. For the MPLS data plane, a relay node is identified by a 20-bit node ID, while IPv4 and IPv6 nodes are identified by their IP addresses. A flag field specifies the service operations the node is capable of, including initiation, termination, or relay of the specific service. Next slide, please.
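The three identification forms just described can be sketched with a small decoder. The object-type values here are hypothetical placeholders for illustration, not values from any registry, and the layouts are simplified.

```python
import ipaddress

def parse_node_id(obj_type, payload):
    # Illustrative decoding of the three relay-node identification forms:
    # an MPLS 20-bit node ID, an IPv4 address, or an IPv6 address.
    # Object type values 1/2/3 are assumed for this sketch only.
    if obj_type == 1:
        # 20-bit node ID carried in 4 octets; mask off the upper bits.
        return int.from_bytes(payload[:4], "big") & 0xFFFFF
    if obj_type == 2:
        return str(ipaddress.IPv4Address(payload[:4]))
    if obj_type == 3:
        return str(ipaddress.IPv6Address(payload[:16]))
    raise ValueError("unknown node identification object type")
```

A receiver would dispatch on the object type carried in the abstract object header mentioned earlier.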
C
Also, a DetNet relay node capable of service protection could encapsulate the service protection function object in the DetNet echo reply packets, describing the supported sequence number lengths and the service protection functions of packet replication, elimination, and order preservation. Next slide, please.
C
C
C
C
A service flow of the IPv4/IPv6 data plane is identified by the six-tuple of the IP headers: that is, the source address, the destination address, the source port, the destination port, the protocol, and the DSCP, and optionally the IPsec security parameters index or the IPv6 flow label.
C
C
The capability discovery objects defined in this draft may be used in DetNet networks with different data planes and will have correlative formats to be compatible with them. As an example, the DetNet capability discovery objects could be encapsulated with the typical TLV header in place of the DetNet capability discovery header. Next slide, please.
C
A
So I think that's the point where we get to the open mic, and the original plan was to discuss whether we were ready to basically stop running the ODT, the open design team meetings. I think we've probably concluded that we are not yet in a position to do that.
A
D
We could use the remaining time to talk about the discussion of post-stack data and where it sits. There are some useful comments in the chat we could bring forward. That's up to the audience and the chairs, but it's a good topic to continue on, either now or later.
G
However, I would like, Stewart, if you could elaborate a little bit on what you said about the complexity and MPLS simplicity, to kick off the discussion.
A
A
A
So clearly we need to make space for that, but I am very worried when I see us move away from this vector-of-predefined-actions model to a model that requires us to pick apart components in the MPLS header and determine what we're going to do in what order. I really am worried that we are destroying our heritage, not because I have any sort of particular
A
nostalgia about it, but because it was a very effective method that was very hardware-friendly, and I am just feeling a bit queasy that we are sacrificing that, and I'm not quite sure where it ends and whether it ends in tears.
B
So I share your queasiness. I think there are a few features that we're putting into the new MNA approach, for example, processing at select nodes; I think that's something we want to put in there just in case, but we don't actually have a use case for it quite yet. The ordering of the things that you process: that's another thing that I think we're just putting in there thinking that it would be useful.
B
The efficiency of the representation, I think, has gone down, and so that means there's more to process. It's not like I have good answers for all of this, but I think that's a fruitful place for discussion going forward, and I will have those discussions. But you're not alone.
A
Okay. I mean, I'm not ready to be disruptive or anything; I would genuinely like us to make rapid progress. I just want to make sure that we make rapid progress in a way that doesn't destroy all this, you know, 20 years of really good work. Andrew, you look like you wish to take AD prerogative and jump in; I will put you next. Tony? No, no.
P
Thank you. It's gone past basic queasiness and broken out into outright nausea, so I agree.
P
There is no question that what we've created here is extremely complex, but once again, this is the result of the requirements that were put forth in our discussions, and arbitrary ordering seemed to be something that everyone wanted. If we can relax that, we can dispense with a great deal of this complexity.
P
A
That sounds like a really good agenda item for as soon as we start meeting on Thursdays, to see which ones. I mean, you won't get any pushback, right? I think, speaking for the requirements draft authors, we just want to do the right thing. Andrew.
S
So, Andrew from Liquid, speaking entirely in my own capacity and without any AD hats on. I can see how we can make all of this work, not a problem, right? But I have a concern, and this is where I almost echo what you said: because of the complexity here, we might only get all of this to work in the long term.
S
There's going to be a point where people are implementing this, new implementations and so on, and that complexity is such that I feel we could well go through a very buggy period because of it. The danger of that is that if you give something to an operator and it doesn't work, and it does have bugs because of the complexity and so on, you may steer them away from it, and once there, you've lost them; they'll never come back again. That's just the way it works.
S
A
S
Yeah, exactly, and I think that's where my concern comes from: that complexity, and saying that if we're not sure it's going to work, let's be very sure before we push it, because of its breakage. Yeah, we can kind of make it work later, but by that stage we've lost the operators; you're never getting them back.
B
B
How quickly are we going to use up the rest? Because there are all these functions that we wanted, and they were all initially looking for independent code points. We're now in a place where we have eight reserved labels left, and we'll burn one for this particular new MNA indicator, and that will encompass several functions, maybe 10, maybe 20, you know. So why don't we take the approach, since we're stepping into these new waters, of keeping the problem
B
much more constrained: remove, to Tony's point, which I want to echo, a lot of the requirements, and say, how does this work and what does the implementation look like? If we then come back and say, oh, we really needed that ordering, we burn one more. We have eight, and each of those can do multiple things. Once we've gone past the idea that one reserved label does one function, we don't have to boil the ocean with the first one.
B
O
O
I'd like to make several comments regarding the ordering issue. I think so far we still lack tangible use cases that ask for actual ordering, the use cases, the applications, so I think we had better first find such a solid case before we actually design a mechanism to support it.
O
And secondly, I see some people comment that we might make it simple just by making the ISD actions, the in-stack actions, be executed before the PSD, but I don't think that's a reasonable assumption, because the reason we put an action in-stack is not that logically it should be executed first or earlier.
It's
might
be
because
it's
just
don't
it
doesn't
ask
for
a
very
big
data
data
part,
because
we
just
simply
cannot
encode
a
lot
of
data
in
in
the
label
stack
if
it
requires
a
lot
of
ancillary
data
for
this
action,
we
probably
the
better
place
to
put
it
in
the
post
stack.
So
that's
the
reason
why
we
have
this
instack
and
post
stack
separation,
not
because
their
priority
or
importance,
but
because
the
data
they
required.
So
that
means
in
which
order
we
should
execute
them.
O
So that's my second point. Combined, I think we need to consider this further, because in many other scenarios, like the IPv6 extension header, people are also proposing new applications and new use cases, but nowhere else am I aware of people talking about enforcing some order in the protocol design. I think for each use case, the semantics of the use case are clearly defined.
O
When it should be executed and what the meaning of each data item is are clearly defined by the use case itself, so I think it's somehow better left to the implementer: when multiple actions appear in the same packet, they should automatically understand how to deal with them, not have it dictated by some ordering in the packet itself.
P
I'd like to respond to Haoyu's comment. We went over why we needed ordering in the open design team meetings, and we went over it several times. I urge you to listen to those recordings again; there's not much question there. The only question on the table, really, is whether we want to back away from that requirement at this point.
A
L
Adrian Farrel. So, to stretch the analogy a little bit further, I find myself one of the passengers on the bus, screaming as it heads towards the cliff edge. I like the people on the bus; they're my friends. I like the color of the bus. But I would really like to not go over the edge, and so, yeah.
L
I share Tony's nausea, I think, and I wonder if I've spotted a micro-loop. I'm looking at the abstract of the requirements document, and it says the requirements are derived from a number of proposals for additions, and it sounds to me that maybe what we are doing is talking ourselves into believing the requirements, as engineers who are building the solutions that address the requirements that we are, you know, round and round and round. And possibly, and I say this not having participated in the design team, possibly we're not standing back far enough when we look at the requirements, because we're saying "this is what we could do, here are our requirements" rather than "this is what we need to do."
N
A
So I'm going to modify the queue order. Let's let Matthew speak on the requirements thing, and maybe, Haoyu, you hang around in case he takes us to the same place. So.
E
Matthew Bocci, Nokia. Yeah, I agree, Adrian. There was a little bit of "where do we start with the requirements?" When we started with the requirements draft, it was "oh, let's look at the solutions that have gone into the open design team," which is a bit back-to-front.
E
To be honest, the requirements, in my view, should come from the use cases, not from the solutions; the solution is the result of looking at the requirements. But the problem was that the use cases weren't sufficiently documented at the time.
E
So maybe we need to go back to the use cases, go back to the requirements and redo them a bit, and look through what's really needed for the use cases we have. There was also a lot of discussion along the lines of "well, maybe we should do things because you never know; maybe we'll need it in the future, or maybe we'll be able to support this on future hardware," but it's very difficult to design for the unknown.
A
O
Yeah, some response to Tony: certainly I'm aware of the discussions in the open design team about this ordering issue, but I've never been satisfied with, you know, the cases or examples people raised. One of the examples was probably ordering between the network slicing and the IOAM applications, but I also explained
O
that actually it doesn't matter, because whatever data is to be collected by IOAM, the semantics are well defined; it really doesn't matter how you place these two extension headers or ancillary data, or what their order in the packet is. You are still required to process them properly. So in this case, I don't think that's a valid example, and yeah, you'd have to show me some other real examples.
O
P
I'm sorry, Haoyu, but there is no assumption we can make that you would automatically do it in the right order. It seems like if we say any implementation can do anything in any order, we end up exactly in the problem where you could do anything you wanted.
P
G
F
E
So I think one thing that might help, as well as simplifying the requirements based on the use cases, is maybe making the framework or architecture a little bit more prescriptive, a bit more layered, a bit clearer as to what exactly interacts with what in this. At the moment, I feel I have to go to the solutions to look at how all the bits work together, and that seems to be jumping too far.
E
A
A
E
E
In the design team's work we were quite prescriptive in the architecture, yeah, and maybe that's the sort of thing we could write that would help here.
A
Okay, and nobody else is in the queue. Do any of the chairs wish to make any closing remarks?
D
Thanks, everyone, for the good discussions, and everybody for attending.
B
A
Right, thank you. I thank everyone for their participation, and I think we've moved on a bit from where we were. Hopefully we will rapidly come to a conclusion, keeping in mind what we've learned getting this far on the journey.