From YouTube: IETF112-DETNET-20211110-1200
Description: DetNet meeting session at IETF 112
2021/11/10 12:00
https://datatracker.ietf.org/meeting/112/proceedings/
A: Okay, it's the top of the hour, welcome everyone. This is a session of the DetNet working group at IETF 112 — Lou Berger and myself, János Farkas, co-chairs of the working group — and we are thankful to Ethan Grossman, our secretary, for the great minutes he has provided us behind the scenes.

A: The agenda details are available at the usual place; links are provided on this slide as well. Next slide, please. It's a reminder that we operate under the IETF rules and policies captured in the IETF Note Well, so all the rules apply, and by participating here in the IETF you agree to follow these policies and procedures.
A: There's a list of BCPs highlighted here that drive our operations, and on the next slide we would like to draw attention to one of them, the code of conduct. It has been there before, and I think it's going well in the working group. Just a reminder that we are expected to behave towards our colleagues respectfully, with courtesy and professionalism. Next slide, please. Okay, Meetecho — yeah, you have found it.
A: You are here, that's good, and we also have the usual chat and Jabber combined in Meetecho available; the Jabber link is at the bottom as well. The blue sheets are automatic in this electronic meeting.

A: Please help with minute taking and make sure that your points in the discussion have been captured accurately. The agenda is also available online. I think that's enough for this slide — maybe the next one.
A: Okay, the sessions at this IETF — sorry for the typo, it is IETF 112 — we had a joint session with the PALS and MPLS working groups on Monday. This is important for us, because MPLS is one of our data planes, and the evolution of MPLS is being discussed in these joint working group meetings and also in regular weekly open design team meetings. If you are interested, join them too. There was a contribution presented there related to DetNet, and we discussed that one here at the DetNet session as well.

A: Please check the recording if you were not able to attend. We are here on Wednesday in the main DetNet session. Next slide, please. Okay, so this is our agenda.
A: After this intro we will have the OAM work items on the agenda, and after that some data plane related contributions on PREOF and packet ordering. The second half is sort of the newer work items: first, continuing the discussion on the requirements for moving towards larger-scale networks.
A: We have two documents for which publication has been requested: the DetNet bounded latency draft, and the newer one, the DetNet YANG, which is progressing.

A: We have two working group documents that are not on the agenda. We have the OAM package; there is no specific detail on IP at this meeting — that's at a bit later stage in the queue. First we are working on a framework, then MPLS, and then IP; that was the discussion. The other one is the control plane framework, on which an update was provided on the list.
A: This falls into our scope and the work we are conducting here. This liaison has been provided to us for information, but as it is very closely related to our work, we suggest responding to this liaison and discussing the details on the list. Next slide, please.
A: Okay, just a reminder on the IPR procedures: we follow the usual IPR procedures strictly, and there are two points when we issue an IPR call — prior to working group adoption and towards working group last call — and we also have a step where we request IPR statements to be made clear if there is a new author on a draft. Next slide, please. Okay, we have been working remotely for a while, and just to remind people, our main forum is the list.

A: So please, please use the list for comments. There have been good discussions — I'm really glad for that — and we have opportunities for virtual meetings in addition to the regular IETF meetings. As needed we can schedule interim meetings, as we just had recently, and we can also have informal working meetings to advance our documents.

A: We used to do this for the data plane and then for YANG. Actually, the YANG one has now been completed, and the ongoing one is the bi-weekly meetings progressing the OAM work. These meetings can be set up by the chairs, so please reach out to us if you see other topics that should be discussed, or that would be beneficial for the group to discuss, either in an interim or in informal working meetings.
C: Yes, thank you, good morning. So this is an update on our framework for operation, administration and maintenance in a DetNet network. Next slide, please.

C: So at the IETF 111 meeting the working group agreed to merge part of their draft into this document — the OAM requirements for the DetNet service sub-layer — and we welcome Balázs and János as co-authors.
C: So there are more significant updates, as mentioned. We integrated the requirements for the DetNet service sub-layer, and the requirements in the document are now structured in three groups: general DetNet OAM requirements, requirements for the forwarding sub-layer, and requirements for the service sub-layer.

C: So among the general requirements — and this is not a complete list, just the highlights — the OAM sessions are between DetNet maintenance end points.
C: DetNet OAM should be able to use both proactive and on-demand monitoring and measurement. Proactive is something that is continuous and periodic, using probe packets; on-demand is, as the name suggests, initiated by an operator and has a defined lifetime of the test session. It must support unidirectional OAM methods — for example, a continuity check — and measure packet delay and packet loss.

C: It must also support OAM for bi-directional DetNet flows.
C: For the service sub-layer — so that is the part that we integrated, and it's new to this document; it's not new as such, it's been discussed by the working group.

C: In this document, that's an update: OAM functions for the DetNet service sub-layer must support discovery of the DetNet relay nodes in the service sub-layer. These relay nodes are DetNet nodes that implement one or more of the packet replication, elimination and/or ordering sub-functions.
C: Support the collection of DetNet service sub-layer specific information, where that specific information is related to the PREOF sub-functions at relay nodes.
C: For the PREOF sub-functions, support the use of an alarm indication signal between DetNet relay nodes. That is important when discovering a defect at the service sub-layer and propagating the alarm indication signal to the client sub-layer. And support performance monitoring in the DetNet service sub-layer with PREOF in use.

C: Any questions about the requirements and the structure of the document? Because that structure is new: we added a section on the service sub-layer, so we separated it into the general, the forwarding sub-layer, and the service sub-layer requirements. Yes?
D: No, yes, okay — I muted that guy. Yeah, he did it backwards, sorry. You have a metrics section that seems a little light; my expectation is you're going to beef that up. While you're doing that, I think it would be good to have requirements for each of the metrics that you're talking about — you know, whether the metric is required or not, whether it is a must, should, or may. That's it, okay.
C: Okay, next slide, please.

C: So, while we were working on the requirements, we found several open issues that we consider not to be entirely editorial, and that's why we are highlighting them in the presentation and want the working group to discuss them.
C: So, hybrid OAM — using the classification of RFC 7799 — is an OAM measurement method that combines passive and active measurements. It is often represented by on-path telemetry methods such as, for example, in-situ OAM (IOAM) and the alternate marking method, and the telemetry information can be collected and transported in band or out of band. An example of the in-band case is one of the IOAM trace options.
C: As documented in IOAM, the information is collected in packets either hop-by-hop or end-to-end. Hop-by-hop, the transit IOAM nodes add the requested telemetry information to the data packet, which serves as the trigger; end-to-end, only the ingress and egress collect information.
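As a rough illustration of the distinction being described — not taken from any draft, with all names hypothetical — a hop-by-hop IOAM trace has every node on the path append its telemetry to the packet, while the end-to-end option only touches the edges:

```python
def ioam_hop(packet, node_id):
    # Hop-by-hop trace option: every IOAM node on the path appends
    # its requested telemetry fields to the packet's trace list.
    packet.setdefault("trace", []).append({"node": node_id})
    return packet

def ioam_e2e(packet, node_id, is_edge):
    # End-to-end option: only the ingress and egress add data;
    # transit nodes forward the packet untouched.
    if is_edge:
        packet.setdefault("e2e", []).append({"node": node_id})
    return packet

pkt = {"payload": b"data"}
path = ["ingress", "r1", "r2", "egress"]
for n in path:
    ioam_hop(pkt, n)
    ioam_e2e(pkt, n, is_edge=n in ("ingress", "egress"))

assert [h["node"] for h in pkt["trace"]] == path
assert [h["node"] for h in pkt["e2e"]] == ["ingress", "egress"]
```

The egress would then strip and export the collected data, which is the in-band transport case mentioned above.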
C: But in addition, there are out-of-band methods, where the information is stored locally, either for immediate direct export or for aggregation, processing and export of some calculated performance metrics. These out-of-band methods can either use a YANG data model — used in combination with model-driven telemetry — or use some other well-known methods like gRPC, Kafka, and so on.
C: So what we're asking is this: the current document states that DetNet OAM MAY support hybrid performance measurement methods, but in our discussions we agreed that these methods can provide valuable and important information — especially for streaming telemetry used for network analytics — and that, in combination with out-of-band collection and transfer of information, they provide an important and valuable operational mechanism. So our proposal is to make support of the hybrid measurement methods mandatory.
C: So, to change it to MUST — and we understand that this discussion should be on the mailing list, and I would like the working group to discuss it and come to some conclusion. Further, we are proposing to split the requirement that combines proactive and on-demand OAM into two requirements. That is just for convenience, because it's much easier to evaluate conformance: when we are talking about whether we need new mechanisms, or when doing a gap analysis, we can check against two separate requirements rather than just one.
C: Saying that a mechanism can be used for part of a requirement is confusing. Another thing we found is what we thought was an inconsistency, but we discussed it.
C: So, two requirements in the general section — requirements ten and eleven — are related to unidirectional performance measurements, and they seem like duplicates, so we just want to remove the first sentence. That's probably more editorial. Next slide, please.
C: So we'll continue our bi-weekly discussion, and everyone is welcome to join — these are our open calls.
D: Greg, I'd like to go back for a moment to this hybrid discussion of yours. Yes.
D: So you made two comments here. One is about off-path, or out-of-band: in traffic engineering we've done a lot of out-of-band for the response messages or for feedback — on-path or in-band for the forward direction, and then out-of-band, off-path for the reverse. We need bi-directional messaging, and you're doing one-way OAM.
D: That definition is a little different from RFC 7799's definition of out-of-band. They use out-of-band to talk about active and passive, so when they're talking hybrid, they're talking passive monitoring versus active monitoring.
C: The title of the RFC is active, passive and in between. So by that definition, the methods that add some information to the data packet are hybrid: they need to mark packets, as in the alternate marking method, to create the flow of marked packets that identifies changes in batches; and IOAM, which as a minimum inserts the header that indicates and lists the information elements that need to be collected at a node, or are requested to be collected at the node.
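To make the alternate-marking idea concrete, here is a minimal sketch in the spirit of that method (illustrative only, not from any DetNet document): the sender flips a one-bit color on the packets every fixed-size block, and comparing per-block packet counts at two measurement points yields the loss in each batch:

```python
def mark_stream(n_packets, block_size):
    # Sender side: flip the marking (color) bit every `block_size` packets,
    # creating alternating batches of marked packets.
    return [(seq, (seq // block_size) % 2) for seq in range(n_packets)]

def count_per_block(packets, block_size):
    # Measurement point: count how many packets were seen in each block.
    counts = {}
    for seq, _color in packets:
        block = seq // block_size
        counts[block] = counts.get(block, 0) + 1
    return counts

def per_block_loss(sent, received, block_size):
    # Comparing the two counters gives the loss per batch.
    tx = count_per_block(sent, block_size)
    rx = count_per_block(received, block_size)
    return {b: tx[b] - rx.get(b, 0) for b in tx}

sent = mark_stream(20, block_size=5)
received = [p for p in sent if p[0] not in (3, 12)]  # two packets lost
assert per_block_loss(sent, received, 5) == {0: 1, 1: 0, 2: 1, 3: 0}
```

The marking bit is what makes the method hybrid: it modifies the live data packets, but only minimally, and the measurement itself is done on the existing traffic.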
D: You know, out-of-band for responses is something that's well established and certainly something we're going to want to support, so the requirement looks good. Actually, I think the text in the document is pretty good; it's really just your characterization here that I had an issue with.
D: The other thing is that telemetry is usually about monitoring of OAM, not the OAM itself, and you don't use "telemetry" in the document right now — at least I didn't notice it. Actually, you do use it; I'll have to read what you mean by telemetry, because the examples you gave are more about monitoring OAM than actual OAM, right?
C: Again, telemetry usually differentiates two components: generating the telemetry information, and collecting and transporting that information, along with the protocols used for it. It can be characterized as on-path telemetry, because the other methods that we discussed just now are model-driven telemetry, which is based on YANG.
C: Yes, I didn't mention it because I didn't go into the details of what's being discussed now by the IPPM working group. There is a very useful proposal, IOAM direct export, where the telemetry information indicated in the data packet is stored for local processing — not collected in the data packet, but just generated and originated there.
C: And then its processing is determined by local policy. So it could be exported directly from the packet to the collector — and the collector can be the ingress node or egress node of the monitored flow —

C: or it could be some controller that will use it for network analytics.
G: Okay, hi, Balázs Varga speaking, and on behalf of the co-authors I will present this draft, which is OAM for the DetNet service sub-layer. Next slide, please. This draft has the intention of collecting the service sub-layer specific OAM topics, and the text is targeted to be moved to working group drafts. This is what Greg highlighted in the previous slot: some texts were already moved from this draft to the OAM framework document.
G: The next slide shows the content of the draft, and there were updates made. Section 4 is what was already moved to the OAM framework document. Section 5 deals with DetNet ping, which describes OAM processing at the DetNet service sub-layer — so, what a relay node should do when it is serving a DetNet OAM packet.
G: New text was added based on the weekly discussions about the service sub-layer OAM challenges, so we have added an illustrative example — just a very simple network — which can be very useful during discussion of OAM related issues and challenges.
G: There is also a section dealing with what information is needed when a DetNet OAM packet is processed — this information has to travel with the OAM packet — and a section was added about a possible format of the DetNet ACH. Next slide, please. This slide just summarizes this new text, and this is, as I said, mainly the outcome of the weekly discussion and the discussion on the list about OAM for the service sub-layer in the case of DetNet.
G: So the main challenge is that the OAM packets must follow precisely the same path as the packets of the corresponding DetNet data flow. Next slide, please. This slide shows a possible format for the DetNet ACH, the associated channel header.
G: It is only 32 bits, and during the discussion of DetNet OAM we have identified multiple pieces of information which have to be present in a DetNet OAM packet. It would be hard to put all this information in a 32-bit field, because some part is already reserved for other purposes. So the first nibble is 0001, and with version 1 we can indicate that it is the DetNet associated channel header.
G: There are some proposals in the draft regarding what type of information should be there and for what purposes it might be used; that is up for discussion. The major conclusion we wanted to highlight with this slide is that we need to add an additional 32 bits to the ACH and create a DetNet-specific ACH — so how many bits are used for sequence number, node ID, session ID and so on.
G: That is up for further discussion, and the level and flags fields are also something to be defined in more detail and discussed. Level would be something to create OAM domains, and flags can be used, for example, to control the service sub-layer functionalities for a given DetNet OAM packet. Next slide, please.
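As a sketch of what such a two-word header could look like — with entirely hypothetical field widths, since the draft explicitly leaves the bit allocation open — the generic ACH layout (first nibble 0001, 4-bit version, 16-bit channel type) could be followed by a proposed extra 32-bit word carrying level, flags, and a sequence number:

```python
import struct

def pack_detnet_ach(channel_type, seq, flags=0, level=0):
    # First 32-bit word: first nibble 0001, version 1, 8 reserved bits,
    # 16-bit channel type (the generic ACH layout).
    word1 = (0x1 << 28) | (1 << 24) | (channel_type & 0xFFFF)
    # Proposed additional 32-bit word. The split below (4-bit level,
    # 4-bit flags, 24-bit sequence number) is purely illustrative;
    # the actual widths are up for working group discussion.
    word2 = ((level & 0xF) << 28) | ((flags & 0xF) << 24) | (seq & 0xFFFFFF)
    return struct.pack("!II", word1, word2)

hdr = pack_detnet_ach(channel_type=0x0025, seq=42)
assert hdr[0] >> 4 == 0x1     # first nibble is 0001
assert hdr[0] & 0xF == 1      # version 1
assert len(hdr) == 8          # 32-bit ACH plus the proposed extra 32 bits
```

The channel type value and field positions here are placeholders; the point is only that the identified fields do not fit in the 32 bits of the existing ACH, hence the extra word.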
G: So, with this update, we have included in the draft the proposed changes and clarifications which were raised on the list and during the bi-weekly OAM calls. We are still looking for further comments and discussion. As I said, this is a discussion draft, intended to help produce text for the working group documents.
D: Okay, I think that last point, about moving it into the working group document, is something that would be good to address early. If this belongs in the working group document, it should be there, rather than in a separate document.
C: Okay, so we received very good comments from Balázs and we addressed them, and he agreed that everything is much better.
C: So that's it! Yes — you just heard and saw the new proposal for the DetNet ACH format, and I absolutely agree. We discussed with Balázs and János how to proceed, and we just felt that it needs to be presented and discussed by the working group, and the working group will decide if it can be merged into the working group document.
C: So please discuss. Also, as János mentioned in the chairs' update at the top of the meeting, the new DetNet ACH format was presented at the joint PALS/MPLS/DetNet working groups meeting that discusses the advancement of a new architecture for the MPLS data plane, and it was well received. One of the good comments said, roughly:
C: "Guys, it's great that you are not asking for another first nibble, but rather using the flexibility of ACH versioning for your purposes." There were no concerns about needing another 32 bits, and some comments said: now that you have more space, you might increase the sequencing space to match it to the DetNet control word. That's something definitely worth considering, but it will require some more work.
C: In general, again, I would like the working group to discuss whether it agrees to changing the current format documented in the working group document — effectively adopting and importing the text and new format from the individual draft.
D: Sorry, at least for me you're breaking up. If someone else heard it — yeah, did you hear the question?
D: Yeah, Fan, if you could just type the question into the Jabber — or I don't know if you want to try again.

D: Fan, if you want to try one more time, we could try it. If not — yeah, please, please try.
I: Okay, can you hear me again?
I: Okay, sorry for the long latency. I think my question is: what is the procedure between this new DetNet ACH format being imported and the MPLS open design team? Because I think they are related. So do we need to first follow the MPLS open design team result and then import this DetNet ACH format, or do we import it first and then align with the discussion of the MPLS open design team? I think there are three things: first is the discussion in the MPLS working group; the second is the discussion of this new DetNet ACH format; and the third is moving this new format from the individual draft to the working group draft.
J: Yeah, I was going to talk to this subject. In terms of the "ownership", in inverted commas, of the structure of nibble-one ACHs, that probably belongs in PALS, which is where we created it in the first place.
J: No one has found any use for those reserved bits in the last 20 years, so we can think about making them essentially a parameter of the channel type, which I think is what was shown.
J: Assuming we're all talking about the same thing — but I think it would be as well to socialize this in the PALS/MPLS space, just to make sure that no one, in particular on the PALS side, can think of any serious long-term issues with doing that. But the principle of making those reserved bits a parameter of the channel type is probably okay, which I think is what you were proposing.
D: So, going to the couple of questions that were asked about procedure, I'd like to address those, and then, Greg, if you want to get back to the technical discussion afterwards, that would be good. So we do have the joint group, we are coordinating with them, and this was presented there — so that's happening, and it needs to continue to happen — in terms of the documents and what we do in this working group.
D: I think it is worthwhile for us to document what we believe is the working group position and then make sure we synchronize it. Right now all we have is an individual proposal, and that doesn't carry the same weight as "this is the direction the working group would like to go." So if Balázs' document does in fact represent the direction the working group would like to go,
D: I think we should bring it into the working group document and continue to socialize it, and as the work progresses, it could end up getting moved into a PALS document.
D: It could get moved into a joint document, and then we work with the chairs to figure out where it goes. But we'd like to keep the process going at the same time as we're coordinating, so being able to represent whether this is a working group position or an individual position is, I think, an important step.
C: I presented this part of the talk — I joined János as a co-author of the relevant draft — and I presented the DetNet ACH format for a few minutes and got some good comments.
J: Somehow or other it went below the radar, but anyway, I think they're probably more or less in sync anyway.
D: The general point is correct, and I completely support it: in parallel with what we do and decide in this working group, we must socialize and coordinate with the joint activity, that joint design team. And we should be aware that anything we do here
D: could be affected by that. For example, what we agree on may be modified based on the joint agreement between the working groups, and in fact we may eventually take the format out and run it through the PALS working group, if that's the end decision. But having the agreement of this working group is, I think, an important step.
D: You've seen MPLS doing the same thing: putting together positions that are right for the MPLS working group, and then we'll coordinate across the working groups to make sure we have an answer that works for all. At least that's how I'm seeing the process. I should mention János is being quiet because he's a co-author on the work.
D: He needs to weigh in on this — I agree with you a hundred percent. And maybe the right thing is, if we in DetNet end up last calling the document, we do a joint last call.
D: Okay, Greg, you had a comment, and we should try to wrap up because we are running over — a brief technical comment, please.
C: So we believe — I believe — that this is important, and I would appreciate working group discussion on the mailing list and a conclusion in regard to: updating the working group document in sync with the new DetNet ACH format; its interaction with the work of the open design team, one of whose documents is now an individual document of a group of contributors; and finally, creating the registry for the first nibble after the MPLS label stack. Value 1 for the first nibble is reserved for the pseudowire ACH, and now also the DetNet ACH. The direction from the MPLS working group, or the direction of the open design team, is that any new mechanism should not impede existing functionality, and the first nibble can be used as
C: it is used currently, so as to minimize interaction. So that probably means it will use a different first-nibble value for its post-stack header. With that, I can close this presentation, and let's move the discussion to the mailing list.
D: Greg, as author — in fact we'll just call you editor; you can change yourself to editor of the document — but as editor of the working group document, would you conduct a poll on the list to see if there are objections to bringing the technical content of Balázs' document into the working group document?
G: Okay, so this draft is about how to provide PREOF for the DetNet IP data plane, and I'm presenting it on behalf of the co-authors, János Farkas and Andrew Malis. Next slide, please.
G: So this document focuses on how to add the PREOF functionality to the DetNet IP data plane. It lists the requirements for adding PREOF, and it also provides details of how we can do that: the solution basics are described, the encapsulation is described, and there is detailed packet processing.
G: It also deals with the flow aggregation topic and defines the PREOF procedures in detail. Next slide, please. We are currently at version 01, and there were these updates in the document. It is quite stable: there were some editorial updates — typos and some grammar were corrected — and we received comments on the list asking us to clarify that this solution is based on tunneling techniques, so that was explicitly added to the document.
G: The solution creates a set of underlay UDP/IP tunnels between an overlay set of DetNet relay nodes. The text of the document is quite stable. Next slide, please. So, just to summarize: this draft is really leveraging the existing data plane building blocks; there are no new header fields specified; it is a general solution that works both for IPv4 and IPv6; and PREOF is defined for the DetNet service sub-layer.
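The service sub-layer behavior being leveraged here can be sketched abstractly (this is an illustration of the PREOF replication/elimination idea, not code from the draft; the tunnel names and packet representation are hypothetical): the ingress relay node replicates each packet of the flow into every member tunnel, and the far-end relay node forwards only the first copy of each sequence number it sees:

```python
def replicate(packet, tunnels):
    # Service sub-layer replication: send one copy of the packet
    # into each underlay UDP/IP tunnel.
    return [dict(packet, tunnel=t) for t in tunnels]

class Eliminator:
    # Elimination at the far-end relay node: forward the first copy
    # per sequence number, drop later duplicates.
    def __init__(self):
        self.seen = set()

    def receive(self, packet):
        if packet["seq"] in self.seen:
            return None
        self.seen.add(packet["seq"])
        return packet

elim = Eliminator()
delivered = []
for seq in range(3):
    for copy in replicate({"seq": seq}, tunnels=["tun-a", "tun-b"]):
        p = elim.receive(copy)
        if p is not None:
            delivered.append(p["seq"])

assert delivered == [0, 1, 2]  # one copy of each packet survives
```

Because the replication state lives only at the relay nodes at the tunnel endpoints, the transit routers in between need no extra processing, which is the last point on the slide.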
G: The solution is applicable irrespective of what routing technique is used underneath the DetNet service sub-layer — any IP routing technique can be applied — and, last but not least, it does not require any additional processing on transit nodes.
K: Go ahead — so I think we're stuck in the discussion about the point I raised.
K: We went back and forth, and I think it would be good to have an audio discussion about that — not necessarily here in the meeting, but let's find some time afterwards — because I don't think I saw a satisfactory answer to the problem I raised, with respect to the need to consider how the elimination function, and the jitter or buffering that it necessarily creates, can be integrated into any of the possible bounded latency calculi that we may use, in a way that doesn't mess up
K: the calculations for whatever shaping, queuing or other processing is done for the bounded latency. You were giving some explanations, but I couldn't make heads or tails of a lot of them.
K: No, no, it was the packet elimination function — the elimination function, which basically creates the jitter. Let's say you have the A and B flows: A has the shorter latency, B has the longer latency. Obviously, if all the packets from A arrive, then the elimination function doesn't really do anything, right?
K: It just drops all the ones from B. But as soon as something on A is missing, you need to forward the same packet from B, and obviously there is a lot of jitter between A and B — essentially the differential path latency. We need to take that into account in a way that lets us simply drop the elimination function into the forwarding chain without messing up the calculus for the bounded latency of the queuing before or after the PREOF.
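The effect being described can be shown numerically with a small sketch (illustrative only; the latencies are made up): the elimination output normally tracks the short path A, but for any sequence number lost on A, the copy from the long path B shows through, so the worst-case added jitter equals the differential path latency:

```python
def elimination_output_times(arrivals_a, arrivals_b, lost_on_a):
    # Per-sequence-number output time of the elimination function:
    # the earlier surviving copy wins. A copy lost on path A means
    # the path-B arrival time shows through as extra delay.
    out = {}
    for seq, t in arrivals_a.items():
        if seq not in lost_on_a:
            out[seq] = t
    for seq, t in arrivals_b.items():
        out[seq] = min(out.get(seq, t), t)
    return out

lat_a, lat_b = 1.0, 5.0                      # path latencies (ms)
send = {seq: float(seq) for seq in range(4)} # send times
a = {s: t + lat_a for s, t in send.items()}
b = {s: t + lat_b for s, t in send.items()}

out = elimination_output_times(a, b, lost_on_a={2})
delays = {s: out[s] - send[s] for s in send}
assert delays == {0: 1.0, 1: 1.0, 2: 5.0, 3: 1.0}
# Worst-case jitter at the elimination output equals the
# differential path latency lat_b - lat_a:
assert max(delays.values()) - min(delays.values()) == lat_b - lat_a
```

This is the quantity the speaker is arguing must be expressible in any bounded latency calculus that has a PREOF stage in the forwarding chain.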
D: This is a clarification question: what does that have to do with this document, as opposed to —
K: Oh sorry, then I may have made that comment on the wrong document here.
D: I think it's a valid comment; I don't think it's a valid comment on this document, because this document is just taking a mechanism that we have already defined and applying it in a different data plane. So I think your comment is a good one, and it perhaps warrants its own separate discussion, given that we have an RFC on it.
K: No, yeah, I think it's the right document. You're right that this is not specific, of course, to IP — it is not specific only to IP. I'm going to pull my joker card that it's 4:56 a.m. in the morning. Sorry.
J: So I would have thought that all we needed to do — though it sounds like maybe it's not obvious from the original documents — is to write a one-line update to the UDP document that says: by the way, you can do PREOF. Because surely everything just follows, and we don't normally micro-document things that are already inherent in our existing stacks.
D: I think we've gotten a little bit better about helping people understand how to use the technology as building blocks. An informational document that says how to combine things — we've done those in other groups — so I don't think it's completely out of scope.
D: Even though the queue is closed, we'll go to Pascal and David. We're way over time, so, as a heads up to the other presenters, we're going to end up squeezing your time a little bit — reducing your time. Pascal, you're up.
L: Yeah, so I'll try to be quick. So we're talking about encapsulation, and as we talk about encapsulation for IP, we basically lose visibility of the inner flow unless we dig deep into the packet.
L: So we end up in a discussion similar to the one we just had at RAW, which is basically: what identifies a copy of the packet? The sequence number and the flow ID, or something like that, is typically what comes to mind. In RAW we kind of make a difference between the flow identification being the upper-layer UDP thing versus what goes in IP, and if you take that path, it means that you need to have that information in the IP header.
H: You know, take it to the list, just because we're —
L: Okay, so basically — my bottom line — let's take it to the list, I agree. My bottom line is: there are different documents which talk about the data plane for IP. We need to reconcile this, as opposed to just adopting one; we need to understand what those different documents are saying and how that needs to be combined, or what.
L: I'm saying it's probably not sufficient. If we really want to do something in the IP data plane, we probably need more than just a tunnel. I think just a tunnel may lose the information we care about — or it was never there in the first place, like sequence numbers.
E: ...if one is prepared to run the MPLS-over-UDP or IP data plane end to end — and I think Stewart's comment is correct — then there really isn't much to say. However, if one wants to apply encapsulation over only part of the path, so that we're going to do PREOF over part of the path without running that whole data plane end-to-end, then there's some interesting stuff to be done. I think Pascal has been helpfully exploring some of it in his comments.
D: Okay, thank you. I look forward to a good discussion on the list on this; clearly there's interest and disagreement, so it should foster some good discussion. Balázs, you're up next — you were originally scheduled for 10 minutes, we're going to cut you down to five, please.
G: I will do my best, yeah. Okay, so this is about the packet ordering function, and I have as co-authors Stephan Kehrer, Tobias Heer, and János Farkas. Next slide, please.
G
So this draft is at version 2, and it deals with the situation that replication and elimination functions may result in out-of-order packets. This is what the packet ordering function algorithms can correct, restoring the original packet order.
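The reordering described here can be pictured with a small sketch: packets that arrive ahead of the next expected sequence number are held back until the gap fills. This is a minimal illustration only — a real POF also bounds the wait with a timer so a lost packet cannot stall the flow, which is elided here:

```python
# Packet ordering function sketch: release packets in sequence order,
# buffering out-of-order arrivals until the missing ones show up.
# Illustrative only; a real POF adds a timeout for lost packets.

class Reorderer:
    def __init__(self, first_seq=1):
        self.next_seq = first_seq
        self.buffer = {}  # seq -> packet, held until releasable

    def push(self, seq, packet):
        """Accept one packet; return the packets released, in order."""
        self.buffer[seq] = packet
        released = []
        while self.next_seq in self.buffer:
            released.append(self.buffer.pop(self.next_seq))
            self.next_seq += 1
        return released

r = Reorderer()
out = []
for seq, pkt in [(2, "p2"), (3, "p3"), (1, "p1"), (4, "p4")]:
    out += r.push(seq, pkt)
# out == ["p1", "p2", "p3", "p4"]
```

Note that holding p2 and p3 until p1 arrives is exactly the delay variation the draft says the POF may introduce.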
G
So the document is quite stable. We have made some editorial updates since the last version. There was discussion on the list — also related to what Toerless commented on in the previous slot — so we have made it clear that the packet ordering function may cause delay variation. However, how to eliminate this delay variation, and dealing with that delay, is out of scope for this document.
G
So we had the discussion on the list and we have made some proposed changes and updates. We feel that the POF algorithms, and how they work, are pretty stable. We received good comments on the previous version and updated the draft accordingly.
K
We can't take the problem of being able to define the latency calculus for the PREOF function out of scope when we're defining the POF function, right — just hand-waving and saying this is a forwarding-plane function. So I think we should discuss that, because this was always the point with bounded latency: we need to have a linear model.
K
You know — that every component we're adding to the path can be calculated by itself, and we're not pushing off some unknown latency variation to some undefined step.
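The linear model being asked for amounts to this: every component on the path exports a self-contained worst-case latency bound, and the end-to-end bound is simply their sum. A toy illustration of that composition (the component names and all numbers are invented):

```python
# Linear bounded-latency calculus sketch: if each component on the path
# has its own worst-case bound, the end-to-end bound is the sum.
# The discussion's point: a POF whose delay variation is uncalculated
# breaks this composition. All numbers are invented.

def end_to_end_bound(per_component_bounds_us):
    return sum(per_component_bounds_us)

path = {
    "ingress shaping": 50,
    "hop 1 queuing":   120,
    "hop 2 queuing":   120,
    "pof reordering":  200,   # must itself be a calculable bound
    "propagation":     500,
}
bound = end_to_end_bound(path.values())
# bound is 990 microseconds for this invented path
```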
G
K
No, but I mean — the point is about this document, this function, this component pushing some undefined, you know, latency data off to something else; that is architecturally not the correct thing. This is basically a latency that needs to be possible to calculate for this function, and then there is really the question: does it make sense to push off the problem of how to fix these things, so that it becomes linear again, to some other component?
G
D
I think it would be very good to get an informational document that talks about how implementations should manage that issue. Toerless, I don't know if you want to contribute that, or work with Balázs to do that, and whether it's this document or a separate document — that would, I think, really help the working group. It's a good issue you're bringing up, that implementations need to consider. I don't think we're talking about changing any of the data-plane behaviors.
D
It's really just the calculus that essentially the controllers have to do, so it's a good implementation-guidance document. So that would—
K
—be really nice. Worst case, my contention is that we should consider that the elimination/ordering function needs to have a per-flow shaper in it — like, you know, the shapers we have associated with the queues in ATS, for example, right? The queues are what we do on the interface and then we have the shaper for them, and likewise the same shaper could be required for the elimination/ordering function, and that would be part of the elimination-and-ordering architectural component. So that's the worst case.
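The per-flow shaper suggested here, in the spirit of the ATS shapers attached to queues, can be sketched as a regulator that computes each packet's earliest eligible departure time. This is a sketch under that stated assumption — a plain token bucket, not text from any draft; rates and sizes are invented:

```python
# Per-flow shaper sketch in the spirit of ATS: each flow has a rate and
# a burst allowance, and the shaper computes the earliest time a packet
# may depart, smoothing bursts that an elimination stage can create.
# Rates and sizes are illustrative.

class FlowShaper:
    def __init__(self, rate_bps, burst_bits):
        self.rate = rate_bps
        self.burst = burst_bits
        self.tokens = burst_bits   # start with full burst credit
        self.last = 0.0

    def eligibility_time(self, now, size_bits):
        """Return the earliest departure time for a packet of size_bits."""
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        if self.tokens >= size_bits:
            self.tokens -= size_bits
            self.last = now
            return now                      # conforming: send immediately
        wait = (size_bits - self.tokens) / self.rate
        self.tokens = 0.0
        self.last = now + wait
        return now + wait                   # delayed until tokens accrue

s = FlowShaper(rate_bps=1000, burst_bits=1500)
t1 = s.eligibility_time(0.0, 1000)   # within burst credit: departs at once
t2 = s.eligibility_time(0.0, 1000)   # paced: shaper delays the departure
```

Because the delay a regulator like this adds is computable from (rate, burst), it keeps the component inside the linear latency model discussed above.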
M
Please proceed; you have ten minutes. — Yeah, okay, hello. This is Peng from China Mobile, and it's my pleasure to talk about the requirements of large-scale deterministic networks. Next slide, please.
M
Okay, here are the motivations. At the last interim meeting, where we talked about whether more queuing mechanisms should be considered, the requirements were presented, and we would like to work on the requirements draft first. So we submitted the new draft, and hopefully it can be used as the starting point. Compared with the presentation at the interim meeting, the new draft has more analysis about how a large-scale deterministic network influences the queuing mechanism, and based on that we put forward some new requirements. Next slide, please.
M
So let's have a recap on the different levels of application requirements. Obviously, the use-case draft gave the requirements of industrial, electricity, buildings, and so on; some of them clearly specified the requirements for latency and jitter, and some did not. So far, when providing deterministic network service, network providers always face the problem of how to match application demands to the technology, so that the service can be clarified.
M
One kind has critical SLAs, such as remote control or cloud PLC for manufacturing, and differential protection in electricity; and there are also some relatively lower levels of SLA for consumer networks, such as cloud gaming and cloud VR.
M
The users of these applications hope to have a better network experience, but they can tolerate it to a certain extent if the network quality is sometimes not so good. Moreover, such users are willing to spend more money for a high-quality network service, and they need some SLAs; because they have no industry barriers, they can tolerate exceeding the upper bound of latency with small probability.
M
And here is some deployment and application status of large-scale deterministic networks — we won't give much explanation about it. We just want to show that both operators and enterprise users are beginning to put forward deterministic requirements for large-scale networks, but the technologies used are not exactly the same, so you can see more work is needed for network service providers to successfully sell DetNet-type service to customers.
M
This is the requirements list according to the differentiated application requirements, including: tolerate time asynchrony; support large single-hop propagation latency; accommodate higher link speed; tolerate failures of links and nodes and topology changes; and support incremental device updates. Next slide, please.
M
Requirement one is to tolerate time asynchrony, covering four situations. First is to support asynchronous clocks across domains: one of DetNet's objectives is to stitch TSN islands together, which may have different clocks and are not synchronized, so the mechanism should be able to support interaction across time domains. The second is within a clock-synchronized domain: within a single time-synchronization domain, different clock accuracies are to be expected.
M
It is really hard to achieve full time synchronization in large-scale networks when considering the diameter of the network topology, and it is desired that the same performance, in terms of latency bounds and jitter, can be achieved when full time synchronization is not used — such as with the frequency synchronization used in the trials. And the last one is to support asynchronization-based methods: due to the large amount of traffic in a large-scale network, some of it is acyclic, and not all networks or devices support—
M
—synchronization. ATS, for example, uses an asynchronous per-flow shaper to achieve bounded latency, and the formal proof shows its effectiveness. Next slide, please.
M
Requirement two is to support large single-hop propagation latency: the transmission distance is long enough to generate a larger latency. For a cycle-based method, the length of the cycle must either be set long enough to buffer the packets, or another mechanism should be provided. Requirement three is to accommodate higher link speeds.
M
It includes two aspects. The increase or decrease of network devices influences topology and discovery mechanisms — for example, the ultra-low latency of applications may require DetNet to extend to every 5G base station, which might be hundreds of thousands for one operator. And for the massive traffic flows, it is almost impossible to identify individual flows in the DetNet plane, because of the large overhead and resource reservation for a massive number of flows; so flow aggregation is required, while individual flows may join and exit.
M
Requirement six is to support incremental device updates. Since some applications require a relatively low level of SLA, it would be acceptable for those applications to tolerate a deterministically low probability of exceeding the upper bound of latency. For those applications, some simple solutions — which may be realized by updating and configuring the ingress and egress devices, or part of the network devices — are expected.
M
And here are some proposed queuing mechanisms besides the TSN ones; they are not included in the bounded latency draft, which will be published soon. We list them and also give some analysis about the levels of determinism, synchronization of nodes, the cost, stability, and flow aggregation.
M
D
Interrupted — but given the time, I think it's important. Okay: the requirements are really good, but you don't have to discuss the alternatives out there, or do a gap analysis, to do the requirements. That is going to cause a lot of discussion and argument, and it really is separate from your point about requirements. So I think having this in the presentation and the document actually diminishes the document.
D
M
Okay, next steps: we await more analysis and discussion about the requirements — that will come — and people who are interested in this work, please contact us.
B
A
A question in terms of the relationship of this document compared to the one you presented?
B
M
Yeah, I think this question may be discussed later, after this presentation.
A
B
Yeah — because I'm one of the co-authors of this. Just now, the 00 version: compared with the one presented at the interim, actually there was no mature draft, so basically the draft presented just now is the very first version of that.
B
A
N
Okay, okay. And so this draft is also about the requirements for wide-area IP deterministic networking.
D
Yeah, she may be having a problem. Please proceed with the presentation. Yes, please proceed.
N
Okay, I will proceed. First, we discuss the possible problems with wide-area IP deterministic networking. We divided the work into two parts: deterministic networking, and the wide-area network. For deterministic networking we should consider two issues. First, different services require different deterministic service levels — for example, a 400-microsecond delay is required for audio and video, but no more than a 100-microsecond delay should be guaranteed for industrial applications.
N
Second, problems with resource allocation rules should be taken into account. For example, resources could be allocated for the deadline, but that will lead to low efficiency of resource utilization. For the routes, a strict latency and jitter commitment per route should be provided, and it must not change due to route changes — especially with the PREOF function: as Toerless proposed, the latency and jitter of the two disjoint routes cannot have too big a difference and cannot keep changing.
N
And finally, a large number of flows of multiple types coexist dynamically, so microbursts may emerge from multiple flows. Last slide, please.
N
So, based on the problems with wide-area IP deterministic networking, we listed the requirements. First is differentiated deterministic service levels for multiple services: the DetNet flows could be classified based on their service levels, as the figure shows — for example, deterministic services can be provided such as low latency and jitter, just low latency, and so on. Second is the large-scale network. It's a—
N
So these are the solution considerations for wide-area IP deterministic networking. We suggest solving it from three directions: the first step is to allocate deterministic resources; the second step is to establish the deterministic routes; and the last is to achieve the deterministic queuing.
N
So the draft is also about the requirements for large-scale or wide-area IP deterministic networking. We welcome comments and discussion. Thank you.
M
I think it's the right direction to classify the categories and levels of use cases, but the question is whether it is the right time to standardize now — maybe the future is more suitable, because in the previous draft we also mentioned that, but we didn't give the details of it. And for this draft I think some detailed analysis is expected, since it's a little conceptual, I think. And another question is that we have two requirements drafts, and what I would ask is: what is the relationship between them? Think about that. Yeah.
N
Yeah, thank you. So, for the first question — regarding the classification of the DetNet services, the details should be provided, so we will do the research and—
N
—provide the details for the classification of the DetNet services, not just the latency and the jitter. We will provide the clarification for the other requirements in the draft.
D
So, I believe you were originally scheduled for 15 minutes; we only have 10 minutes available for you.
K
Okay, thank you, chairs. This is the first version of the draft on decreasing microbursts in the layer-3 network for latency-sensitive traffic. Next, please.
K
Next page — this page shows some modifications. Firstly, we adjusted the outline to clarify the purpose of the draft: our purpose is to explore methods to decrease microbursts in the network. We added a new section to analyze the requirements of a method to decrease the microburst, and, as an example, we introduce a method to decrease the microburst. Next page, please.
K
Okay, this is a simple test from my colleague, and we show this page to show that the traffic has intrinsic burstiness.
K
I will introduce something about it: the traffic passed through a 4G network and a fixed access network, between two CPEs with GPS.
K
It can be observed that the IP traffic has intrinsic burstiness, because we can see that some of the packets experience a long delay between the two CPEs. Next page, please.
K
This document mainly focuses on the microburst on the interface, which is different from the microbursts on the previous page, but they show similar characteristics.
K
The three ways include traditional IP forwarding in a lightly loaded network — this scenario is very similar to the situation on the last page; the second is the TSN mechanism, such as CQF, which means cyclic queuing and forwarding; and the last method is introduced in this draft as an example to decrease the microburst on the interface. Next page, please.
K
However, in DetNet we want to convey more critical traffic in the network, so the traditional IP method is perhaps not suitable for the mechanisms being considered in DetNet. But the IP method has good scalability, because in most cases only per-hop treatment in the forwarding node is needed; yet in theory such forwarding can only provide an unreliable connection — as shown on the last page, sometimes a long delay is caused by the bursts in the network.
K
The second method to decrease the microbursts in the network is the TSN mechanism, such as CQF. Of course, the TSN mechanisms can provide a reliable path through the network.
K
It is easy to see that we can provide a different treatment for the critical traffic and then get a better user experience. On the intermediate node, we suggest separating the processing of the control plane and the data plane, because we think that, in a large-scale network, the status of the aggregated DetNet traffic on the control plane may change frequently, because the DetNet flow may contain a lot of sub-flows and they are aggregated.
K
So we propose a method as an example. In this method, we do the shaping at the network edge and try to keep the traffic shaped at the intermediate nodes.
K
At the intermediate node, the aggregated critical traffic will be shaped again as a whole on the interface. We suggest some self-decision process in the shaping, whose purpose is to maintain a reasonable buffer depth while shaping the traffic. The first step in our draft is not to forward the packet as soon as possible, as traditional IP forwarding does, because we think that is one of the reasons causing the microburst on the interface. Next page, please.
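The "don't forward as fast as possible" idea can be pictured as pacing: instead of transmitting a burst back-to-back, the node spaces departures at the reserved rate so the traffic stays shaped downstream. A minimal sketch of that spacing, with invented parameters (integer microseconds avoid rounding noise; nothing here is from the draft's actual mechanism):

```python
# Edge/intermediate pacing sketch: rather than forwarding each packet
# immediately on arrival (which concentrates a microburst on the egress
# interface), departures are spaced at the reserved rate.
# Packet size and rate are illustrative.

def pace(arrival_us, packet_bits, rate_bps):
    """Return departure times (us) spaced at least packet_bits/rate apart."""
    gap_us = packet_bits * 1_000_000 // rate_bps
    departures = []
    next_free = 0
    for t in arrival_us:
        depart = max(t, next_free)    # wait for the line to be "free"
        departures.append(depart)
        next_free = depart + gap_us
    return departures

# Five packets arriving as one microburst at t=0 leave evenly spaced:
out = pace([0, 0, 0, 0, 0], packet_bits=1000, rate_bps=10_000_000)
# out == [0, 100, 200, 300, 400] microseconds
```

The trade-off the presenters mention shows up directly: the burst is flattened, but the later packets pay added delay, which is why the draft talks about maintaining a reasonable buffer depth.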
K
As for the next steps for the draft: we will call for comments and continue to modify the draft, and we also call for contributions from anyone who is interested in the work. I think this is the last page — welcome comments. Thank you.
D
K
All right, so this is about the — oh, the title is wrong — this is about the MPLS traffic-class (TC) queuing for CQF. Next slide.
K
So the concept of TC-tagged cyclic queuing and forwarding was proposed quite a long time ago; that was in— I really only get one minute?
K
Sure, right. So we introduced that in 2018, first as a concept, and, you know, got lost running around IETF working groups to figure out where we could standardize this. Then we reintroduced the concept in 2021 with a new variation of that draft. So I think, you know, that's why we said: let's make something that actually can be standardized—
K
—by, you know, writing down how this would actually work with the encapsulation that we have and an otherwise complete data plane, which is MPLS. And the beauty is also that it only requires some three to five values out of, you know, a QoS field — so in MPLS we're using the traffic-class field. We introduced and explained that work at the interim. So the 01 version added a co-author, and — thanks — Andy had kind of alerted me to the problems with the YANG.
K
So I had a longer discussion, also with a YANG expert, and I ended up removing the YANG references and text from it and replaced it with something that would, you know, allow us to carry forward with the specification independent of an additional YANG model to manage it, so that we don't get that conflation of contexts in one document. Next slide.
K
Okay, just a quick summary: cyclic queuing and forwarding is an old technology from TSN that's available and used in campus industrial networks with speeds smaller than, or maybe even up to, 10 gigabits. It uses two cycles for the forwarding.
K
The main issue is that when you go to higher speeds, the clock-synchronization accuracy requirement goes up linearly with speed — which also means that the cost of the clock synchronization goes up with speed — and it cannot support, you know, links of varying or larger latency, because it is synchronizing packets upon receipt based on the receive timestamp of the packet and not on something in the packet. So, you know, the latency of the links has to be very small — something like a few kilometers.
K
So that's also why TSN didn't want to do it. But that is really what is necessary to reduce the requirement on clock synchronization and its cost, and then allow arbitrarily long wide-area network links, and also links with jitter — which obviously even, you know, high-speed gigabit Ethernet and other links like mobile links have, through forward error correction and retransmissions at the link layer. So that technology is working in hardware.
K
So we did prove that with, you know, a proof-of-concept hardware implementation, just on FPGA, for that cyclic queuing on 100-gigabit standard metropolitan and wide-area network routers — so at 2,000-kilometer range and less than 100 microseconds end-to-end jitter, because that was just the cycle time. And I'm not going to go into the details of how more cycles, like shown on the right, give you more variability in, you know, jitter or clock inaccuracy, but that's pretty much the reason to go for more than three cycles.
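The arithmetic behind the "jitter is just the cycle time" point is simple: in cyclic forwarding, a packet admitted during one cycle at a hop is transmitted in a fixed later cycle at the next hop, so per-hop delay is bounded in whole cycles and end-to-end jitter stays bounded by cycles regardless of hop count. A sketch of the classic two-buffer bound (the hop count, cycle time, and propagation figure are invented, not the FPGA trial's numbers):

```python
# Classic two-buffer CQF bound sketch: a packet entering a hop during
# cycle i is sent during cycle i+1, so end-to-end delay over h hops
# stays within roughly (h-1) to (h+1) cycle times plus propagation,
# and jitter is ~2 cycles independent of hop count. Numbers invented.

def cqf_bounds_us(hops, cycle_us, propagation_us):
    best = (hops - 1) * cycle_us + propagation_us
    worst = (hops + 1) * cycle_us + propagation_us
    return best, worst, worst - best   # jitter: 2 cycles, hop-independent

best, worst, jitter = cqf_bounds_us(hops=10, cycle_us=50, propagation_us=10_000)
# jitter stays at 100 us no matter how many hops the path has
```

This hop-independence of the jitter bound is what distinguishes CQF-style mechanisms from the priority-queuing alternatives mentioned next, whose jitter spans from zero to the full worst-case queuing delay.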
K
Most industrial TSN applications want and need low jitter. For example, higher jitter may introduce the need to run PTP in the network purely for client devices — add PTP stacks to the client devices to resync control loops because of the introduced jitter. And the current bounded-latency alternatives to CQF really have the maximum jitter, right: zero queuing latency when there is no competing traffic, and maximum latency under maximum competing traffic. So TCQF is, in my opinion, really the only short-term option that solves both challenges.
K
There are some other options, but they would all require new packet headers, because they can't get away with just a few values to indicate the necessary, you know, queuing state that we need to make it work. And all of these additional options don't have any, I think, deployment, validation, or standardization track record. Next slide.
K
So yeah — this is basically all the stuff that we could do beyond this draft, and where it gets interesting but also difficult. And that gets pretty much to extension headers for what we want to do. So we could do one for bounded latency now, for additional proposals, but I think we should be careful not to do a one-off, right?
K
So — and maybe, you know, in the interest of time, not go through all of this — but there were other, you know, options for low jitter brought up in the interim. I also, you know, published one particular mechanism last month, so feel free to read up on that. All of these options may have slightly different parameters. Mine, for example, has one timestamp and then a sequence of priorities to achieve the same calculus that TSN ATS has, so that it can be pushed back.
K
But, you know, there are a number of other things — you know, Yaakov Stein proposed a stochastically bounded latency solution. So I think all of this is much further out before we could successfully deploy it. I hope we can and will work on it, but I think that puts it in a separate bucket from the TCQF, which I think is ready to deploy now, if we had a standard to actually allow interoperability. Which brings us to the final slide — next slide, please.
K
So here are pretty much my two asks, right? So, you know, working-group adoption for this TCQF draft — and, you know, if that cannot be adopted because there may need to be charter changes, let's try to figure out how to work through that process quickly. I don't think it really would need to—
K
—you know, be considered to be any different from the PREOF work that we're doing, which also deals with algorithms in the routers and has since we started DetNet. And then the second ask is to think about how we can get towards something like a DetNet QoS design team with more regular and informal meetings, yeah. And thanks for the working-group chair slides — you'll make the WebEx available.
K
So that's great too: PREOF latency — not sure if there's anything else on the QoS side — and then maybe, you know, start discussing there the current drafts that we already have, which I want to bring into the working group for the round-one deliverables, and then the round two for these, you know, new packet-header encapsulations.
A
Thank you, Toerless. So maybe one thought on the working-group adoption: perhaps — so we are developing the requirements, and I think that goes before solutions.
K
Well, I mean, we've been, you know, on this for four years; this is working; we know the industry needs it. This is not necessarily covering all the operational requirements that we have, so I think, you know, any of these larger-scale requirements documents would be a broader scope than what I think we do agree to be, you know, the requirements for a bounded-latency solution, right? And remember that, you know, we don't even have anything for the other bounded-latency solutions from the management plane either.
K
So I think there is no need to delay this; this can pretty much be done in parallel. And, you know, in my working group the ADs even wanted us to first work on the solution and, in parallel if at all, work through, you know, the other aspects. So I think that's always in the eye of the beholder, how to sequentialize things.
D
So I do think we have a charter issue to work. The chairs can start working that with the AD and probably come up with some proposed text — maybe we would get the AD at least to agree in principle — and then bring it to the working group for review and comment and run that process. In that time it would be great to see more discussion and input into this document, and we can run those in parallel.
K
As I said, right — I think that, and I think that was a little bit the conclusion you took down at the end of the interim, mostly applies to what I had on my slide as the second part, which was, you know, looking into all these, you know, novel things, as opposed to the very short-term things that we've also been doing with PREOF, and where this here draft would fall into. But yeah, fine.
D
Thanks, yeah. I mean, we've had the discussion before: queuing has typically not been done in the routing area. Since this is, you know, different for the routing area, we want to make sure that the IESG and the AD are fully on board with it. I think just going off and doing it without their buy-in is just going to cause problems down the road when we try to advance it through the IESG. So again—
D
—we can run it in parallel, and certainly try to have text that is either complete or fairly mature by the next meeting.
K
D
—been started. We can have that discussion on the list; it's not going to change. We're going to work with the AD on this — sure — and the IESG. So I—
D
—could keep talking, but we're out of time on this slot. For more discussion, take it to the list — certainly technical discussion is something we're hoping does happen on the list. And with that, we're on our last slot, one—
N
So we mainly discussed the requirements for the flow mapping. There are two parts to the requirements for the flow mapping. The first one is the primary requirements of the control plane for the TSN and DetNet flow mapping: the mapping between TSN streams and DetNet flows is required for the service proxy function at the DetNet ingress, and the mapping table can be configured and maintained in the control plane.
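The mapping table described here — installed by the control plane and consulted by the service proxy at the DetNet ingress — can be sketched as a simple match/action lookup. All field names, keys, and values below are illustrative assumptions, not the draft's encoding:

```python
# TSN-stream -> DetNet-flow mapping sketch for the ingress service proxy.
# The control plane installs match rules (here: stream identification by
# destination MAC + VLAN) and an action naming the DetNet service.
# Keys, fields, and values are illustrative only.

mapping_table = {
    # (dest_mac, vlan_id) -> action for the matched TSN stream
    ("00:1b:19:00:00:01", 100): {"detnet_flow": "flow-7", "preof": True},
    ("00:1b:19:00:00:02", 200): {"detnet_flow": "flow-9", "preof": False},
}

def classify(dest_mac, vlan_id):
    """Return the DetNet action for a TSN stream, or None if no rule matches."""
    return mapping_table.get((dest_mac, vlan_id))

action = classify("00:1b:19:00:00:01", 100)
# the matched stream is mapped onto DetNet flow "flow-7"
```

The BGP FlowSpec extensions discussed next are one candidate way to distribute exactly this kind of match rule plus action from the control plane to the ingress nodes.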
N
So this document proposes control-plane extensions to the BGP flow specification for the flow mapping: using the traffic-filtering rules to identify the packet, and using the associated action to map the packet to the related service.
N
This document proposes a new type in the Layer-2 component FlowSpec types for TSN traffic filtering — for example, match on the maximum service data unit (MSDU) size and match on the stream identification.
N
So, moreover, for the traffic action on TSN streams, the action is defined to accept the TSN streams that match the rule and to map the streams to the DetNet flows. This document also proposes a sequencing action extended community. Next slide, please.
N
And similarly, for DetNet flows matched and mapped into TSN streams, this document also proposes a d-CW type in the Layer-3 component FlowSpec for the DetNet MPLS flows. The extended action for DetNet traffic filtering is to accept the flows that match the DetNet rule and to map the flows to the TSN streams; the document also proposes a TSN action extended community.
N
The TSN profile can be converted to the stream-related parameters and requirements, including the TSN stream ID, stream handle, sequence number, and the traffic scheduling information. Next slide, please.
N
So, as a last step, we plan to present and discuss this in the IDR WG, and we want to get more feedback; comments and discussion are very welcome. Thank you.
O
Thanks, Lou. I'm going to apologize — I actually only caught a portion of the presentation after I was flagged about this, and, you know, I've read the draft very lightly. I do see that it is doing much work with FlowSpec. A quick word on that: the base FlowSpec RFC, which is the current version of the protocol, is known to not be extensible, so there is work that is starting in IDR to do a FlowSpec v2.
O
So I think you're going to find that some of your changes will probably require the FlowSpec v2 work in order to become a viable protocol component. IDR should be holding an interim on FlowSpec sometime, very likely early December, and—
B
H
N
Thanks, Jeff. I will change the extensions to FlowSpec version 2 in the next version. But in this presentation I'd like to get some feedback on the requirements for the flow mapping. Thank you.
D
Now, from the DetNet side, I think it's important to qualify the document as really focused on TSN mapping, not just generic DetNet — so having the document identify its scope as TSN in the title, the abstract, and the narrative part. That would be important from a process standpoint.
D
I think, you know, the DetNet chairs are happy for it to be run wherever the AD thinks is appropriate, whether that be DetNet or IDR; it sounds like it might be best to run it in IDR, given where FlowSpec is and what Jeff commented.
O
Thanks, Lou. So the first thing I was going to say is that we do have — I'm having trouble loading my agenda, so I can't tell whether you're on the presentation slot or not — but if you were not a presenter for IDR's session tomorrow, we do actually have some available time. So please consider yourself invited to present, if you are not already there.
D
And Jeff, after that, if you could let us know sort of what the opinion of the group is, we can coordinate with our AD, John, offline to make sure this ends up in the right place. That would be great. Okay, and—
O
One last comment, to give you your time back: certainly in IDR we can help look through the BGP protocol extensions, and, you know, much like with other plumbing protocols, it will be up to you guys to determine whether this makes sense as a mechanism for DetNet. Thank you.
D
Gotcha — thank you very much. Anything else you want to say before we end the session?
N
Yeah, thank you. Thank you, Jeff. I will contact Jeff to see if we would like to present.
D
Okay, great. Thank you all for a good session — really appreciate all the contributions and good work. János apologizes for not saying goodbye; he had some audio issues. But thank you all, and we have a chance of maybe being in person next time, at least some of us, so I hope to see some of you at the next meeting.