From YouTube: IETF114-DETNET-20220728-1730
Description
DETNET meeting session at IETF114
2022/07/28 1730
https://datatracker.ietf.org/meeting/114/proceedings/
A: Minutes and so on: you find the information at the usual places. I won't go through it all, but I would like to remind everyone of the IETF Note Well, which is our rules of operation, and I would also like to remind everyone that what is being said here is getting minuted and becomes part of the permanent record of the IETF as a contribution.

A: If you are not familiar with the Note Well, then please, please go and double-check. Also a reminder on our guidelines in terms of behavior, which we expect everybody to follow.
A: As for attendance in these meetings, a couple of hints on this slide. For in-person attendance, please join the Meetecho call either from your phone or via Meetecho Lite, which gives you the opportunity to control the slides if you present.

A: Oh yes, that's a good point: we will have a couple of questions to the group, and we will use the Meetecho polling tool. We would ask everyone to join for that.

A: Yeah, the QR code is on the screen; if you are in person, it's on the right-hand side for you. You can also join from the agenda page. The blue sheets are filled automatically by joining with Meetecho, so you don't need to do anything extra, and we also use the chat; for example, you can just use what we have in the Meetecho notes. Taking minutes is a joint effort, so I would like to encourage everyone to join in and help out.
A: An update on the status of our working group: we have two documents for which publication has been requested.

A: That process is ongoing and we are going forward as usual, and we have two working group drafts that are not on the agenda at this meeting.

A: One is the controller plane framework, which we should come back to and maybe touch upon a bit later in this session, and one of the newer ones, the DetNet MPLS-over-UDP/IP draft.
A
We
have
received
a
liaison
letter
from
itu.
This
study
group
13
question
6.
I
think
yes,
question
6.,
we
have
been
in
liaison
exchange
with
them
received
one
formally
for
information.
We
provided
a
response
to
their
work
on
deterministic
communications
and
we
have
received
now
this
newer
liaison
for
information
with
three
attachments.
A: One significant step since the last IETF is that our charter has been updated and approved; the approval was the new step. We had been discussing the update; you can see the actual changes on the screen. As we have been discussing, the intention behind the update is to address enhanced requirements towards DetNet and, to address these requirements, to enable the development of queuing solutions.

A: We will have contributions on this later today. Milestones: we have just recently updated the milestones, at the beginning of the week, capturing our progress.
A: The blue ones on the screen are the actual changes to the dates, adjusted because those dates were no longer valid, and I think we should try to pay attention to moving forward to meet these milestones.

A: As good motivation, also a reminder on the IPR disclosure process: it is very important to disclose IPR. The last step goes a bit beyond the rest: if there is a new author, then we ask for a disclosure.
A: As for progressing our work, we have the virtual possibilities in between the IETF sessions: we have been leveraging the working group webex, and we can do that for other topics as well. Most recently it has been the OAM work: progressing the OAM drafts we had biweekly meetings, and we can meet upon request.

J: Let's move on.

K: Can I take it?
K: Okay, so we have three working group documents. The OAM framework for DetNet, as you've seen, is in working group last call, and of course we appreciate your comments, reviews, and responses to the last-call request; and there are two documents that analyze the applicability of the existing OAM toolbox for the IP data plane and the MPLS data plane to the specifics of the DetNet architecture, specifically that we have two sub-layers: the forwarding sub-layer and the DetNet service sub-layer. So let's take a look at what we have.
K: The framework document lists the general requirements for DetNet OAM that is executed between DetNet maintenance endpoints, and requirements specific to proactive and on-demand monitoring and measurement OAM methods for active OAM. Also, we recognize that some of the tools are used for troubleshooting, mostly on demand, or proactively for longer-lasting test sessions, and they may be configured to be instantiated and activated when the service is brought up.
K
So
active
oems
should
support
active
and
passive,
and
hybrid
oem
are
identified,
active
oem,
it's
using
specifically
constructed
in
injected
in
their
network
test
packets,
passive,
it's
we're
more
familiar
with
their
snmp
queries,
but
at
the
same
time
that
can
be
based
on
notifications
or
rpc,
with
their
using
based
on
a
yang
data
model
and
hybrid.
It's
sometimes
characterized
something
in
between.
So
there
are
mechanisms
that
combine
elements
of
active,
oem
and
passive,
usually
with
a
minimum.
K
K
K
K: So, as I mentioned, the DetNet architecture identifies two sub-layers: the forwarding sub-layer and the service sub-layer. For the forwarding sub-layer, what is important is to be able to do path MTU discovery and to use a remote defect indication, so that the remote peer can indicate a loss of path continuity, which is usually how it is done in BFD, and also to support monitoring at the level of a path segment.

K: For the service sub-layer, we identify the use of PREOF, the packet replication, elimination, and ordering sub-functions that can be instantiated in a DetNet domain, and the requirements for the DetNet service sub-layer OAM to do discovery and functional verification.
K: Okay. The next step is to talk about and discuss the DetNet IP data plane. The well-known mechanisms here are ICMP-based, what we usually refer to as ping and traceroute, well known for checking path continuity.
K: The challenge here is that these are, in the case of ICMP, based on an IP protocol number, or, in the case of BFD, use UDP as a transport with well-known UDP ports. So their mapping to a particular DetNet flow is not automatic if the DetNet flow is identified based on a 6-tuple.

K: So the mapping could be an operational challenge. Yes, it's possible, it's doable, but the operator must be aware of it and use appropriate mechanisms to do the correlation, so that OAM is used to monitor the desired, targeted DetNet flow.
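A minimal sketch of the correlation the presenter describes, assuming a hypothetical operator-provisioned table (none of these names or values come from the drafts): an OAM test packet is mapped to a DetNet flow only when it reproduces the flow's full 6-tuple.

```python
# Hypothetical provisioning table: the 6-tuple that identifies a
# DetNet IP flow maps to a flow name the operator assigned.
flow_table = {
    # (src IP, dst IP, protocol, src port, dst port, DSCP)
    ("10.0.0.1", "10.0.0.2", 17, 49152, 862, 46): "detnet-flow-A",
}

def classify(six_tuple):
    """Return the provisioned DetNet flow for a packet's 6-tuple, or None."""
    return flow_table.get(six_tuple)
```

An OAM packet that differs in even one field (for example a different well-known destination port) falls outside the flow, which is exactly the operational challenge being discussed.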
K: Okay, any questions? Okay, let's move on; next slide, please. Okay, so one possible method of doing active OAM at the forwarding sub-layer in an IP environment for DetNet is to use the DetNet-in-UDP encapsulation: the DetNet flow itself is encapsulated in a UDP tunnel, and then the active OAM is encapsulated in the same UDP tunnel. Thus we ensure that at the forwarding layer they are fate-sharing, because they use the same encapsulation.
J: In the past we've talked about using aggregated state, so as not to have to do an exterior encapsulation, and we've not had any firm proposal on that.
K: Yeah, actually, if we look at the previous slide: that's how the existing tools, without any encapsulation, are envisioned to be used. So basically there are a lot of operational mechanisms to establish the relationship between the monitored DetNet flow, the one intended to be monitored, and the OAM mechanisms. The challenge is that we're using a specific IP protocol, such as ICMP, or a particular well-known UDP port, for the different OAM protocols.

K: BFD is distinct from TWAMP; STAMP can use the same well-known UDP port, 862. So we are already bound, by the OAM protocol, to the UDP destination port we use, and by the fact that we're using UDP transport.
K
So
it
requires
special
provisioning
on
within
a
detonate
domain
on
a
forwarding
sub
layer
to
establish
this
correlation
so
that
test
packets
are
treated
the
same
way.
K
One
way
of
helping
is,
for
example,
it
could
be
use
of
source,
port
numbers
or
udp
source
port
numbers
to
help
to
establishing,
and
to
that
degree
we
have
this
discussion
reflected
in
that
net
for
ap
data
plane
document.
K
Oh
okay,
that's
interesting
so
basically
to
have
all
different,
active
oem
protocols
in
the
same
uvb
tunnel
and
then
map
that
udp
tunnel.
Oh
okay,
that,
yes,
that's
that's
something
different
yeah.
J
K
No,
actually,
I
think
that
okay,
then
I
missed
that
because
that's
a
little
bit
different.
That's
interesting
idea
because,
as
you
see,
okay,
I
have
okay.
I
have
to
say
I
haven't
thought
about
it.
Okay,
that's
fine!
No!
But
no!
No!
It's
it's!
A
good
idea,
I'll
I'll
make
sure
that
will
be
included
in
the
next
version
of
detriment
and
id
great.
J: Thank you. We also have Pascal on.

M: Yes, thank you. So basically we need that correlation, that's clear. For IPv4, I understand that we are stuck with what you're saying; for IPv6...
M: The cool thing about using an option is that you don't have to create an extra UDP encaps, and you don't have to have the silicon go all the way deep into the packet, past the UDP header, to find which DetNet operation you're looking at, if it can hold a signal right after the IPv6 header in a hop-by-hop option. So that's why I have this draft. I mean, we have not discussed it much, but that is the whole reason for this hop-by-hop option; it can also be a destination option.

M: I mean, depending on what you do. If you look at the drafts you'll find more, but basically the flow is an application-layer concept. The recognition of the network operation on the packet is a network concept; it doesn't have to be signaled by the application layer. The application layer can have multiple flows, and it's up to us to decide how to tag the treatment that we're going to give to those flows. That can be OAM, that can be data, and there can be multiple flows being merged into one big pipe.
K: So if it's not an external UDP encapsulation, then the destination port number has to be that of the given protocol, and then it is basically the case of the previous slide, where we have what I refer to as the existing OAM.
M: Yeah, that's exactly what I was telling you. We can solve it a lot more easily, because you don't have to change your packet. You just tag the packet with an IPv6 hop-by-hop header, but your OAM packet remains the same; the DetNet treatment does not depend on the ports anymore, it depends on the hop-by-hop header. So they get the same treatment as long as they have the same hop-by-hop header, and you use the ports that you like; you do everything you like, really.
K
Yeah,
we
need
to
talk
about
it
because
I
don't
think
that
their
processing
of
oem
protocols,
with
hub
by
hop
header
by
transit
nodes
have
been
defined
because
I,
for
example,
in
in
b
in
the
case
of
bfd
like
well
single
hop.
Obviously
it's
considered
to
be
for
single
ip
hub.
But
if
we
talk
about
multi-hop,
I
don't
need
to
look
at
and
we
need
to
look
at
and
think
of
how
the
transit
ip
node
react
to
multi-hop
bfd
packet,
which
is
it
does
not
have
a
bfd
discriminator.
M: It is on the mailing list; the draft is there, right, it's been posted, and it details all this. One of the big goals of these drafts is to enable decoupling the treatment from the transport ports; the goal is to signal the treatment at layer 3 and leave the application information, like the ports, to the application, so they do what they like. Now, we should discuss that on the mailing list; I don't want to clog your session.
K: Yes, thank you. Okay, so for the MPLS service sub-layer OAM, the proposal is to use MPLS in UDP: we have the outer UDP tunnel, and then within it we encapsulate as we encapsulate DetNet for an MPLS network. Thus it gives us the DetNet control word and, for OAM, the DetNet ACH, and then we can do the PREOF OAM in terms of discovery and function verification. If we go to the next slide: this is more specifically what's defined in the DetNet OAM for the MPLS data plane, the modified ACH.
K: So, as you see, the word is different from the DetNet control word: first by the first nibble, which is 1 compared to 0 in the control word, and then we are effectively modifying the ACH defined for pseudowires, with a new version number and then a new format.
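As a rough illustration of that distinction (field values here are hypothetical, following the generic RFC 4385 ACH layout rather than the draft's exact encoding), the 32-bit word can be packed like this:

```python
import struct

def ach_word(version, channel_type, reserved=0):
    # First nibble 0b0001 marks an Associated Channel Header
    # (RFC 4385 layout): 4-bit version, 8-bit reserved field,
    # 16-bit channel type. A DetNet control word instead starts
    # with a 0b0000 nibble, which is how the two are told apart.
    word = (0x1 << 28) | ((version & 0xF) << 24) \
         | ((reserved & 0xFF) << 16) | (channel_type & 0xFFFF)
    return struct.pack("!I", word)
```

The channel-type and version values that DetNet OAM would actually use are defined in the draft, not here; this only shows where the distinguishing first nibble sits.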
O: Okay, thank you. My name is Jinoo Joung; it's nice to present here. This is about the asynchronous deterministic networking framework for large-scale networks.

O: Okay, so it specifies a framework for both latency and jitter bound guarantees in large-scale networks, with dynamic sources with arbitrary input patterns.
O: ...the link capacity has to exceed the input rate, in order for every flow to have a guaranteed rate that is higher than its average rate, so resource reservation and admission control are mandatory; these are out of scope of this draft. The DiffServ framework works well for lightly utilized networks.
O: The b_out here is the flow's burst size out of that node. So anyway, the burst becomes larger, but you can see that the burst out is a function of the utilization and a function of the other flows' bursts, or sometimes a function of the other flows' maximum packet length, so they can vary a lot; but anyhow, they increase.
O: As for solutions to mitigate the burst accumulation, there are quite a lot of them, I would say. Slotted operation, maybe without strict synchronization, can be a solution; metadata-based packet forwarding, such as with a latency budget, etc., can also be a solution; and there is flow regulation, which is the direct solution to mitigate burst accumulation.
O
It
has
many
variations,
I
I
think,
may
may
disperse,
but
sometimes
may
not
disperse
the
accumulated
burst,
and
it
requires
the
lookup,
the
metadata,
lookup
and
decision
based
on
the
metadata
and
the
node
state
and
q
reordering
and
overwriting
the
metadata
at
the
departure
in
line
speed.
These
are
required.
These
are
required
in
the.
In
the
meantime,
the
flow
regulation
has
its
own.
I
show
coming.
It
requires
flow
state.
Maintaining,
but
we
argue
that
this
can
be
overcome
with
the
flow
aggregation.
O: Such an example is ATS, IEEE's ATS. It is based on the interleaved regulator, which has the nice property that the IR does not increase the worst-case latency of the FIFO system, so the FIFO system must be placed in front of the interleaved regulator. It can be implemented as in the figure on the right side.
O: A flow aggregate is a set of flows having the same input and output ports of a node, and mainly it regulates the flow aggregate, not the individual flow, based on the parameters of the sum of the burst sizes and the sum of the average rates. So it's a kind of loose regulation, and it has the best scalability: no need to maintain individual flow state. And it has been shown to work almost as well as ATS.
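A token-bucket sketch of that aggregate regulation, assuming the aggregate is policed against the sum of the member flows' rates and the sum of their burst sizes (the class and its parameters are illustrative, not from the draft):

```python
class AggregateRegulator:
    """Admit a packet when the aggregate conforms to a token bucket
    parameterized by (sum of member rates R, sum of member bursts B).
    Only this one bucket is kept, not per-flow state."""

    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, 0.0

    def conforms(self, now, length):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if length <= self.tokens:
            self.tokens -= length
            return True
        return False
```

The point of the design is visible in the state: one `(tokens, last)` pair per input/output port pair, regardless of how many flows share it.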
O: There are other possible solutions not specified in the draft; for example, a stricter regulation than this can be a solution as well.

O: So there are lots of possibilities in this framework, I would say. Finally, based on this latency-guarantee framework, we can also guarantee a jitter upper bound; here the jitter is defined as the...
O: Yeah, it has been shown, it has been proved, that the jitter is upper-bounded and the end-to-end buffered latency is also bounded. We can control the jitter bound; we can even tighten the jitter term. In that case the end-to-end buffered latency is almost twice the end-to-end latency bound; it has been proven. Sorry for running over time. So, thank you; please look at the draft, and comments and questions are very, very welcome. Thank you.
J: This is actually our first document in sort of this new area, where we're looking at enhanced data plane queuing mechanisms and changes in how we can support traffic treatment in DetNet; that's the term that was used in the charter.

J: It's a new area for us and we're very interested in the contributions, so thank you, and thank you that we have a number of presentations on this topic; that's great. One thing to keep in mind: there's a sort of theme throughout a number of these solution documents that talk about changes to the data plane.
J: I think this draft used the term metadata; that's sort of used in others too, but there's a common theme of some extra information that's needed to support the enhanced queuing and enhanced traffic treatment, and one of the questions for the working group is going to be: do we want to identify a common one? How many of these do we want to put forward? This is just something to keep in your head as you hear the different discussions, and we are going to be asking whether you are interested in hearing more on each of these topics. So, I see Toerless getting up for this: I really don't want to have a long discussion about this; I want to give time for the presentations.
J: We had one question on the last one, so why don't we go there, and then we'll move to the next. So, sure, who was first?
G: Lincoln from Futurewei. This is very interesting work, and I think it's very important for DetNet applied to IP networks. But I have a couple of questions here. First, what is the definition of a flow here? Is it an IP flow?

G: Okay, the second question: obviously you're using regulation, or let's call it traffic shaping. How do you get the rate: by provisioning, or by whatever other method?
J: I would suggest that it would be best to put this to the authors on the list; they're not in the room. Actually, they went over a little bit; they were nice enough to apologize for going over, but that really didn't leave time for questions on the document. Sorry; please, please send it to the list. Xuesong?
P: Two very brief questions, just for clarification. The first one: we have another document called bounded latency, and I think this document is very closely related to that one. So the question for the authors is: what is the relationship between these two documents?

P: I understand that this document mainly focuses on the asynchronous case, but I think it still has to deal with the relationship with the existing document. And the other question: I noticed that this document is called a framework of asynchronous, maybe shaping or scheduling, mechanisms, and it lists a set of mechanisms that could be used to provide bounded latency or bounded jitter. I think that is very good work and it could be a good start, but it still leaves a question for us.
P: What is the relationship with the existing mechanisms defined in IEEE? Because, as we discussed before in the working group, there is a lot of existing work that has been discussed for a long time in IEEE. So if we have done similar work to that, what is the method by which we can, you know, synchronize with the IEEE people, and how do we cooperate with them? I think that would be another question from me. Thank you; it's meaningful work, I think, and thank you for the contributions from the authors.
E: Just a quick comment on what you were saying: to compare better, maybe at some point in time we want to, you know, have some summary of alternatives, or a structure, or something, so that we don't just have a long list and not even know how to navigate it. For example, the biggest two buckets I can think of: one is the solutions where you can get better DetNet treatment by just relying on the existing packet headers that we have, effectively.

E: You know, I think that's what we've been hoping to do: to be able to attach state to the 5-tuple or 6-tuple flow elements. That implies that maybe we can even just take, you know, whatever queuing discipline we borrow from TSN, and we don't need to change anything in the forwarding plane. My proposal here also tries to do that trick, to get deployed more quickly, and the other ones do cooler things by having new packet headers. So that's one big area of distinction.
J: Yeah, and going to Xuesong's point, we do sort of expect that there's going to be some falling out, maybe some consolidation, between these things, which may at first seem pretty separate but are aiming towards the same place. In some cases; in some cases they're not, but hopefully we'll see.
R: Hello, I'm John Chong from BT. I would like to talk about the DetNet enhancements for large-scale DetNet IP networks. Next slide, please.

R: Okay, thank you, okay. So first, the background: the latest working group charter was updated in July; packet-treatment-related methods should be supported in the data plane. The milestones show the document plan of the DetNet WG for the coming two years; enhanced DetNet is the last focal topic. So, next slide. We will talk about some problems that need to be discussed first.
R: What is enhanced DetNet? From the charter and milestones, enhanced DetNet is required to provide enhancements of flow identification and packet treatment, and to support enhanced functions or mechanisms for the DetNet data plane to achieve the DetNet QoS. And what is the enhancement of packet treatment? As per RFC 8938, DetNet-related data plane functions are decomposed into two sub-layers, a service sub-layer and a forwarding sub-layer, and the DetNet-specific metadata and the DetNet IP and MPLS data planes have been described.

R: So, in my view, for the enhancement of packet treatment, the treatment functions for the DetNet data plane should be described, and the treatment-specific metadata and encapsulation should be defined for DetNet flows. As for requirements, the enhancement requirements have been discussed several times, including technical requirements and data plane enhancement requirements; what's more, the requirements from the perspective of the service sub-layer and the forwarding sub-layer have also been described.
R
For
example,
the
deterministic
surface
may
demand
different
this
ministry
queues
requirements.
These
missed
rules
should
be
established
and
the
distributed
rules
and
the
inter
domain
rules
should
be
taken
into
consideration
and
the
the
resource
should
be
managed
to
provided
pending
latency
guarantees
for
the
dismissed
forwarding.
R
R
So
folder
data
plane,
the
enhancement
for
the
data
plane,
is
shown
on
the
right.
A
right
side
contrast
to
rfc
8938,
the
enhancement
in
the
enhanced
treatment
functions,
may
be
added,
such
as
flow
aggregation
flow
redundancy.
A
R: Okay, okay, I will continue. So for the data plane considerations, we think the enhancement for the data plane should be as shown on the right side. In contrast to RFC 8938, enhanced treatment functions may be added, such as flow aggregation, flow redundancy, and service-level aggregation at the service sub-layer; and for the forwarding sub-layer, maybe multiple queuing mechanisms, deterministic paths, resource scheduling, distributed routes, and so on, and many functions to improve the DetNet performance to achieve the DetNet QoS. And the treatment metadata used by those functions should be carried, such as service-level information, aggregated-flow information, redundancy information, path information, queue information, and so on; many other kinds of information can be added to the metadata.
R: And so the solutions for the enhanced treatment functions and metadata are open to the working group. We call for co-workers and reviewers, and hope to provide a more visible and achievable way to progress this work. Comments and questions are appreciated, and you are welcome to join us. Thank you.
P: I think my first question is also for the chairs, because page five mentioned some requirements for the control plane. Page five; the next slide. Thank you.

P: Yes, you mentioned some control plane considerations. Actually, as an author of the controller plane framework, my question is whether the existing controller plane framework is supposed to contain only the DetNet control plane requirements and some framework considerations, or whether we should also consider the enhanced DetNet control plane considerations. If we go with the former method, maybe there will be another document, maybe called control plane considerations, or a framework, for enhanced DetNet, as listed in this slide.
P: But if we go with the latter method, maybe it can be combined into the existing DetNet framework. I prefer the latter one, because the controller plane framework document itself is still under discussion, so we would like to combine in the considerations of enhanced DetNet.

P: Okay, thank you. And the second question is for page four. I noticed that here you also listed some enhanced data plane requirements for the service sub-layer, for example flow aggregation, flow redundancy, and service-level aggregation. All these functions, I think, have already been defined in previous DetNet discussions. So my question is: what is new here, or what should be enhanced here, for the service sub-layer?
R: In my view, RFC 8938 and RFC 8939 have been released, and some of the functions, or some of the encapsulations, have not been covered in those RFCs. So, in my view, the additional functions or encapsulations all belong to the enhancement. It's up to the DetNet working group. Thank you.
J: For instance, you mentioned the requirements document, the individual draft, and we do have on our milestones adopting such a requirements document, and it seems like some of the text you've described in your draft could go to that document. And perhaps, as we start looking at the specific text, some of it we will find already exists in RFCs, and other parts will maybe go to other documents. So we might end up realigning and redistributing as we move towards working group adoption of solution documents as well as the architecture document.

A: Yes, so please go ahead with the next presentation. You have the floor.
R: Okay, thank you. This is about the DetNet queuing option.

R: Sorry. So we just discussed the enhanced DetNet data plane: it should provide the functions to achieve the DetNet QoS, such as end-to-end deterministic latency. So what is end-to-end deterministic latency? As per the IETF DetNet bounded-latency work, the end-to-end bound can be computed as the sum of the non-queuing delay and the queuing delay along the path.
R
The
upper
down
bound
of
no
queuing
delay
are
constant,
so
the
end
to
end
latency
depends
on
the
value
of
a
queuing
delay
along
with
the
queuing
mechanisms.
So
the
qe
information
should
be
taken
into
considerations
and
enhance
the
deflat
data
plan
to
realize
and
realize
the
deterministic
latency
and
how
to
ensure
deterministic
latency.
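The decomposition the presenter cites, following the DetNet bounded-latency model, can be written as a per-hop sum (the symbols here are illustrative, not the draft's notation):

```latex
D_{\mathrm{e2e}} \;\le\; \sum_{h=1}^{H} \bigl( d^{\mathrm{proc}}_{h} + d^{\mathrm{prop}}_{h} + d^{\mathrm{queue}}_{h} \bigr)
```

where the processing and propagation terms are constants per hop, so only the queuing term $d^{\mathrm{queue}}_{h}$ depends on the queuing mechanism, which is why the proposal focuses on carrying queuing information in the data plane.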
R: First, we should provide the functions or technologies to ensure deterministic latency, such as the queuing mechanisms, and carry the related information in the data plane. Second, we propose a common queuing option, and the generic queue information should be carried in data plane metadata. And finally, the MPLS, IPv6, and SRv6 encapsulations should be taken into consideration.
R: And so for the requirements: the enhancement requirements, as described, state that it is required to provide the information used by functions ensuring deterministic latency, and to define the related metadata to help regulation and queue management. The large-scale enhancements also propose that the enhanced DetNet data plane is required to support methods of packet treatment, and that will include deterministic latency scheduling. The packet treatment should support the queuing treatment and the identification of queuing-related information, and multiple queuing mechanisms can be used to guarantee deterministic latency for applications in DetNet networks. So it is required to carry queuing information in the data plane, to make per-packet forwarding and scheduling decisions that meet the latency bounds.
R: Also, the queue information will list the queuing mechanisms that have been discussed in DetNet; that should get confirmation from the working group. But first of all, a queuing mechanism should be selectable in the enhanced DetNet data plane, so that it can be selected to guarantee the latency.

R: Second, the queue information may cover the planned and required queuing delay, such as the maximum queuing delay and the maximum queuing delay variation, and queuing-related parameters should be carried for further correlation between nodes, for example cycle information, deadline information, the latency budget, and so on; many other parameters can be added to the queue information.
R: So we propose the IPv6 extension solution to provide the encapsulation for the queue information: we define a new IPv6 option for DetNet to carry the queue information, and we define queuing flags to help discriminate the types of queuing mechanisms.

R: We also propose the MPLS extension solution: we provide additional encapsulation for the queuing delay of DetNet flows in the MPLS data plane. We align with the ongoing work in the MPLS working group, the IETF MPLS MNA framework, which has just been adopted as a working group document.
R
So
we
propose
to
add
spl
to
carry
deadlock.
Q
redefine
deadlock
queen
and
catering
spl,
and
they
used
tre
to
carry
that
qa
information
and
use
sub
tlv
to
carry
specific
data,
queueing
information
so.
R
The
types
of
queuing
mechanisms
used
for
that
sled
and
the
related
queue
information
should
be
discussed
in
details.
That
may
be
that
should
get
a
complete
confirmation
from
a
working
group
and
their
last
steps.
Other
encapsulations,
such
as
mps
over
udp
and
their
srh,
may
be
taken
into
consideration
and
we
will
follow
the
chatter
and
the
milestones
of
deadlock
and
the
line
with
the
tommy
terminology,
so
comments
and
the
questions
are
appreciated.
Thank
you.
R: That is up to the particular queuing mechanisms; this draft just proposed the scheduling over multiple queuing mechanisms. So I think what you ask belongs to the queuing mechanism, not the scheduling.
A: Yes, yes, thank you. So we are really over time; I don't know if you can take it to the list — that would be the best. Sure.

J: You're actually jumping the queue. So if you don't mind taking it to the list, that'd be great. Robin, if you don't mind taking it to the list as well, thank you. And, I think, Toerless, please; and we're gonna steal a little time from you and the next presenters. It's not just you, it's the next presenters also. All right.
E
So
this
stuff
has
been
around
for
five
years.
Three
different
drafts
next
slide
faster
all
right,
so
there
is
a
very
long
draft
that
I
wrote
with
all
the
nasty
and
interesting
background
expired
if
you're
interested
to
have
informational
or
even
individual.
For
that
draft,
let
me
know
separately
the
requirements
also
integrated
in
the
active
draft
that
carries
it
as
requirements.
E
This
is
all
the
good
stuff.
This
solution
does
bounded
latency,
dough,
right,
minimum
jitter,
that's
the
most
important
part,
synchronous,
industrial
control,
loops
and
other
stuff,
and
also
it
makes
end
devices
cheaper
because
it
removes
the
clock
synchronization
requirement
from
them
arbitrary
links
and
jitter,
which
is
what
tsn
doesn't
need
and
then
the
big
thing
it
scales
because
it
doesn't
have
perhaps
flowing
state.
E
So
it's
not
not
the
five
tuple
state,
even
though
it
just
then
has
the
latency.net
services
and
so
all
the
good
things
that
come
with
it,
like
you
know,
no
interruption
during
network
reconvergence
and
so
on.
Minimum
clock,
synchronization
requirement
proven
mechanism,
so
this
is
basically
tsn
cyclic
hearing
and
forwarding
adapted
to
that
net
and
no
change
to
the
packet.
Headers
is
one
option.
E
We can obviously, as you saw, also put this cycle stuff into new headers, but we don't need to, especially not in the MPLS forwarding plane, which we already have. So that makes it a great, you know, first-step mechanism, where we not only have all the queuing stuff well worked out, but also need no new packet headers. Next slide.
E
CQF has a very high clock synchronization accuracy requirement, because it synchronizes on a nanosecond clock, which we don't, and the throughput for that stuff goes quickly down to zero when you have a network path longer than, let's say, two kilometers with typical parameters. So with tagging, we're synchronizing based on a tag in the packet header, for which we need three values at minimum; that would be, in MPLS encoding, three values of the traffic class field.
E
This is in the draft, and the validation that we've done is based on a 100-gigabit FPGA three or four years ago: production routers with an additional QoS FPGA, deployed in a 2000-kilometer network in China across multiple parties, to validate that this runs fine. And, as I said, it has been proposed since then, but it was obviously not on the charter, so it was always waiting in some way. Now, the deterministic queuing. Next slide.
E
Okay, I'm going to skip this. So this is the mechanism; this is where we have discussed the details of the technology. Next slide. So what did we change since 113? We added a reference to the 2021 published research paper that shows all the gory details based on a highly accurate simulation; it's a lot easier to do a lot of timing validation in simulation. This is an IFIP conference, which means there is no paywall.
E
The URL is here, and we added the text and forwarding pseudocode for the ingress operation: when you get your five-tuple flow, you do per-five-tuple-flow queuing into this non-per-flow forwarding mechanism. The way the spec is written, this is all done based on text plus pseudocode. I come from multicast work, where a lot of RFCs in multicast routing have also been defined with pseudocode, which I prefer over some of the behavioral description style we've seen in DiffServ.
E
So that's, you know, for us to decide how we write these texts, and it's very simple: every flow just has a certain amount of configured bits that it can enqueue into every cycle, each cycle has to be larger than a single packet, of course, and the pseudocode explains that. And then I added more implementation, deployment and validation considerations for high-speed implementations and for networks with different speed links. So, next slide.
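The per-cycle budget just described can be sketched in a few lines. This is a hypothetical illustration, not the draft's actual pseudocode; the function names and parameters are my own.

```python
# Hypothetical sketch of the per-cycle admission rule described above:
# each flow may enqueue at most a configured number of bits into any one
# cycle, and a cycle must be long enough to hold at least one full packet.

def admit(packet_bits: int, used_bits: int, flow_budget_bits: int) -> bool:
    """True if the packet still fits the flow's budget for this cycle."""
    return used_bits + packet_bits <= flow_budget_bits

def cycle_capacity_bits(link_bps: float, cycle_s: float) -> float:
    """Bits a link can carry in one cycle; must exceed one max-size packet."""
    return link_bps * cycle_s

# A 1500-byte packet is 12,000 bits, so on a 1 Gb/s link the cycle
# interval has to be at least 12 microseconds.
```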
E
So, from what I think we need to achieve in a draft to actually get to a point where we can approve, build and deploy it: it's functionally complete, though obviously not a lot of review has been done, and the charter puts it into scope. As I said, it can be done with existing packet headers, because we just need a few values for the tag in each header; MPLS is what we've defined.
E
If we put it into the DSCP, we can do it for IP, but we may not have, you know, worked out all the DetNet service stuff in the architecture for IP by itself, which is trying to tunnel it over MPLS at this point in time; that might need additional text. And then, of course, how do we want to structure this stuff?
E
I was talking with Ishan in Vienna, and he was saying maybe separate out the mechanism independent of the tagging in MPLS or DSCP or so, but that might be an informational document. So in the first place I was trying to go for the minimum: one document that has the MPLS in it, and we do an add-on document in TSVWG for the DSCP. Whatever the structure in the end, I think it would be great if we could do an adoption call.
E
This is uniquely different from all the other things that are being brought in right now, because I think it hits all the check marks: high-speed validation, being around long, being derived from proven TSN technology, and, you know, meeting all the requirements that I could ever think of and have written down in a requirements document. So thank you.
I
Thank you for reviving this. Just on the basics: reusing the existing data plane mechanism simplifies any deployment, because trying to bring any new forwarding mechanism into the data plane puts in a very long cycle before we can see any results that can be put into commercial offerings.
I
L
All right, well, it's a good loaded question: what's the scope? Yes, sorry, wait a minute. Did we get Shafu on? Yeah.
E
S
E
S
From multiple incoming interfaces, assuming that all of this traffic is expected to be sent in a single cycle at the same time, the problem is how to ensure that the sum of this traffic does not exceed the maximum number of bits that can be sent in a single cycle. If it exceeds it, is there any compensation scheme in TCQF?
S
E
L
Loaded question: what's the scope of this?
E
You know, I would say: replace TSN. But obviously we want to start, you know, where the network becomes larger, where TSN doesn't work well, right? So this could...
L
Let me ask a different question: how many administrative domains? Sorry, how many administrative domains, and who knows the answer? Let's see if you know it too. Right, the answer is one. Okay, so I have good news for you: you do not need TSVWG's permission to go forward here. Within a single administrative domain, DSCPs are generally usable at the operator's discretion. It might be worth talking a little bit to TSVWG about how to select some DSCPs that will cause less trouble, but you don't need to drive a full DiffServ process.
J
N
Thanks. So I just realized this one would possibly have been best put before Toerless' presentation: it basically talks about a similar thing, but it gives a little bit of background and reasoning on why this kind of tagging, the cycle ID, would be required in order to enhance CQF.
N
Yeah, yeah, I just realized that. So, quickly trying to refresh the memory: we have the fundamental CQF, which was defined by IEEE, and I'm not going to touch its details here. But basically the fundamental CQF has two buffers per port, and the input and output are swapped once every cycle time Tc. So this cycle time, sorry, cycle interval Tc is very important, and it is also given by the working group document on bounded latency.
N
Of course, here we have the dead time terminology defined; I'll revisit this later. But in the fundamental CQF, the dead time DT value is really small, so basically it is negligible. That gives CQF very attractive simplicity features: it gives a simple, calculable latency bound, which is only relevant to the cycle interval Tc and the number of hops, and it also gives simple maintenance.
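As a sketch of that calculability: the commonly cited two-buffer CQF bound depends only on the hop count and the cycle interval. The exact constants below are the textbook two-buffer values and are my own illustration, not a formula quoted from the presentation.

```python
def cqf_bounds(hops: int, tc_s: float) -> tuple:
    """Commonly cited two-buffer CQF bounds: a packet received in cycle i
    is forwarded in cycle i+1 at every hop, so the end-to-end delay lies
    between (hops - 1) * Tc and (hops + 1) * Tc, for a jitter of 2 * Tc."""
    return (hops - 1) * tc_s, (hops + 1) * tc_s

low, high = cqf_bounds(10, 10e-6)  # 10 hops, 10 microsecond cycle interval
```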
N
Those are the attractive features that make people want to consider it for wider deployment, and we are also seeing that CQF has the potential for wider deployment. When we talk about wider deployment, basically we're thinking about at least four features to be supported, which are labeled one to four under the first bullet. Namely, they are: first, a smaller end-to-end latency bound; second, a larger number of hops.
N
The third one is longer links, which means longer propagation delay, and the fourth one is larger processing time variance, because we are expecting that different node types will be put in as the intermediate nodes, such as different switches or routers from different vendors, or even some layer-one equipment that is not visible at layer two or layer three.
N
So recall that the CQF latency bound is only relevant to the number of hops and the cycle interval. When we talk about these four requirements, we think the first two can naturally be supported by higher-speed links.
N
So we are seeing that items one and two can basically be achieved. Let's talk about items three and four, which are the longer links and the larger time variance.
N
So if we look at the fundamental two-buffer CQF, we think it can support requirements three and four, but it will encounter a low-utilization issue. Here we want to revisit the dead time imposed by the fundamental CQF, which is in the red box. We can see that, basically, the dead time is a period to be put normally at the end of a cycle, at the last byte.
N
The purpose of it is to make sure that the last byte sent by node A in cycle i-1 is ready for sending at the next node, which is node B here, before the next cycle, cycle i. So basically the dead time should be at least the sum of the maximum propagation delay between two neighbors, plus the maximum processing delay at the next node, plus the sum of the other time variances, for example the clock shifting, something like that. So the longer the propagation or the processing delay, the larger the DT.
N
So if we want to achieve a lower end-to-end bounded latency, that means we have to use a smaller cycle interval, and that basically means that the dead time will eat up the cycle interval when the cycle interval is small.
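The dead-time arithmetic above is easy to make concrete. A rough, illustrative sketch (the parameter names and example numbers are my own, not from the slides):

```python
def dead_time_s(max_prop_s: float, max_proc_s: float, variance_s: float) -> float:
    """Dead time as described: worst-case propagation delay plus worst-case
    processing delay plus the other time variances (e.g. clock shift)."""
    return max_prop_s + max_proc_s + variance_s

def utilization(cycle_s: float, dt_s: float) -> float:
    """Fraction of each cycle left for actual transmission."""
    return max(0.0, (cycle_s - dt_s) / cycle_s)

# Roughly 5 us of propagation per km of fiber: a 20 km link alone gives
# about 100 us of dead time, which leaves only 20% of a 125 us cycle
# usable, while a 1 ms cycle would still keep 90% of its capacity.
```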
N
So that's why we see the low utilization with the fundamental CQF, and the CQF variant is introduced. Actually, it's not a brand-new idea; it has been brought up from time to time. So here we indicate the CQF with three buffers: the three buffers work in a rotating manner. It's a straightforward variant of the fundamental two-buffer scheme; the configuration is very similar, and the logic can be easily deduced from the fundamental two-buffer CQF.
N
We see that there is a processing time window here. It swells because we want to support the larger processing time variance: with increasing time variance, the degree of the swelling becomes more severe, so a time ambiguity window would exist across two consecutive cycles from node A, which here is a little bit small, I guess, but if you can see it, there is an ambiguity window here. So basically, the larger the time variance, or the smaller the dead time...
N
The larger the ambiguity window will be. And remember that in CQF we normally need to preset a time demarcation to differentiate the packets from two consecutive cycles. Here that would be impractical: with the blue dashed line, if we set it here, it looks like for case one the green and the red packets, which are from different cycles from node A, are perfectly demarcated.
N
However, maybe in the next round there would again be ambiguity: the leftmost red box here would be wrongly identified as a green one, actually. So a simple and straightforward way out is to let the packet carry the cycle ID as metadata at the output, to help the downstream node determine which is the correct buffer to put it in. I think I'm running out of time.
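The carried-cycle-ID idea can be sketched as follows: in a three-buffer variant, the downstream node no longer infers the cycle from arrival time but reads it from the packet. This is a minimal, hypothetical illustration, not the draft's specified behavior.

```python
NUM_BUFFERS = 3  # three-buffer CQF variant discussed above

def select_buffer(carried_cycle_id: int) -> int:
    """Downstream node picks the receive buffer directly from the cycle ID
    carried in the packet, instead of guessing from the arrival time."""
    return carried_cycle_id % NUM_BUFFERS

# Packets tagged with cycle IDs 7, 8, 9 at the sender land in buffers
# 1, 2, 0: no time-based demarcation is needed, so arrival-time ambiguity
# between consecutive cycles cannot misplace a packet.
```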
N
Yeah, yeah. So here we have shown the packet format of an IPv6 option, but I don't think that's the most important part. We want to solicit feedback on whether people think this is the right way to address the ambiguity issue, in order to facilitate the increasing demand to use CQF and these variants in wider deployments. And we also want to solicit feedback, also from the chairs, on whether we want to define IPv6 options, and whether and how to collaborate with other working groups. Yeah.
J
We seem to be having a problem with the slide, so we're going to skip ahead, and we'll find it. Okay.
S
So, the motivation for this document is that DetNet defines the goals of deterministic bounded latency and low packet loss ratio, and it uses resource reservation, explicit routing and service protection to achieve these goals. Resource reservation is the basis of ensuring bounded latency, and it ultimately depends on the queuing mechanism of the forwarding plane.
S
The issues with the candidate mechanisms for large-scale packet networks: CBS and ATS come with high latency values, because the minimum latency is not affected by these mechanisms. CQF is quite challenging because it requires time synchronization, and maybe some other CQF variants widen it. Using a priority-based scheme makes it very generic, but with worst-case latency.
S
So this is our view of the overall scheme: we introduce another mechanism using the deadline queues described here. The deadline queues are variants of PQ, also based on priority scheduling.
S
The deadline queues each have a TDL (time-to-deadline) attribute, staggered, and decreasing with the passing of the timer. The queue whose TDL reaches 0 has the highest priority; after its authorization time for sending, its TDL is reset to the maximum initial value. A deadline queue whose TDL is not 0 has normal priority. There is an in-time mode and an on-time mode; in the former the queue can be involved in scheduling earlier, while in the latter it cannot.
S
The following figure shows six packets of the same deterministic service received on the ingress port. Assuming that they arrive at the outgoing port one by one after the internal forwarding delay, each packet will enter the queue whose TDL value is consistent with the packet's allowed remaining delay, excluding the queuing delay of the packet, at the current time.
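The queue-selection step just described can be sketched minimally, assuming a hypothetical set of staggered TDL countdown values (the function and variable names are mine, not the draft's):

```python
def select_queue(allowed_delay_s: float, queue_tdls_s: list):
    """Pick the queue whose time-to-deadline (TDL) best matches the packet's
    remaining allowed delay without exceeding it, so the packet is never
    held longer than its delay budget permits. Returns the queue index,
    or None if no queue's TDL fits the budget."""
    best, best_tdl = None, -1.0
    for i, tdl in enumerate(queue_tdls_s):
        if tdl <= allowed_delay_s and tdl > best_tdl:
            best, best_tdl = i, tdl
    return best

# With queues counting down from 30/20/10 microseconds, a packet with a
# 25 microsecond remaining budget goes into the 20 microsecond queue.
```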
S
In in-time mode the latency behavior is similar to SP. However, compared with SP, it bounds consecutive packets by carrying information other than the traffic class, which means packets will not always experience the worst-case per-hop delay, I would hope.
S
So, what are the benefits of this enhanced scheme? Of course, time synchronization is not required: network nodes all operate based on relative time. Per-flow or flow-aggregate state is not required for deployment; it is based on packet multiplexing. It is an enhancement of the PQ scheduling algorithm, and easy to upgrade to.
S
Each node can independently set the authorization timer of the deadline queues based on its own port bandwidth, and partial upgrade is supported, for scalability. A single set of deadline queues supports multiple levels of residence time; queues with a higher maximum value can be created incrementally according to the services. For performance, control is achieved through just a single authorization timer.
J
We're sort of out of time, so I know Toerless is in queue... no, it wasn't Toerless, it was someone in queue who then left. I bet you left, thank you. So we really don't have time; I'd like to discuss this more on the list. If you can take your comment there, we'd appreciate it. And with that, the last person on this particular topic is...
H
Okay, I see the slide. This is Fan Yang from Huawei, and I'll present this DetNet enhancement data plane draft.
H
Let's see if I can flip, yeah. Here's the simple logic behind this draft: first, according to the DetNet architecture, DetNet is required to support bounded latency, and right now we see there are already several mechanisms proposed in DetNet to support bounded latency. So, in the short term, people envision that the DetNet data plane should be enhanced to support the latency-specific metadata.
H
And to meet the goals of supporting the minimum and maximum end-to-end latency in the DetNet data plane: the one DetNet-specific metadata, the flow ID, is used to identify the DetNet flow; however, there is no other metadata defined to support the end-to-end latency.
H
So we see there is something missing in the DetNet data plane. There are also other requirements on the DetNet data plane defined in the requirements-for-large-scale-DetNet draft, including functions such as explicit indication of the metadata, and also compatibility with different underlying network technologies, etc.
H
So, to meet the goals and the requirements to support bounded latency, there have been several mechanisms proposed in DetNet, and we list those mechanisms here on the left. We think that if more mechanisms are proposed, they should be added to this list; but, of course, we think they should be accepted and acknowledged by DetNet first.
H
This information is to be carried in the data plane to facilitate the DetNet transit nodes supporting bounded-latency transmission. As shown in this figure, the bounded latency information is transmitted across multiple DetNet transit nodes and used by the DetNet forwarding sub-layer.
H
There are two reasons for it. One is interoperability: having a separate format for each mechanism obviously doesn't help interoperability. And the second is the incremental deployment consideration.
H
If there is a piece of equipment with a non-standard algorithm inserted into the network, this bounded latency information should help the equipment either drop the packet or transmit it without processing. And according to the analysis of the mechanisms shown on the previous slide, we classified this bounded latency information into two categories, and they are shown in the two tables below.
T
H
And in this bounded latency information (BLI) data field we use the BLI type to represent, or indicate, the type of the bounded latency information. If there is a new algorithm accepted by DetNet, and if its bounded latency information is different from what we have in this table, it will be added as a new bounded latency information type and format.
H
For the IPv6-based DetNet data plane, we define a new IPv6 extension header option, called the BLI option, and this BLI option can be encapsulated in either the IPv6 Hop-by-Hop Options header or the Destination Options header, depending on the processing behavior. There can be more than one bounded latency information entry in one BLI option.
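To make the option concrete, here is a hypothetical TLV encoding for a single BLI entry. The field sizes and the option-type codepoint are illustrative assumptions of mine, not values taken from the draft.

```python
import struct

# Hypothetical layout: one-byte option type, one-byte length, then a
# one-byte BLI type followed by a 32-bit latency value in nanoseconds,
# all in network byte order. Purely an illustration of the TLV idea.
OPT_TYPE_BLI = 0x4B  # placeholder codepoint, not an assigned value

def pack_bli(bli_type: int, latency_ns: int) -> bytes:
    body = struct.pack("!BI", bli_type, latency_ns)
    return struct.pack("!BB", OPT_TYPE_BLI, len(body)) + body

def unpack_bli(raw: bytes):
    opt_type, length = struct.unpack("!BB", raw[:2])
    bli_type, latency_ns = struct.unpack("!BI", raw[2:2 + length])
    return bli_type, latency_ns
```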
H
So, similar to the IPv6-based DetNet data plane, in the MPLS-based DetNet data plane a new MPLS extension header, called the BLI extension header, is defined, and the processing behavior and how to use it is very similar to the processing in IPv6.
H
C
All right. What's the benefit of having this BLI information in both?
H
Yeah, actually, if you look at the detailed definition of the BLI type, we have different types of the bounded latency information.
H
Some of these types are used in the hop-by-hop scenario, and some of them can actually be processed by the destination node, so in that case it is better to be placed in the Destination Options header. Yes.
C
At a particular hop?
C
J
T
H
R
H
Sure. I couldn't quite catch the question, but here we define a format, and this format can be placed in the Hop-by-Hop header or, at the end, in the Destination Options header, so this information can be processed either hop by hop or at the end destination.
H
So it depends on the algorithm, and the requirements of the algorithms. I'm not sure I quite caught the question; maybe you can come back on the list for more discussion. I'm not sure I answered it.
J
Okay, thank you. And before Robin asks his question, I'm going to start a show of hands; I put in the chat that this was coming. Basically, we should keep in mind that our new milestone is to get a requirements document adopted.
J
So with that in mind, I'm going to start this show of hands while Robin continues asking questions on the same draft. So this is the question.
A
J
Q
H
J
If you hear him... Fan, if you could just repeat the question, because we can't.
D
While Robin is coming, let me ask a quick question, maybe to the working group and the chairs. We have two documents from DetNet on the PCE agenda, and we had documents on enhanced DetNet in SPRING as well as in BGP, so IDR. I think, from this meeting, I realized that this is very early work.
D
I think you're still sorting things out, but just so that other working groups can catch on: it would help if clear requirements for the protocols could come in while you are developing this.
D
What are the clear requirements that you would have, and the terminology? Because right now we were seeing that even the terms used in the documents were very different, and it was a little difficult to figure things out. So just a request for the working group to think about that as well when you are developing this.
J
So I think you're saying you really want this requirements document that we're talking about at the moment with the poll. You really want it.
J
No, no, I'm just making sure I heard you correctly. So we have the numbers: if I look at them, we have 67 who are on Meetecho, we have participation which is a little better than half, and we have about half who've read the document. So, you know, we clearly need people to take a look at this document.
J
If you have authored one of the proposed solutions, take a look to see if your solution fits into the requirements. And, as I think Drew brought up with his great point about language, about terminology, see if the terms you want to use are in the document, and start sending comments to the list. We really would like to mature the requirements document to the point where we can accept it as a working group document, hopefully even before the next meeting. We had hoped to do it before this meeting.
J
The IESG took a little longer to give us a new charter, but we have that now, and we really want to start moving out: get the requirements in there, and then start getting some of the solutions in. With that, Robin, are you still there, are you able to...
Q
He's okay now, he's okay, sorry. A quick comment from Huawei. In fact, in my opinion, we should not expose the queue implementation information in the network layer, the IP layer. So based on this thinking, we think we need the requirement information, or some resource ID, to indicate the specific process for each node. That's my comment.
J
B
No problem, thanks. I'll probably be even faster. So I will present another draft, on behalf of my co-authors and myself. Next, please.
I
Still showing the... yeah.
S
J
G
B
So, thank you. The idea here is that in DetNet, mostly, even though the architecture and the service model support multi-domain, we have been more or less focused on single-domain aspects, so we don't have any explicit discussion on multi-domain. Here I'm just copying the service model diagram from the RFC. The idea is to basically try to study and analyze what could need to be done to support multi-domain.
B
That's the idea: to identify potential gaps, and if there are gaps, then discuss whether we need to do some additional work in the working group or not. This discussion started at RAW, and then in an interim we got the request, or the suggestion, to move the discussion to DetNet, as this belongs more to the DetNet discussion. Next slide.
B
So there are different scenarios where we need multi-domain support: where we have, for example, one host connected to a domain that is connected to a different administrative domain, with another host there. In the draft we list some use cases where this may happen. One important qualification is that by domain we mean administrative domain and/or technology domain.
B
So we may have a wired domain connected to a wireless domain, and this is also multi-domain, even if those two domains belong to, or are managed by, the same administrative entity. As I mentioned, the goal is to explore what the potential gaps are in the architecture or mechanisms, control plane and data plane, and then, if there are any, to identify potential solutions. Next slide, please.
B
So the draft is very simple at this point. We basically try to identify some very high-level things where we need to take a look, from the application, control and network planes. On the application plane, in the DetNet documents we have this user agent, which is the application, the entity that is interacting with the user and the operator and performing the requests for the DetNet services. If we consider a multi-domain deployment, that user agent may be aware or unaware that there are multiple domains.
B
I
Can I ask a question? It's just a very simple one: is multi-domain within a single enterprise, or between multiple enterprises? That's a very big issue. Which one are you focusing on: the single-enterprise multi-domain, or multi-domain in general, across multiple enterprises?
B
But these are the types of things that we need to get into. An additional role may bring in additional gaps; that's also the discussion we are having in RAW, in a different draft, regarding the PCE coordination that may be required as well in a multi-domain environment. Next slide. And again, the same type of high-level analysis on the network data plane: in a multi-domain environment, nodes belonging to different domains may have to exchange information, and there may be a potential need for protocol translation or abstractions.
B
A different domain might not be using or offering the same capabilities, because they are, for example, different technologies, and the same goes for OAM: protocols may also need to be extended to support multi-domain operation. As another example, performing PREOF across multiple domains may also pose additional challenges, since knowledge of the different domains may not be available in any one domain, and the data planes may also be different, or have very different capabilities. And the last slide, please. So again, the idea here is to present this problem.
B
This is initial work, very initial work. There are some gaps, high-level gaps, identified, and we want to discuss, or get your feedback on, whether this is something interesting to work on in the working group. If so, please join us on the mailing list and offline; we will be happy to take more people on board.
F
Oh god, that's loud. Rick Taylor, speaking with my RAW chair hat on: we would like this work to begin in DetNet. We'd like the resolution to start in DetNet, and then RAW to build on whatever happens in DetNet; personally, I support this. Thanks. And I would like this work to be focused on intra-enterprise: a single, let's say, overall administrative domain, which can have multiple sub-domains, because I believe this work would otherwise get much too diluted. By keeping it precise, for the enterprise, because the DetNet use cases very much apply to intra-enterprise networks, I would be supporting this work if it focuses on within a single enterprise.
E
Yeah, so maybe split it halfway. I think we'll only have one round of, you know, improving forwarding planes. We have options that don't require it, but hopefully we'll get to one option, and then it would be good, even if it takes longer, to be sure that we know all the candidates for what needs to be in the data packet header.
E
B
I
E
Well, why not have the cake and eat it too? So, no, wait a second. Very short term, I think: no forwarding plane header changes. Then we have one option, I think, for each of the forwarding planes, to come up with a great new header, and either we figure out how to make it extensible, or we are safe that we don't need to extend that header for, for example, the inter-domain case. So if there are parameters we need for inter-domain, we should also feel safe that we need them intra-domain.
J
All right, so thank you for the comment. No, we're actually out of time. I was in the queue, so I'll make the comment; it's actually a chair comment. I'll point out that you brought up controller plane changes: the controller plane document is still an active working group document, and there is no reason why you cannot suggest text about multi-domain that should be included in the controller plane document immediately.
J
So please don't wait; you don't have to worry about adoption of your document. Just take text that's appropriate and suggest to the list that it be incorporated in the open working group document. So thank you for the topic, really appreciate it. Thank you all for a really good session; we appreciate all the contributions and look forward to seeing you all, virtually or physically, at the next meeting. And a special thanks to all who participated remotely.