From YouTube: IETF113-DETNET-20220322-1330
Description
DETNET meeting session at IETF113
2022/03/22 1330
https://datatracker.ietf.org/meeting/113/proceedings/
A: Okay, with that, I believe we are at our start time.
A: That includes myself, my co-chair János Farkas, and our secretary Ethan Grossman. Ethan, we appreciate you staying up late, or getting up early, as you're in California. Luis is our in-room coordinator; he's volunteered to help us out. Luis, thank you so much for the help.
A: As we are at the IETF, we are governed by certain processes and policies that are covered through our Note Well, through various BCPs and RFCs. If you're unfamiliar with them, we suggest you go look them up, but they govern all contributions made in the IETF, whether it's in this meeting, on the list, in a document, or even in hallway conversations.
A: Similarly, we have a code of conduct. There are guidelines that basically say we should always treat each other professionally and with courtesy and respect.
A: So we are hybrid; there is an on-site tool. It allows you to advance the slides as well as enter the queue. If you use that, that would be most helpful. If you don't, Luis will ask you, to help manage the balance between the online and the in-room queue, and I think people are pretty good about working online nowadays, since we've been doing it so long. We're using Meetecho for the queue, we've talked about that, but we're also using it for blue sheets.
A: We think it's important to sign in. It's automatic if you're remote; please do so if you're in the room. And for note taking we're using our usual shared note-taking facility; in this case it's now HedgeDoc, we've been through the third iteration. Please join us on that link and help contribute. We really appreciate it, and it's also very beneficial to make sure your comments are accurately captured.
A: There have been a couple of updates, including who is presenting. What's really interesting, János: I just uploaded a version, but this is the old version, so somehow Meetecho didn't pick up the latest version. If you go to any of the online agendas you'll see that Scott Mansfield is going to be presenting in the second slot, and also we corrected the name.
A: This slide... I don't know what happened to this; it's complete garbage. Let's see, that's so strange. I believe this was status, talking about documents that are moving through. I'll have to pull that up separately and remember what else is here.
A: Okay, charter update: this is a really important conversation. We've been talking for a number of meetings about bringing in potentially new traffic treatments, including, notably, queuing. On the list, someone also raised the point that we want to allow for scheduling. We agree that there are different types of traffic treatments.
A: Our suggestion to address the list comment, which is not reflected here but is in the online version, is to add a "for example" clause between the words "including" and "forwarding", so that this list is not prescriptive, and so as to allow for any other...
A: ...changes in treatment. Do we have our AD in the room? John, are you here? Hi, John. Oh, for some reason I thought you were going to be in person. Maybe I was just confused, or you changed your plans. "I changed my plan, yeah. Sorry, I thought I had told you." I'm sure you did and I just missed it, and by the way, I was right there with you, because I was originally planning to go also.
B: No objection at all.
A: Great. I think it's fine anyway, but it makes it clear. I think that's just fine.
A: Yeah, and the comment on the list, and you'll see that this is already on the list: someone was saying that they want to talk about scheduling versus queuing, and there's a fairly fine difference there that I think most people would miss. Rather than getting into an exhaustive list, we thought that making it exactly an example would be the best way to address it. So, glad to hear you do not have any objections. We do have someone in queue: Robin.
C:
A: So the way I view it is that it would be an alternate queuing mechanism. We can always use different subnet-layer technologies that bring in their own queuing. I think there's the in-theory and the in-practice: in theory, we want to coordinate with other SDOs and the technologies that they're defining and make sure that we're compatible with them. In practice, we need to look at the specifics of a proposal in order to address the specific question in terms of working with TSN.
A: All right, well, thank you on this point, and John, I believe, is going to bring us to the IESG immediately following this meeting, and then we'll run that process.
A: And for the last slide, a good thing we're running ahead: we've updated our milestones based on where we stand in the working group. We haven't updated them in quite some time, so we thought it was good to do that.
A: Some of the changes are pretty obvious, related to OAM and our existing control plane work. Some are related to what we've just been talking about in terms of expanding the scope, and we've talked about, rather than calling it out as queuing or scheduling, just talking about data plane enhancements. We've put in some strawman dates here for when we start adopting documents and submitting to the IESG, keeping in mind that we are always contribution driven, so in order to make these dates we really need contributions from the working group. The requirements date we chose because we think we have a reasonable, or I shouldn't say reasonable, a more consolidated document based on the discussions that we've had, and so we suspect we're pretty close there. We're going to be talking about that today.
A: We left a little more time on the solutions, because we feel that in changing the charter today, we want to give some time for different solution documents to be submitted, to come into the working group, and then be discussed.
A: So we wanted to give a little more time on the solutions than on the requirements document, and I think those are all the points we wanted to cover. Ben?
E: So I'm not sure why this document is to be submitted a bit before the other OAM documents, or whether any solution is already decided to be the one that should be submitted. And the second one is a clarification question, because we are not quite yet sure about the enhanced... what it has to mean, in the fourth-to-last lines, about the enhanced DetNet data plane requirements. So I'd like to first have the discussion on these two points and then, if possible, submit it to the IESG once we have the consensus on the list. Thank you.
A: Okay, on the first point, it's pretty easy. I believe that should have just said "first OAM document", so that's a good catch. We were not trying to be specific as to which it was, so I think that's a simple mistake, certainly a necessary correction. I did not quite follow your question or comment on the last four lines. János, did you catch it? Do you want to respond, or should we ask for the question to be restated?
E: Maybe I can repeat my question. The question is, I don't quite follow the meaning of "enhanced", the specification of this word.
A: So we've been having discussions about large-scale requirements and then some proposed queuing mechanisms. Those have been going on for a little while in the working group, and we kept coming back to the question of whether or not defining queuing belonged in this working group or a different working group, and in discussions with our AD, with the transport area...
A: There was agreement that this was something we could do inside the routing area and inside our working group. So we have a need to define what we're going to do; that's the requirements. And then we have room for defining one or more solutions; that's the solution document. So that's what's meant here by "enhanced data plane". Now, if there are other enhancements that are needed, that's also within our charter. Some of those have been clearly called out in the charter.
D: Thank you. So that looks to me as if there would be only one document for each of these areas; is that the idea? Because I don't think we necessarily did the same thing, right, with the different IP and MPLS data plane documents. So can you try to clarify that for me?
A: We certainly expect one requirements document for a solution. This says "first solution"; it doesn't say "last solution", it doesn't say "only solution".
A: As I said earlier, we're contribution driven. So if we have documents that look like they don't make sense to be merged, then certainly we can do more than one. At this time, we're not seeing that.
A: So we do have a requirements document on the agenda. If you think that there are other use cases that are needed or not covered, perhaps bring it up then, because then we can have it in the context of that specific document, and, you know, talking specifics is always better than talking abstract. No?
D: No, I mean the requirements document that I'm on, I understand what it's good for and that we need it. I wouldn't claim that it necessarily covers all the possible areas where DetNet might be of interest. So that's why I think for the requirements there might be multiple ones, and I think that also as a general note, right.
A: All right, do we have any other comments or questions on this?
F: And maybe, if you don't mind, just on that broken slide, in one sentence: we have two documents for which publication has been requested; you can check them. And there is a newly adopted document, the DetNet OAM for MPLS over UDP/IP, and the OAM framework document is not on the agenda, but you will hear the details. Sorry about this slide.
G: We considered it and replied to it in, I don't know, November or something; you can check the liaison. And then, so, I've been having some conversations with the rapporteurs of Q6, and there was a Q6 meeting just last week or two. So the rapporteur... if I go flip to the next slot... oh, I can flip to the next slide.
G: I don't need to tell you that the rapporteurs graciously created a short slide deck that provides the status of what it is that Q6/13 has been working on, and a pointer to how to participate and when their next meetings are. I know I have like eight minutes left, so I'm not going to get into the details of these slides. I wanted to provide just a snapshot of where the Q6 work has now landed, post the liaison responses that we've had.
G: I also attend those meetings and we can discuss it from that angle as well. So of the two rapporteurs, Taesang from ETRI is the one that put these slides together. I really appreciate the effort that he's put into this to provide a nice communication mechanism for those that don't know the work. I'm not reading this whole slide; the thing that you need to understand is what Study Group 13 is. The ITU-T is arranged in study groups.
G: Study Group 13 is the one on next-generation networks, and Q6 in particular is about QoS, so quality of service and quality of experience related to networks, resource control, large-scale networks, even how QoS and QoE are going to be applied to quantum key distribution networks. So this is kind of the space that they're in. I put in the chat a pointer to the full terms of reference and a summary of what Q6 is doing, for those that are interested.
G: So without going into too much detail there, I wanted to provide some information about what we see the ITU-T, and in this regard Question 6 of Study Group 13, is going to be working on. They created a supplement to the ITU recommendation series, and a supplement is an informational document.
G: So this is work that was completed as part of a study. Basically, in terms that most would understand, it was a study, and a report has been created, and this is a supplement providing the information particular to capabilities and performance related to applications that Study Group 13 thinks will be important in the year 2030. So this provides some details on how they're looking at terms like latency and jitter. I had too much caffeine...
G: My jitter is too high right now. So that's the basics, and I put a pointer to that document as well in the chat window.
G: If you wanted to see that. And then some of the buzzwords here that you need to understand, so that we understand where the dividing line between ITU-T and IETF work lies: the ITU-T likes to use the terms large-scale networks and inter-domain networks, and so it's looking at use cases, frameworks and architectures that support latency, with or without time synchronization, in large-scale networks. So they're looking at it from the top down, where, you know, the IETF...
F: Time.
G: So yeah, like I say, I'm not going into the technical detail; I only have a couple of minutes. I wanted to provide this so that if you have questions about the terminology used, how the documents are progressed, what documents are the most relevant and the most current, then you can contact me or Taesang and we can open a conversation. So, documents that you see in this format...
G: If it's like Y dot a number, that's kind of like an RFC; if it's Y dot something like this, it's work in progress. And so there are now documents related to functional architecture breakdown, related to the terminology talked about in Y.3113, and this document is one of the documents I believe we received and have sent comments back on. And then there is LSN, large-scale networks, and this is a jitter guarantee.
G: So this is a document about... well, I won't re-say what it says. It talks about the math and gives the information and definitions on how they're talking about buffering in this document, related to how you deal with it across a large-scale network. And then there's QoS; you'll notice requirements and frameworks. So, a lot of requirements, frameworks, use cases, architectures.
G: Those are the things that are being discussed in this group. And here's another document, that's QoS requirements on large-scale telecommunication networks. This is a key word (well, you can't highlight): "IMT-2020 networks and beyond" is the terminology that the ITU-T is using until the ITU-R comes up with what they're going to call 5G, or, you know, beyond 5G.
G: So that's what that means. And then yet another picture that shows the general view of the deterministic network, so the large-scale deterministic network framework, and provides some pointers to the terminology that's being used: user side, network side and multi-domain.
G: So one of the things that I've found, being a liaison officer for as long as I have, is that it's important for both organizations to understand the terminology that they're using, to make sure that they're talking about the same things. And this is an example of a use-case document that's talking about particular industrial Ethernet use cases and how the work will move forward.
G: And this is yet another detail, another dive deeper into the document, that describes the figures a little more and provides pointers to the information about how the problem has been decomposed. And then this is a slide that will obviously take a lot; there's a lot packed into this slide, and this is something where I encourage the people, both the IETF people and also the ITU-T people that are working on these documents, to look at the entire space.
G: The entire standards area that's working on problems related to deterministic networking and quality of service. And this is a local area, so local area network; this one is directly related to things that DetNet would care about, and also the IEEE 802.1 TSN group. So looking for: where are there gaps? Are there gaps in protocol? Is it just a way of looking at it? Is it a use-case problem?
G: Is it a framework or architecture problem? And so I encourage people to look at these to make sure that the right questions are being asked. And then the last slide; I'm looking at the elapsed time and I'm still on time, so good. So there are meetings. These are meetings that, if you want to come, you don't have to be a member of the ITU-T to come to if you're an invited expert, so you can contact me, contact the chairs, contact the ITU-T.
G: What's called the TSB, that's their steering group. If you're interested in participating in these, we can find a way to get you an account so that you can log into these meetings and provide your opinion. Another way is to contact me for a liaison. So I will leave it there. I see no one in the queue, but I will stop talking now and let the chairs do their job here. Thank you very much for your time, and I appreciate the time. Thanks.
A: Okay, we are moving on. Xuesong?
A: Sure, hold on one moment; I'll maybe take control back.
H: Yes, that's it. This slide provides the background and purpose of this document. The concept of the DetNet controller plane is firstly defined in RFC 8655, which is the combination of the control plane and management plane, and in the DetNet data plane framework document.
H: It mentions that there are some requirements for label distribution and label management in the controller plane, and this document is intended to give a clear framework for the whole picture of the DetNet controller plane, which could compile all the DetNet controller plane requirements in one place, and also give an overview of the control architecture, which hopefully could give guidance for the following work on the control plane and management plane. Next slide, please.
H: In section 1 we have an introduction, and in section 2 we listed the primary requirements for a dynamic controller plane, which align with the definition in RFC 8938. Sections 3 and 4 are both about the DetNet control plane; there are three types of DetNet control plane architecture.
H: Also, different data plane solutions are considered, for example MPLS, IP and also segment routing. Section 5 is still very brief; it gives a management plane overview, which mainly includes some brief introduction about static provisioning, for example the YANG model, and also some brief introduction about DetNet OAM. Next slide, please. Pascal is in queue. Do you want to ask now or after the presentation, Pascal? "I'm okay."
K: Can you hear me okay? (Yes, yes, I can hear you.) Yes, it's about the controller plane and control plane mechanisms. At RAW, we are defining functions in the architecture which require some controller plane activity, and there was this discussion whether this should be part of the RAW architecture or not, and a strong tendency to think, just like with the DetNet architecture, that this belongs to the controller plane and not to the RAW or the DetNet architecture.
H: I think in the architecture document the specifications for these functions are not so detailed, especially in the aspect of the controller plane, and this document may provide some explanations and a more detailed understanding about what could be done in the controller plane for these functions. So I think it can be a detailed version of the explanation, if I understand this right, but if the working group has some other... yes, please.
A: Yeah, I think, Pascal, if you have something specific you'd like to propose, do so, and then we can judge if it belongs in this document or a RAW-specific document.
L: Yeah, this is Carlos, UC3M. I just wanted to echo what Pascal said. I think it is very relevant to have that kind of consideration. I don't really care where, but I think it will be important to agree where this work may be, or who is going to take that work, either DetNet or, I mean, a specific document. But I think that is a very useful piece of information that we need to add. Thanks.
D: Right, Toerless Eckert, Futurewei. I think in Bangkok there was a quick overview also about some of the TSN protocols for their controller plane, which I forgot the name of, because they're always crazy names, but that is, I think, a very nice hybrid one that allows you to put the central controller in or not, and it makes it much more flexible. So I think it would be great to add a reference to that. I think some of the co-authors should be aware of that.
D: Go ahead, please. Yeah, I think the important part is really to recognize that in the past we always had, you know, a solution where the operator couldn't freely choose how much to centralize and how much not to, right? He either had to buy into the fully distributed RSVP or the whole SDN controller.
D: Exactly, exactly, but...
H: I will go really fast. Here is the update in this version. In the first section, we modified the management plane specification based on the discussion on the mailing list; basically, these descriptions come from RFC 7426. Also, in section 3 we removed some detailed descriptions of protocol extension requirements, for example IGP and RSVP-TE, and the detailed example is removed, which may depend on the implementation. And in section 4 we removed some description about the strict and loose path, which also depends on the further implementation option.
H: Here are some issues that may need further discussion in the working group. The first one, which has already been discussed on the mailing list and also in some previous meetings, is how to deal with the relationship between this document and other RFCs and working group documents, for example the architecture document, as mentioned by Pascal, and also the data plane framework document, and also some other working group documents, for example the DetNet OAM framework. So suggestions from the working group will be very helpful.
H: In the existing status, we would like to treat this document as clarifications or explanations of the existing specifications, and if there are some solutions that have already been defined in other work, for example the YANG model and OAM framework, that is valuable and could be referred to in this document.
H: And another question is: what is the scope of the DetNet management plane? The authors cannot find a very specific definition yet, but there are some descriptions in RFC 8655 and RFC 8938.
H: So in our existing understanding, the management plane could include YANG as configuration management and also OAM as fault and monitoring management. If people have a different understanding, please also let us know. Next slide, please.
H: So what is the next step? This document doesn't include flow aggregation yet, so we will add flow aggregation, which is a very important function in DetNet, in section 4, and we also ask for more input from the working group. We are glad to see there are some comments in our discussion.
A: Okay, thank you very much for the update, really appreciate the work, and I look forward to the contribution from others on the topic. We're going to move now to OAM. Greg, you're up.
M: Thank you. Okay, so this is an update. We welcome our co-authors, János and...
M: So the DetNet ACH is now different from where we started, when it was more or less based on the pseudowire ACH. It has a new version number, so it will be easy to differentiate from the pseudowire ACH; it includes its own sequence number and channel type, and the second four-byte word carries the node ID, level, flags and session identifier.
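For readers following along, the two-word header the presenter describes can be sketched as a bit-packing exercise. This is an illustrative sketch only: the field widths used below (4-bit version, 8-bit sequence number, 16-bit channel type in the first word; 20-bit node ID, 3-bit level, 5-bit flags, 4-bit session ID in the second) are assumptions for demonstration, and the authoritative layout is the one in the draft itself.

```python
import struct

def pack_dach(version, seq, channel_type, node_id, level, flags, session_id):
    """Pack an illustrative two-word DetNet ACH (d-ACH).

    Word 1: the 0b0001 first nibble (marking an ACH), version,
    sequence number, channel type.
    Word 2: node ID, level, flags, session identifier.
    Field widths here are assumptions for illustration.
    """
    word1 = (0b0001 << 28) | ((version & 0xF) << 24) \
            | ((seq & 0xFF) << 16) | (channel_type & 0xFFFF)
    word2 = ((node_id & 0xFFFFF) << 12) | ((level & 0x7) << 9) \
            | ((flags & 0x1F) << 4) | (session_id & 0xF)
    return struct.pack("!II", word1, word2)  # network byte order
```

The useful property to notice is that the first nibble stays 0b0001, which is what lets a receiver tell this header apart from a DetNet control word.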
M: Okay, now the flags.
A: How much have you been coordinating with the joint work that's going on with PALS and MPLS?
M: So we are in a different position; we are in a sort of protected space, such that the enhanced MPLS architecture that the open design team is working on should not affect us.
A: All right, thank you. That was sort of a planted question, to see if we would get any comments from others who are participating. Looks like they're happy, so we can move on.
M: Yes, thank you, and I appreciate this opportunity to clarify how DetNet with MPLS is related to the work on enhancing the MPLS architecture.
M: Okay, so for now the flags field is composed of five one-bit flag fields, and for now, in this document, each of them is unused. That is reflected in the proposed IANA section, which requests the creation of a new "DetNet MPLS OAM Flags" registry and assignments as in this copy of the table.
M: Okay, thank you. So, for next steps, we always appreciate comments and welcome suggestions, and we are planning to work on this document and will probably ask the working group to consider working group last call before the next meeting.
E: Hi, hello. Thank you for the presentation. I have a question on the encoding, or encapsulation, of this DetNet OAM format. I see there is a node ID and a session ID, I think I remembered. But before we define this encoding of OAM packets, we should have a...
E: ...general consensus on the OAM mechanisms, because I see there is a new encoding, so it's different from what we have in previous OAM mechanisms.
E: So I don't know if we should first define, first discuss the mechanism, how we use it, and then decide how to encode it. I think that...
M: That is, yes, a very good question. So, as for the encoding, we have not envisioned that; we propose to reuse all the encodings that...
M: ...we are familiar with from pseudowire encapsulation. So there are IPv4 and IPv6 channel types for the ACH, and that means that the DetNet ACH can be followed immediately by IP-encapsulated OAM.
M: It could be a BFD control message, in the case of LSP ping, or a BFD control message with IP encapsulation, so that they can use a loopback address, whether it's the 127/8 range or, as prescribed in RFC 5884, for example, the corresponding range for IPv6, or it can use non-IP encapsulation of BFD.
E: Yeah, I understand. I agree with what you have said, but for the newly defined fields, like the session ID, how we use it, actually we don't know yet. So, oh...
M: Yeah, yes, actually that's a very good point, and we welcome suggestions and proposals.
M: So we found that it might be more practical to define these flags now and then let further documents define and allocate particular flags, because for now they're unused. And I can point out that it's the approach that has been used, for example, with the SRH extension header in IPv6, where the SRH defined flags in the base RFC, and now, for example, the proposal on OAM in SRv6 defines one of the flag fields as the OAM field.
E: Okay, thank you for the reply. I won't use too much time here, to not take time from the other presentations, but let's continue the discussion on the mailing list. I just want to say it's not quite ready for working group last call. Yeah, thank you.
M: Okay, so now I'll try to return some time, because this will be another short presentation, I hope. Okay, so...
M: Let's recap what we have proposed for active IP, or DetNet IP, OAM using DetNet in UDP, and then I will be looking at how active OAM can realize a DetNet service sub-layer in a DetNet IP environment.
M: We have to map the active OAM flow to the monitored DetNet flow so as to ensure their fate sharing, and that's it, because in IP we have to work with several active OAM protocols, for example ICMP, or BFD, or performance measurement, whether it's TWAMP or STAMP, and they usually use a UDP destination port or transport encapsulation that is distinct from the DetNet flow.
M: Okay, now the fun part: realizing a DetNet service sub-layer. The DetNet flow uses MPLS over UDP, and that's a well-known and established technique. For DetNet OAM...
M: In this case, it also must use MPLS over UDP, and then it naturally shares fate with the DetNet flow, because it can use the same IP, the same UDP and the same MPLS encapsulation, and then identify the active OAM just using the ACH. The next slide illustrates that very nicely.
M: So, as you can see, all the transport encapsulation for the DetNet flow and the active OAM is identical, and the differentiation is only in using the DetNet control word for the DetNet flow, or the DetNet ACH for active OAM.
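The differentiation described here follows the generic pseudowire first-nibble convention: after the identical IP/UDP/MPLS headers, a receiver looks at the first nibble of the word that follows the bottom-of-stack label. A minimal sketch of that demultiplexing step (the nibble values are the standard control-word and ACH markers; everything else here is illustrative):

```python
def classify_first_word(first_word):
    """Classify the 32-bit word following the bottom MPLS label:
    0b0000 marks a DetNet control word (data packet),
    0b0001 marks an Associated Channel Header (active OAM)."""
    nibble = (first_word >> 28) & 0xF
    if nibble == 0b0000:
        return "detnet-data"
    if nibble == 0b0001:
        return "active-oam"
    return "unknown"
```

Because only this nibble differs, the OAM packets receive the same forwarding treatment as the monitored flow, which is exactly the fate-sharing property the presenter points out.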
M: And your comments and suggestions?
A: So, you know, we've talked about this in the past: having a solution that leverages MPLS over IP is really very straightforward and makes a lot of sense, but we also need a solution that works when we're not running an MPLS data plane and we have no MPLS at all. So it really needs a solution that works for native IP, and, you know, I think there's room in this document, or maybe another document, to cover that. But it's really important.
M: Okay, let me bring it a little bit back here. So we are following the data plane encapsulation, so DetNet in UDP does not realize the DetNet service sub-layer, so...
A: Going back to a hallway conversation you and I had walking the streets of Singapore: actually, a DetNet flow, whether it's a fully distinguished or an aggregate flow, can operate just at the IP header level. Currently we have it, you know, with the six-tuple, and it's important to have a solution that also works with that, and as I read this, it doesn't work with any six-tuple flow; it only works with UDP. "And that's, yes, with six..."
A: Why not? Why do you have to do encapsulation? You can just do fate sharing by having an aggregate flow. I suggest we talk more about this offline. Okay.
M: Okay, thank you. Okay, Pascal, please.
K: But Greg, if you want to work together, we can effectively see how we can transport OAM information natively in IPv6, using basically what we are trying with the option, the DetNet options, in my presentation a bit later in this meeting. So the idea is pretty much like your other slide. Can you go back one slide? It's going to be important. Can you go back to the other slide? Yes, exactly. So in this slide, we see that the OAM is wrapped up, in this case, in MPLS, right?
K: It's wrapped up in the same envelope as the data, and this envelope is actually a network construct which indicates the treatment of the packets. So MPLS is really pointing to the path and the treatment, and inside that signaling we signal OAM. So basically the goal would be to do the exact same thing for IP: have a network-layer construct which signals the path and the treatment, and then, inside that, place one or multiple application flows and OAM.
M: Okay, so I think that, well, we have several discussions on the mailing list from these presentations, and I appreciate your comments and interest, and yes, let's continue the work. Thank you.
N: Okay, so this presentation is about the packet ordering function, together with my co-authors Stephan, János Farkas, and myself. Do I have control, or will you advance to the next slide?
N: Oh yeah, I have... you should have control. Excellent, sorry. Okay, so the actual status is version 2 of this draft. It describes algorithms which can be used for the packet ordering function in the DetNet service sub-layer.
N
The conclusion of the discussion was that in DetNet we are building functions in order to achieve a given target, given end-to-end deterministic characteristics, and we intend to use and define building blocks in order to achieve these end-to-end characteristics. In version 2 it is already stated that it is out of scope how to eliminate the delay variation caused by the packet ordering. The text already contains some hints on how to do that.
N
You can use, for example, a de-jitter buffer, or a flow regulator like a shaping function, after the POF functionality, but the draft is focusing exclusively on the algorithms which ensure the packet ordering function. So this is why there was no update in the draft, and the discussion resulted in the conclusion that if you intend to change or fine-tune the delay variation, then you will need additional functions after the POF functionality.
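The core reordering idea being discussed can be sketched as follows. This is an editor's illustration only, not the draft's normative algorithm: it assumes packets carry sequence numbers, and it omits the timeout/de-jitter handling mentioned above. All names are invented.

```python
# Minimal sketch of a packet ordering function (POF): packets carry
# sequence numbers; out-of-order arrivals are buffered until the gap
# is filled, then released in order. Timeouts/de-jitter are omitted.

class PacketOrderingFunction:
    def __init__(self):
        self.next_seq = 0   # next sequence number expected
        self.buffer = {}    # seq -> packet, held until deliverable in order

    def receive(self, seq, packet):
        """Accept one packet; return the list of packets released in order."""
        released = []
        if seq < self.next_seq:
            return released          # duplicate or late copy: drop it
        self.buffer[seq] = packet
        while self.next_seq in self.buffer:
            released.append(self.buffer.pop(self.next_seq))
            self.next_seq += 1
        return released

pof = PacketOrderingFunction()
out = []
for seq, pkt in [(0, "p0"), (2, "p2"), (1, "p1"), (3, "p3")]:
    out.extend(pof.receive(seq, pkt))
# out is now ["p0", "p1", "p2", "p3"]: p2 was buffered until p1 arrived
```

As the discussion notes, a real deployment would add a release timer or a shaper after this step to bound the delay variation the reordering introduces.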
N
So the content of the draft is pretty stable. We received many good comments for version one, and they were already included in version two, which is now a stable version from the POF algorithms' perspective.
N
D
Thanks for the work. I think the difficult part that hasn't been approached yet is this: what I would love to see ultimately is, I guess, a normative standard YANG document that exposes the parameters of whatever POF is being done, so that when we go into deployments, the YANG model gives us the knowledge that we can actually build a controller plane that has the necessary information. Obviously, such a YANG model would have to be built from what we're describing here more informationally, and I'm not sure what the process is we're following, or whether we even agree on that goal of getting a normative YANG model to expose all the relevant parameters.
N
I think it would be interesting to have such a YANG model not only for the POF, but for all the other service sub-layer and forwarding sub-layer functions as well. So I think that is definitely something that is needed; I agree with that. From this document's perspective, we have defined the configuration parameters which are configured for both algorithms described in the document, so the parameters are defined, and for the YANG model...
N
I think we are missing, not only for the POF but for other functions as well, an extension to the YANG model that we already have for that.
D
A
Yeah, I'll just remind you both: we're contribution driven, so a draft that augments our existing YANG model, or even a completely independent one if appropriate, is always welcome. Pascal?
K
Can you give me presenter rights for the slides? Oh, I think I have them. Okay, very neat, many thanks. So yeah, this is basically a continuation of the discussion we just had on Greg's slide, about being able to transport multiple types of flow using a network construct that is independent of the application construct.
K
So the application construct, as we said, is expressed by the 5- or 6-tuple. It tells you what the application flow is, and that's what the architecture says. On the other hand, if we want to be able to transport multiple objects, including OAM and aggregated flows, it might be ideal, just like we did for MPLS, to have a network construct that expresses what the network should be doing for this collection of packets, regardless of the flows or of the OAM data being transported. And effectively...
K
IPv6 provides that sort of construct. It's done through extension headers, like the hop-by-hop options header, the destination options header, and the source routing header. So, depending on what you really want to do, one or the other of those extension headers is ideal: for instance, in the case of RPL, and now for the VTN IDs that were discussed at 6MAN this morning.
K
Okay, okay. So just let me do this slide, and then I'll come to your point.
K
So basically, I was saying that the concept of expressing a logical topology using a hop-by-hop header already has some history at the IETF. Effectively, for virtual transport networks, for instance, there is a draft, a working-group document at 6MAN, which encodes the VTN ID into a hop-by-hop header, in the same fashion as the RPL option, which is also encoded in a hop-by-hop header to express what we call an instance, which is also a virtual topology.
K
So if we consider that a path, which is a complex beast between an ingress and an egress, is a topology, a logical topology, it just makes sense to do what the others do and express it as a hop-by-hop header, in particular because we want every node on the path to perform the forwarding sub-layer operation. So it makes sense to encode it in a hop-by-hop header. The cool thing about the hop-by-hop header is that it comes first: it's the first thing after the IPv6 header.
K
So if you have a hardware implementation that cannot go beyond, say, 64 or 128 bytes, then having the hop-by-hop header immediately after the IPv6 header allows you to process it in hardware. That's pretty important for forwarding. Now, for the service sub-layer, not every hop necessarily processes it; it's only the relays. So another way of encoding this, which could be suitable, is using the SRH, the source routing header, which basically indicates what the relays are, and the destination options header, which provides the service sub-layer information just before the SRH, so that it's actually processed by the intermediate hops which are processing the SRH. So basically, all we're saying is: instead of looking at the application signaling, which is the 5-tuple and which can be far, far into the packet...
K
M
Yeah, now it's off. Okay, unmuted. Thank you for this presentation and for putting it together so clearly. I've been following the discussion in 6MAN on the hop-by-hop extension header.
M
So my understanding is that, yes, there is a requirement, or I would say probably an expectation, that each node will decide on its own whether it is capable of processing a received hop-by-hop extension header at line rate, and if it's not, then it will just forward it.
M
So, as a result, or as I understand it, the processing of the hop-by-hop extension header is not guaranteed, unless we tighten our requirements for the domain, for the nodes that are in the domain, and for the encapsulations, so that we can ensure that in a DetNet IP domain with an IPv6 data plane, the processing of the hop-by-hop headers used by DetNet is guaranteed.
K
Yeah. I mean, most of the problem you're talking about is because the IPv6 hop-by-hop document is for the global internet. As soon as you are in a limited domain, which DetNet is by charter...
K
So the point is not the capability of the routers, because in a limited domain you will effectively have the right routers, whether it's over UDP or right after the IPv6 header. But you might have trouble implementing it on merchant silicon if the DetNet signaling is beyond the first 64 bytes, and that will be a problem if you are above UDP, basically, on some hardware platforms. Whereas placing it in the hop-by-hop header gives you a much better guarantee that you can find more routers which are capable of processing it.
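The parse-depth argument above can be made concrete with back-of-envelope arithmetic. This is an editor's illustration; the header sizes and the 64-byte parse budget are example values, not taken from the draft.

```python
# Back-of-envelope parse-depth check: hardware that only parses the
# first 64 bytes of a packet can reach a Hop-by-Hop option, but may
# miss signaling placed above UDP behind a chain of extension headers.
# All sizes below are illustrative assumptions.

IPV6 = 40           # fixed IPv6 header size in bytes
HBH_OPT = 8         # example Hop-by-Hop option length
SRH = 8 + 2 * 16    # example SRH carrying two 128-bit segments
UDP = 8             # UDP header

PARSE_BUDGET = 64   # example hardware parse depth in bytes

offset_hbh = IPV6                    # HbH starts right after the IPv6 header
offset_over_udp = IPV6 + SRH + UDP   # signaling above UDP, behind an SRH

within_budget_hbh = offset_hbh + HBH_OPT <= PARSE_BUDGET       # 48 <= 64
within_budget_udp = offset_over_udp + HBH_OPT <= PARSE_BUDGET  # 96 > 64
```

With these example numbers, the hop-by-hop placement stays inside the budget while the above-UDP placement does not, which is the point being made about merchant silicon.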
K
M
Yes. What I think, and we will probably discuss it on the mailing list and in following discussions, is the impact of, or the interworking between, this mechanism and other extension headers, for example the SRH.
K
Okay, so I will skip the next slide. It pretty much shows that the hop-by-hop header is right after the IPv6 header, whereas the transport header can be far, far away, depending on how many other headers you have, which can be a problem in some IP networks where the hardware cannot go too deep inside the packet.
K
So the requirements are twofold: we've got all the redundancy in the service sub-layer, and we've got what I was talking about, the envelope for your multiple application flows plus OAM. That's why we have two different types of options proposed in the document. In the interest of time, I'm moving on. So yes, we insist in the draft, and I guess the DetNet architecture says it as well, not to confuse what we call the water and what we call the pipe.
K
The water is the application flows, and there is signaling for that which is really under the control of the application. In the network, we don't control the tagging of the ports and the upper-layer protocol that signal the application flow, so using them inside the network is actually sometimes complicated, whereas having, like MPLS, our own tag in the packets, which we can use to find our state and our operation, is a lot easier.
K
So that's really the reason why we want to place it in a hop-by-hop header, as opposed to leveraging the 5-tuple. The other problem with the 5-tuple is that you would need a 5-tuple inside the 5-tuple if you want to merge OAM and the application flows. Here we are a lot more economical, because we just place a tag which basically indicates the virtual topology that you want to follow.
K
And yes, the current version is seven. To Greg's point, we effectively added some text regarding the various encapsulations and the placement of the headers, so Greg, please look at the new text in the submission. Fan joined as a co-author, and in section seven we enriched the applicability section.
K
So we position it better versus the DetNet architecture, which shows it's a good fit. It's time to stop, but basically many thanks to Alice and Brian for the comments. We'll publish the new version which incorporates that very soon, and I'm pretty much done, if we have any questions.
K
A
Thanks for sharing this. Unfortunately, we don't really have time for questions. I think this is really interesting work. We do have room in our charter for more enhanced, or actually not even more enhanced: this is just more sophisticated IP data plane support. We always called the original 5-tuple the simplified approach, so this certainly fits in, whether or not you use hop-by-hop or maybe even flow ID.
A
I know that's being frowned upon, but it would be good to talk it over.
A
K
When you say flow ID, you mean... yeah. The problem with the flow label and the 5-tuple is that it's the endpoint's decision. So for one flow, or for the next flow that wants to fate-share, well, share the same fate, you have a problem of signaling. We should leave the endpoint, application-level stuff to the endpoints, and do our own construct for the network, just like MPLS does. It makes a lot more sense.
A
Okay, on to the next slides. You have control at the bottom; you should be able to select just underneath the slide.
O
Is there any problem? Yes, but I could actually skip the gap and motivation. It was presented first in the interim without a draft, but absorbed some good points from the earlier draft, and since there was positive feedback, we submitted and presented a new draft based on the slides at the last meeting. Before this meeting, we merged another requirements draft, according to the chairs' suggestion, and addressed the comments from the last meeting and offline.
O
I
control
it
by
myself,
so
forget
it
and
from
the
last
meeting,
got
some
comments
and
those
other
updates
we
merged.
Another
requirement.
Draft
had
two
technical
requirements,
show
the
internet
two
slides
and
the
ad
an
example
of
in
electric
mix
and
telecommunic
communications
research
into
and
move
to,
the
example
of
the
trial
to
appendix
for
reference
based
on
the
contribution
added
tolerance
chain
and
jingdom
has
cursors
and
also
removes
these
two.
Based
on
the
comments.
O
So this is the first added requirement: support configuration of multiple queuing mechanisms. There will be different levels of service, and deterministic traffic flows may be forwarded based on different mechanisms on some network devices of a large-scale network. For instance, a network aggregation device may need to support both TAS and ATS. So the network device should support multiple parameters, and a unified or simplified configuration and management
O
is expected to be supported, for both the distributed and the centralized scenarios, with some simple requirements for advertising resources and topology when needed. And this is the second added requirement: to support interworking of queuing mechanisms across multiple domains. A flow may cross multiple domains that adopt a variety of different mechanisms within them, so it is required to support interworking of the mechanisms at inter-domain boundary nodes, such as priority re-definition and re-queueing. Moreover, changing from one queuing mechanism to another may generate additional end-to-end latency or jitter, so collaboration mechanisms are needed.
O
And this is the appendix about the trials. Here is a real remote industrial Internet of Things service, with data going through three different operators' networks, demonstrating real-time control of a smart factory with the low latency and low jitter shown here. So one takeaway is that there are strong motivations and demand for enhancements for large-scale DetNet, and that the preliminary mechanisms can coexist with standard technology in trials.
O
So there might be some enhancements to support it. And here are the overall technical requirements. First, tolerate imperfect time synchronization: in a large-scale network, there might be some devices with coarse synchronization. The second one is to support large-scale, single-hop propagation latency: long distances bring long link latency that must be supported. The third one is to accommodate higher link speeds, and the fourth is to scale to numerous network devices and massive traffic flows. The fifth is to tolerate failures of links or nodes and topology changes, because those may bring second-level latency.
O
Additionally, the sixth and seventh were introduced before. Next: considering the re-charter of the WG, we extracted some potential data plane implications from the requirements. The first is IPv6 extension header based enhancement, to support native IPv6-based networks; currently, there is no change to the extension headers. The second is the DetNet-related fields: the DetNet MPLS data plane supports them, while the native IPv6 data plane doesn't have such fields. The third one is aggregate flow identification.
O
The IPv6 aggregate flow identification is supported by the 5- or 6-tuple, IP prefix, or wildcards. It works in a local network domain, but at large scale the number of individual flows is huge, and they may randomly join and leave the aggregated flow.
O
Finally, some conclusions about the queue-related information to be considered, such as a cycle specification, or timestamp- or time-information-like fields. So next: since the draft absorbed the earlier draft, merged another requirements draft, and got enough discussion and notification online and offline, we would like to request an adoption call as a starting point for the enhanced requirements document. Thanks, and any comments?
A
Unfortunately, we don't have time for more discussion. So if you have comments, it would be good to take them to the list. I'm particularly interested in whether we're missing requirements.
A
I
Next up, I believe, is Toerless.
A
Toerless, you're in the room. Do you want someone else to advance for you, or do you have the app? I'll give it to, maybe, Luis, if you don't mind advancing.
D
It's echoey, it's really hard to understand. Next slide, or... All right. So I wanted to re-summarize, at a high level, what we're trying to achieve here. Next slide.
D
So we think we need, according to the previous presentation, a standardized bounded-latency solution for large-scale, high-speed networks. And if we look at what is, under the hood, available today and has been proven, I think we have the guaranteed service from RFC 2212 and TSN ATS as the two mechanisms that people who have worked in TSN know.
D
Now, we don't have the YANG models; there is a proposal for RSVP for that. But if we look at both of these queuing mechanisms, they really have three core issues. One is the insufficiency for applications, because of the fact that they have maximum jitter, because it's load dependent, and that may lead to a lot more complicated applications with clock synchronization on small IoT endpoints, as opposed to only in the network.
D
The second one is the expensive scalability, because of the per-hop state required in service provider networks, and then of course, as a follow-up, the non-applicability to current and future forwarding planes that can operate stateless, like SR-MPLS, SRv6, and, for multicast, BIER, or other similar upcoming ones. So those are the three big limitations of these well-known queuing mechanisms. Next slide.
D
So then there is TSN cyclic queuing and forwarding, which is another proven, standardized TSN solution, and that solves problems one and two. But it doesn't really work for high-speed or wide-area networks, because the cycles on adjacent nodes need to be matched by the synchronized clock. That means they don't support arbitrary link latency and jitter, and they require an extremely high clock accuracy, linear in the speed of the network.
D
So that brings us to the tagged cyclic queuing and forwarding (TCQF) that we're proposing, which is really an enhanced version of cyclic queuing and forwarding from TSN. It solves the high-speed and long-link issues by carrying the cycle identifier not just internally in the node, updated by the clock, but in the packet, and that basically makes it easy to support links of arbitrary latency, even with jitter. And there are high-speed implementations as proof of concept, pre-standard obviously, with interfaces of 100 gigabits or more, deployed for testing in a 2,000-mile-long network in China. Next slide.
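The cycle-tagging idea just described can be sketched as a toy model. This is an editor's illustration under simplifying assumptions (three cycles, a fixed per-hop cycle mapping chosen at provisioning time); it is not the draft's specification, and all names are invented.

```python
# Toy model of cycle-tagged forwarding: the sender stamps each packet
# with the cycle in which it was transmitted; the downstream node maps
# that tag to one of its own cycles via a fixed per-hop offset, so the
# grouping of packets into cycles is preserved hop by hop regardless
# of link latency jitter (within the engineered tolerance).

N_CYCLES = 3    # number of cycles; using more cycles tolerates more variation
MAPPING = 2     # per-hop cycle offset, chosen when the path is provisioned

def downstream_cycle(tag: int) -> int:
    """Cycle in which the next hop will transmit a packet carrying `tag`."""
    return (tag + MAPPING) % N_CYCLES

# Packets tagged with upstream cycles 0, 0, 1, 2 stay grouped downstream;
# the mapping depends only on the tag in the packet, not on a shared clock.
tags = [0, 0, 1, 2]
out_cycles = [downstream_cycle(t) for t in tags]
```

The key property the sketch shows is that the downstream schedule is driven by the tag carried in the packet, which is why adjacent nodes do not need tightly matched clocks.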
D
So, we talked about this in the last session, and it's also in the draft: it explains how the more cycles you configure into the solution, the more leniency you have against latency variation, or time-interval errors in your clock synchronization protocol. So basically, it's also flexible, insofar as it can, by means of configuring more than three cycles, accommodate more variance. Next slide.
D
So the draft that we have was the attempt to get two things out of it. One is to propose a standard TCQF mechanism, with the per-hop state machinery, that could support any type of tagging.
D
The one tagging that we know we can do without extending any forwarding plane, just using the one forwarding plane where we already have almost everything else, is the MPLS forwarding plane with the traffic class field, which just relies on RFC 3270, and that's the easiest thing to operationalize. This is the draft we want to ask DetNet to adopt as a working group draft. Next slide. And so then, obviously, there is the question of what the adjacent work is. Obviously, we want the YANG model to specify the configurability.
D
I think the tagging for the others can much more easily go to other working groups accordingly: one like DSCP, probably with TSVWG; and then of course the whole question of how we do it with SRv6, where we may want to have a source routing header that per hop indicates the tag, something where I think even an old draft existed.
D
A
Thank you, Toerless, for this work. As we mentioned, we're going to give some time for other solutions to be proposed before considering adoption.
A
Yes, Toerless was just making a comment. If you could let him just finish his comment, and then go. Toerless, do you want to finish your comment? No? Okay.
A
Okay, thanks. Go ahead.
J
Okay, thank you. My question is not only for Toerless, but also for the chairs. Do we plan to adopt one data plane solution per queuing mechanism, or might it be better to try to define a generic data plane solution to cover all kinds of queuing mechanisms? Because this draft has mentioned IPv6, MPLS, and SRv6 for TCQF, and maybe we will have other queuing mechanisms; then we will have many data plane solutions.
D
My answer to that would be that I would hope the DetNet working group has space for multiple bounded-latency queuing or scheduling mechanisms, as we want to call them. I was just thinking that this one would be the most easy to justify standardizing first, because we've got the deployment experience, and it's derived from something that has even more deployment experience with TSN.
D
So I mean, we have nice research work for things that we think might, long term, be even better, but where high-speed hardware implementation experience is missing, and the same would probably be true for other cool ideas that are being brought to DetNet. So maybe we should look at the realistic timeline of adoption in actual high-speed networks as a factor to guide us, but not to prohibit something from happening.
A
Yeah, and Quan, I think you're actually getting at the point that there's a difference between queuing and marking. The marking tends to be data plane specific, and we would expect that to be in individual documents. The base queuing mechanism could be in a shared document, or even show up in one data plane document first and then be referenced by subsequent ones.
A
We do see that there are definitely different data planes we're managing, and we talked about there being a first document for a solution, and that we may entertain multiple. But as usual, as I've said multiple times, we're contribution driven.
A
So if you have an alternative, it would be really interesting to see the contribution on that, and there is time before we talk about adoption on this one specifically, to allow for more documents to be published. And with that, any subsequent discussion really has to go to the list, and we need to move on to the next presentations, or they won't have any time.
A
No, no, let's be fair to the others, please, and move on. So I think... I'm not sure who's presenting next, but...
A
Okay, I'm sorry, I can't... sorry about that. So you should be able to present.
A
Yes, we can hear something, and you should have control. Yeah, thank you.
P
Okay, thank you very much. Sorry. So today I'm going to give a presentation about my draft on a reference-delay-based mechanism for one-way delay measurement. One-way delay measurement is very important and is a great performance indicator for SLA guarantees, especially for some scenarios in 5G networks. OWD, one-way delay measurement, is not novel research.
P
Many studies have been done in the years before, and some RFCs have also been published in the IPPM (IP Performance Measurement) working group, but our proposal is quite different: it's based on a prerequisite provided by DetNet. That's why we want to present it in the DetNet group. You can see that current one-way delay measurement deployments are usually based on clock synchronization.
P
You can use, for example, PTP or GPS, but the deployment cost is quite high. Or, if you want to do an approximate measurement, you usually use half of the RTT, but that's not accurate for some scenarios.
P
So, as you can see in the typical network topology, between the sender and the receiver we have a clock offset. We need to first send a reference packet from the sender to the receiver, and then, following that, we measure the target packet between the sender and the receiver. There is a clock offset between the sender and the receiver, and for the reference packet...
P
It is sent through a deterministic path, which means that its delay is known beforehand; that's the DetNet-based prerequisite. The target packet is what we want to measure. So we just need to record the send times of the reference packet and the target packet, and the arrival times, the timestamps of both packets, at the receiver.
P
So our calculation method, as shown here, has only two equations. We know that the reference delay can be calculated at the very beginning, and it won't change during the measurement. And offset 1 and offset 2 can be seen as equal: when we send the reference packet and the target packet, the interval is quite short, so we can assume that both offsets are the same.
P
So we can use equation one and equation two to derive the third equation, where the target delay between the sender and the receiver is determined by the four timestamps recorded at both ends and the reference delay. All five values are known, so the target delay can be calculated.
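The derivation just described can be checked numerically. This is an editor's sketch with invented timestamp values; it assumes, as the presenter states, that the clock offset is the same for the reference and the target packet.

```python
# Numerical check of the reference-delay method. The receiver's clock
# runs ahead of the sender's by an unknown offset, assumed equal for
# both packets; the offset cancels out of the final equation.

offset = 5.0        # unknown clock offset (used only to fabricate the data)
ref_delay = 2.0     # known one-way delay of the deterministic reference path

# Sender-side send timestamps and receiver-side arrival timestamps.
send_ref, send_tgt = 100.0, 100.1
arrive_ref = send_ref + ref_delay + offset           # on the receiver's clock
target_delay_true = 3.4                              # value we want to recover
arrive_tgt = send_tgt + target_delay_true + offset   # on the receiver's clock

# Equation (3): target delay from the four timestamps plus the reference
# delay; the unknown offset cancels between the two differences.
target_delay = (arrive_tgt - send_tgt) - (arrive_ref - send_ref) + ref_delay
```

Running the arithmetic recovers the fabricated target delay of 3.4, with the 5.0 clock offset never appearing in the result.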
P
These are some of the measurement procedures of our method; I won't explain them in detail here, you can see them in my draft. The timestamp can also be encapsulated in a normal format, and you can insert it, for example, between the transport header and the payload. So, to sum up my proposal: our mechanism has no need to deploy time synchronization.
P
That's the biggest advantage of our proposal: you can largely reduce the deployment cost of accurate one-way delay measurement, compared to the current state-of-the-art measurements. It also has no impact on intermediate network devices; you don't need to care about the intermediate network path. You just need to know that...
P
You have a deterministic path between the two ends, and the target packet need not be sent through the deterministic path; it can be sent through a different path between the sender and the receiver. So you can use the reference delay to calculate the delay of the measurement path, the target path. So yeah, that's my major motivation for proposing it in DetNet.
P
Thank you. So if you are interested in my proposal, you can talk to me. Also, you are welcome to join our work. So, any questions? I see that some friends are raising their hands in the queue, so let's try to...
P
M
I just want to note that clock synchronization helps with, or mitigates, the instability of the timers. Without clock synchronization, the timers in the network might fluctuate differently and skew, so it would not be a constant difference between the clocks on different nodes; it would be floating, and that is much more challenging and difficult. Thank you.
A
Thanks. I think, in fairness to the others, we have to take the questions to the list and give them an opportunity to talk in the remaining 10 minutes, if that's okay with Dean, Dan, and Bing Yang.
A
Okay, we're going to try again; if you could say something.
A
He said that his local test passed, but we are still not hearing you. Hello? Yes, we hear you. Excellent.
Q
Oh, so I will go through this presentation quickly. We introduce a specific solution for deadline-based forwarding. Next.
Q
So, the motivation: the DetNet architecture defines goals which we know, bounded latency, bounded jitter, bounded packet loss, and bounded out-of-order delivery, and it uses resource allocation, explicit routing, and service protection to achieve these goals. Resource allocation is the basis for ensuring bounded latency and jitter, and it ultimately depends on the queuing mechanism of the forwarding plane. The widely used priority-based queuing schemes cannot guarantee a bounded worst-case latency, thus an enhancement scheme based on PQ is proposed.
Q
Okay, this page is about the deadline queues. Each outgoing port contains a number of deadline queues, and the deadline queues have several attributes. The figure shows how the TDL (time-to-deadline) attributes are staggered from each other, decreasing with the passage of time. The queue with TDL 0 has the highest priority; after its existing packets have been scheduled, its TDL is reset to the maximum initial value.
Q
A queue with non-zero TDL has a normal priority for the in-time policy and a lower priority for the on-time policy: the former can be scheduled early, while the latter cannot. The following figure is a simple example. The port contains four deadline queues, each with a TDL attribute: 0, 1, 2, 3. After the passage of time, for example one microsecond, the TDL values will be changed.
Q
Okay, so this page is about how to put packets into the queues. The description mainly includes two steps. First, get the deadline information of the packet, including the planned residence time. The planned residence time can be computed from the allowed per-hop delay minus the accumulated delay deviation caused by upstream nodes; this information is carried in the packet itself.
Q
Second, put the packet into the queue whose TDL attribute is consistent with the packet's allowable residence time, plus the accumulated delay deviation, minus the forwarding delay in the node. The following figure shows a simple example, so I will go next.
Q
The following figure shows six packets from the same deterministic service received on the incoming port. Assuming they arrive at the outgoing port one by one after the internal forwarding delay, each packet enters the queue whose TDL is consistent with the allowed residence time of the packet at the current time. So, next.
Q
Okay. As for the clocks, time synchronization is not required, because network nodes operate based on relative time.
Q
In summary, it is an enhancement of deadline-based scheduling algorithms. For network upgrade, each node can independently set the granularity of its deadline queues based on its port bandwidth, and partial upgrades are supported. For scalability, a single set of deadline queues supports multiple levels of residence time, and the performance can be controlled with just a single set of queues.
A
J
First, as per the DetNet controller plane framework, explicit paths should be calculated and established in the controller plane to guarantee deterministic transmission. So first, the end-to-end bandwidth and latency constraints should be taken into consideration in path computation. And second, as per the DetNet bounded latency model, the end-to-end delay bound can be presented as the sum of the non-queuing delay and the queuing delay along the path, so the queuing mechanisms and their parameters should be determined during path computation.
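The delay decomposition just described can be turned into a simple feasibility check during path computation. This is an editor's sketch with invented numbers and names; per-hop delay values in practice would come from the bounded-latency model of each hop's queuing mechanism.

```python
# Editor's sketch: the end-to-end delay bound is the sum over hops of
# the non-queuing delay (propagation, processing) plus the worst-case
# queuing delay of the queuing mechanism chosen on that hop. A path is
# feasible if the sum meets the requested latency constraint.

def path_feasible(hops, latency_bound):
    """hops: list of (non_queuing_delay, worst_case_queuing_delay) per hop."""
    total = sum(nq + q for nq, q in hops)
    return total, total <= latency_bound

# Three hops, example units: (non-queuing, queuing) delay per hop.
hops = [(0.5, 0.2), (1.0, 0.3), (0.7, 0.2)]
total, ok = path_feasible(hops, latency_bound=3.0)
# total = 2.9, within the 3.0 bound, so this path would be accepted
```

This is the computation a PCE would perform, per candidate path, when the metric object carries an end-to-end bounded latency constraint.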
J
So we proposed some PCEP extensions. First is the metric object: we proposed two new types for the metric object. The first is the end-to-end bounded latency; the PCC can carry the end-to-end bounded latency metric to indicate the end-to-end bounded latency constraint in path computation.
J
The end-to-end bounded jitter is the same as the bounded latency. And second, we also propose extending the LSP-EXTENDED-FLAG TLV: we propose a B bit to indicate the deterministic path capability.
J
Finally, we proposed extending the ERO and the SR-ERO to distribute the path computation result. We proposed new information structures: for example, we proposed a DetNet sub-TLV and a cycle sub-TLV for different queuing mechanisms.
J
So, as a last step, we hope to determine later whether this work should be done in DetNet or in the PCE WG.
A
Thank you for this. I would suggest bringing it to PCE as well, and it's likely that they're going to ask for this working group to agree that it's good work. Then, once there's general agreement by the working group here that there's interest in it, we can coordinate with the PCE chairs as to which working group does the actual work.
A
But thank you. You know, this is new work, and it clearly needs to be discussed by the working group, to decide if it's something that the group is interested in.
A
So thank you. We're just now at time, so I apologize to those at the end who got squeezed on time, and I appreciate you shortening things up. This takes us to the end of the session, and we really look forward to seeing everyone, hopefully in person, in Philadelphia at the next IETF. János, anything else?