From YouTube: IETF-DETNET-20230524-1200
Description
DETNET meeting session at IETF
2023/05/24 1200
https://datatracker.ietf.org/meeting//proceedings/
A
B
Hello, everybody, we'll wait a few minutes for people to join.
B
Okay,
I've
just
pasted
a
copy
of
the
agenda
into
the
notes
file.
B
D
You can upload as usual; we're using the upload tool and I can approve it. If I did, yeah.
D
Then, what I see here, there is this, but as I see the chairs' view, there is an upload slides button here. This is the meeting we are at. Oh.
B
It's
usual
data,
tracker
I,
think
you
go
and
propose
slides
there
and
then
Jana
sure
I
approve
them.
Yep.
B
And
thanks
for
doing
this,
we
still
don't
have
a
sanitized
version
of
Tucson.
A
shoe
song's
slides
from
the
last
meeting
need
to
need.
C
D
Don't bother too much with the title, if you don't mind, and just... so it's approved now, and I don't know how long it takes for Meetecho to...
E
When do we see it, and why do we see it in the middle?
C
I don't see it as the slides yet, but hopefully soon.
D
E
It's... there is a...
D
B
E
B
Go ahead, János. I'm just going to go grab... there it is. We need to put this up to get started.
B
So
this
is
the
note
well
slide.
This
is
an
ietf
meeting,
you're
expected
to
be
familiar
with
this,
because
the
provisions
of
this
slide
apply
to
all
participants.
B
B
Alrighty, here's the agenda. This should look familiar; I just basically keep editing this file every week, hopefully getting the timeslots right going forward, and inserting the draft that we're going to do in item two.
B
We
have
a
couple
more
drafts
queued
up
for
the
next
two
meetings.
We
have
time
slot
draft
and
then
Antoine's
new
draft
are
technically
queued
up
for
the
next
two
meetings.
If
people
want
more
drafts
in
the
queue
just
send
email
and
maybe
I'll,
maybe
I'll
start
maintaining
maintaining
the
queue
in
the
wiki
somewhere.
B
All right, consider the agenda bashed. See, if I just call that bingo... okay, that was the right thing for me to echo, to do. So I guess we are on the sort of general discussion portion. I see a new version of the requirements draft was posted.
B
F
F
Yes, some of the authors proposed some new comments and we added them to the new version. We just added one new requirement, and for the others we added them as text in the existing requirements. For this version, we think maybe it's really better than the previous one, and it's also the zero-zero version of the WG draft, so maybe, well, I think it is stable now, but...
C
B
All right, we'll have to see what happens when we get there. We do need to see a requirements draft, because I think we've gotten to the point where we've got to at least be attempting to evaluate one or more of the existing TSN mechanisms. I view this as a sort of test run of the requirements, so we can shake out bugs or unanticipated problems with them, without that reflecting on a proposed new mechanism.
G
B
Morning. Just for my own grins, let me quickly share; let's see if I do this right.
B
That one, I think... yeah, there it is. Okay, I popped this up simply to show: here's 3.7, which is here, this is the new requirement that Peng was referring to, and you can also see the other major text addition just above it on the screen.
B
So this is just for everybody's information, and I guess we'll come back to this during the item two presentation.
B
C
Yeah, so for example, on that complex topology requirement, I very much liked, from one of the prior presentations, the explanation about the, you know, burst buildup in rings. So I mean that might be a good reference to add to that section, to give some better understanding of how to assess the requirement, because I think that's one of our next issues.
B
So
I
think
we're
looking
for
a
volunt.
We
should
have
had
this
discussion
last
time.
I
think
we're
looking
for
a
volunteer
or
two
or
three
to
take
the
existing
TSN
mechanisms
and
and
attempt
to
evaluate
against
the
requirements.
We
know
that
they're
not
going
to
meet
most
of
them,
but
having
an
initial
attempt
to
do
the
evaluation
will
give
us
some
confidence
that
the
requirements
are
tractable
before
we
start
going
through
the
new
proposals.
B
So,
looking
for
volunteers,
don't
have
to
speak
up
now,
but
I
think
this
is
I.
Think
this
is
this
is,
is
an
important
Next
Step
to
to
make
progress.
C
I just have a little bit of backlog on the things I wanted to do for DetNet. So I can't do it in the next two weeks or so, but afterwards I should be able to.
B
Yeah
and
there's
there
are
enough
people
interested
in
in
cqf,
I
mean
we.
We
have
at
least
three
three
drafts
that
are
that
are
proposing
to,
in
effect,
extend
extend
cqf
with
a
cycle
ID
mechanism,
so
between
the
authors
of
those
drafts,
I,
would
hope.
There'd
be
one
at
least
one
author
who'd
be
willing
to
take
a
take.
An
initial
attempt
at
evaluating
the
original
cqf
mechanism
against
our
requirements.
B
B
Okay, I need to take some notes here. It would help if somebody else could get into HedgeDoc and start taking notes; I'm not trying to run the meeting and do all the note taking myself.
B
Okay,
I'm
caught
up
again,
so
we're
still
in
in
item
one
process,
oriented
topics,
any
other
comments,
requests
questions.
B
Okay, Toerless: you volunteered yourself as the designated victim to do the presentation on tCQF.
B
Okay, sounds good. Why don't we do that, because I think you said you had some slides in there that may affect the requirements. So let me go ahead and go to item two, do that presentation, and then we can return to the requirements discussion and talk about next steps as we go along. All right, now let's see if the theory works: I click on meeting materials and I hit refresh and...
B
There's
there's
a
refresh
button
in
the
materials
window.
If
you
hit
that
Miracle
goes
back
and
looks
okay,
oh
all
right!
So
we're
like
okay!
Now
yeah
I
watched
the
right
thing
to
do
with
me.
If
I
click
on
the
slides,
I
get
a
Chrome
tab
that
I
can
share,
but
I
think
we
want
to
do
something
different
there
yeah.
If.
D
Next to the raise-hand button, there is "share preloaded slides", but nothing appears there for me. So I can just... yeah.
C
Always have to ask back and forward all...
C
B
B
Okay. So if I click on "share preloaded slides"... okay, so, all right. So what we're going to have to do is do it the hard way.
C
Well,
I
can
I
mean
I.
Can
okay.
C
It's
a
somewhat
frustrating
that
we
still
can't
figure
out
our
own
tools.
So
all
right!
So
here
we
are.
It's
like
the
queuing
in
forward.
C
Right
so,
and
you
can
see
on
the
bottom
where
the
slides
are
so.
This
is
presentation
for
what
we've
been
talking
about
several
times
in
the
working
group,
hopefully
with
improved
explanations
and,
of
course,
more
time
to
ask
questions.
And
as
you
see,
there
is
now
a
lot
of
authors
because
we're
in
the
process
of
merging
two
of
the
drafts
presented
already
and
we're
going
to
get
into
that
next
slide.
C
So
I
was
trying
to
figure
out
how
to
structure
this.
This
presentation-
and
you
know
as
a
completely
opposite
to
you,
know
how
I've
seen
a
lot
of
research
work
being
done.
C
Where
you
know
a
lot
of
theoretical
stuff
is
being
done
and,
at
the
end,
some
very
insufficient
explanation
how
this
relates
to
reality
and
has
actually
test
been
tested
in
reality,
I
wanted
to
try
to
start
with
the
motivational
stuff
being
not
only
the
history
of
what
we
came
from,
but
also
what
we
have
done
over
the
years
in
support
of
it
and
then
the
other
points
I
think
we
will
we'll
see
them
when
we
get
to
them
next
slide.
C
So
we've
been
on
this
since
about
2018
2017
when
when
this
was
started
to
be
looked
into,
and
the
first
draft
we
presented
was
in
2018
until
2019
with
Christina
and
a
couple
of
other
co-authors.
C
There
was
also
around
the
time
when
we
had
the
Bangkok
interim
together
with
TSN,
if
folks
remember
that
another
draft
in
2019
was
also
looking
at
mpls
encapsulation
and
then,
of
course,
because
we
didn't
have
the
agreement
and
time
in
that
net
to
work
on
this
type
of
work.
It
kind
of
fell
into
disrepair
and
was
revived
when
we
were
getting
closer
and
and
finishing
up
on
the
charter
items
in
that
net.
C
So
2021,
a
new
version
of
the
draft
queuing
with
multiple
buffers
was
written
and
then
I
started
to
work
on
with
with
new
co-authors,
coming
from
the
mpls
side
of
that
net,
to
figure
out
how
we
can
add
to
the
draft,
not
only
the
basic
mechanism
but
also
the
other
pieces
that
hopefully
are
sufficient
for
standardization
and
from
that
respect
it
also
focused
then,
as
the
name
implied
on
the
mpls
encapsulation,
because
that
was
pretty
much
what
we
had
a
lot
more
complete
in
the
debt
net
forwarding
plane,
then
an
IP
forwarding,
plane
and
with
all
the
aspects
of
the
IP
forwarding
plane
being
less
well
resolved.
C
It
seemed
like
a
faster
starting
point.
So
then,
in
2022,
when
we
discussed
this
in
in
the
dead
net
working
group
meeting,
David
made
the
wonderful
remark
that
we
don't
have
necessarily
the
need
for
dscp
standardization,
which
would
allow
us
a
lot
easier
to
do.
Dscp
encapsulation
in
detonate
itself
and
I
changed
the
draft
to
the
IP
and
mpls
encapsulation,
and
then
you
do
another
co-authors.
Also
added
a
draft
that
provided
a
lot
more
of
the
original
text,
improved
on
the
explanation
of
the
mechanism.
C
C
You know, within Huawei, what we call the whole thing is Deterministic IP, so I think the name already implies that we're a lot more interested in IP and IPv6 forwarding planes, which is, I think, easy to explain when you look at the landscape of service provider networks in China and in Asia, where MPLS just never had as much proliferation as the newer IPv6 technology. And so the two main vehicles that we used to validate the technology were, first of all, very thorough, large-scale, high-speed simulation with, I think, ns-3, if I remember correctly; that was presented at IFIP 2021.
C
That
was
already
done
much
earlier,
but
it
took
a
while
to
get
through
to
the
conferencing
side.
So
that
is
a
very
good
mathematical
review.
I
think
for
the
model.
The
picture
on
the
right
hand,
side
shows
a
little
bit
The
Logical
Network
that
that
was
tested
for
that.
But
to
me
much
more
interesting.
We
also
built
a
high-speed
router
implementation
of
the
hardware,
so
that
was
based
on
off
the
shelf.
C
100
gigabit
Ethernet
routers,
with
an
fpga
to
do
the
cyclic
queuing
just
because
that
was
not
standard
in
that
type
of
Hardware.
Now,
of
course,
the
cycling
queuing
itself
is
very
much
standard
on
one
gigabit
and
10
gigabit
switches
that
are
built
in
the
ethernet
space
against
TSN
requirements.
C
So
those
could
much
more
easily
support
this
technology
than
than
the
high
speed
switches,
and
that
was
also
then
validated
with
omnet
plus
simulations
on
the
larger
topologies
of
a
is
1239,
which
is
the
well-known
Sprint
topology,
with
315
routers
and
1944
links,
so
both
Hardware
simulation
in
detail
and
in
large
scale
next
slide.
B
One
quick
question
on
the
side
of
just
shown
here:
let
me
go
back
on
the
lower
right
hand,
diagram
the
red
flows
or
B
is
obviously
best
effort.
What
does
the
HP
stand
for
in
the
green
flash.
C
Precision,
sorry,
that
was
another
wonderful
marketing
term
that
was
invented,
so
we
could
have
said
that
net
or
dip,
okay,
tkf
yep.
C
Right,
so
the
large-scale
validation
was
done
on
the
cine
research
Network
across
China.
So
that's
similar
to
you
know:
European
American,
Research
networks,
dark
fiber,
connecting
in
many
of
the
cities
and
then
also
having
processes
in
place
on
how
research
work
can
be
deployed
there
so
effectively.
You
know
in
different
parts
of
of
the
network,
these
prototype
routers
were
deployed
and
then
measured
and
so
the
report.
So
we
had
problems
with
these
web
pages.
So
that's
that's
going
to
change
and
those
reports
were
just
in
Chinese.
C
We
have
them
now
in
English,
so
stay
tuned,
I'll
send
it
to
the
mailing
list.
Once
once,
we
have
overhauled
that
next
slide
shows
a
little
bit
kind
of
of
Eye
Candy
of
from,
unfortunately
still
yeah.
Sorry
next
slide
from
that.
So
that's
that's
an
example.
Topology
I
think
the
longest
links
that
that
were
used
in
this
were
a
couple
of
several
hundred
miles.
C
Long,
so
I
think
the
the
total
length,
for
example,
just
for
the
propagation
latency,
which
of
course
is
very
important
in
in
the
mechanism,
was
over
a
thousand
kilometers
and
then,
of
course,
also
going
across
multiple
hubs.
Doing
the
injection
of
traffic
measuring
the
burstiness
next
slide.
C
And
then
actually
proving
here
that
the
latency
really
sticks
within
a
very
narrow
window,
so
it's
not
only
bounded,
but
it
also
has
a
very
low
Jitter
and
yeah.
Once
once
the
team
is
translated,
it
makes
a
lot
more
sense
yeah.
So
so
this
was
just
you
know.
C
For
me,
it's
always
good
eye
candy
to
to
to
know
that
there
was
a
lot
of
validation
that
this
is
not
just
a
you
know,
a
research
project
for
which
maybe
a
best
a
simulation
has
been
done,
but
that
this
is
a
technology
much
further
down
the
line,
so
that
we
feel
very
comfortable
for
saying
this.
This
can
be
standardized
in
the
ITF.
B
C
Yeah, so that, I think, is the total jitter, and that's where, on the IP side, that's best effort, right? And that of course shows what you have in the IP case across many hops. So the rows are the number of hops, and that shows that you're accumulating more and more jitter in the best-effort traffic and, of course, you're not incurring more jitter in the deterministic, in the CQF traffic.
B
Thank
you
and
I
think
we
look
forward
to
a
version
of
this
slide
with
with
with
suitable
translation.
Thank
you,
yeah.
C
Okay,
so
so
our
claims
right
so
tcqf
can
easily
be
implemented
at
scale
in
existing
type
of
wide
area.
Network
routers
tckf
has
been
validated
through
simulation
POG
implementations
in
white
area
network
deployments
up
to
2000
kilometer
path.
It
provides
very
low,
latency,
very
sorry,
very
long
Jitter,
which
we
call
on.
E
C
Forwarding, and I think we're going to talk about that later in the slides; that's one of the core requirements for many core use cases. tCQF can support IETF network layer data planes: validation was done with IPv6 headers, the draft currently supports IP, IPv6 and MPLS, and then SR, SRv6 and SR-MPLS are equally supportable, and of course even my favorite topic of BIER, from the multicast side, is also supportable. The header extensions there are another interesting aspect, and the biggest beauty would be that for immediate adoption...
A
B
Let me quickly interject with a little bit of a structural question before you dive into the details. So my understanding from Xuesong's presentation last week is that the basic extensions to the CQF (cyclic queuing and forwarding) are common between what you're doing here and what that draft was doing, and the primary difference is:
B
This
draft
is
proposing
just
communicating
a
single
identifier,
tag
or
cycle
label,
whatever
term
you
want
to
use,
whereas
the
csqf
draft
last
week
is
specific
to
something
like
Sr
or
mpls
and
is
effectively
communicating
a
stack
of
tags
or
labels
to
the
same
mechanism.
Does
that
match
your
understanding?
Yes,.
C
That
does,
and
unfortunately
it
didn't
have
the
time
to
watch
the
Youtube,
yet
so
I'm
not
quite
sure
how
much
in
details
she,
for
example,
was
trying
to
do
a
comparison
in
terms
of
complexities
and
or
benefits.
Downside
between
the
two
mechanisms.
I
could
easily
see
that
we
merged
that
all
together
into
a
single
solution.
G
B
I was only trying to make sure we have a shared understanding of the structure. I don't recall much in the way of comparison. It was a rather different architecture, in that it relies on a centralized controller to make the label stacks work. Okay, thank you. Now we go to the next slide.
A
C
Okay, so here is the wonderful joyride. Next slide.
C
So
right
so
we're
starting
with
cycle
queuing
and
forwarding
from
TSN
802.1
c
q
CH
as
it
was
originally
called
and
then
merged
into
the
overall
802.1
standard.
C
But
if
you
hear
qch
from
TSN
people,
that
means
cqf,
you
can
have
never
enough
tlas
to
to
name
the
same
thing
so
and
it
was
evolved
actually
as
a
very
simple
profile
from
the
TSN
time
aware:
Shapers,
which
is
802.1
qbv,
and
so
the
interesting
part,
is
that
you
don't
necessarily
in
TSN
equipment,
see
that
they're
supporting
cqf.
You
just
need
to
look
if
they're
supporting
qbv
and
then
pretty
much.
You
know
it's
it's.
C
It's
a
controller
plane
mechanism
over
there
to
configure
that
so
too,
as
to
do
cqf
and
so
at
the
core
of
Tess
and
cqf.
Are
the
pro
hop
forwarding
of
packets
based
on
their
arrival
time
on
each
switch
right
so,
and
this
allows
the
you
know
being
built
against
the
arrival
time,
allows
these
two
mechanisms
to
operate
without
additional
new
headers
to
carry.
You
know,
queuing
information,
and
that
was
I
think
one
of
the
big
design
goals,
and
so
ultimately
they
were
also
designed
against
the
requirements.
C
Back
then,
from
TSN,
which
means
primarily
land
deployments,
intra-car
up
to
maybe
manufacturing
floor
and
then
speeds
of
100
megabits
and
one
gigabits
I
think
the
maximum
would
be
10
gigabits
in
switches,
but
I
haven't
actually
been
able
to
validate
which
10
gigabit
switch
interfaces
would
support
it
in
the
TSN
space,
but
certainly
be
happy
to
hear
about
that
from
TSN
people
with
respect
to
their
knowledge.
Next.
A
C
So this slide was already shown at IETF, so a couple of repetitions here. As you see, you have a path here, from left to right, through three routers.
C
Each
router
has
two
cycle
buffers
and
you
periodically.
Let's
say
every
you
know:
100
microseconds
are
switching
between
these
buffers.
One
buffer
is
receiving
the
packets,
the
other
buffer
is
sending
the
packets.
C
So
it's
kind
of
bucket
passing
if,
if
you
know
that
old
scheme-
and
so
that
obviously
means
that
you
very
simply
are
having
a
well-defined
perhap
latency,
which
is
the
cycle
time
and
you
are
just
having
to
fit
all
the
traffic
you
want
to
forward
into
the
cycle
buffer,
which
you
can
calculate
based
on
the
cycle
time
and
then
the
link,
speed
and
so
you're
just
periodically
running
this,
and
that
way
you
get
deterministic,
bounded,
latency
and,
of
course,
only
cycle
time
latency
and,
on
the
right
hand,
side
they're,
the
the
formulastic
repetition
of
what
I
said
now.
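The per-hop rule just described can be sketched in a few lines. This is an illustrative sketch, not code from the draft: with two alternating buffers, each hop holds a packet for at most one cycle, so the path's queuing-latency bound is simply hops times cycle time.

```python
# Hypothetical sketch of the two-buffer CQF bound described above
# (not from the draft): each hop delays a packet by one cycle time,
# so the worst-case queuing latency is hops * cycle_time.

def cqf_latency_bound_us(hops: int, cycle_time_us: float) -> float:
    """Deterministic queuing-latency bound across a CQF path."""
    return hops * cycle_time_us

# The slide's example: three routers with 100 microsecond cycles.
print(cqf_latency_bound_us(3, 100.0))  # 300.0
```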
C
The
interesting
part
is
that
the
propagation
latency
from
one
hop
to
the
other
is
introducing
a
dreaded
problem
which
is
called
the
dead
time
and
so
we'll
have
that
on
a
later
slide
in
more
detail,
and
that's
the
orange
block
so
I
think
let's
go
to
the
next
slide,
should
be
the
explanation
of
the
Dead
time.
E
C
Oh no, we're getting to that later. Sorry. Oh, there is some... okay, we've got a PDF issue, but hopefully we will survive. Now, the interesting part is that, even though CQF was built for, in our opinion, rather low-speed networks (nothing against cars), it actually becomes a lot more attractive...
C
Fundamentally,
the
faster
the
networks
become
right,
because
the
faster
the
network
is
the
more
bytes
you
can
fit
into
a
buffer
for
the
same
length
of
time,
and
so
the
table
below
shows
that,
obviously
right.
So
whenever
you
increase
the
speed
by
Factor
10,
you
get
10
times
more
bytes
in
there.
So
you
don't
need
to
increase
the
cycle
time
to
make
make
make
more
traffic
work
with
this
mechanism,
but
instead
you
can
build
very
high
speed
networks
with
very
short
latency
incurred
by
the
cycle
time
right
on
every
hop.
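The scaling argument can be made concrete with a one-line calculation (an illustrative sketch, not a formula from the slides): the capacity of one cycle buffer is just link speed times cycle time.

```python
def bytes_per_cycle(link_gbps: float, cycle_time_us: float) -> float:
    """Bytes one cycle buffer can carry: link rate times cycle duration."""
    # Gbit/s * 1000 = bits per microsecond; divide by 8 for bytes.
    return link_gbps * 1000 * cycle_time_us / 8

# 10x the link speed gives 10x the bytes at the same 100 us cycle time,
# so faster links carry more deterministic traffic without longer cycles.
assert bytes_per_cycle(100, 100) == 10 * bytes_per_cycle(10, 100)
print(bytes_per_cycle(1, 100))  # 12500.0 bytes on a 1 Gbit/s link
```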
C
You
need
to
wait
cycle
time
before
you
can
forward
the
packet
again
so
that
latency
adds
up.
If
you
have
many
hops,
that
adds
up
more.
But
if
you
have
100
gigabit
networks,
you
now
got
a
much
lower
overall
impact
on
that
than
in
smaller
networks.
As
early
slower
networks,
gigabit
Networks.
C
So
now
here
is
that
that
time
issue-
and
hopefully
that's
visible,
so
this
was
directly
taken
from
the
draft,
and
so,
if
you
look
here,
what
you'll
see
in
the
picture
is
that
you
have
the
buffer
on
cycle
I
minus
one
that
is
being
sent
and
on
the
upper
part.
You
see
node
a
so
it
is
sending
this
this
buffer
content,
just
a
sequence
of
all
the
packets
filling
up
the
cycle.
You
know,
let's
say
as
good
as
possible,
but
there
is
propagation
latency
there
is
processing
delay.
C
So
there
are
a
couple
of
aspects
why
these
bytes
arrive
at
the
receiving
node
B
somewhat
later,
and
what
we
need
to
ensure
is
because
any
byte
any
packet
being
received
can
only
be
put
in
the
right
cycle
buffer
based
on
the
arrival
time.
We
need
to
make
sure
that
the
last
byte
of
the
last
packet
still
arrived
at
the
time
before
the
cycle
is
being
switched.
C
So if the total latency that is introduced by the propagation of the link and by the processing accumulates up to, let's say, here in the picture it looks like 30-odd percent of the cycle time, that means you cannot really support the cycle being full of data. You can only fill it up to 66, 67 percent of the cycle.
C
Now, you can argue that you never need 100% deterministic traffic in a network, so some dead time may be acceptable, but it's certainly something that makes the design of a network, especially when it becomes larger, really, really problematic, and it would be really good to avoid having to deal with the dead time and eliminate it.
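The throughput cost of dead time is a simple proportion; a minimal sketch of the relation described above (the 33% figure is read off the slide, not from the draft text):

```python
def usable_fraction(dead_time_us: float, cycle_time_us: float) -> float:
    """Fraction of a cycle left for deterministic traffic once the
    dead time at the end of the cycle cannot be used."""
    return max(0.0, 1.0 - dead_time_us / cycle_time_us)

# Roughly a third of the cycle lost to dead time, as in the slide,
# leaves only about two thirds of the cycle's capacity usable.
print(round(usable_fraction(33.0, 100.0), 2))  # 0.67
```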
C
So
we
would
rather
like
to
have
something
that
independent
of
how
large
the
network
is.
We
can
have
100
utilization
next
slide.
C
So
now,
when
we
go
to
three
cycle
operations,
you
see
here,
hopefully
a
little
bit
better
visualization,
although
I'm
not
quite
sure
how
well
I
get
that
into
the
draft,
then
so
Cycle
One
cycle
two
cycle.
Three
then
cycle
one
again
so
now
we're
cycling
through
three
cycles
and
yes,
we're
looking
at
cycle
one
and
it's
being
propagated
to
the
receiver
node
B
and
as
we
see
it,
does
arrive
somewhere
in
between
cycle
one
and
cycle
two.
C
So
what
we
obviously
need
to
do
now
is
to
say
okay
packets
from
cycle
one
when
they
arrive,
they
need
to
be
put
into
cycle
three,
because,
obviously,
during
the
time
that
Cycle
One
packets
are
arriving,
we
are
sending
out
first
packets
from
cycle
one
and
then
packets
from
cycle
two
right.
So
we
need
to
kind
of
map
anything
from
Cycle
One
into
cycle
three.
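The mapping just described is a fixed rotation; a hypothetical sketch (the offset of 2 reproduces the slide's example, and in practice it would be configured per link from its propagation latency):

```python
NUM_CYCLES = 3  # three rotating cycle buffers, as on the slide

def receive_buffer(send_cycle: int, cycle_offset: int = 2) -> int:
    """Cycle buffer the receiver enqueues into for a given sending cycle.

    cycle_offset is an assumed per-link configuration value derived from
    propagation latency; 2 matches the slide: cycle 1 maps to cycle 3.
    """
    return (send_cycle - 1 + cycle_offset) % NUM_CYCLES + 1

assert receive_buffer(1) == 3  # cycle-one packets go out again in cycle three
assert receive_buffer(2) == 1  # cycle two wraps around to cycle one
```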
C
So
when
cycle
three
is
being
sent
out,
we
know
Cycle,
One,
packets
never
arrive
so
far
so
good,
and
we
can't
even
do
this
without
you
know
indicating
the
cycle
identifier
in
the
packet
purely
relying
on
the
reception
clock,
because
we
assume-
or
let's
say
if
we
assume-
that
the
propagation
latency
is
exactly
accurate
as
it
was
in
the
TSN
case,
because
you
know
when
a
packet
arrives
exactly
in
this
interval
that
we're
showing
here
between
cycle
one
and
cycle,
two,
the
controller
would
have
had
to
figure
out.
C
Okay,
this
is
how
high
the
propagation
latency
is.
So
in
this
period
of
time,
all
the
packets
that
are
arriving
will
be
from
cycle
one.
So
this
can
be
done.
This
is
also
was
proposed
to
a
TSN
as
the
so-called
multi-buffer
cqf
version,
to
the
best
of
of
my
understanding
that
has
not
been
adopted
by
TSN,
but
would
certainly
love
to
hear
if
that
was
adopted
by
TSN.
C
So
now
the
reality,
unfortunately,
is
that
relying
on
the
accurate
arrival
time
is
really
impossible
in
large-scale
networks
and
that's
where
we
come
to
the
variations
of
the
arrival
time
that
we
have
to
deal
with
in
realistic,
high-speed,
wide
area
networks
right
and
the
most
important
reasons
for
variations
of
the
arrival.
C
Time
is
the
inaccuracy
in
synchronized
clocks
and
that
is
being
measured
by
a
factor
called
the
maximum
time
interval
error
and
that
pretty
much
means
that
if
B
thinks
you
know
some
particular
time
stamp
is
time
T,
then
the
sender
has
thought
it
is
time.
T,
minus
mpie
up
to
the
point
t
plus
mtie
right,
so
that
is
the
window
of
difference
between
the
timestamps
in
two
synchronized
clocks
and
so
cqf
at
one
gigabit
per
second
already
requires
sub
microsecond
mtie
so
that
the
variations
due
to
the
mpie
Do
Not
impact
the
performance.
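The clock-error window can be written down directly; a small sketch of the MTIE bound just stated (the example numbers are illustrative, not measurements from the talk):

```python
def arrival_uncertainty_us(mtie_us: float) -> tuple[float, float]:
    """Offsets around a nominal arrival time due to clock error alone:
    a timestamp the receiver reads as T may have been anywhere in
    [T - MTIE, T + MTIE] on the sender's clock."""
    return (-mtie_us, +mtie_us)

# Sub-microsecond MTIE against a 100 us cycle: the clock-error window
# stays around 1-2% of the cycle, which is why 1 Gbit/s CQF needs it.
lo, hi = arrival_uncertainty_us(1.0)
print((hi - lo) / 100.0)  # 0.02, i.e. +/-1% of a 100 us cycle
```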
C
So
the
mtie
becomes
a
factor
in
the
debt
time.
You
want
the
dead
time,
maybe
to
be
less
than
10
percent
of
the
cycle
interval
you
have
other
processing,
latencies
and
so
on
in,
and
so
you
try
to
keep
the
mtie
below
one
percent
of
the
cycle
time
and
Ebola
you're
ending
up
already
with
the
one
gigabit
link
with
such
a
high
accuracy.
C
Now,
if
we
wanted
to
take
the
same
mechanism
to
100
gigabit,
maybe
even
reduce
the
cycle
times,
then
we
might
end
up
even
with
as
much
as
100
times
better
clock
synchronization,
so
that
wouldn't
be
possible
because
the
you
know
clock
synchronization
can
only
go
to
a
certain
degree.
Every
time
you're,
making
clock
synchronization
more
accurate,
you're,
increasing
the
cost
of
the
clock
synchronization
right.
So
that's
that's,
in
our
opinion,
the
the
most
important
factor.
C
So
then
we
have
differences
in
the
link
media,
propagation
latency.
So
one
good
example
is
the
length
of
the
link
changes
right.
So
if
you
have
any
type
of
wires
hanging
from
Poles,
typically
those
are
on
power
lines,
but
in
the
US
I
think
almost
everything
is,
you
know
over
the
Earth,
copper
or
Fiber
lines,
and
then
they
actually
do
hang
through
a
lot
more.
You
see
it
in
summer.
C
So
if
you
go
into
you
know
power
networks,
they
do
a
humongous
lot
of
calculations
and,
if
they're
seeing
30
to
30
percent
length
increasing
between
the
hottest
and
coldest
times
so
even
through
a
24
hour
cycle.
So
that's
that's
one
example
of
actual
differences
in
link
and
media
propagation,
latency,
I'm,
not
sure,
actually,
I
put
all
the
all
the
factors
that
impact
on
that
do
create
variations
in
the
real
right
buckets
right.
C
So when we look into processing latencies, we have various different advanced mechanisms: forward error correction, retransmissions. For example, in DSL we have eight-millisecond windows: if a packet is lost, the copy recalculated from FEC would be eight milliseconds later. Obviously it's a bad example, that's way too low-speed a link to be of interest, but it's, you know, the only FEC example that I really understood well; I have never looked into the 100 gigabit FEC that I think they also have there as well. Wi-Fi, radio links: there, I don't think all of them are processing latencies; we also get interference and reflections, which I'd rather say are link media propagation latencies.

Then, when we get into actual high-speed, fabric-based, multi-line-card forwarders (and I'm not sure if people remember, from early days in DetNet, when we had these discussions on the basic forwarding plane), there are a lot of variations possible in the processing. And you can argue that, oh, we can, you know, shape these out all at the lower layers and basically just take the worst-case latency that we're going to accept and buffer up everything, but this may actually introduce a lot of unnecessary complexity at these lower layers.
C
Instead,
we
can
solve
this
problem
in
the
cqf
solution.
So
that's
basically
in
our
opinion,
then
also
a
overall
system
design,
simplification
by
having
the
shaping
effectively
happening
through
the
cqf
layer.
Next
slide.
B
I
turned
off
my
mic.
Let
me
turn
it
back
on
sorry,
I
was
going
to
say
is:
I
would
suggest
that
we
use
the
term
Jitter
for
where
you
have
different
processing
processing
latency,
so
that
we
can.
The
link
has
both
latency
and
and
Jitter
properties
that
that
need
help
with
Clarity,
okay.
My
apologies
next
slide.
Yep.
C
So
here
is
is
an
attempt
to
visualize
the
problem
and
explain
why
the
variations,
the
Jitter
in
processing
in
propagation
do
necessitate
putting
the
cycle
identifier
into
the
packet
in
some
shape
or
fashion.
So
we
started
with
the
previous
picture
right,
so
we
have
cycle
one
now
we're
also
showing
cycle
two.
Now,
if
we
look
at
at
the
bottom
here
of
the
picture,
we
have
some
cycle
I
and
we
have
the
variation
and
that's
the
orange
part
right.
C
We
have
to
add
all
the
variation
as
kind
of
attributing
to
the
possible
time
window
in
which
for
the
clock
of
the
receiver,
packets
are
arriving
and
when
you
then
paint
two
different
Cycles
in
this
case
cycle,
one
in
cycle
two,
you
see
that
the
arrival
windows
are
overlapping
right
and
so
this
over
Apple,
over
overlap
of
the
possible
arrival
Windows
of
adjacent
Cycles,
means
that
we
cannot
determine
from
the
arrival
time
alone
which
cycle
the
packet
was
sent
from
right.
So
let's
go
to
the
next
slide.
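The overlap can be checked numerically; a small sketch of the slide's argument under made-up numbers (the 10 microsecond jitter is purely illustrative):

```python
def arrival_window(cycle_start_us: float, cycle_time_us: float,
                   jitter_us: float) -> tuple[float, float]:
    """Receiver-clock interval in which one cycle's packets may arrive,
    widened on both sides by the total variation (jitter)."""
    return (cycle_start_us - jitter_us,
            cycle_start_us + cycle_time_us + jitter_us)

cycle_us, jitter_us = 100.0, 10.0   # illustrative values only
w1 = arrival_window(0.0, cycle_us, jitter_us)       # cycle one
w2 = arrival_window(cycle_us, cycle_us, jitter_us)  # cycle two
print(w1[1] > w2[0])  # True: adjacent windows overlap by 2 * jitter
```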
C
Where
we're
kind
of
discussing
this
in
more
detail
right,
so
we
cannot
simply
guess
that
right,
the
sending
cycle
and
put
the
package
just
into
oh
yeah,
just
one
of
the
cycle
buffers
will
fit
so
that
completely
messes
up
the
admission
control
the
latency
calculation
and
creates
possible
congestion
packet
drop
right.
So
we
could
use
the
dead
time
concept.
But
if
you
go
back
to
the
prior
slide,
you'll
see
that
this
quickly
reduces
the
possible
throughput
Right,
with
variations
of
50
of
the
cycle
time.
C
The
throughput
already
goes
down
to
zero
in
terms
of
the
Dead
time
that
that
we
effectively
are
getting
eats
up
the
whole
cycle
time
right
and
with
tagging.
We
never
need
to
reduce
the
throughput
right,
but
the
higher
the
variations,
the
higher
the
Jitter,
the
more
Cycles.
We
simply
need
right
so
and-
and
this
is
another
part-
we
still
need
to
put
in
more
detail
into
the
drafts,
but
I
think
it
was
easily
visible
from
the
prior
things.
So
if
we
have
less
than
50
cycle
time,
then
three
Cycles
will
suffice.
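That claim can be encoded directly; a cautious sketch that states only what the talk says (the general rule for larger variation is still to be detailed in the drafts, so the function deliberately refuses to guess it):

```python
def cycles_sufficient(cycle_time_us: float, variation_us: float):
    """Per the talk's claim: with total variation under half a cycle
    time, three cycle buffers suffice. For larger variation more
    cycles are needed, but the exact count is not given here."""
    return 3 if variation_us < 0.5 * cycle_time_us else None

assert cycles_sufficient(100.0, 40.0) == 3     # under half a cycle: three
assert cycles_sufficient(100.0, 60.0) is None  # larger: needs more cycles
```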
C
So
here
in
summary
right,
so
we
think
that
cqf
has
attractive
features
and
potential
for
large-scale
deterministic
network
deployments.
Multi-Buffer
cqf
is
straightforward:
extension
from
TSN
easy
to
build,
PLC
Hardware
cycle
to
cycle
mapping
configuration
based
on
link
propagation
latency
and
the
number
of
Cycles
based
on
variation
right.
So
then,
simply
we
need
to
eliminate
the
reception,
time-based
assignment
of
packets
to
a
cycle
buffer,
and
so,
instead
we
are
relying
on
a
packet
metadata
where
we
put
the
cycle
ID
into
the
packet
header,
we're
calling
we
in.
C
In
the
case
of
my
draft,
we
called
it
tag
because
you
know
late
in
the
90s
there
was
a
technology
that
was
called
Tech
switching
some
packet
header
field,
which
was
Rewritten
in
every
hop
and
magically
that
actually
was
later
called
mpls.
So
Steward
basically
suggested
that
we
call
this
that
then
now
a
tech,
tech-based
cqf
right.
So
in
result
we
can
support
arbitrary
links.
We
can
support
lower
clock
sync
accuracy
right,
so
we
can
go
up
to
mtie.
C
That
is
larger
than
the
cycle
time
and
simply
calculate
how
many
you
know
Cycles.
We
then
need,
together
with
the
propagation
latency
and,
of
course
we
can
support
higher
node
propagation
latency
variations
as
well
in
case
we
do
have
that
and
I
think
where
especially
then
also
safe.
If,
let's
say
the
mobile
World
wants
to
come
and
also
adopt
this,
and
not
only
the
wired
links
that
we
are
I
think
currently
mostly
looking
into
okay.
C
Okay,
so
here
is
kind
of
a
couple
of
sections
which
hopefully,
we
can
run
through
fast,
so
these
These
are
repeating
what
I
was
I
think
presenting
two
years
back,
just
as
as
as
I
think
reminders.
C
Why
is
low
Jitter
important
and
what
impact
does
it
have
to
the
oval
design?
Complexity
next
slide.
C
So
this
this
was
some
picture
to
to
outline
that
we've
always
been
thinking
about
two
type
of
application
requirements,
but
also
mechanisms
that
we
have
in
the
network
right.
The
in
time
means
that
the
packet
can
arrive
as
fast
as
as
possible.
C
So
if
the
network
has
no
other
competing
traffic,
then
there
is
no
no
queuing
latency,
so
TSN
ATS
is
an
example
of
that
z-score
I
think
is
claiming
to
do
the
same
still
need
to
sit
down
on
the
details
and
then,
of
course,
if
the
network
is
loaded
with
the
maximum
amount
of
traffic,
then
you
have
the
maximum
bounded
latency
and
in
the
on
time
case
you
have
just
a
very
small
red
window
of
latency
variation,
Jitter
and
so
tcqf
cqf
they're,
all
on
time
mechanism
and
the
damper
mechanisms
as
well.
C
So this goes pretty much back to how a lot of distributed applications that want deterministic networking are written. Manufacturing is one example, but if we look into large-scale networks it would equally be, let's say, remote driving of a car, cloud PLC, and so on. What we have are control loops where packets are being sent between, let's say, one central entity, often called the PLC in industrial networks, and then many devices; we're just showing two types of devices.
C
Some of them only return data; they're called sensors. Others are basically doing something; they're called actors. What you're doing from the central logic is sending out packets that trigger a response. For the sensors and actors in an on-time service, which in the industrial world is called synchronous networking or synchronous traffic, the arrival time of the packet and the immediate triggering of the response is by itself the timing mechanism for the sensors and the actors.
C
They do not need clock synchronization. The central device knows what the guaranteed synchronous latency is to each sensor and each actor, so it can calculate when to send the packets to trigger an action at a particular time, or to get feedback from a sensor that is valid at a particular time. So you are not requiring clock synchronization through the network to all these devices.
C
And that means that even if I supposedly take clock synchronization out of the network with something like TSN ATS, I need to bring it back into the network just to be able to provide clock synchronization to the client devices. And if I look at a lot of networks, for example mobile network access rings, every node in that network is connected to edge devices, so I pretty much have clock synchronization back everywhere.
C
So this is now the outline of how networks with our current per-flow, per-hop, state-based models like TSN ATS have to look, i.e. how IntServ (RFC 2212) or TSN ATS would look. You see on the bottom a multi-hop network with the service provider notion of provider edge nodes connecting to senders and receivers, and then intermediate hops, even if those are serving as edge devices for other flows.
C
Still, we need to look at the totality of flows they're dealing with. For every TSN ATS or IntServ flow, the controller plane needs to push down state, and you need to process this state. So there is impact on both the controller plane and signaling plane, and then execution impact on the forwarding plane.
C
We did deploy exactly this in wide area networks with a technology called multicast, not for queuing but for replication, and customers really hated it, and still hate the need to look, on every intermediate router, into the behavior of every individual multicast flow or TV stream. That is why, in the IETF, we've been working for five years on getting rid of that per-hop, per-multicast-tree state. So it's the very same thing: different technology, but obviously the same requirements and desires from network operators. Next slide.
C
Right, so the scalability issue; I also showed this slide before. It's an n-squared issue. Of course we're coming in and saying: well, we have aggregation when we get into the service provider network. We can aggregate all the flows that are behind our sender edge device and that go to the same egress edge device, the same egress PE. But if we have a hundred PE devices, then we still have 100 times 99 divided by two, so we have
C
on the order of 100 squared flows, and that's what we need shapers for, that's what we have interleaved regulators for, plus the signaling whenever these change. If you look at TCQF instead, on each of the intervening routers we would just need three to five cycle queues, on P1 and on P2, totally independent of the number of edge devices.
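The scaling comparison just made can be sketched in a few lines; the function name and the constant are illustrative, not from the draft. Even with per-PE-pair aggregation, per-flow state grows quadratically with the number of edge devices, while TCQF needs only a small constant number of cycle queues per interface.

```python
def pe_pair_flows(n_pe: int) -> int:
    """Aggregate flows with per-flow state: one per unordered PE pair."""
    return n_pe * (n_pe - 1) // 2

# TCQF needs only three to five cycle queues per interface, regardless of n_pe.
TCQF_CYCLE_QUEUES = 3

print(pe_pair_flows(100))  # 4950 aggregates, on the order of 100 squared
```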
C
This ultimately brings us to the desirable DetNet QoS option. We of course still have the controller plane; we're still doing all the wonderful calculations for every aggregated and unaggregated flow that we want to have through the network, and we still need to talk from the controller plane to the sender and also to the ingress PE, to make sure the senders aren't sending too much traffic. But then on every hop, on all the P nodes,
C
where we only have our TCQF cycle queues, we really have what I would call (and I'm happy to hear if David disagrees) a DiffServ-only, per-class QoS, with TCQF maybe being one DiffServ class, of course needing more than one marking for it, namely the cycle ID. But that is basically exactly the same scale discussion which, in the 90s, 25 or 30 years ago, brought us to move away from IntServ to DiffServ.
B
David says we can split hairs later on exactly what words are used for the concept. DiffServ already has this structural concept with the notion of multiple drop precedences in a group, and this would be applying the concept in a different context.
C
That's a good comparison. Okay, so, summary: metro and larger network operators do not want per-flow, per-hop state on P routers; it's an operational nightmare. That is why segment routing in MPLS and IPv6, and BIER in multicast, were created by the IETF. It's about control plane performance, reliability, and scale challenges, and updates to each P router, and the DetNet solutions need to support all those stateless forwarding and traffic steering options.
C
So that's the provider core router, P. And PE is the provider edge router: it's the first hop into a service provider network. The P router is then a router internal to the service provider network, and the packet ultimately arrives at the egress PE router, where it is passed on to the customer again.
C
Thank you. So that was in the pictures on the bottom, and yeah, the slides are readable. I think I'm repeating that a little bit as a summary: I was referring to the fact that we went through the same discussion in the 90s with IntServ, RFC 2212, for exactly the same reasons, but I think the problems today, with more and faster hardware, are becoming even more important. And also, from the multicast experience, it wasn't only about looking at thousands of multicast states.
C
It's also the problem that, if we expand the DetNet use cases, or even with some of the use cases we already have in the DetNet work: one of the important core things to see is that traditionally, if customers change their traffic, that doesn't impact the behavior of the service provider routers' forwarding and control plane; only when the topology of the network changes does routing change. Now, that's certainly not 100%
C
true. If over longer periods of time the whole traffic pattern of customers changes, then you do change your traffic engineering accordingly, but that's a very convoluted, complex, well-planned and long process. You cannot have a few customers very quickly creating big changes in the control and forwarding plane of the router in normal unicast.
C
You can do exactly that with multicast, because you create a lot of flows, and these flows create state on every P router, and so you have a totally new type of attack vector. That was another aspect of what was killing normal multicast, and service providers wanted not to have these attack vectors. In the same way,
C
if we consider that we don't want to have long-provisioned DetNet services, where you say "oh, I'm sending a fax to the service provider and a week later I'm going to get my DetNet service class", but instead you're starting an actual application, like an instance of remote driving, and you want that traffic to immediately get DetNet services with guaranteed latency and low jitter,
C
then you're also talking about what service providers may call auto-provisioning: the application signals automatically, and that signaling, that admission of resources, should only go to the controller plane, and at best to an ingress router of the service provider, or to a service-provider-managed CE. It shouldn't impact any of the state in the core routers, and only stateless mechanisms like what we get with TCQF will allow for that.
C
Okay, so that gets us to the wonderful short section on packet encapsulations of the cycle ID.
B
Okay, just a quick reminder that we're separating the selection of the queuing and scheduling mechanisms in the nodes from the data that has to be communicated to make them work, and the packet encapsulation is about how to communicate that data.
C
Next slide. So of course, I think it's fine that we architecturally want to separate that. And by the way, there is, on the last slide, the question of what we should put into which draft. What I have a hard time separating out purely architecturally is that different encapsulation options of course have different pros and cons, and different mechanisms have different pros and cons with respect to different encapsulations.
C
So one of the biggest benefits for short-term adoption of any mechanism is if we can fit it into existing packet header fields; that's, I think, a deployment benefit. And we only need very few different values to carry the cycle ID: three or four would be perfectly fine for a lot of initial deployments.
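The "few codepoints" idea can be pictured as a simple mapping; the DSCP values below are made-up placeholders for illustration, not assigned or proposed codepoints.

```python
# Hypothetical cycle-ID marking with four locally administered DSCP values.
CYCLE_TO_DSCP = {0: 0b101000, 1: 0b101001, 2: 0b101010, 3: 0b101011}
DSCP_TO_CYCLE = {v: k for k, v in CYCLE_TO_DSCP.items()}

def mark(cycle_id: int) -> int:
    """Map a cycle ID onto one of four illustrative DSCP codepoints."""
    return CYCLE_TO_DSCP[cycle_id % 4]

def parse(dscp: int) -> int:
    """Recover the cycle ID from the received DSCP."""
    return DSCP_TO_CYCLE[dscp]
```

With three or four cycles in flight, a handful of codepoints is all the per-packet state the mechanism needs.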
C
It can fit, for example, in the MPLS traffic class field, where we arguably could have four values; in DSCP we can have up to 16 private DSCPs, so that would give us, I think, all the cycle IDs we would ever want. So that's an interaction I think it would be good to somehow highlight as a possible value proposition compared to other mechanisms, even mechanisms I would prefer longer term but which would require a lot more header work. But even in TCQF,
C
obviously, when we have the discussion of how we do extension headers, the question is: are we going to have extension headers separately for each queuing mechanism, and then also for the other DetNet aspects?
C
I, for example, would be a fan of having a single DetNet extension header, at least for IP, so a deterministic IP header or something like that. And to that extent, one of the two drafts that we've written (next slide, please) is proposing as a
B
Okay, let me quickly make one comment on this slide. There is, believe it or not, a severe understatement on this slide: about the middle of the slide, towards the right, it says "TC is quite limited". That's being very polite; there's a lot of stuff that tries to go into TC. If anything, you have a bigger problem with TC than you have with DSCP.
B
I might suggest you talk with Stewart Bryant, who is very familiar with everything that wants to land in TC and what happens out there.
C
Well, actually, I think that was the one piece where, as Stewart could remember, there were a lot of drafts, but I think the problem was a little bit the difference between drafts and what was actually deployed in networks. The deployment memory that we have is that there was really not that much adoption of different TC values in the networks. That may be wrong, because that adoption might have come a lot later.
C
And when it comes to the proposal, with IP being, you know, underrepresented compared to MPLS in the forwarding plane natively, there would certainly be a discussion with the 6man working group. The draft specifies some possible options: the destination options header and, what was the other option header called, I always forget, the hop-by-hop header. Obviously we know there is a lot of ongoing work already in 6man.
C
So that's rightfully something which I think we should start talking to 6man about, how they would like to see this being discussed with them. The header here is very simple: the typical overhead of an IPv6 extension header, the cycle ID, and then also a proposal for further extensions. I think we may want to think about adding to it the fields that we wanted to have for PREOF.
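A minimal byte-level sketch of such an extension header follows. The field sizes and positions here are assumptions for illustration only, not the draft's normative encoding.

```python
import struct

def build_ext_header(next_header: int, cycle_id: int) -> bytes:
    """Build an illustrative 8-octet extension header carrying a cycle ID.

    Layout (assumed): Next Header (1 octet), Hdr Ext Len (1 octet, in
    8-octet units beyond the first, so 0 here), Cycle ID (1 octet),
    then 5 octets reserved/padding.
    """
    return struct.pack("!BBB5x", next_header, 0, cycle_id)

hdr = build_ext_header(59, 2)  # 59 = IPv6 "No Next Header"
```

The point is how little the mechanism needs: one small cycle ID field inside the standard extension header framing, with room left for later PREOF-style additions.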
C
So we'll leave it in here for now in our drafts and then see how that goes in the discussion. Next slide.
B
Okay, a quick note on this slide: there's a nasty acronym collision on "DOH" in the IETF. You probably ought not to be using that acronym here.
B
Yeah, just a warning, particularly for discussion outside of this working group: that acronym has a collision problem.
C
That's a good question, what 6man says about that, because they were a lot earlier than the DNS people. Next.
C
Yeah, so I'm not going to show any of the slides with the details of the specification. We very much overhauled it, by integrating the second draft in the next version: the overview, motivation, and background, so this "from CQF to TCQF", to understand those details.
C
Then we already have the section on how it fits the DetNet architecture, which is also trying to bring in that type of aggregated behavior, more like DetNet. And then of course the specification of the forwarding plane, with a per-hop configuration model where, basically, for every incoming interface there is a cycle mapping from the received cycle to the outgoing cycle, populated by the controller; and then the per-hop packet processing specification, so how to basically send the packet to the output queue, cycle ID termination, as textual pseudocode. I'm coming from the multicast side, so we've developed a preference there for pseudocode.
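The per-hop model just described can be sketched in a few lines; all names and the three-cycle setup are illustrative, not taken from the draft's pseudocode. The controller populates, per incoming interface, a map from received cycle ID to outgoing cycle ID, and the forwarding plane only does a lookup, a rewrite, and an enqueue.

```python
from collections import defaultdict

# Controller-populated: (incoming interface, received cycle) -> outgoing cycle.
cycle_map = {("eth0", 0): 1, ("eth0", 1): 2, ("eth0", 2): 0}
out_queues = defaultdict(list)  # one queue per outgoing cycle

def forward(in_if: str, rx_cycle: int, packet: bytes) -> int:
    """Rewrite the carried cycle ID and enqueue for the mapped outgoing cycle.

    Assumes, for illustration, that the cycle ID is the packet's first byte.
    """
    tx_cycle = cycle_map[(in_if, rx_cycle)]
    out_queues[tx_cycle].append(bytes([tx_cycle]) + packet[1:])
    return tx_cycle
```

Note that nothing here is per-flow: the table is sized by interfaces and cycles, which is what keeps the core routers stateless with respect to individual flows.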
C
We'll hear after this whether people like that. Then the ingress node behavior, where for each individual flow we need to determine how to put the packets into the cycle buffers. Obviously we're trying to keep this to a minimum; I think that's not necessarily, in our opinion, even part of the basic specification, but for the systematic stuff I think it's good.
C
You could equally use something like the ATS mechanism from TSN on the first-hop router for that. And then section five, the per-encapsulation specification: the details of what is different in the different forwarding planes. Mostly it is about encapsulation, I think, but I also covered where stuff is carried, what the semantic is in MPLS between the service and the transport label, so I think that's not necessarily only encapsulation. Next.
C
Yeah, and then the considerations chapter: high-speed optimizations, how implementation can be made easier; the computation of the cycle mapping at the controller, taking latencies and different clock offsets into account. I'm not sure what other controller plane issues are important for us to specify.
C
I think most of the calculation is really very simple. Support for TCQF across links with different bandwidths: I think that's an interesting aspect whose simplicity may not be obvious to everybody, because it's also just a controller plane issue. And then general controller plane considerations: there was feedback in terms of, do we need a centralized controller, can it work decentralized? Yeah.
C
For the controller plane we can also use the same decentralized mechanism that we've done in the past for decentralized CSPF calculation with RSVP-TE, but of course it will run into the same issues when we get into the high-utilization space. So I think there are some basic things there, and I'm not sure how we're actually going to talk about that with TEAS at some point in time. So I think in general it would maybe be one of the subjects for the team overall.
C
I think, so: Kieran had one question about the cycle sizes, whether they can be changed or are configurable, and I'm trying to remember if I wrote something there. I think there is a requirement for one particular cycle time to be supported. I think it's very much for us to specify in the spec what requirements we want to have for interoperability.
C
That's
I
think
a
fairly
wide
open
how
how
broad
or
how
limited
I
think
we
definitely
need
some
mandatory
to
implement
set
of
parameters.
I
I
did
specify
a
must
for
three
Cycles.
Probably
you
know
should
be
a
must
for
site
four
cycle
operation
and
then
some
mandatory
cycle
times
that
that
we
would
agree
on
I
think
that's
the
minimum
and
anything
more
that
the
the
iitf
working
group
feels
appropriate
should
go
in
the
draft.
C
My best understanding is that, and that's what I wrote in the controller spec, that's not necessarily true; you need to be able to do the mapping. So if you have different speeds, or if on certain links you don't offer the full capacity of the link, you obviously can have different cycle times.
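One way to picture the controller-plane feasibility check hinted at here: neighboring links can run different cycle times as long as the cycle mapping stays well defined, for instance when one cycle time is an integer multiple of the other. That condition is an assumption for illustration, not a rule stated in the draft.

```python
def clean_mapping(cycle_a_us: float, cycle_b_us: float) -> bool:
    """True if the longer cycle time is an integer multiple of the shorter.

    Illustrative feasibility test for mapping cycles between two adjacent
    links with different cycle times.
    """
    longer, shorter = max(cycle_a_us, cycle_b_us), min(cycle_a_us, cycle_b_us)
    ratio = longer / shorter
    return abs(ratio - round(ratio)) < 1e-9

print(clean_mapping(100, 50))  # each 100 us cycle maps to two 50 us cycles
print(clean_mapping(100, 30))  # no clean integer mapping
```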
C
I'm trying to remember if I had a more succinct summarization of the conditions under which you can have different cycle times. Let me take that as a note here, to review that section myself, because it's been a long time since I worked on it.
B
Yeah, I mean, at a minimum I think all of those have to agree on what the duration of a single cycle is; otherwise I think you have an opportunity for confusion. Karen, go ahead.
G
C
No, no, certainly not. Obviously the aggregate of traffic that you can pass end to end across a one-gigabit link will not magically be more than one gigabit, but obviously, you know, on the next 10-gigabit hop
B
Let me see whether there are others... I'll also jump in and drop a question. Listening to you, by comparison to past sessions, I heard the word multicast a lot more this time. Do you believe the scalability draft, that is, the requirements draft, adequately covers multicast in addition to unicast?
C
I was only using multicast as a comparison of actual deployed experience with per-customer, per-application traffic state in networks, and the miserable experiences we made with that. But I think yours is a completely different point, and yeah, I'll have to go back to the requirements draft and check the multicast
B
Part
as
well
yeah
I
suggest
you
find
the
Cycles
to
look
at
look
at
the
requirements
draft
and
ensure
that
it
does
an
adequate
job
of
cover
of.
C
Especially
because
it's
it's,
the
funny
part
is
that
in
in
TSN,
you
don't
hear
it
being
talked
about
a
lot,
but
then
it
actually
is
supported
in
all
the
places
and
when
I
talk
with
people
in
the
car
industry
and
so
on,
they're
actually
also
using
it
so
certainly
is,
is
not
to
be
ignored.
C
Any opinions people have about, you know, the structure of the document, or that type of thing that may not be applicable only to CQF: I'd certainly love to hear what people think about how we can improve the draft.
B
Okay, so I think we're done for now. I would very much like to see someone take on an evaluation of the CQF mechanism against the current state of the requirements draft. This is not so much about whether unmodified CQF is useful, but rather about shaking out whether we have a good set of requirements in the requirements draft that is useful for evaluating these mechanisms.
G
Just a clarification: when you talk about evaluating with TSN, is that only for the cyclic-related stuff, not for the asynchronous part of the queuing?
B
Think
any
of
the
anyone
who's
willing
to
to
take
a
shot
at
evaluating
any
of
the
TSN
mechanisms
against
the
scale
requirements
draft
would
be
greatly
appreciated.
I'm,
just
looking
for,
let's
take
a
take
one
of
the
TSM
mechanisms,
which
is
not
sort
of
Up
For
Debate
debate,
whatever
on
on
whether
we
adopted
here
and
use
it
to
to
do
an
initial
test
on
usability
of
requirements
draft.
B
So,
if
you'd
like
to
do
to
do
a
different
mechanism,
that's
fine
I've,
just
simply
been
suggesting
cqf,
because
this
is
the
second
session
in
a
row
that
we've
been
looking
at
extensions
to
cqf
another
one.
Please
go
for
it.
G
My understanding is the requirements draft is slightly broader, right? It is addressing large-scale issues and is not only specific to the queuing aspects. So will it clutter the document if we try to bring in that perspective?
B
I'm
not
sure
I
understand
the
question.
The
requirements
draft
ultimately
is
gonna
going
to
going
to
wind
up.
Writing
us
some
guidance
on
the
data
and
the
encapsulation,
but
there's
quite
a
bit
in
the
requirements
draft
that
can
be
used
to
evaluate
the
queuing
that
the
queuing
and
scheduling.
C
So: how such an existing mechanism from TSN does or does not meet the requirements. And I think the answers in the requirements review cannot only be "meets" or "does not meet" and why, but also "okay, this requirement does not count against the basic mechanism itself but is encapsulation-specific", for example.
G
Or
something
like
that
right
yeah,
so
only
it's
not
sort
of
the
requirements
in
the
document
would
map
to
queuing
and
scheduling
and
when
we
present
a
requirement,
we
do
not
say
that
this
belongs
to.
This
can
be
satisfied
through
asynchronous
queuing
mechanisms
or
synchronous,
queuing
mechanisms.
So
or
should
we
put
potential
solution
how
those
requirements
could
have
been
fulfilled
in
TSN.
B
Then the request is to take one or more TSN mechanisms as they exist and briefly run through the requirements draft, resulting in a sentence per section of the requirements draft on whether it meets, partially meets, or does not meet the requirement, and why. Speculations on extensions to TSN are not helpful. The goal is to take something that's known and understood and use that to test the use of the requirements draft.