From YouTube: IETF-DETNET-20230510-0000
Description
DETNET meeting session at IETF
2023/05/10 0000
https://datatracker.ietf.org/meeting//proceedings/
A
So, Peng, I know you're working on a revision to the requirements draft. What's the expected timing on finalizing and posting that?
D
The updates just respond to some of the comments in the notes, all of them, so I'm just going to send them to the mailing list. Before that I wanted to know if there are any comments, but I haven't got any, so I'm just posting them now. Yeah.
A
That sounds good. So perhaps we'll have discussion on the mailing list in the next few days, and maybe by the end of this week we'll see a revised version of the draft. Is that plausible?
D
Yes, but would you like... that is, should we just post a new version, or should we send an updated version to the mailing list first?
A
Either will be fine. I mean, I had a lot of stuff I was planning to get done last week, and, well, life is what happens while one is busy making other plans. Okay, thank you.
E
Yeah, about the evaluation of the TSN mechanisms against the...
A
Oh, okay, so I think I understand the question. I'm not sure it has... it's sort of implied in the...
A
Let
me
see
if
I
can
get
all
that
blue
background
off
I'm.
Sorry,
I
didn't
mean
to
do
that.
Okay,
as
it's
implied
in
the
the
agenda,
it'd
be
good,
run
good
idea
to
evaluate
the
TSN
q
and
scheduling
mechanisms
against
the
requirements.
Draft
it'll
give
us
some
confidence
that
the
requirements
are
that
we
that
we
understand
the
requirements
and
that
the
tsmic
groups
don't
need
them.
Yes,
that
ought
to
be
in
some
sort
of
separate
document
if
it's
convenient
to
do
it
as
a
draft.
E
So, you know, is it enough that the TSN scheduling mechanisms are tested, or is it necessary for somebody to write a document describing why the TSN scheduling mechanisms do not satisfy the requirements? So I suggest that a separate analysis document may be necessary.
G
I think I remember there is a published RFC in the working group called Bounded Latency, and in this document there is a summary of the existing TSN queuing mechanisms. The motivation of that document is just to show that the existing TSN mechanisms can satisfy the requirements of DetNet. So I think the conclusion that the existing TSN mechanisms cannot satisfy the DetNet requirements is a little...
G
I don't know how to say it, because the requirements, for example on scaling, are also case by case. For example, sometimes a Layer 3 network is quite big, with a lot of flows, and we need some scalability. But I think it doesn't mean that none of the existing TSN mechanisms can satisfy the requirements of DetNet.
G
It's just that some of them cannot satisfy some of the requirements. So maybe, I think, we can make it case by case rather than have a separate document, because that would make people think that none of the TSN mechanisms can be used. That is my concern.
F
Let me respond to that...
A
Are we expecting that every TSN mechanism will meet none of the scaling requirements, or are we expecting that some of the TSN mechanisms will meet some of the requirements, but none of them will meet all of the requirements across the board? Is that the question you thought you were asking?
G
Yes,
I
think
not
only
the
existing
TX
and
TSM
mechanisms
actually
I
think
all
the
mechanisms
that
we
have
raised
cannot
satisfy
all
their
requirements
at
the
same
time,
so
I
think
we
cannot
give
an
impression
to
other
other
people
in
ITF
or
in
at
Triple
E
that
we
have
a
conclusion
that
TSN
mechanisms
cannot
satisfy
these
requirements.
It
depends
I,
think
Case
by
case.
G
Yes. For example, I think today we will have a discussion about TCQF and CSQF. These two mechanisms are extensions or enhancements based on the existing TSN mechanism CQF.
G
So the basic idea here is that CQF has some scalability problems, so we make some modifications based on that existing mechanism. That way it will be very clear which mechanism has what kind of scalability problem that cannot satisfy the DetNet requirements, and what more we can do in DetNet to make it work. So I think it can be done case by case rather than saying that all the mechanisms in TSN cannot be used.
E
A quick response to this discussion: yes, I agree with the suggestion that it would be case by case. But my motivation is that a separate document is a good place to describe that case-by-case analysis for TCQF or CSQF. That is my point.
A
Okay. You can read it back while I'm busy writing it down, because I should be taking the notes. What I first wrote: Xuesong and David agree that it is unlikely any TSN mechanism or new mechanism will meet all the scaling requirements. Individual mechanisms will meet individual requirements, and maybe not, and it may be necessary to select multiple mechanisms to provide mechanisms that can meet each of the scaling requirements. Selection will be case by case, with the individual method of selection based on what's important for specific usage scenarios.
H
So I'm thinking it's probably a good idea that each of the solution documents tries to capture its own evaluation section. For example, for TCQF.
H
It is quite clear that it tries to extend CQF in order to meet some of the scaling requirements. And for the asynchronous... what's that called, sorry... the asynchronous framework draft, I kind of think it probably can try to evaluate against TSN ATS, because that's the most relevant and stable reference it can use. I might be wrong, but I kind of think...
H
Also, ATS, Asynchronous Traffic Shaping, is a good source that you may try to start with for the evaluation against each of the requirements. So, in summary, I kind of think we probably can leave it to each solution document to capture its own section explaining why such new mechanisms or mechanism extensions can meet the scaling requirements, rather than using a standalone, separate document to capture everything.
A
My underlying concern, which is why am I wasting time on this, is that I think the first mechanism or two that goes through this evaluation against the requirements doc is going to have a harder time than the subsequent ones. And so I'd like to take something well known and existing, that hopefully doesn't bias us one way or another among the proposed mechanisms, and use that to survive that first pass.
H
Yeah. If I understand correctly, are you saying that if the document authors would like to post, or contribute to a post against Peng's requirements document, we can directly put it into the current working group requirements document? Is that what I heard, or did I misunderstand?
H
To do this work, it's more... it's more fair, yeah.
A
Okay, I think I've got that captured in the notes. And the issue: we were also suggesting that perhaps some of the solution authors might want to take on evaluation of the TSN mechanism that's closest to their solution.
H
So that the readers can clearly know why and how this particular solution addresses the scaling problem.
A
Okay, agreed. And are you also suggesting that a doc for a new solution ought to also evaluate an underlying TSN mechanism, if it's based on an underlying TSN mechanism?
H
And it would be good, because we have this interim meeting, if some of the authors, I think, probably want to show the testing data in the slides rather than in the documents, because it looks not so common for testing data to appear in a draft, right? So, I mean, either the explanation or the testing data.
D
Maybe every, sorry, every solution can evaluate itself and give... just as in our earlier exercise, there was an evaluation, or a result. So after that we can put them together to discuss more about the performance, I think.
H
I kind of think it depends on what and how we define this evaluation. It could be just an explanation, like a table, a check against each of the requirements in the working group document, to say: meets this, or halfway meets this, or cannot meet this. So that is one of the forms of evaluation. Another form is to show some testing data. So there are at least two forms of evaluation, I think.
A
I might try to split those. Certainly, if I go back to the discussion in earlier meetings, something closer to your second alternative I think is more important: a quick summary of the extent to which a mechanism or solution does or does not meet the individual requirements.
H
I think tested data could be presented during the interim meeting in the slides, but for each of the solution documents we suggest that a separate section try to do something like the table evaluation against each of the listed requirements. We still call that the evaluation in the draft, right?
A
I think I've got that captured. Were you proposing to actually start the document to capture the state of the classic TSN mechanisms with respect to the scaling requirements?
E
Because it is common to all the new mechanisms, I think, maybe...
E
Yeah, for a new mechanism document it is necessary to add a section to describe the changes based on the requirements.
G
I'm actually a little confused about the "common" thing you have mentioned, because I think, as was already proposed...
G
Maybe when we discuss one mechanism in DetNet, that kind of mechanism has already been discussed in TSN, and what we have done is just to propose some modification or enhancement based on the existing mechanisms. At that point, in that document, we can analyze what kind of scaling problem that type of TSN mechanism has, because there are a lot of mechanisms in TSN. So I'm a little confused: you have mentioned that there are some common problems in TSN mechanisms. What do you mean?
E
It is a misunderstanding of my description. What I meant is that for each new document it may be necessary to add a section describing the existing, classical TSN mechanism, to describe the gap between the existing TSN mechanisms and the requirements document.
A
And as I said earlier, I would like to see at least one of the classic TSN mechanisms taken through evaluation against the scaling requirements draft, so that we can learn from that experience how well that works. And maybe, Peng, maybe we'll have some more material for Peng to revise the scaling requirements draft again.
G
What you have suggested is to take one TSN mechanism as an example, to analyze why it cannot satisfy the existing scaling requirements. Is that your proposal?
G
Yeah, on this, actually, I think maybe TCQF, which you have, and CSQF can give a proper example for that, because they are very clear enhancements to the existing CQF, and the CQF problem with scaling is that it is not suitable for long distances, especially when the link delay is very big. Maybe we can go into it when we discuss these two mechanisms.
D
CQF requires time synchronization, and one of the comments on our requirements was that they should not really require time synchronization.
D
For others, sorry, for the other synchronization-based solutions, maybe they can get more analysis, if there is nothing else.
H
So I think, for the TCQF part, or CSQF anyway: the original draft, which is my draft on TCQF, takes quite long chapters or sections to explain why the traditional CQF cannot meet the scaling requirements, explaining in quite some detail, but it is not in the form of a check against the requirements listed one by one in the current working group document. So I think, if you think it would be worthwhile that we have a separate...
H
...that we have at least an example document showing that one of the existing TSN mechanisms, namely CQF, is taken through the evaluation against the requirements, one by one, in the working group document, then I think I can modify that draft to make it more clearly stated in this form, at least.
A
All right, any other comments or questions?
G
I hope to prepare some slides, so TCQF will be presented next time.
A
The plan... I think the TCQF authors, the two drafts, could use a little bit more time to come to a common view of things, and I guess we can start here. I mean, one of my questions is whether we have a common underlying extended CQF scheduling and queuing mechanism and are looking at three ways of communicating the cycle instances, or whether there is something fundamental about SR that makes what you're proposing in this draft different.
G
Yeah, okay, I will give some introduction to that. So I can share... let...
G
Okay. The assumption here is that people are already familiar with the CQF mechanism, and also that TCQF and CSQF are kind of based on it; that is the expectation.
A
Oh, I'm sorry, wait a minute. I need to go say: yes, you can do that. Yes, I'm sorry.
G
Sure, I will send it to you. What I'm trying to introduce today is the Cycle Specified Queuing and Forwarding mechanism; we call it CSQF, and this slide is something I have presented at this IETF. There are three parts of the mechanism for guaranteed bounded latency. The first one is the queuing mechanism, which is the core of our discussion. Another one is encapsulation, because this mechanism is based on SR, so I also mention this part; and also resource allocation, because this mechanism, CSQF, is based on...
G
...the controller, or centralized path and resource calculation. First I will give a brief introduction to the queuing, or shaping, mechanism. Basically, we have three or more queues, and there are different roles for these queues. One of them is the current output queue, one of them is the target input queue, and another one is the burst-tolerant queue.
G
If we have more than three queues, we have more than one burst-tolerant queue, so we can tolerate more burst. Each queue corresponds to a cycle number. The cycle number means in which time slot that queue will be the output queue, and the target cycle numbers are updated as time advances. For example, the following picture shows what will happen in each time slot for cycle one.
G
If we have received a packet which has a label indicating it should be sent out in cycle two, it will be put in... yeah.
A
What slide are you on? My screen is showing your title slide, number one.
G
Oh, sorry, it's... so, whatever.
G
Okay, so I will quit the presenting mode. Maybe you cannot see it when I switch the slide, yeah.
G
Okay, okay. On this slide: this is the previous material that has already been presented at IETF 116, and now we're going into the queuing mechanisms. We have three queues; each can have a different role, and each queue has a corresponding cycle number. Take an example: when the time slot is cycle one and we have a packet of cycle two, which means the packet is supposed to be sent out in cycle two...
G
...we will put it into Q3, because that is the target input queue, and the target cycle number of Q3 is cycle two. If we have received a packet that is labeled as cycle three, we will put it in Q1, because it is the burst-tolerant queue. And if, in cycle one, we have received a packet which has a label of cycle four...
G
...we have no queue to put it in, because we have only one burst-tolerant queue in this example. But if we have more burst-tolerant queues, we can also put the cycle-four packet into a queue. That is what happens in cycle one. And when the time slot has passed to cycle two, the cycle number for each queue will also change, and the role of each queue is updated.
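The queue rotation described in the last few turns can be sketched in a few lines. This is a hedged toy model for following the discussion, not the draft's specification; the class name `CsqfPort` and the three-queue default are invented for illustration.

```python
from collections import deque

class CsqfPort:
    """Toy model of the rotating queues: queues[0] is the output queue for
    the current cycle, queues[1] is the target input queue for the next
    cycle, and any further queues absorb bursts labeled for later cycles."""

    def __init__(self, num_queues=3):
        self.num_queues = num_queues
        self.current_cycle = 1
        # queues[i] buffers packets labeled for cycle current_cycle + i
        self.queues = [deque() for _ in range(num_queues)]

    def enqueue(self, packet, labeled_cycle):
        """File a packet by the cycle its label names; False means no queue
        is available (too late, or the burst is too far ahead)."""
        offset = labeled_cycle - self.current_cycle
        if not (1 <= offset < self.num_queues):
            return False
        self.queues[offset].append(packet)
        return True

    def advance_cycle(self):
        """Move to the next time slot: rotate the queue roles and return
        the packets transmitted during the new cycle."""
        self.queues.append(self.queues.pop(0))
        self.current_cycle += 1
        sent = list(self.queues[0])
        self.queues[0].clear()
        return sent
```

With three queues at cycle one, a cycle-two packet goes to the target input queue, a cycle-three packet to the single burst-tolerant queue, and a cycle-four packet is refused, matching the example in the discussion.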
G
That is the mechanism. It is quite similar to CQF; what has been changed is that we add more queues for burst tolerance. So the question is how to provide bounded jitter at each hop. The basic idea is this: we have three parts of latency, one for processing, one for queuing, and one for the output port, and the link delay is stable, so it is not discussed in this slide.
G
The link delay won't cause jitter, the processing delay will have some jitter, and the queuing delay is bounded by the mechanism itself, because the time slots have already been scheduled. So the jitter mainly comes from the processing delay, and when the processing delay is smaller we give the packet a larger queuing delay, and if the processing delay is large, the queuing delay will be smaller, so as to keep the jitter bounded.
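The compensation idea just described, that queuing delay absorbs processing-delay variation so their sum is constant per hop, can be written down directly. A minimal sketch; the function name and millisecond units are assumptions for illustration.

```python
def queuing_delay_for(processing_delay_ms, scheduled_residence_ms):
    """Choose the queuing delay so that processing + queuing is constant
    per hop: a packet processed quickly waits longer, one processed slowly
    waits less, which keeps per-hop jitter bounded. The scheduled
    residence must cover the worst-case processing delay."""
    assert processing_delay_ms <= scheduled_residence_ms
    return scheduled_residence_ms - processing_delay_ms
```

For a 20 ms scheduled residence, a packet processed in 10 ms queues for 10 ms and one processed in 20 ms queues for 0 ms, so both spend the same total time in the node.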
G
That is the queuing part. So how do we guarantee that in each time slot the packets can be sent out and there is no conflict, no congestion?
G
The answer is that the controller knows, because the controller knows the traffic specification of each flow. So it will reserve the time slots for each flow: it will calculate the path and, at the same time, calculate the time slot for each packet, and indicate in which time slot the packet is supposed to be sent out at each hop.
G
For example, in this picture, the calculation result is that the packet will be sent out in cycle one at node A, received in cycle two at node B, sent out in cycle three at node B, received in cycle four at node C, and sent out in cycle five at node C. That will be the calculation result from the controller.
G
And that is the reason why we need the SR label: there will be a label stack for the path to indicate which output port the packet should be sent out on. If we enhance that part, the SR label, or SRv6 SID, not only indicates which port the packet should be sent out on; it also indicates which time slot, or which cycle, the packet should be sent out in, which is also the result of the calculation from the controller.
G
So in the forwarding function we read the SID and we know the output port for that packet, and we also read the SID and know the cycle number in which the packet should be sent out. Based on the cycle number, we know which queue we should put the packet into, and when that queue becomes the output queue, the packet will be sent out in the indicated time slot.
G
So that is the whole idea, and I have given an example, a very simple one, to show how these work together. First we have to give the controller some information: about the topology, about the meaning of the labels, and also some basic delay information, for example the link delay and the processing delay.
G
Maybe the processing delay has some jitter, so it will be a range, for example 10 milliseconds to 20 milliseconds, and also the proposed queuing delay. After the controller has collected all this information, it can calculate the path. The path will satisfy the requirement on the end-to-end latency and also the end-to-end...
G
...end-to-end jitter. After the path calculation, the controller will also indicate, at each hop, in which time slot the packet should be sent out, so it will present the result as a label stack, and the label stack will be allocated to the first hop of the path. Then the packet will be forwarded based on the label stack. The labels, as I have mentioned, indicate not only which output port the packet should be sent out on, but also which time slot the packet should be sent out in.
G
So the node just sends the packet based on the label, and the underlying mechanism is the cyclic queuing mechanism, as I mentioned. And because the arrival time is different for each packet, there will be different label stacks for different packets in the same flow, but there will be a pattern, because the traffic specification of the flow is known by the controller.
G
So there will be a set of label stacks allocated at the first hop of the path, and when a packet arrives, it will select the right one to direct that packet through the whole path. If there are different flows, the intermediate nodes will not recognize which flow a packet belongs to; they just recognize the label. So there will be no flow state maintained in the intermediate nodes.
G
The nodes just send out the packets based on the labels, and that makes it more scalable when there are multiple flows running at the same time in the network. The controller will deal with possible packet conflicts, because there is a resource reservation beforehand: even if two flows share the same time slot, the controller will guarantee that this time slot is enough for both flows to be sent out. And, as we can see, this mechanism depends heavily on the controller.
G
So we have done some work on how to do the calculation, both the path calculation and the time-slot calculation. There is offline planning, there is online planning, and there are different goals. I won't go into details, because this is related to the controller algorithms; it is just to give some idea that we have already done this and it can work. There are different kinds of algorithms, for example column generation or greedy, and here are some scenarios we have...
G
...used as examples for path calculation. It gives some idea of how many nodes can be calculated and how many flows can be carried in this network, and here are some results. We have also done some work on load balancing: because the controller knows everything, it can provide some mechanisms for load balancing so that the network can carry more flows.
G
So here are some basic questions from David, perhaps also from the working group. We have three documents; what about the queuing mechanisms? Do they share the same, or a common, queuing mechanism? Yes, the underlying mechanism is the same, because it is all based on the cyclic queuing mechanism: there will be a time slot for each output queue, and the other queues are for jitter tolerance. So what is the difference between, for example, TCQF and CSQF?
G
Actually, the basic idea is that the mapping relationship is calculated by the controller. This picture shows how the controller calculates the mapping relationship between each node and each time slot; the controller will calculate it. For TCQF...
G
...the mapping relationship is maintained in each node and it is static; but with a controller it is calculated centrally based on the collected information, and so the controller can also adjust the mapping relationship based on the reservation status. For example, if, based on the previous mapping relationship, a time slot is full and cannot carry another packet, there can be some adjustment.
G
For example, if we have more cycles, we can put the packet into another time slot, and that can also be indicated by the label stack. So it is kind of more flexible in scheduling, because it is based on a centralized controller. Yeah, that is my presentation. Do you have any questions about this mechanism? I can give more explanation.
G
Not really, actually; we have given the label two meanings. In the previous mechanisms, for example traditional SR, each SID represents one output port. But now we allocate a set of SIDs for one output port. It means that, for example, from 101 to 105, all these five labels represent port one, but the difference between these five labels is that they represent different cycles.
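The "set of SIDs per output port" idea in this answer can be sketched as a tiny encode/decode pair. The block base 101 and the five-cycle width follow the 101 to 105 example above; treating the allocation as a contiguous numeric block is an assumption for illustration, not the draft's encoding.

```python
NUM_CYCLES = 5          # cycles distinguished per output port in this example
PORT_BASE = {1: 101}    # hypothetical allocation: labels 101..105 -> port 1

def sid_for(port, cycle):
    """Label = block base + (cycle - 1), so port 1 / cycle 3 -> 103."""
    assert 1 <= cycle <= NUM_CYCLES
    return PORT_BASE[port] + (cycle - 1)

def port_and_cycle(sid):
    """Recover which output port and which cycle a label names."""
    for port, base in PORT_BASE.items():
        if base <= sid < base + NUM_CYCLES:
            return port, sid - base + 1
    raise KeyError(sid)
```

So seeing label 103, a node learns both the output port (1) and the send cycle (3), which is exactly the "two meanings" the label carries here.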
G
So it means that if we have 103, first we know that the packet will be sent out on one of the output...
G
It's just... actually, it's just an example. There is no meaning to the dash here; it's just the same as Y101 or Y102.
G
Because this example takes SR-MPLS as the example, the cycle ID is represented by a label. But if we use SRv6, an SRv6 SID, for example, we can define the cycle ID in the arguments of the SRv6 SID. It's similar, so I just take the MPLS label as an example.
H
Thanks, Debbie. So, I know this is for SRv6, right, basically for the SRv6 encapsulation, and you try to use the SID, which is a concept in segment routing. I'm not an expert in segment routing, so would you please illustrate a little bit more? For example, in segment routing there are different types of SIDs, with different endpoint behaviors; are there constraints, because of the encapsulation and because of the segment routing calculation, that we need to follow? Is there anything...
G
Yes. Actually, we introduced a new meaning for the existing SID. For example, for an MPLS label: previously the label only indicated, for example, an adjacency or a node; it was just used for routing. But now we use it for time information. There are two modifications here: one is that we introduce a new meaning for the SID, and it also indicates the new functionality; the other is that we need more SIDs.
G
For example, we need more MPLS labels for each output port, because we have to indicate, for each output port, the different cycles. So those are the two new things we bring to the traditional SR-MPLS or SRv6 mechanisms.
G
Oh, that is really a good question, because that is the sweet part of this mechanism. This slide shows how we do that. For example, we have a flow from the source to the destination, and we have given it a path from A to B to C and on to the destination. Because the controller has already collected the link and processing delays of each node, it will give a time-slot result for each node, as I have mentioned.
G
The result is: cycle one for A, cycle three for B, and cycle five for C; this is the result for this flow, right? And now the controller knows that in cycle one there has already been a packet, and for B, in cycle...
G
...three there has already been a packet. The blue one is the newly scheduled flow, and you can also notice a gray one: that is a flow that has already been scheduled in that time slot. The controller knows everything: even if the paths are different and the sources are different and there are multiple flows, the controller knows. So every time the controller schedules a new flow into this network...
G
...cycle one can only provide time resources for, for example, five packets; of course there will be a maximum size per cycle. And if we have another flow that would make a sixth packet in that same time slot, the controller has to reschedule, because node A cannot give the resources for that flow in cycle one. That is the most important thing for this mechanism, and that is also what we are trying to do in all these controller algorithms: the controller has to check whether the time slot is used or not, and whether the time slot is enough or not, and then it can give the result of the calculation. When the controller can give a result, it means the resources are still enough and there will be no conflict; if the controller cannot give a result, it means that all the suitable time slots have already been occupied.
E
Okay, I am a little confused about the...
E
Yeah, yeah, okay. Also, if we take a look at this picture: for the transit node C, there may be multiple flows received from different incoming ports; for example, flow one and flow two, and there may be flows from multiple sources. So I'm not sure how the controller calculates the... for example...
E
Note that flow one arrives at transit node C and, for example, may consume cycle three at transit node C, and again flow two also consumes cycle three, the same cycle, at the transit node. But there may be a large time interval between these two flows, and they just take the same cycle three. How can the controller accommodate that?
G
Actually, the point is that we have a time-slot length, for example how long the cycle will be; say it's 10 milliseconds, and that is scheduled beforehand, right? Then we know the capability of the output port of node C, for example 10G, and then we know, in each time slot, how many packets the output port of node C can send out.
G
It's a number that we, or the controller, can calculate. So when a new flow arrives and says, okay, there will be a packet in cycle five, the controller will check how many packets have already been scheduled in cycle five and whether there is more space in cycle five. The packet size matters, and what we use is the maximum packet size, because the parameters of the flow include the core information for the calculation.
G
One is the maximum packet size; another is the pattern of the traffic, for example the interval of the packets and the maximum burst in each interval. For example, you can see the gray packets at node A: they mean that in each cycle there has already been a flow; there will be a packet in each cycle. That is the pattern for that flow. And for node B...
G
...there is another flow: there will be two packets in each cycle. So, based on that pattern, the controller knows in which cycle each packet is supposed to be sent out, and also based on that it can decide whether that cycle's resources are enough or not.
G
No, no, it's just an example; the pattern of the traffic can be different. It's just for this example.
G
But remember: if the cycle number in which a packet will arrive is not determined, it means there will be jitter for that traffic, so we have to maintain more queues for that jitter. That is the rule: if you want to tolerate more jitter, you have to maintain more queues in the device.
G
For the flow, you have to pay attention to this: if you reserve a flow with the controller and you say that, okay, our packets will arrive one hour later, it means that the timeline of the reservation will be one hour later, so there will be no resource reservation for the flow in this hour; it will happen one hour later. But if you talk to the controller and say, okay, I have one packet but I don't know when it will arrive...
G
...then there is no determinism for that flow, because you don't even know when it will arrive; there is no resource reservation. Or you can make a redundant resource reservation: even though your flow hasn't arrived, the reservation is done first, and whenever you arrive, you can get your seat. But you cannot say, okay, I don't know when I will arrive, but when I arrive you must have the resource reservation ready. That is not reasonable; you cannot satisfy that requirement in any way.
E
Thank you, but maybe we can discuss it on the mailing list. The question is whether we need something related to the jitter control. It is exactly that: for example, the source sends the traffic one hour later, or one hour before.
G
Yes, I think I understand your point. My answer is that if you arrive one hour later, it means your reservation begins one hour later. But the assumption here is that you know exactly that the packet will arrive one hour later. If you just don't know when the packet will arrive, I think the most efficient way is to do the reservation beforehand. It means that whenever you arrive, your reservation is already there.
G
Actually, you have to understand the assumption of DetNet, because for DetNet there is a traffic specification. What the traffic specification means is that it is the description of the traffic model of the flow, and if you have already provided your traffic specification, the resource reservation will be based on the traffic specification you have provided to the controller.
G
So if you don't send your packets based on the traffic specification, for example, you have reserved 1G of bandwidth for your flow but you only send one packet, it's your problem; it's not a problem of DetNet. That is the case for every mechanism: if you do the resource reservation, you have to send based on the resources you have reserved, or you can get nothing.
A
Okay, so I have a question for you, Xuesong; I want to check that I understand it. You said that the queuing and scheduling mechanism is the same, although is it correct that the centralized controller algorithm that you're using requires a label stack, because the cycle number may differ at different nodes along the path?
G
David, I don't think I got your question correctly. You mean the segment of the label, and the corresponding relationship between the label and the queue?
A
C
A
That's what I was asking about, because some of the other mechanisms I think I've seen would not allow the cycle number to vary between nodes, and that might be an important difference.
G
Yes, because that is also the benefit of this mechanism. As this example shows, for this packet the schedule result is cycle one for A, cycle three for B and cycle five for C, and each cycle can be represented by a different SR label. If we reschedule the flow, for example to move from cycle 5 to cycle 6 in node C, we can just update the label stack, and the time slot can be adjusted based on that.
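The per-hop cycle-to-label mapping just described can be sketched as below. The label numbering scheme (each node advertising a label base, with cycle N represented as base + N) is invented purely for illustration; the actual SR-MPLS/SRv6 encoding would be defined by the protocol extension, not this sketch.

```python
# Invented illustration: each node advertises a label base, and
# cycle N at that node is represented by label base + N, so the
# whole per-hop schedule becomes a label stack.

def labels_for_schedule(schedule):
    """schedule: list of (node_label_base, cycle) pairs, one per hop."""
    return [base + cycle for base, cycle in schedule]

def reschedule(stack, hop_index, base, new_cycle):
    """Move one hop to a different cycle by rewriting only its label."""
    new_stack = list(stack)
    new_stack[hop_index] = base + new_cycle
    return new_stack
```

With the example in the talk (cycle 1 at A, cycle 3 at B, cycle 5 at C), moving node C from cycle 5 to cycle 6 changes only the last label in the stack; the other hops are untouched.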
A
K
The controller does path calculation and resource allocation. So is the path planning exactly the path calculation, and the time planning exactly the resource allocation?
G
Sorry, we missed the first part of your question. Can you repeat it?
K
G
Yeah, I think I understand your question. Actually, they happen together in this mechanism, because it is all done by the algorithm in the controller. The traditional path calculation for the controller is that maybe we have to pick a path that can, for example, satisfy the end-to-end requirement of a flow. But now the limitation is more: we not only have to satisfy the end-to-end latency, we also have to find a proper time slot in each node for that flow.
G
So if the controller can give the result for that flow, it means that, okay, we have already selected a path that can satisfy our latency requirement, and we have also already scheduled a proper time slot along the path, and then that result comes out.
G
That means the resource reservation has already been done, because that time slot has already been reserved for that flow. If another flow goes through the same node in the same time slot, the controller will decide whether the resource is enough or not.
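The joint computation described here, selecting a path while also finding a feasible time slot at each node, might be sketched as below. All names, the cycle-granularity delay model, and the greedy slide-to-the-next-free-cycle rule are invented for illustration; they are not the controller's actual algorithm.

```python
# Illustrative joint routing-and-scheduling sketch (invented): for each
# candidate path, assign a send cycle per hop, sliding to a later cycle
# when the preferred one is already full, then check the deadline.

def schedule_on_path(path, arrival_cycle, hop_delay_cycles, capacity, load):
    """load[node][cycle] holds packets already reserved in that cycle;
    capacity is the per-cycle packet budget. Returns (node, cycle) pairs."""
    assignment = []
    cycle = arrival_cycle
    for node in path:
        cycle += hop_delay_cycles          # earliest feasible send cycle
        while load[node].get(cycle, 0) >= capacity:
            cycle += 1                     # slide to a later, free cycle
        assignment.append((node, cycle))
    return assignment

def joint_route_and_schedule(paths, deadline_cycle, **kw):
    """Return the first candidate path whose schedule meets the deadline."""
    for path in paths:
        assignment = schedule_on_path(path, **kw)
        if assignment and assignment[-1][1] <= deadline_cycle:
            return path, assignment
    return None
```

In a toy run, a congested cycle on node B pushes the three-hop path past the deadline, so the two-hop alternative is chosen instead; this mirrors the point that a returned result implies both the path and the time slots are already feasible.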
K
G
K
G
Oh, it's in the mailing list; I have sent out an email with some additional information. The last item is the link for the paper, "Joint Routing and Scheduling for Large-Scale Deterministic IP Networks". You can see even from the title that it's joint routing and scheduling: it means it needs to schedule the time slots at the same time as selecting the path.
K
Okay, thank you. So the question is whether we should add path planning or time planning as a new item for the control plane. Maybe we could discuss more in the mailing list; it is related to the controller plane draft.
G
Actually, I think that is a good point. We can add some description on the path calculation, because the existing controller framework document just gives some very initial description: we have to do the resource reservation, and we have to plan a proper path for the flow. But there is no description about the time scheduling; maybe we can add something to that.
B
J
Yes. Can you hear me?
J
Clear, okay, sure, thank you. I have a question, actually, regarding the whole behavior here. It seems to me you're trying to deploy this behavior to all the devices along the path. Has that been installed already? You know why I'm asking this question.
J
G
Actually not yet, because the core part of the standardization is the extension to SR-MPLS or SRv6. We have presented this work in the SPRING working group several times, but the conclusion is that it has to be confirmed by the DetNet working group first. It means the DetNet working group has to say, okay, it works, it is requested by DetNet, and then SPRING can take into their scope whether the SR extension is reasonable or not. So we are still at the core point for this work.
G
A
I think we've been trying to support this sort of distinction between the queuing and scheduling mechanism and the protocol extensions needed to carry information for that mechanism. It would be nice if we were in a world where we had queuing and scheduling mechanisms that were amenable to different protocols for carrying the information.
G
Yes, I think so. Actually, as you have mentioned, David, there will be multiple ways of carrying this information. For source routing, maybe it has some benefit for this mechanism: as I have mentioned, it can be flexible.
B
G
J
G
No, actually, it's because in this picture you can only see one packet. There is only one blue packet; it just leaves node A to node B and then to node C, so it's kind of a timeline. But the traffic specification is the description of the behavior of the whole flow. It means there will be multiple packets for that flow over time, but the traffic pattern for that flow won't change.
I
G
It is guaranteed by each hop. It means that if the traffic pattern is as described, node A will make sure that the traffic pattern won't change, because the expected jitter will be controlled in each node. That means the traffic pattern will be the same in node A, node B and node C; it won't change. If there is any change introduced by node A, for example in the processing, it will be tolerated by the jitter-tolerant queue.
J
Okay, so you have coordinated management among all the three nodes in this example?
G
Not really; it has already been scheduled by the controller. It's like this: a packet is supposed to be sent out in cycle one, but it was processed so fast that it arrives early, and now we have a jitter-tolerant queue; we can just put it in that queue. It can wait a little longer than before, because it was processed fast.
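The jitter-tolerance behavior just described can be sketched in a couple of lines. This is a minimal sketch under the assumption (mine, for illustration) that an early packet is simply held until its scheduled cycle, while a late packet leaves as soon as it can.

```python
# Minimal sketch (illustrative assumption): a packet processed faster
# than planned waits in the jitter-tolerant queue until its scheduled
# cycle, so its departure cycle stays as the controller planned.

def departure_cycle(scheduled_cycle, ready_cycle):
    """Leave in the scheduled cycle even if ready early; if ready
    late, leave as soon as possible."""
    return max(scheduled_cycle, ready_cycle)

def wait_cycles(scheduled_cycle, ready_cycle):
    """Cycles spent waiting in the jitter-tolerant queue."""
    return max(0, scheduled_cycle - ready_cycle)
```

So a packet ready in cycle 1 but scheduled for cycle 3 waits two cycles and still departs in cycle 3, which is how the per-hop traffic pattern is kept unchanged.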
C
H
G
By the controller, yeah; it's pre-determined by the controller. The flexibility is not real-time; it's just the flexibility within the controller when the path calculation is happening. The controller will see whether the resources of cycle 3 are enough; if not, it can put it in maybe another, later cycle. That is determined by the controller beforehand.
H
Oh, okay, so that means once the controller determines this mapping, then all the packets follow this cycle mapping relationship. If I want to change it, I need to go through the whole life cycle again. So at a certain time point I do it all at once; it's not real-time "put here and put there". Okay, I see.
G
H
Oh okay, so if that's the case, then more likely the controller should determine the transmission time or transmission cycle: for example, for node B it is from cycles one to three, and at node C it is from three to five, something like that. Yes.
C
Okay, Jeong-dong.
L
The frequency has to be synchronized. Then how about the controller? I mean, does the controller need to know when the cycle of each node starts?
G
J
L
So we need some time synchronization between the controller and each and every node.
L
You're right. Okay, so the controller needs to maintain kind of a separate synchronization with each node, to know at what time instant the cycle of each node starts. Is that correct?
G
C
I
Can you hear me? Yes, thank you. Okay, can you explain how to calculate the cycle duration when there are multiple flows having different traffic specifications? Can you explain that, please?
G
Here is a picture, but I don't know whether it shows it very clearly. This is just for one flow. For example, you have a flow and you know the arrival time of that flow, and you add all the delay information you have collected, including the link delay and the processing delay, and then you know in which cycle the packet should be sent out. And then you have another flow.
I
G
Oh, okay, I understand your point; you mean how to calculate the latency. It is determined by the queuing mechanism, because we have three queues. When a packet arrives, there is one output queue: this queue only sends packets out and doesn't receive packets in this cycle. And we have two more queues, which only put packets inside and won't send packets out in this cycle.
G
So when a packet arrives, it will be put in Q2 or Q3, and each queue becomes the output queue in turn. The cycle length for sending out packets is 10 microseconds, and because there are only three queues, the longest time a packet will stay inside the queue is two cycles. That is the longest time it will be inside the queue, but it can be shorter.
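The bound just stated generalizes directly: in a cyclic arrangement where one queue drains per cycle, a newly enqueued packet waits at most (number of queues − 1) cycles before its queue becomes the output queue. A one-line sketch of that arithmetic (the function name is mine):

```python
# Sketch of the residence bound for a cyclic multi-queue node: with one
# queue draining per cycle, a packet enqueued into a receiving queue
# waits at most (num_queues - 1) cycles before its queue drains.

def worst_case_residence_us(num_queues, cycle_len_us):
    return (num_queues - 1) * cycle_len_us
```

With the three queues and 10 µs cycles from the talk, the worst case is two cycles, i.e. 20 µs; adding a fourth queue to tolerate more jitter would raise the bound to 30 µs, which matches the earlier point that tolerating more jitter costs more queues.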
G
The jitter will be tolerated within this three-queue cyclic mechanism.
G
And I noticed that you also mentioned how it can be guaranteed when there are multiple flows. That is also the point of how to do the resource reservation, because the cycle capacity equals the link capacity multiplied by the cycle duration. So if the cycle capacity is full, no more flows can be admitted to that cycle.
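The capacity rule just stated (cycle capacity = link capacity × cycle duration, and a full cycle admits nothing more) can be written out as below; the integer-microsecond units and function names are my own illustrative choices.

```python
# Sketch of the stated rule: a cycle can carry at most
# link rate x cycle duration bits, and a new reservation is admitted
# only if it fits in what remains of that budget.

def cycle_capacity_bits(link_bps, cycle_len_us):
    """Bits one cycle can carry on the link (cycle length in whole us)."""
    return link_bps * cycle_len_us // 1_000_000

def can_admit(reserved_bits, new_bits, link_bps, cycle_len_us):
    """Admit the new reservation only if the cycle is not full."""
    return reserved_bits + new_bits <= cycle_capacity_bits(link_bps,
                                                           cycle_len_us)
```

For a 1 Gbit/s link with 10 µs cycles the budget is 10,000 bits per cycle, so a cycle already carrying 9,000 bits can still admit a 1,000-bit reservation, but not more.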
G
So that guarantees that when packets are put inside a queue, and when that queue becomes the output queue, all the packets inside it can be sent out in that cycle. It is guaranteed by this.
I
Yeah, my question was: is the cycle duration related to the flow specifications, for example the frame...
G
It's not related. The cycle is planned by each node beforehand, and then, when the traffic arrives, the traffic specification can be different from the cycle. In this example I just made it easy to understand: in each cycle there will be one packet. But based on the traffic specification it can be quite different, for example one packet every 10 cycles. It just depends on the traffic specification; it's not related to the cycle planning.
J
G
B
A
G
Yeah, okay, all these contents are public, so...
A
Oh, I assume as much. The other thing is, I saw what might have been patent application IDs here. If those are important to the draft, you might need to ensure that suitable IPR notices are filed with the IETF.
A
G
K
A
Yeah, please look into it. If those are important to what you're proposing, you should arrange that they be disclosed to the IETF.
G
A
C
A
Yes, if that's a patent application, you should arrange to file an IPR disclosure with the IETF.
G
It's okay! Actually, this is for the load balancing; it's not for the mechanism itself. But I will check.
A
I think so. Thank you very much; you've certainly raised a great series of questions, and there's been plenty of time.