From YouTube: IETF-DETNET-20230607-0000
Description: DETNET meeting session at IETF, 2023/06/07 0000
https://datatracker.ietf.org/meeting//proceedings/
A
I've pasted the agenda over in the note tool. It'd be helpful if a few people could go over to the note tool and take notes as the discussion goes along.
A
Okay, I appreciate everybody who has come to attend. This is the agenda, and we're going to go ahead and bash it a bit. Under the process-oriented topics, I believe, there is an initial test run evaluating the CQF queuing mechanism from TSN against the scaling requirements draft. So we'll include that in the discussion under item one, and under item two there is a detailed presentation on the time-slot mechanism.
A
Anybody else want to bash the agenda?
A
I'm hearing a lot of quiet; I hope people can hear me. Do you want to take the floor? I'll stop sharing, and you can go ahead and show us what you've done.
D
Looks like I may have some problem sharing the screen. It says permission denied.
D
I think I have some problem here. Since this is only a single-page one, how about I send it to you right now, then? Sure.
A
I'll make it work. The test out of China, basically, is going to be how fast email from China can be processed; we can have a real-time test.
A
And I suspect I see a slightly different screen, because the datatracker apparently thinks I'm in charge of this meeting. But there should be a link there where you can at least propose the slides, and then we'll see what happens.
D
Thanks. So this is a single page for a very initial CQF evaluation against the scaling requirements. The CQF here refers to the published IEEE standard, 802.1Qch-2017. I remember last time we somehow talked about it: it may be worthwhile to start by evaluating the original CQF. So this is what I tried to do yesterday, actually. This is still a draft, basically; I put in the requirements from the scaling requirements document.
D
There are basically eight requirements, and I think items six and eight are not directly relevant to the queuing, so I grayed them out. I checked the rest of the requirements one by one and gave some comments, so we can discuss them. The first one is to tolerate time asynchrony. The CQF grade here is a no, because in the original TSN requirement, time sync is mandatory.
D
In CQF, all the nodes need to rotate their transmission buffers, normally two buffers, according to one wall-clock time, so that's a no. Kindly note we are evaluating CQF here, not TSN as a whole. That's the first one; you can stop me anytime on any of the items.
D
Okay, item two is to support large single-hop propagation delay, and CQF here is a no, because in theory the propagation delay should be much less than the cycle interval time Tc in order to make the utilization practical. And the propagation latency can be large: normally one kilometer would take about five microseconds.
A
One kilometer takes five microseconds. So in that case, with only a few kilometers, we already get a significant propagation delay.
D
For example, ten or so microseconds; that would make Tc, the cycle interval time, extremely large. And we know that in CQF the end-to-end bounded latency is proportional to the cycle time Tc, so it would be hard to achieve the end-to-end bounded latency. So I put a no here for item two.
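The arithmetic behind this exchange can be sketched numerically. This is an editorial illustration only: the five-microseconds-per-kilometer figure comes from the discussion above, while the safety margin and the roughly (hops + 1) * Tc latency bound for two-buffer CQF are assumed parameters of the sketch, not values from the meeting.

```python
# Rough sketch of the propagation-delay argument above (illustrative only).
# Assumptions: the cycle time Tc must comfortably exceed the single-hop
# propagation delay, and the end-to-end latency bound grows with Tc roughly
# as (hops + 1) * Tc in the two-buffer CQF model.

PROPAGATION_US_PER_KM = 5.0  # ~5 microseconds per km, as stated above

def min_cycle_time_us(link_km, margin=2.0):
    """Cycle time must be at least the propagation delay times a safety margin."""
    return PROPAGATION_US_PER_KM * link_km * margin

def e2e_latency_bound_us(tc_us, hops):
    """Worst-case end-to-end latency bound, proportional to Tc."""
    return (hops + 1) * tc_us

tc = min_cycle_time_us(10)           # a 10 km hop forces Tc >= 100 us here
bound = e2e_latency_bound_us(tc, 8)  # 8 hops -> 900 us bound
print(tc, bound)
```

Making the links longer forces Tc up, and the latency bound grows with it, which is the "no" being argued for item two.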
D
I'll move on to item three, which is to accommodate higher link speed. This is kind of conditional, I think, so I put a partial here. The fundamental CQF, I think, was not designed for high link speeds; as far as I know, it was not tested for speeds higher than one gig.
D
In theory, if we have a high link speed, there is a background assumption that we want to achieve a tight end-to-end bounded latency with the higher-speed link, so again we want to make the cycle time smaller. But a higher link speed doesn't necessarily mean that the processing-latency variation or the clock accuracy is also better.
D
So there could be a relatively large time variation compared with the cycle time we can achieve at the higher speed. That makes it harder to determine, from the absolute arrival time, in which cycle a packet was sent from the upstream node. So I put a partial here. Then item four: the requirement says to be scalable to a large number of flows and to tolerate high utilization. I tried to split it in two.
D
The first part is to be scalable to a large number of flows. I put a partial here for CQF, because in the original CQF the transmission control is based on per-traffic-class gate control, which is quite scalable: normally there are only eight traffic classes, and a two-buffer CQF takes two of the traffic classes. So the transmission control is scalable.
D
At the same time, CQF requires, at the input port, stream gating, filtering and policing, sometimes called PSFP, and it needs to be applied either per stream or in some aggregated way. So it really depends on how the user tries to aggregate. If the user wants really fine-grained control, then the stream gating and filtering could be per stream, but normally that is not what we are trying to do.
D
So an aggregated approach is more plausible, and I put a partial here because it can be aggregated at the stream gate. The second part of item four is to tolerate high utilization. I put a no here, because in CQF not all of the cycle time is usable for the real data traffic; the utilization is constrained by the ratio of the dead time to the cycle time. So for smaller cycle times, for example tens of microseconds, it would be very hard to achieve high utilization with normal CQF. Then item five: prevent flow fluctuation from disrupting service. For this one I put a partial. If the flows are all compliant (when I say compliant, I basically mean that every single stream or flow, at the ingress point, always fits into the traffic pattern that has been agreed), that should be all right. For non-compliant flows, because CQF normally uses a very limited number of buffers, fundamentally only two, it will start to drop packets. For example, suppose a packet from flow one is destined for buffer one but misses that cycle.
D
If it misses the cycle of buffer one, the buffers rotate to buffer two, and buffer one has already started transmission; once a buffer starts transmission, we stop putting packets into it. So when I say drop, I mean putting packets into that buffer; basically that packet will be dropped. Actually, I'm not so sure about this part, because it sounds like item five may be somewhat relevant to item 4.1.
D
So anyway, I put a partial here. Then we skip item six and jump to item seven. The original text says to be scalable to a large number of hops with complex topology. When I tried to think about this, my evaluation doesn't really deal with anything involving complex topology, so I crossed out "with complex topology" here and only thought about the large number of hops. There is an implicit assumption that, with a large number of hops,
D
we still want to achieve a bounded end-to-end latency. So I put a no here, because in most cases two-buffer CQF normally supports at least a few hundred microseconds of cycle time: the buffers need to be sufficiently large, and there are only two of them, so they need to be large enough to absorb all the converged flows. At the same time, the number of hops is roughly the end-to-end latency divided by the cycle time.
D
So for a given end-to-end latency, the number of hops is limited by the cycle time. For example, if the number of hops gets really large, like 100 hops, it could be problematic, because supporting that would require the cycle time to be really small. So I put a no.
D
I think probably I should have put a partial here, but anyway, I put a no, because in some cases, when the number of hops is extremely large, it becomes impractical. At the same time, the jitter bound is okay: no matter how many hops there are, the jitter bound is almost always two times Tc. So that's basically what I drafted here.
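The hop-count constraint described above can be sketched numerically. This is an editorial illustration, assuming hops ≈ end-to-end latency budget / Tc and a jitter bound of 2 * Tc, exactly as stated in the discussion; the specific budget and cycle-time values are made up for the example.

```python
# Sketch of the item-seven argument: for a fixed end-to-end latency budget,
# the supportable hop count is roughly budget / Tc, while the jitter bound
# stays at about 2 * Tc regardless of the number of hops.

def max_hops(e2e_budget_us, tc_us):
    """Approximate hop count supportable within a latency budget."""
    return int(e2e_budget_us // tc_us)

def jitter_bound_us(tc_us):
    """CQF jitter bound, independent of hop count."""
    return 2 * tc_us

# With a few hundred microseconds of cycle time, a millisecond-level budget
# supports only a handful of hops; a one-second budget supports thousands.
print(max_hops(1_000, 200))      # 1 ms budget, 200 us cycle -> 5 hops
print(max_hops(1_000_000, 200))  # 1 s budget -> 5000 hops
print(jitter_bound_us(200))      # 400 us regardless of hop count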
G
It seems like you want to mention that there are only two queues and there is no extra queue for bursts, for example. But I'm not sure that's a problem for CQF, because in the CQF mechanism there is bandwidth reservation before the flow goes into the network, and there is also a traffic specification to limit the behavior of the flow. So, although there are only two queues for CQF, maybe it limits what kinds of flows can be allowed to use this mechanism; but once a flow is allowed, it means the mechanism can handle it. So I think it can prevent flows from disrupting service.
D
Thanks, yes.
D
Yeah, so for item five, what you said is also the reason I put a partial here; so yeah, you're right.
D
If we really want to use CQF, at the very beginning we will try to make the so-called over-provisioning large, so that if there are additional flows coming up, I can still try to fit them into the current buffers. In that case, I consider them all compliant flows, and for the compliant flows I think that's all fine.
G
Yeah, I think I understand your point. My concern is that, you know, even if we have some other enhanced mechanisms based on CQF,
G
I think there is still no approach to guarantee non-compliant flows. The concern is just that we cannot, yeah: if the flows' behavior is out of expectation, then although we have more queues, two or more, there is still some risk that that kind of flow cannot be handled, probably because there is no reservation beforehand.
H
Sorry, oh, and I forgot to get in.
E
Okay, thank you. I also have a question about the regulating aspect. I think the control plane has to do something about it: if the flow is within the admissible region and is admitted already, then the dynamic changes of flows joining and leaving have to be handled within the data plane. If that is the case, and the packet has to be dropped because of the buffer space, then it is quite unacceptable; I would say it is not acceptable.
H
Yeah, so I'm comparing with section 3.5 of the requirements document, which I think is what this is evaluated against, right? So maybe the only fundamental improvement on this otherwise very nice slide would be to put in the numbers from the requirements document, and the draft version, against which this is evaluated. So, hopefully this is 3.5.
H
I think the first two paragraphs are really about the burst-accumulation problem, and from that perspective, looking at all the different algorithms, I think we have seen a few that were arguing that we can deal with burst accumulation across multiple hops. To me, against the requirement, that would be kind of not met, because it does incur the additional problems.
H
I think CQF is fine with not causing burst accumulation across multiple hops, so in that respect I think it is fine. The non-compliant-flow stuff, I'm not sure how to match up. I think it's an interesting consideration in the evaluation, but I don't think we have text in 3.5 that it matches up against, in terms of how relevant we think non-compliant traffic is. Maybe that's something to take back to the requirements document.
H
So
I
think
that
that
would
be
it
right.
So
I
would
I
would
say
it's
it's
in
it's
in
full
compliance
to
what
I
can
read
from
3.5
and
the
other
aspect.
I
think.
If,
if
that
is
seen
as
something
useful
about
the
non-compliant
traffic,
then
I
think
we
should
try
to
see
if,
if
we
can
write
that
in
as
a
requirement
and
see
if
we
can
get
agreement
on
that,
because
I'm
not
sure
right
now
how
to
how
to
best
translate
that
does
that
make
sense.
D
D
Sorry
I
I
think
yeah
because
I'm
also
reading
rereading
3.5,
so
it
would
be
more.
It
will
be
more
clear
to
me
that
if
we
directly
use
something
like
the
burst
accumulation
in
as
the
exact
text
in
the
requirements,
so
so
so
what's
tallest
said
I
I
think
that's
does
make
sense
to
me.
Yeah.
A
I have a couple of other questions, most of which I think come down to what certain terms mean. I'll start with 4.2. What did you assume "high utilization" means? It sounded like you were assuming that DetNet traffic could be 100 percent of the link, in which case dead time is a problem.
D
Oh, I'm not sure I get your question.
A
When you did the evaluation, what meaning did you give to "high utilization"? Was it all of the link?
D
In my understanding, normally the DetNet flows would not take up the full link bandwidth. What people would normally like to do is reserve sufficiently large resources, for example half of a one-gig link, making 500 meg reserved for the DetNet flows. In that case, when I say utilization, it only refers to the usable portion of the bandwidth reserved for the DetNet flows.
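This notion of utilization relative to a reserved share of the link, reduced by per-cycle dead time, can be sketched as follows. This is an editorial illustration; the dead-time and cycle-time values are assumed parameters, not figures from the meeting.

```python
# Sketch: usable utilization of the DetNet-reserved share of a link, when a
# fraction of each CQF cycle is dead time that cannot carry data traffic.

def usable_utilization(reserved_fraction, dead_time_us, cycle_us):
    """Fraction of the whole link usable for DetNet data traffic."""
    usable_per_cycle = 1.0 - dead_time_us / cycle_us
    return reserved_fraction * usable_per_cycle

# With half the link reserved and 10 us of dead time per 50 us cycle,
# only 40% of the link actually carries DetNet data.
print(usable_utilization(0.5, 10, 50))
```

As the cycle time shrinks toward the dead time, the usable fraction collapses, which is the item 4.2 "no" argued earlier.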
H
So the possible utilization for DetNet even goes to zero at some link length, let's say two miles or so, right?
A
I think you're up in item two, under propagation latency, at the moment, Toerless.
H
Okay, how do we separate that from number two?
A
I believe number two has identified that CQF, as you explained, does not scale to large single-hop propagation latencies, due to the necessary relationship between propagation latency and cycle interval time. So, setting that aside for a moment, maybe the question is: assuming number two is not a problem, what does high utilization mean in 4.2?
H
Exactly, but I think that's then more a question against our requirements document, right?
H
Agreed. Let me try to bring up the text there and see if there is clarification in it, yeah.
H
So, I mean, we have these mechanisms that were proposed to operate on stochastic means as opposed to deterministic means, and those are the ones that to me were immediately visible as something that starts to have problems with high utilization, because effectively they create stochastic latency longer than the bounded latency; and the higher the utilization, the higher the probability that at some point in time you have to drop too many packets because they're too late.
H
So that's where, even without reading the text, I would know how I think to interpret high utilization. I really don't know how to interpret high utilization in CQF, with or without taking the propagation latency into account. So that's an interpretation issue. So now, what do I think about our text here?
D
So I think there might be another aspect. The large propagation delay of item two would be a big contributing factor to the high-utilization consideration, but even if we don't talk about item two, there could still be other aspects that affect high utilization, for example large time variation, caused either by clock inaccuracy or by processing-latency variation over a large-scale network.
D
There are many different kinds of nodes, and some of them have larger processing-latency variation, because at the incoming ports packets popping up from different line cards are arbitrated to a single output port and may experience different processing latency internally. So that's also a contributing factor to the high-utilization question, especially when the cycle time is small.
A
Yeah, so basically the suggestion is that you continue what you've already been doing. Maybe the thing to do is to focus on the circumstances under which high utilization is not possible, and to be clear that we're talking about high utilization of whatever fraction of the link is dedicated to DetNet.
G
Yeah, I want to add one more point about link utilization, because in the CQF mechanism there is no difference between cycles. So if one flow wants to make a reservation along the path, it has to reserve a time slot in each cycle. That means that, although the flow is not very large, it has to occupy some space in every cycle, which brings some bandwidth waste.
G
If, for example, we have some advanced mechanism like CSQF, as I have mentioned, there is a difference between cycles, so if the flow is quite small, maybe it can just occupy some space in, for example, every five cycles. That allows the same network to carry more flows. So maybe that is another aspect of link utilization.
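The per-cycle reservation overhead being contrasted here can be sketched as follows. This is an editorial illustration; the slot size, cycle time, and every-fifth-cycle spacing are assumed values chosen to match the example in the discussion.

```python
# Sketch: average bandwidth reserved for a small flow under plain CQF
# (a slot in every cycle) versus a CSQF-style schedule where cycles differ
# and a slot is reserved only in every n-th cycle.

def reserved_bps(slot_bytes, cycle_us, every_n_cycles=1):
    """Average reserved bandwidth in bits per second."""
    return slot_bytes * 8 / (cycle_us * 1e-6 * every_n_cycles)

slot, cycle = 1500, 50               # one 1500-byte slot, 50 us cycles
cqf = reserved_bps(slot, cycle)      # reserved in every cycle
csqf = reserved_bps(slot, cycle, 5)  # reserved in every 5th cycle
print(cqf, csqf)  # plain CQF reserves 5x the bandwidth for the same flow
```

For a flow that only needs one slot every few cycles, forcing a slot into every cycle wastes the difference, which is the bandwidth-waste point being made.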
H
Yeah, so I actually like that one as well; I need to take a note on how we capture it. I had more problems, David, with what you were saying, in terms of: is it fine
H
if there's some overlap between, you know, the propagation latency and the high utilization? I mean, if my links are basically so long that I can only use 10 percent of the link bandwidth for DetNet traffic, and I can use that 10 percent, is that still high utilization?
A
Oh
okay,
maybe
maybe
a
better
way
to
approach
that
would
be
to
point
out
that
large
single
propagation
latency,
in
particular,
very
large
variation
in
single
propagation
latency,
makes
high
utilization
difficult
right.
D
What Song mentioned just now is quite consistent with the last paragraph of the current 4.4, sorry, 3.4. So it seems to me that maybe "keep over-provisioning limited" is a better wording than "utilization" here.
D
To me, it looks like link utilization in the current requirements document covers different parts. One part is actually the over-provisioning issue, and on the other hand it overlaps with item two, because the large propagation delay is a big contributing factor to high utilization. That's just my observation, so, if possible, maybe we could revise the text of the document.
D
I mean the requirements document, to make it clearer: for example, put the over-provisioning issue as 4.2 and move the high-utilization part into a separate paragraph of item two. That could be one way out, but...
A
Okay,
so
item
seven,
assuming
we're
having
fun
with
trying
to
understand
what
does
high
utilization
mean
in
4.2
item
seven
I
think
we
have
fun
understanding
scalable,
it's
easy
to
exercise.
I
understood
you
to
do
was
to
hold
the
required
end-to-end,
latency,
constant
and
scale
up
number
of
hops,
and
that's
certainly
when
we
do
it
another
way
to
do,
it
would
be
to
say
well
no
I'm
going
to
have
an
end-to-end
latency
that
grows
as
my
number
of
hops
grow
and
I
guess.
D
For a given latency; so I guess the assumption is that the end-to-end bounded latency is given, and we compare, for example, a number of hops of 10 with a number of hops of 100, and ask whether we can still meet the required bounded end-to-end latency. Certainly, if the required bounded end-to-end latency is very big, for example on the order of a second or two,
D
then it is fine, I guess. Only when the required end-to-end bounded latency is, say, at the millisecond level does the number of hops become a factor to consider: whether it is scalable to that number of hops while still achieving a quite tightly required end-to-end latency. That's my assumption, I guess. Okay.
A
I just called up the requirements draft, and 3.7 is very, very short, so everything we've talked about and more could be included under what "scalable" means. So perhaps we need some expansion of the text in 3.7. If I take a fixed end-to-end latency and scale up the number of hops, sooner or later the time allocated to each hop is going to be too small.
A
There's going to be some minimum propagation delay in there, so that makes me wonder whether "scalable", or rather "scale", may need to refer not only to the number of hops but also to the use of end-to-end latency bounds that are plausible for that number of hops. So if you scale up the number of hops, perhaps the end-to-end latency bound also has to scale up slightly. Anyhow, Ping, over to you on the scalability question.
E
Oh yeah, thank you. Regarding number seven, I agree with David that a large number of hops means a large DetNet domain, so it deals with scalability. I think the scalability of CQF depends on the feasibility of slot scheduling.
A
Okay. And, for clarity, none of this is intended to criticize what you've done. Somebody had to go figure out what's going on here, so thank you for getting started.
D
I just want to note that there could be some misunderstanding when we read the paragraph, because item seven only says to be scalable to the large number of hops. Actually, 4.1 is the one we most likely care more about, which is just being scalable to the large number of flows.
G
I think I understand the concern from Chino, and I agree that scalability doesn't equal scale. But still, because this is an evaluation of the CQF mechanism, there are limitations in CQF, for example how many flows it can provide a latency guarantee for and how many hops it can provide to the path calculation.
G
If we have done something more based on CQF that can make it better, I think it can still benefit the point of scalability. There is a limit for sure on how many flows the network can carry and how many hops can meet the end-to-end latency; there will be a limit for all mechanisms, I think. But the evaluation of CQF, I think, provides the comparison reference for some new mechanisms.
A
Yeah, that might be good to add to the requirements draft. One of the themes here is: yes, we're talking about CQF, but what we're really talking about is how we go about evaluating a mechanism against the requirements draft, and once we figure out how to do it, we should do it the same way for all the new mechanisms. So better to do it here and learn the lessons with something that exists and is understood.
I
On point seven, I think for CQF maybe it's a partial, because CQF can guarantee the jitter (it is two times Tc) but cannot guarantee the end-to-end latency, I think, because that will be N multiplied by Tc. So if the requirement is to be scalable to a large number of hops to achieve end-to-end latency, it will be no; but if it is to achieve end-to-end jitter, it will be yes. So I think requirement seven could be clarified into two parts, end-to-end latency and end-to-end jitter, and CQF may be a partial here.
A
Yeah, in particular I think it would be very useful for the requirements draft to indicate whether the minimum end-to-end latency scaling up with the number of hops is acceptable or not, or whether we're talking about a large number of hops against a fixed end-to-end latency, which is sort of where the discussion between you and me started.
A
Okay. So, Izu, I guess you'll try to refine this and suggest changes to the requirements draft based on this discussion.
H
Of course, on the requirements draft, right? So it would be good if, on the mailing list, we can repeat what we see as issues with the mechanisms, and then, instead of trying to generalize, we can just simply start adding them as examples to the appropriate sections.
H
So I think what I heard from Izu here is that the adding up of the dead intervals on a per-hop basis is a factor that impacts scaling across multiple hops.
H
That's how I would describe that challenge to requirement seven, as one example. Whether we agree that it should be in seven, or that it is inappropriate for number seven, is a different point; but if we start collecting the challenges that we see with CQF, or with other mechanisms, against the individual points, I think that helps to clarify.
C
Okay; so, to the design team: we just wanted to submit a draft of the whole timeslot mechanism, but...
A
Okay. Thank you very much for doing this; you gave us something that we definitely had some very interesting discussions about.
A
Hang on a minute; I am not the fastest at navigating through Meetecho. Your slides are coming in a minute.
A
All right, I think we're going to have to share this. Do you want to share your own slides?
A
Okay, I'll share them. There is somewhere in here where I can get Meetecho to share the slides and allow you to advance them; I've seen people do it, but I don't seem to be able to find it, so we'll do it this way.
C
Okay, this proposal is inspired by CSQF. We wanted to provide more flexible timeslot-based resource multiplexing, and to improve the scale of services the network can support. As we know, the two-buffer (or three-buffer) mode of CQF keeps the data plane simple, but this comes at the cost of reducing the service scale or bringing complexity to the resource reservation. In the figure below, we can see multiple sources that may each release a burst.
C
One must ensure that the sum of the bursts released during a single cycle by all the sources is less than the transmission capacity of a single cycle.
In this case, the service scale that can be supported is small. Another method is to carefully select the release times, for example T1 to Tn, to ensure that burst one, burst two and burst n arrive at the same transit node at different times.
C
There are multiple methods for the mapping relationship between the incoming timeslot and the outgoing sending timeslot. This page gives two methods; there may be some others.
C
There may be multiple sub-bursts in the same scheduling period, and each may consume a different timeslot. First, allocate the sub-burst a fixed outgoing timeslot, G2 plus O2, according to its fixed aligned position within the original scheduling period; then the node maps it to a fixed outgoing sending timeslot J1 at the first transit node, and continues to allocate a fixed outgoing timeslot, J1 plus O1, at the next transit node, and so on.
C
The arrival position within the original scheduling period may have some fluctuation, but it must be limited. This position should be considered during the resource reservation, to ensure that all the packets that arrive within the allowed deviation can be successfully inserted into the queue.
C
Instead, if the arrival position is not fixed, then it is difficult to allocate timeslots. In this case, we have two choices: the first is that, at the network entry, an explicit regulator can be placed before the timeslot scheduler to get a fixed arrival position; or we can take the second option, to enlarge the scheduling period, divided by the number of ports. But the second obviously comes with timeslots containing different entries.
C
So that's the aspect of how to access the timeslot of the port on the data plane: the data packet carries the timeslot information.
C
In the following figure, we can see the ideal arrival time of the burst. If the burst falls into the single timeslot G but the arrival is too late, then the latest arrival that can be tolerated is before entering the outgoing timeslot G plus O minus one.
C
Because even if the same burst has some deviation in arrival time at the original node, a regulator may be placed before the timeslot scheduler to align each burst to a fixed arrival position within the scheduling period.
C
On later pages, we will see a single node's incoming timeslots mapped to outgoing timeslots; in that case, it doesn't need the regulator to fix the arrival position.
C
That means we have a constant O equal to the result of N plus I minus G, and the mapping can express, for each incoming timeslot, which outgoing sending timeslot it refers to.
C
The following table shows a comparison of the different options. The first possibility is that, if O is flexible in the range from one to N minus one, then it is a local timeslot:
C
that means the timeslot is a local number at each node. The second option is that, if O is the constant N plus I minus G, then it is a global timeslot; that means the timeslot numbering is the same across all the nodes. And the last one: if O is the constant one, that means we always reserve the next timeslot right after the incoming one, so this actually needs no mapping.
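The three mapping options in this comparison can be sketched as follows. This is an editorial illustration of the table as I understood it from the (heavily garbled) audio: N is an assumed number of timeslots per scheduling period, and all slot values are made up for the example.

```python
# Sketch of the three timeslot-mapping options from the comparison table:
# g is the incoming timeslot, i a slot reserved on this node, N the assumed
# number of timeslots per scheduling period.

N = 8  # assumed slots per scheduling period

def local_mapping(g, o):
    """Option 1: o is flexible per node (1..N-1); slot numbers are local."""
    return (g + o) % N

def global_mapping(g, i):
    """Option 2: o is the constant N + i - g, giving a global slot number."""
    o = N + i - g
    return (g + o) % N  # always lands on the reserved slot i

def next_slot_mapping(g):
    """Option 3: o == 1, always reserve the very next timeslot (no mapping)."""
    return (g + 1) % N

print(global_mapping(5, 2))  # -> 2: outgoing slot equals the reserved i
print(next_slot_mapping(7))  # -> 0: wraps into the next period
```

The trade-off being described is between flexibility (option 1), a network-wide slot numbering (option 2), and needing no per-node mapping state at all (option 3).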
C
Okay, for global time slots, the key feature is that, in the control plane, the constant outgoing time slot I is reserved for the incoming transmission, with a synchronous mapping from I to I. On the data plane, a packet may actually be sent in a nearby earlier time slot X, so the mapping goes from I to X. This one has work-conserving behavior. Now, a possible implementation.
C
A possible implementation is that, in the common case, a packet tagged with I is inserted into the queue for the time slot I, so that there is no possibility of overflow, and it is easy to check: if the queue is not empty, it will be served.
C
F
H
Thank you very much; very interesting, good details also on the scheduling and everything. What's the difference versus CSQF as presented by Chitron two meetings ago?
C
So, for the other aspects, I think the difference may be that, as you can see, the computation of the outgoing time slot of the packet is done by the network device, and not only by the controller.
C
H
C
G
Thank you for raising this point. Actually, that is also my question, because for queuing mechanisms the core questions for the authors to answer are how to guarantee the bounded latency and how to calculate it, and how to combine the queuing mechanisms with the resource reservation mechanisms to provide the bounded latency and bounded jitter. So I think, yeah.
G
It
is
also
not
very
clear
to
me
what
is
the
difference
of
these
mechanisms
and
disaster
how
we
introduced
and
if
there
is
any
I
think
maybe
some
detailed
comparison
is
aren't
necessary
and.
B
G
C
Okay, indeed, I'll give the latency formula slide.
C
G
Actually, as I've mentioned in my presentation, I think it is quite clear how CSQF does this. The basic idea of CSQF is that on every device there are cycles, and the controller, which has the global view, can calculate the time slots that are requested for a particular DetNet stream. After the calculation is done, it can allocate the result via segment routing, in the label stack or the SRH.
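The CSQF idea just described, a controller with a global view assigning one sending cycle per hop and attaching the result to the packet, can be sketched roughly as below. This is purely illustrative: the function name, parameters, and tuple encoding are my own; in reality the result rides in an MPLS label stack or an SRv6 SRH.

```python
# Illustrative sketch of the CSQF idea described above: a controller
# with a global view assigns, for each hop on the path, the cycle in
# which the packet must be forwarded, and attaches the result to the
# packet as an ordered list (in reality an MPLS label stack or an SRv6
# SRH; this tuple encoding is purely illustrative).

def allocate_cycles(path, start_cycle, per_hop_shift, cycles_per_hyperperiod):
    """Assign one sending cycle per hop along the path."""
    segments = []
    cycle = start_cycle
    for hop in path:
        segments.append((hop, cycle % cycles_per_hyperperiod))
        cycle += per_hop_shift
    return segments
```

For example, with a hyperperiod of 4 cycles and a shift of 1 cycle per hop, a stream entering R1 in cycle 2 would carry the list [("R1", 2), ("R2", 3), ("R3", 0)].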
G
I think I have given a quite clear presentation, or you can find some details in my draft. But as for your work, I think it is still a little confusing, because I think it is just like what I have done. So I think it's better to clarify what more it does, or what the problem with CSQF is, why we want to solve it, and how to solve it. I think that is very crucial from the beginning.
C
A good question: okay, just according to the presentation.
G
Because, also, as the author of CSQF, I am happy to see some follow-up works, but from the beginning I think you have to understand what has been done already and then give your proposal. If you just repeat it, I think it is meaningless.
C
I would just answer the question: I don't see any more information about the resource reservation in your presentation before.
G
Actually, I have given a very detailed reference on that point. I have given the paper, and there is a very, very detailed mechanism for how to calculate the time slot for your specific stream; it's quite clear. And I think you have also asked some questions on the mailing list; I think it is also clear to you, so I'm a little confused.
G
Back to the beginning of the question: if you have found, okay, there is some limitation, and you can give further enhancements based on the existing work, I think that's great and we can discuss it. But if you just want to repeat it, I think it is not necessary to have a new document about similar mechanisms.
C
A
This work has done something new, and I think it would be good for the draft to include a summary of what distinguishes it from the CSQF draft.
A
F
C
D
If so, I didn't find it used as a reference in the references section of the draft. That's the first thing. The second thing is, it looks like you are still, more or less, referring to CSQF in your slides.
D
So my question is how it is relevant to UBS or CQF; which one is more relevant? And the fundamental mathematical end-to-end bounded latency calculation: from which current standard, proposal, or paper can it be retrieved?
F
D
F
E
Oh great, yeah. Xiaofu, thanks for the presentation, but I think you have to clarify the benefit of this approach compared to the other, you know, slot-based schedulers. I first thought the benefit is its flexibility, but as we see on this page regarding the admission control, I think this is very complicated.
E
I'm not sure what the state of each sub-burst means. You know, maintaining the state of the flow itself is not scalable; but maintaining the sub-burst state, is that scalable, do you think? Thank you.
C
The resource reservation, even for the same service flow, is per sub-burst; again, a flow can contain multiple bursts.
C
So it is possible for each of the sub-bursts to use a different kind of queue, and that is a different stage for the global optimization.
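The per-sub-burst admission control being discussed here could be sketched as below. This is my own construction, not from the draft: the function, the byte-budget model, and the slot capacity are all assumptions, shown only to illustrate keeping state per slot rather than per flow.

```python
# Illustrative sketch (my own construction, not from the draft) of
# admission control at sub-burst granularity: each outgoing time slot
# has a byte budget, and a sub-burst is admitted only if its target
# slot still has room.

def admit(reserved, slot, burst_bytes, slot_capacity):
    """Try to reserve burst_bytes in slot; return True if admitted."""
    used = reserved.get(slot, 0)
    if used + burst_bytes > slot_capacity:
        return False
    reserved[slot] = used + burst_bytes
    return True
```

With a 1500-byte slot budget, two 900-byte sub-bursts of the same flow cannot share one slot; the second must target a different slot, which is the scalability trade-off being questioned.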
C
G
About, maybe, the difference between this document and CSQF: I noticed that you mentioned that the flexibility is the benefit you want this document to bring. I think it's a good point, so maybe my suggestion, my personal suggestion, is to add more descriptions about how to make it flexible based on the existing design, to solve more problems: for example, how to handle newly arrived flows, or how to handle the flows without resource reservation.
A
Okay, in that case, I think we might be done. I'm looking forward to some list discussion of some of the things we discovered about the requirements draft and, hopefully, a revised version of the requirements draft sometime soon.
H
Just wondering, David, about logistics: do we have all the prior stuff up there, or are you still missing slides or something from anyone? Everything's...