From YouTube: IETF-DETNET-20230912-1200
Description
DETNET interim meeting session
2023/09/12 1200
A
Greetings, everyone. We'll wait a few minutes for other people to join.
A
Appreciate you accommodating this time slot. There's no perfect time slot for this meeting.
A
Let's go ahead and get started. This is the Note Well slide; it needs to be shown at the start of every IETF meeting. You're expected to understand it, and the policies and procedures and everything on this slide apply to all IETF participation.
A
Okay,
so
let's
stop
that
now
we
need
to
go
now.
We
need
to
go
bash
the
agenda,
because
today's
agenda
definitely
needs
some
bashing.
A
This
is
what
we're
gonna
do
right
now
we
are
in
the
middle
of
of
agenda
bashing
and
I
was
optimistically
hoping
that
that
would
be
only
five
would
be
only
five
minutes.
A
I don't see Shu Sung on. Go ahead, Charlie, just go ahead and speak up on that.
B
Yeah, so I sent email yesterday and uploaded two revisions of this slide deck, and that's for CSQF, TCQF and gLBF, and the three of us did edit and review the slides.
A
So
we've
got
them,
I'm,
not
sure
I
saw
the
email,
I'm
not
sure.
I
was
looking
in
the
right
place,
though.
A
Your
query:
okay,
if
you
sent
a
detonate
mailing
list,
what
probably
happened
is
it
was,
after
the
last
time,
I
looked
at
it
yesterday
and
it's
sitting
over
in
my
inbox
and
I'll:
go
find
it
okay,.
A
Good
hang
on
a
minute:
what
happens
if
I
look
over
here,
foreign.
A
I believe I have slides for these. Okay, a quick check here.
A
I,
don't
think
I
see
Antoine.
Does
anybody
have
slides
for
this
one?
The
boundary
boundary
delay
cueing.
A
Okay, let me say a few things about what we're sort of trying to do today, and I'm happy to take input, and if everything down here slips by five minutes, I think we're still okay.
A
We've
gone
through
the
requirements:
draft
we've
done
eval
some
initial
evaluations
of
the
TSN
techniques,
including
the
ecqf
that
is
currently
in
progress
there
to
to
check
that.
The
that
we
can
actually
use
requirements
wrap
to
do
the
evaluations.
So
the
goal
for
today
is
simply
to
put
up
and
discuss
the
initial
evaluations
of
the
new
mechanisms,
I'm
not
going
to
try
to
go
scoreboarding
or
summing.
Summing
up
of
things.
A
The
reminder
to
presenters
is:
please
focus
on
the
two
scalability
requirements
that
we
really
don't
have
good
solutions
for
in
the
existing
TSN
mechanisms.
A
Either I'm very good or nobody's awake; it's probably some combination of the two. There are people in time zones where I'm sure they're awake. All right, for the order of presentations I went to random.org and pulled a random permutation; this is what happened. Sure, let's go ahead.
B
Get
past
the
agenda
bashing
so,
as
I
said
in
the
email
I
in
in
the
process
of
evaluating
the
three
mechanisms
in
my
slides,
I
kind
of
ended
up
in
revisiting
some
of
the
requirements.
I
already
discussed
this
with
the
co-authors
of
the
requirement
draft,
but
in
any
case,
I've
got
a
section
of
kind
of
the
Revisited
requirements
that
I
evaluated
against
and
then
also
if
we
have
time
for
that,
some
concern
about
the
existing
evaluation
of
Etc.
A
If we have time down at the end, we can do that. I do appreciate, and thank you for, looking at the requirements in more detail, trying to do evaluations and finding opportunities for improvement.
B
It's
I
I
think
it's
it's
hard
not
to
to
to
show
those
in
the
beginning,
even
though
we
can
skip
through
them
quickly,
but
otherwise
the
the
evaluation
template
that
I'm
showing
is
going
to
look
slightly
different,
and
so
it's
it's.
It's
rather
easier
to
have
a
quick
explanation
on
front,
but
given
how
I'm
doing
a
joint
evaluation
of
tcqf
and
csqf
I
think
we're
going
to
save
a
lot
of
time
there.
A
Okay,
let's,
let's
stick
with
the
rough
plan,
which
is
time
slot
per
draft.
If
the
evaluation
template
is
close
to
what
we've
got,
that's
fine
I
mean
today's
goal
is
discussion.
We
can
drive
towards
consistency
in
the
in
in
the
next
meeting.
C
...the maximum packet length, service rate, and number of hops of the flow, and L_max and R are the maximum packet length and capacity of a link. So if you understand this equation, or the mathematical representation, almost all the requirements in the requirements document can be understood.
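The quantities named above (per-flow maximum packet length, service rate, hop count, and per-link maximum packet length and capacity) fit the usual shape of a rate-based per-hop latency bound. A minimal sketch, assuming an illustrative bound of the form h * (l_max/r + L_max/R); the exact C-SCORE expression lives in the draft, and the function name and symbols here are assumptions, not the draft's formula:

```python
# Illustrative rate-based end-to-end latency bound, in the spirit of
# the terms described in the talk (NOT the exact C-SCORE formula).
def latency_bound(hops, l_max_flow, r_flow, L_max_link, R_link):
    """hops: hop count of the flow
    l_max_flow: maximum packet length of the flow (bits)
    r_flow: service rate allocated to the flow (bit/s)
    L_max_link: maximum packet length on a link (bits)
    R_link: link capacity (bit/s)
    Each hop contributes one flow-packet serialization at the flow's
    rate plus one maximum-size-packet transmission at link speed."""
    per_hop = l_max_flow / r_flow + L_max_link / R_link
    return hops * per_hop

# Example: 5 hops, 1500-byte packets, 10 Mb/s flow rate, 10 Gb/s links.
bound_s = latency_bound(5, 1500 * 8, 10e6, 1500 * 8, 10e9)
```

The point of the sketch is only that the bound depends on the flow's own parameters and per-link constants, not on the number or behavior of other flows.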
C
For that question I recommend reading the draft first. Yes, it supports large single-hop propagation latency. The propagation delay itself is a factor of the time difference between the two ends, and it can simply be added to the delay bound function and to the end-to-end latency bound expression. The above expression doesn't include the propagation delay, but it can just be simply added.
C
For 3.3, accommodate the higher link speed: it is partial. As you all know, C-SCORE requires a PIFO queue; it has to be possible to sort the packets according to the priority, or the so-called finish time. So a PIFO queue should be supported, but about 600 gigabit Ethernet can be supported.
C
However, utilization has to be less than 100 percent; that is assumed to be given. Yes, below 100 percent the utilization level is not a problem.
C
Is it scalable, for 3.8? C-SCORE itself does not provide multiple mechanisms, but it copes well with the other asynchronous solutions, such as TSN.
C
Yeah, this is the reason I claim that 600 gigabit throughput is possible with about a 2.5 gigahertz ASIC. The paper I'm referring to, whose first author is Bhagwan, was presented in 2000 at INFOCOM, and the authors are from the University of California, San Diego.
C
Info.Com
was
at
that
time
around
was
the
best
conference
and
it
was
very
hard
to
get
hard
to
get
accepted
from
that
conference.
I
believe
anyway,
this
paper,
such
as
so-called
pipeline
the
Heap
or
P
Heat,
using
that
technique,
one
in
queue
and
DQ
operation.
So
there
are
three
operations.
The
first
one
is
in
queue.
The
second
is
DQ
and
the
third
is
into
ndq
and
using
pipeline.
The
Heap
combined
in
Q
and
DQ
can
be
done
together.
C
Yeah
and
the
next
slide
is
yeah
I've
shown
you
this
light,
like
that's
three
or
four
times
so
far,
oh,
but
any
for
some
reason.
The
equation
has
come,
I
should
have
given
you
a
PDF
file
anyway,
these
latency
bound
expression
is
similar
to
the
one
I
have
shown
you
in
the
first
slide.
So
I
won't
go
into
detail,
but
I'm
I'm,
trying
to
emphasize
that
the
the
z-score
has
very
concrete
mathematical
background
and
all
the
proofs
and
so
on
so
yeah.
C
It
gives
the
definite
entity
and
the
latency
balance
expression
and
based
on
this
expression,
I,
can
claim
that
all
the
scalability
scalability
issues
can
be
fulfilled.
Thank.
A
Okay, if there are no further questions, then I guess we carry on. Deadline based forwarding is next. Let me go find that slide deck; I think I know where it is, hang on.
A
All
right,
this
is
going
to
be
done
a
little
bit
differently.
Meat,
echo
says
it
once
apparently
wants
to
use
one
of
my
browser
windows
to
do
this
so
we'll
do
it.
This
way.
A
Is
that
the
right
one
to
start
on?
Yes,
it
is
xiaofu,
go
ahead.
A
Okay, let me just check whether I've... maybe, yeah, hang on. Okay, good, that's up, and yes, I'll get the timer started. Thank you. Okay, I told you it was going to take a few minutes of slide fumbling. It did. Go for it.
F
Okay, I will continue. I assume it's easy to share the screen. For the first issue: yes, according to the requirements, we think that we need only frequency synchronization.
A
All right, with apologies for the careless slide fumbling. I think you said that all three are in this deck, and it looks like the random selection put them in order, so I guess you can do them in whatever order you like, since they're all in one deck.
A
I will run the 10 minute clock per draft, and if we get ahead of schedule, that's even better; we can give Xiaofu more time to get his audio working. Hang on, let me go start over; I'm busy talking and taking your time. Okay, go.
B
Wait a second; please go back to the main slide.
A
Oh
wait:
oh
wait,
I
know
what
the
problem
is.
I
hit
the
wrong
button,
my
bad
all
right,
let's
see
if
I
just
all
right.
Okay,
this
one
is
one!
You
want
right.
Yes
right,
sorry
I
hit
the
wrong
button
pilot
error
over
here,
wow
and.
B
So we added to our evaluation a column to indicate whether the requirement is something that needs to be done in a mechanism on every hop, or which can be satisfied by doing it only on the first hop. And then we evaluated the 3.1 sub-items, 3.1.1 to 3.1.4, individually, because we saw that, especially compared to the other pre-existing TSN algorithms, that's where the differences are, and so it's much clearer.
B
By
having
that
listed
individually,
then
I
had
a
large
issue
in
trying
to
see
how
to
evaluate
3.1.4,
because
it
has
I
think
two
good
requirements,
but
then
in
implication,
which
I
don't
agree
to
right.
So
so,
first
of
all,
it
talks
about
the
no
clock,
Essence
chronic
city-
that's
already
covered
I-
think
in
313.
B
So
the
really
good
requirement
that
this
point
implies
is
the
support
for
aperiodic
flows
and
that's
what
I
basically
evaluated
against.
But
what
it
says
is
that
this
support
for
aperotic
flows
can
only
be
done
with
per
Hub
asynchronous
mechanisms
and
I.
Don't
think
this
is
true.
We
because
we
can
equally
support
asynchronous
flows
in
tcqfcsqf
or
even
cqf
ecqf,
by
handling
the
burst,
either
through
over
provisioning
or
through
adding
the
latency
on
the
Ingress
and
therefore
shaping
the
burst
right.
B
So
we
could
stretch
out
the
the
burst
of
flows.
We
receive
over
500
microsecond
window
on
Ingress
and
therefore
increase
the
utilization
that
we
can
get
in
our
synchronous
mechanisms
by
a
significant
factor.
So
it's
not
not
necessary
to
use
the
asynchronous
mechanism
like
RFC,
2211
qcr,
which
is
TSN,
ATS
or
I,
would
assume
also
a
c-score
because
the
the
problem
and
glbf
my
my
own
LG
lbf
as
well
right
because
they
natively
come
at
the
price
of.
B
If
you
want
to
cover
the
bursts,
then
you
need
to
suffer
the
large
per
hop
buffers
and
therefore
the
increased
path
along
latency.
So
that
is
not
necessarily
a
benefit
by
itself.
So
then
I
added
a
requirement
which
I
think
is
a
quite
significant,
which
is
the
evaluation
of
single
Hub
propagation
Jitter
support
and
for
node
processing
Jitter
at
larger
than
100
gigabit
speeds
in
large
router
architectures.
That
is
significant,
and
then
we
also
now
are
getting
raw.
B
The
the
the
mobile
interfaces,
where
re-transmission,
for
example,
introduces
perhap
Jitter
length
deviation,
I
talked
about
that.
So
I
feel
that
that
is
a
an
important
evaluation
criteria
to
to
check
the
algorithms
against
and,
finally,
which
I
think
in
the
same
way
as
we
did
three
four
in
parenthesis.
One
and
two
in
early
evaluation,
I
added
three,
eight
one
which
evaluates
the
Jitter
on
a
per
hop
basis,
and
so
so
those
are
the
changes
that
I
did
next
slide
is
adjust
the
template.
B
So
if
others
want
to
use
it
as
well
and
then
next
slide,
just
for
a
reference
is
how
we
evaluate
it.
So,
for
example,
the
title
Jitter
means
that
the
end
to
end
Jitter
is
independent
of
the
number
of
hops
381
and
likewise
the
criteria
that
are
exactly
used
for
evaluating
the
other
points.
I
think
we
can
go
back
for
this
as
reference
later
on
next
slide.
B
Okay,
so
common
evaluation,
because
they're
pretty
much
the
same
fundamental
mechanism
and
we
only
got
differences
relevant
to
the
evaluation
on
one
point,
so
the
difference
in
Behavior
are
really
only
how
the
mechanisms
integrate
with
segment
routing
csqf
depends
on
segment
routing
and
gets
the
benefit
of
more
flexible
cycle
assignment
and
in
tcqf
we
are
independent
of
segment
routing,
but
practically
speaking,
I'd
very
much
expect
that
any
large-scale
networks
will
also
want
to
use
segment
routing
anyhow,
but
we
can
use
it
with
mplstc
and
ipdscp.
So
that
was
just
the
technical
background.
A
Okay,
hang
on
a
minute
hit
that
one
there
we
go.
A
Go
back
I'm.
Sorry
tell
me
the
title
of
slide
you
want
to
be
on.
It
looks
like
we've
got
some
propagation
latency
here.
Oh.
B
Wow,
okay
I
want
to
have
the
the
second
of
the
two
comparison
exactly
this.
Is
it
okay.
B
Yeah,
sorry,
there
is
large
latency,
no,
no
it's
okay!
There
is
no
bounded
latency
need
Echo.
We
need
to
fix
that
yeah.
So
this
is.
This
is
also
putting
in
the
comparison
as
I
Revisited
it
for
cqf
and
ecqf,
and
we
can
take
the
discussion
of
that
offline,
but
it
was
meant
to
make
it
easier
to
understand
why
we
feel
that
tcqf
csqf
are
significant
improvement
over
especially
also
ecqf,
so
the
first
one,
three
one
one
asynchronous
across
a
TSN
subdomain.
So
we
think
that
is
an
Ingress
function.
B
So
in
tcqf
we
Define
such
an
Ingress
function
that
would
be
equally
applicable
to
csqf.
So
it's
just
not
repeated
in
that
draft
yet
and
I
would
assume
something.
Similar
is
possible
in
TSN,
so
I
put
the
yes
into
quote,
although
I'm
not
quite
sure
exactly
if
they
have
defined
such
a
function.
B
So
there
should
maybe
be
a
question
mark
then
the
second
one,
three
one,
two
tolerate
clock,
wonder
and
Jitter
and
I
think
that
is
also
one
of
the
main
benefits
over
ecqf
because
in
in
both
cqf
and
ecuf,
by
not
having
the
cycle
metadata
in
the
packet
header,
you're.
B
Still,
relying
on
the
arrival
clock
time
to
assign
a
cycle
for
the
packet,
and
that
means
you
either
need
to
start
doing
a
window
of
that
time
or
you're
going
to
get
into
problems
with
any
type
of
encounter,
Jitter
and
wonder,
and
so
because
we
have
the
cycle
number
in
the
packet.
We
can
support
Jitter
and
wonder
within
the
limits
of
cycle
time.
B
So
if
we
have
four
Cycles,
we
can
support
up
to
one
cycle
time
worth
of
Jitter
and
wonder
short
and
long
term
drift
of
clock
against
each
other
three
one,
three,
no
full
time
synchronization
required.
So
that
is,
you
know
in
tckfcsqf
partial
instead
of
no
for
the
TSN
Solutions,
and
so
we
defined
a
yes.
B
There
is
something
where,
where
I
can
have
unlimited
Jitter
and
wonder-
and
obviously
that's
not
true,
because
we
do
need
clock
synchronization
with
limited
Jitter
and
wonder,
but
we
do
require
a
90
lower
accuracy
in
the
clock
synchronization
than
we
need
in
TSN
and
in
TSN.
That
would
need
to
go
up
linearly
with
a
speed
as
well.
B
So
we
feel
also
from
experiment
with
large-scale
actual
deployment
that
this
reduced
accuracy
makes
it
quite
feasible.
So
as
as
far
as
understanding
the
requirement
against
low
clock
synchronization
to
be
a
problem
of
deployment
of
clock
synchronization,
we
don't
see
that
there
is
a
problem
with
the
deployment
of
that
reduced
accuracy,
clock
synchronization
in
our
Solutions
314
support
for
aperotic
flows
and,
as
I
said,
that
is
yes,
but
it
would
occur
on
the
Ingress
routers.
B
So
we
would
need
in
the
Ingress
function,
to
have
an
additional
lengthening
out
of
of
the
burst
to
get
higher
utilization.
But
of
course
that
means
the
overall
end-to-end
latency
will
also
be
lower
than,
for
example,
by
those
mechanisms
that
have
to
do
it
on
a
perhap
basis,
and
that
is
actually
something
that
may
need
to
have
additional
detailed
writing
in
the
text
of
our
drafts,
though
three
two
large
single
Hub,
propagation,
latency,
obviously
right.
So
that
is
also
something
that
ecqf
solves,
and
that
is
of
course
something
we
started.
B
The
whole
thing
out
with
by
using
multiple
Cycles
three
two
one
support
single
hop
propagation
Jitter,
yes,
and
that
is,
as
already
mentioned,
a
new
requirement
that
I
added
there,
because
the
pre-existing
Solutions
don't
support
that
and
I
mentioned
already
the
different
points
where
this
is
required,
especially
in
complex
high-speed
routers.
The
processing
Jitter
that
we
see
when
we
can't
have
over
the
whole
system.
Ingress
and
egress
line
cards
through
distributed
fabric
total
synchronization,
for
example.
Then
we
will
have
Jitter
in
processing
packets,
received.
A
Okay,
quick,
quick,
quick
interrupt
on
time
management,
10
minutes
first
10
minute
slot
is
expired.
Second
10
minute
slots
is,
has
started
since
you're,
doing
both
tcgov
and
csqf
as
a
combined
present
go
for
it.
B
Thank
you.
Three
three
support
higher
link
speed,
so
we
have
tested
no
sorry,
two
thousand
kilometer
100
gigabits,
so
that
has
been
proven.
B
We
don't
think
that
has
been
proven
for
ecqf
or
cqf,
obviously
for
cqf
it
wouldn't
work
in
the
length,
but
also
I,
don't
think
100
gigabits
have
been
proven
and
I
think
Yuzu
had
more
details
on
why
we
felt
that
these
things
cannot
be
made
applicable
here,
341
scalable
to
large
number
of
flows
yeah.
We
don't
have
any
per
flow
per
hop
state
reading
and
writing
tolerate
High
utilization.
B
So
we
don't
have
that
time.
So
that's
also.
Yes,
the
3-5
linked
load
failures,
topology
changes,
so
you
know
in
the
prior
evaluations
we
said
it's
not
applicable
and
I.
I
also
saw
this
in
other
evaluations,
but
I
think
for
all
our
Solutions.
B
You could only do it on a per-flow basis through every hop, but we have the segment routing architecture that does it stateless, and so it's that segment routing architecture that obviously all our solutions would need to integrate with to scale for these requirements. So that's why that's a yes. Likewise 3.7, just to stay in line with that: that is also by virtue of integrating with segment routing. And then this is where the only minor difference between CSQF and TCQF comes in, where CSQF has some...
B
Possible
increment
in
performance
3.6
prevent
flow
fluctuation
from
disrupting
service.
Yes,
of
course,
that's
that's
already
the
same
for
all
the
solutions.
No
change
in
additional
flows
changes
the
per
hop
behavior
of
existing
flows
and
then
the
new
requirements
support
tight,
editor
sync
control
loops.
That
is
also
not
a
difference
between
these
Solutions
but
I
think
very
much
a
difference
between
these
cycle
Solutions
and
the
other
solutions
that
are
being
proposed.
So
that
is
a
really
important
Network
size,
independent
hubby
Hub,
and
to
end
Jitter.
A
Next
slide
quickly
observe
that
you
and
Tron
have
seen
have
reached
about
the
same
conclusion
that
in
the
air
of
three
seven,
we
need
to
split
Delay
from
delay
accumulation
from
Jitter
accumulation.
So
that's
a
good
sign
foreign.
B
Right
so
here
are
the
justifications
for
three
four
I
think
I
already
said.
Most
of
that
we
had
the
research
paper
with
simulations.
We
had
the
validation
with
the
inexpensive
fpga
hardware
and
long
length
and
then,
of
course,
also
the
showcase
for
the
reduced
accuracy
of
clock
synchronization
that
is
enabling
that
large-scale
deployment,
then
because
we
do
have
the
cycle
number
opposed
to
the
TSN
solution,
the
packet
header.
We
can
have
the
clock,
Jitter
Wonder
and
the
perhap
propagation
tutor
as
well.
Okay,
next
slide.
B
Three
seven
right,
so
the
pro
hop
Cycles
avoid
any
burst
accumulation
so
same
thing,
and
this
is
basically
the
the
details
of
the
explanation
where
tcqf
and
csqf
differ.
B
So
in
CS,
TS
tcqf,
the
cycle,
the
next
cycle
on
the
out
bound
side
is
fixed
and
therefore,
when
you
do
the
admission
control,
the
the
sequence
of
Cycles
is
fixed
and
in
T
in
csqf
the
sequence
of
Cycles
can
be
assigned
by
the
controller
as
part
of
the
packet
header
and
therefore,
if
you
start,
for
example,
with
the
sequence
of
16
Cycles,
you
have
a
factor,
a
factor
of
more
than
10,
to
increase
utilization
up
to
very
high
utilizations
in
the
network,
because
you
avoid
burst,
collisions
on
the
path
and
then,
of
course,
the
the
benefit
of
segment
routing
as
the
core
death
net
overall
versus
TN
TSN
solution.
B
Right
because
remember,
we
do
not
have
cqf
or
ecqf
as
a
per
hob.net
Solutions.
We
can't
build
this
without
having
perhaps
date
because,
as
as
they
exist,
they
don't
know
what
a.net
flow
is.
The
only
thing
we
can
do
is
have
a
TSN
subdomain
and
that
subdomain
cannot
have
a
segment
routing
in
it
right.
So
that's
that's.
Why
I
think
this
this
comparison
is
so
important,
but
complex
next
slide.
I
think
we
should
be
done.
A
I don't know. I'll start a new clock, and what we will do is, hang on, clickity clickety clickety, we will give you the unused time as an add-on. Go.
B
So
this
this
is
very
simple
now
so
I
don't
need
to
explain
anything
new
here
so
asynchronous
across
TSN
subdomains,
so
it's
naturally
asynchronous.
There
is
no
special
Ingress
function
needed
then
the
tolerate
clock
Jitter
in
one
day.
B
Yes,
it's
naturally
asynchronous
so
in
the
same
way,
as
you
know,
explained,
any
differences
in
the
clocks
also
have
been
just
added
to
the
calculations,
tolerate
asynchronicity,
yes,
so
that
is
the
full
yes,
so
we
can
have,
as
in
my
evaluation,
template
slide
explained
something
like
a
clock
drift
of
six
seconds
over
a
day
between
adjacent
routers
and
have
no
problem,
which
is
what
I
calculated
from
normal
ethernet
interface
clock
requirements,
which
are
also
not
synchronized,
as
described
in
our
requirements.
Draft
support
a
periodic
flows.
B
Yes,
so
that
can
be
done
directly
on
the
P
of
on
on
every
node
per
Hub
basis,
like
with
any
other
asynchronous
solution
at
the
cost
of
then
having
to
provision
larger
buffers
and
buffers
and
latency
like
all
of
them,
or
we
can
of
course
use
it
on
the
PE,
with
an
appropriate
PE
Ingress
function
to
reduce
the
latency,
the
large
single
Hub
propagation
latency.
Yes,
there
is
also
nothing
special
here:
support
large
single
Hub
proper.
B
Yes,
so
in
this
case
we
are
across
that
Hub
not
asynchronous,
because
then
we
would
do
local
Crocs
organization
between
those
nodes.
I'd.
Imagine
this
for
wireless
nodes
and
they
typically
need
to
have
clock
synchronization
anyhow
between
the
mobile
node
and
the
the
radio
tower,
for
example,
at
the
at
a
very
low
accuracy,
support
higher
link
speed,
so
this
I
think
is
the
TBD
like
in
I
would
hope
we
would
get
to
in
any
other
solution
that
doesn't
have
actually
a
proven
implementation.
So
we
did
look
and
the
proposed
algorithm
in
time.
B
Fifos
also
has
a
lot
of
justifications.
Why
we
think
this
is
very
well
feasible,
but
it
hasn't
been
done.
So
that's
why
it's
DVD
here
scalable
to
large
number
of
flows.
Yes,
we
Have
No
flows
dead,
tolerate
High
utilization.
Yes,
you
know
at
that
time,
so
we
can
get
easily
200
utilization
on
any
links
here.
The
link,
node
failure
yeah.
So
that's
also
again
integrating
with
segment
routing
prevent
flow
fluctuation.
Yes,
because
there
is
no
burst
accumulation.
B
The
latency
across
every
Hub
is
accurate
to
the
clock
accuracy
scalable
to
a
large
number
of
hops
with
complex
topologies.
Yes,
so
there
is
a
little
bit
TBD
here,
so
the
flow
interleaving
I'll
still
need
to
check
out
things
and
then
support
Thai
Jitter
control
loops.
Yes,
because
the
end-to-end
Jitter,
of
course,
is
also
pretty
much
zero
because
on
every
hop
it
is
also
very
much
zero.
So
the
only
thing
that
adds
up
is
the
perhap
clock
in
accuracy.
B
Yeah, so I think I explained all of that. No per-hop state, no need for a single cycle, right, yeah, TSN, yeah. So the calculus, the algebra for 3.7, is the same as for TSN ATS Qcr, and just the Qcr shaper that requires per-hop state is replaced by the damper, which we can build stateless. So that also makes it very easy, I think, to work in TSN environments and with a TSN-built controller. And yeah, the high utilization TBD is under review.
B
So
that's
that's
pretty
much.
It
I.
Don't
think
that
the
nine
minutes
is
I.
Don't
think
I
spoke
just
one
minute
here.
Something
is
wrong
with
the
clock,
but
in
my
favor
so
I
don't
think
there
is
any
more
slide
here
other
than
the
reevaluation.
You
can
click
forward
to
an
E
cqf
slide.
If
you
like
to.
A
Read
but
the
reason
for
the
timing
is
I
credit
you,
the
three
and
a
half
minutes
that
you
hadn't
hadn't
used
out
of
the
first
one:
oh
okay,
all
right!
So
all
right,
I
guess
we
can
go
ahead
and
use
this
use
this
time
use
this
time
for
that.
But
why
don't
we
stop
here?
Let
me
go
pause
your
clock
for
a
minute
to
see
if
I
can
do
that.
A
So
you
take
any
question
what
we've
done
so
far,
hang
on
all
right,
they're
about
eight
minutes
there,
any
questions
on
what's
been
gone
through
so
far
aside
from
it
looks
like
we
need
another
round
of
work
on
the
requirements.
Draft
jeanu.
B
Yeah, so the cycle mechanisms, I think, are very simple to understand, right. So that is the math that we have in our draft: it's just the cycle time; you add up the cycle times and, of course, the propagation latencies. So that is very simple math. And in gLBF it is exactly the same math as for UBS, which is the algorithm used by TSN Qcr.
B
So
that
is
also
simply
the
sum
of
the
burst
buffers
for
the
different
priorities.
So
the
math
for
both
of
them
is
extremely
simple.
C
Okay,
so
the
cycle
based
mechanisms
are
just
the
the
hook
count
times
the
cycle
time.
That's
what
you're
claiming.
B
Yes,
yes,
okay,
number
of
Cycles
right
so
on
every
hop
you
can
have,
let's
say
two
three
or
four
based
on
configuration
times
the
cycle
as
as
queuing
propagation,
latency,
okay,.
B
So what you need to do is make the next cycle larger than the propagation latency plus any jitter that you have. These things we showed in slides several times, and I hope they're well enough defined in the draft; if not, please, let's fix that in the draft text.
B
What I... exactly. Oh, sorry, I think we again have a lot of latency here in Meetecho; sorry, I didn't want to talk over you. No, but this is exactly what I was saying: I think if you read the Eckert DetNet flow interleaving draft, that is exactly talking about that problem, right.
B
So
if
we
have
a
lot
of
aprilic
flows
with
a
very
low
rate,
but
let's
say
once
every
millisecond
one
packet
and
you
have
ten
thousands
of
these
flows,
then
the
only
thing
you
can
do
is
either
provision
on
every
hop
the
worst
case
that
all
10
000
flows
have
their
burst
at
the
same
time
or
you
start
giving
them
some
added
latency
somewhere,
so
that
the
bursts
are
Decor
related.
That's
a
generic
problem
without
that
that
we
have
whatever
algorithm
we
use.
B
Now
when
we
use
a
cycle
mechanism,
we
are
deciding
on
decorating
these
flows
on
Ingress
right
and
not
on
every.
If
we
decorate
them
on
every
hop
like
we
would
do
I
think
in
z-score
and
in
glbf,
then
the
cost
of
that
is
higher
buffer,
which
is
maybe
a
good
thing.
Maybe
not
right
so
I
think
that
is
that
that
is
an
ongoing
debate
where
you
need
to
look
at
the
math
of
your
traffic
load.
C
So
you
know
in
order
to,
in
order
to
remove
the
necessity
of
the
decoration
you
have
to
hold
the
packets
at
the
network
entrance
and.
B
Put
it
what
I'm
saying
is:
if
you
decorate
them
on
Ingress,
you
don't
have
to
do
it
on
every
hop.
Remember
when
you're
looking
at
the
fact
that
every
hop
could
have
a
potentially
10
000
flows
which
send
a
packet
only
once
in
a
millisecond
right,
then
you
still
need
to
have
for
the
buffer
size
for
for
the
burst
size
of
each
of
these
flows
that
buffer
there,
because
they
could
just
by
random
loss
of
of
draw
or
occur
all
at
the
same
time,
and
then
you
need
to
buffer
them
all
up
right.
B
Right, so typically in large service provider networks we're doing the difficult things, like in this case the new difficult thing would be the decorrelation through the frame interleaving, right, and we would be doing that on ingress, like we also do with PREOF.
B
I'm
assigning
that,
from
the
admission
controller
I'm,
basically
the
admission
controller
design
in
which
time
window
within
500
microsecond,
for
example,
a
particular
flows,
bursts,
are
being
led
into
the
network.
A
We're moving beyond clarification; probably best to take this to the list.
A
Thank
you.
Thank
you.
Okay,
so
Turtles.
What
I'm
going
to
do
is
I'm
going
to
put
the
ecqf
Eva
we
visited
on
hold
shafu.
Have
you
managed
to
overcome
your
your
network,
propagation
difficulties.
F
[audio unintelligible] ...just during this period... okay.
A
All right, he's there. Xiaofu, can you present the slides, or can anyone else walk us through them?
F
Hi
I've
seen
the
I
think
I.
Let
me
try
to
represent
the
kids
light,
hopefully
that
okay
for
I
think
I
will
go
through
the
sleeplessly.
F
...flows are admitted based on the resolution of the time slots. And the next one, 3.7, be scalable to a large number of hops with complex topologies: the evaluation is partial, because flow merging may be needed when the latency compensation plus in-time is used; however, if the other option is used, it has little jitter. Okay, please go to the next page.
F
So
this
is
some
further
explanations
of
for
the
Solutions
or
commutated
the
higher
English
speed.
We
can't
consider
this
item
from
two
aspects
to
see
this
of
a
post
so
for
a
gaming
solicited
amount
of
model
requirement
and
we
can
use
the
minimum
molecular
level
provided
can
be
correspondingly
smaller.
Otherwise,
what's
the
resource
of
the
level,
for
example
di?
If
it
is
a
duration,
remained
unchanged
Independence
of
the
English
leader,
it
will
be
proportionate
to
the
link
speed
because
the
higher
Nick
speed,
the
more
positive
resource
provided
needed.
F
The instructions of the microcode will include: first, calculate the local accumulated delay, and secondly, write the latency.
F
So
the
first
aspect
is
the
state
code.
There
is
no
state
of
the
flow
is
maintained
on
the
nodes
and
the
state,
for
example
the
Poland
versus
time
and
that
delay
that
which
is
covered
in
the
pocket
under
the
overpublishing
aspect.
F
The
First
Resource
and
the
bundle
where
this
is
the
source
of
each
student
logo
is
limited
and
derivative
of
almost
Infinity
condition
and
the
cannot
support
the
flow
scarce.
You
see
it
in
the
Poor
Side.
The
following
is
an
example,
so
there's
a
low
overpollution
that
is,
it
doesn't
take
a
year.
It
doesn't
take
the
calculation
result
of
past
directed
by
the
general
knowledge
to
get
an
overcommissioned
like
this,
and
the
aspected
to
support
the
more
illegal
may
be
explained
as
a
particular
classes.
We
can
it
flexibly
Define
advantage
to
achieve
High
utilization.
F
That
means
we
also.
Each
level
has
only
military.
The
passive
resource,
more
possible
resource
can
be
provided
by
defining
MultiLing
levels,
to
admit
more
services
notice
that
the
minimum
multi
level
is
still
the
the
most
of
the
efficiency
for
the
past
calculation,
because
the
the
calculations
in
order
per
Globe
passes
there
is
shared
by
multiple
flows.
Next,
please.
F
Substitution
and
the
three
to
the
seven
of
this
item
we
can
consider
through
the
following
aspect.
Firstly,
the
past
population.
The
calculation
is
simple
only
to
check
the
source
of
reasoning
dependently
independently
to
see
if
it
is
enough.
The
first,
the
first
accumulation
of
low
May
powers
are
possible
with
a
large
number
of
performance
for
internal
mode.
It
may
produce
more
accurately
the
best.
F
Okay, sorry, just a voice technique issue. This is for the TQF evaluation. For the first item, 3.1.2, the time slot assignment: the evaluation is also yes; it just needs frequency synchronization. And the second, 3.2, tolerate large single-hop propagation latency: the evaluation is a yes, because the time slot mechanism is independent of the link propagation delay, and the slot length is independent of the link speed.
F
...is yes: flows are admitted based on the time slot resolution. Then 3.7, be scalable to a large number of hops with complex topologies: the evaluation is partial, because calculating per-level paths for all services may be a hard problem, related to hop counts. Next page.
F
So
this
is
the
sum
of
first
explanations
of
particular
for
3.3
are
accommodated
the
high
Lingus
need.
We
can't
because
aspect,
because
it's
a
buffer
course
for
a
given
solicited
amount
of
requirement
at
the
Timeless
models
of
the
teaching
period
of
the
instance
can
be
corresponding
in
a
smaller,
otherwise
First
Resource
of
each
time.
Smooth
is
probably
pushing
data
to
the
negative
speed.
F
Thus, the higher the link speed, the more resources per slot are provided and the more buffer size is needed. If we need to limit the total buffer of the hardware, only a smaller number of long-period queues can be provided; that is, the scheduling periods of all queues are relatively short, and this may affect the success of path computation. For the path calculation with flow interleaving, there is a simple calculation, that is, to calculate the scheduling period.
F
For the per-flow mode, we should compute the ideal outgoing time slot for each packet, according to the interval length, the scheduling period length, and the time slot duration. Next, 3.4, be scalable to a large number of flows and tolerate high utilization.
F
Okay. First, the state cost: there is no per-flow state. Next, over-provisioning: there is no over-provisioning beyond taking the calculation result of burst divided by time slot capacity, rounded up, to get the number of slots needed. And it supports more flows; that is, we can support multiple scheduling period values.
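The burst-divided-by-time-slot arithmetic mentioned for 3.4 might look like the rough sketch below; all function names and numbers are invented for illustration and are not the presented mechanism:

```python
import math

# Sketch of slot-based admission with no per-flow state: each flow reserves
# ceil(burst / slot_capacity) whole slots, and admission only checks that
# enough free slots remain on the outgoing link.

def slots_needed(burst_bytes: int, slot_capacity_bytes: int) -> int:
    return math.ceil(burst_bytes / slot_capacity_bytes)

def admit(flow_bursts, slot_capacity_bytes, slots_free):
    """Admit flows in order while enough free slot capacity remains."""
    admitted = []
    for burst in flow_bursts:
        need = slots_needed(burst, slot_capacity_bytes)
        if need <= slots_free:
            slots_free -= need
            admitted.append(burst)
    return admitted

print(admit([3000, 9000, 1500], slot_capacity_bytes=1500, slots_free=8))
# [3000, 9000]  (the third flow no longer fits)
```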
F
Okay, this gives an example of using multiple queueing instances for different service inputs.
F
So for the last item, 3.7, scalability to a large number of hops, we can consider the following aspects. First, path computation: the path calculation may be hard, related to the hop count and the resource accommodation. Second, we can support the interleaving mode for the mechanism. Okay.
F
...which must be less than or equal to the budgeted value, as the following two formulas show.
C
Yes, thank you for the very detailed presentation. I have a similar question to you again. If you go to slide number three, you gave the expression for the end-to-end latency bound; as far as I remember, it is hop count times the delay level D, right, n times T. My question is again similar.
C
Can you make T arbitrarily small? That is, if T is arbitrarily small, can you still guarantee the bound, or do you have to make it sufficiently large to accommodate all the flows and bursts? That's my question. Thank you.
F
Sorry, can you please repeat your question?
A
Okay, thank you. Let me open up the floor for any other comments, questions, or suggestions before we go back to the ECQF evaluation topic that Toerless wanted to talk about.
G
There are three different interpretations of 3.7. For example, the one I explained is the linear relationship between the latency and the hop count; Toerless explained it as flow interleaving; and the third interpretation is about the path calculation. So I think we should first do some alignment on the different interpretations.
G
Yeah, and I think it's not just 3.7; for example, there is also 3.4.
G
There are several aspects where the interpretations differ across the evaluations, so I think we should summarize and collect the different interpretations and do some alignment.
A
I think that would be good. I would hope that the requirements draft authors are prepared to lead this; I would be happy to make time available in the next meeting to try to come to a conclusion, if that's helpful.
B
Yeah, I think in my slides there are a few main points. I think the most complex one would be three dot, no, I forgot the numbering, the one about the aperiodic flows. I think it would be good if we all try to understand how to do that, and maybe we can also write up some additional text to explain it.
E
Yeah, regarding 3.1.4, I totally agree with Toerless on this item. The text of this section 3.1.4 is really talking about non-periodic traffic flows, which are required to be supported with a better mechanism than 802.1Qcr, so I'm really in support of Toerless's proposal here. Thank you.
A
Okay, please watch the list for discussion of how to update the requirements draft. As I said, I'd be happy to make time available in our next open meeting if we have some proposals that need to be discussed for how to do it, or the requirements draft can just be updated.
C
So the end-to-end latency bound and the end-to-end jitter bound are very important; we all know that. But I'm not so sure about per-hop jitter itself.
B
Yeah, so I had a slide where I was showing the suggested evaluations, and what I had in there is about the per-hop jitter, ultimately.
B
There may be other useful things which we agree or disagree on, and I think that slide, maybe this one, yeah, exactly. So I think it would be good if we could come to agreement on how we evaluate this, right? This is what I used for the three drafts that I presented, and this 3.8.1, the, you know, per-hop jitter or so.
B
The whole point is that the end-to-end jitter is independent of the number of hops in the path, right? So when we have more and more hops, the jitter shouldn't increase with these algorithms. Now, it is clear that we can always do a dejittering function on the egress node, if we do add a mechanism for that, but that mechanism would require an additional, let's say, ingress-to-egress timestamp and then dejittering on the egress. If an algorithm proposes to use that, then I think we should have text for that.
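The egress dejittering function described here, an ingress timestamp plus a buffer that releases each packet at a fixed target latency, might look roughly like the sketch below. The class and its API are invented for illustration; this is not a defined DetNet mechanism, and it assumes synchronized ingress and egress clocks, as noted in the discussion.

```python
import heapq

class DejitterBuffer:
    """Egress buffer that flattens jitter to a fixed end-to-end latency."""

    def __init__(self, target_latency: float):
        self.target = target_latency
        self.heap = []  # (release_time, packet), ordered by release time

    def arrive(self, packet: str, ingress_ts: float) -> None:
        # Hold the packet until ingress_ts + target, regardless of how
        # early or late it crossed the network.
        heapq.heappush(self.heap, (ingress_ts + self.target, packet))

    def release(self, now: float):
        out = []
        while self.heap and self.heap[0][0] <= now:
            out.append(heapq.heappop(self.heap)[1])
        return out

buf = DejitterBuffer(target_latency=5.0)
buf.arrive("p1", ingress_ts=0.0)
buf.arrive("p2", ingress_ts=1.0)
print(buf.release(now=5.0))  # ['p1']; p2 is not due until t = 6.0
```

Note that such a buffer must be sized for the worst-case spread between the fastest and slowest path through the network, which is the scaling concern raised in the discussion.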
B
I think we did brainstorm a little bit on that, and of course there is still the problem that the dejittering buffer on that hop now needs to be scaled depending on how large the network is. So you're starting to create another PE requirement that people have to investigate and say whether that is a better solution, right? And there needs to be clock synchronization between ingress and egress. So there are alternatives, but ultimately the point is not to increase the jitter when the network becomes larger.
B
It depends on the description of that requirement, and how important that requirement is compared to the others. I think by this point in time it should be clear to everybody that there is no ideal algorithm that solves everything ideally, but I think we should understand the requirements as the aspects on which we evaluate, and then draw our conclusions.
A
And what you described about allowing the delay bound to be linear in the number of hops, but having an end-to-end bound on jitter, I think is consistent with what Tron posted to the list late last week.
B
Yeah, so far I just, you know, used the trick that I think Hizu also did initially with 3.4.1 and 3.4.2, in terms of interpreting the requirements as something that already exists in 3.8, right? So 3.8 does talk about the different types of traffic, and obviously low jitter is specifically important to the tight control loops, which are periodic. But obviously we can also move that into a new point if that is easier, and I think some of the co-authors indicated that already.
G
If there are services with a tight jitter requirement, we should allow for tight dejittering: the end-to-end jitter should be independent of the number of hops. But if a service does not require a tight end-to-end jitter, just an end-to-end latency bound, then it's not a must.
G
It is not a necessary requirement, and therefore, for the requirements, I think it is okay to divide it into two states: the services with a tight jitter requirement, and those with an untight jitter requirement. So I think it's an optional requirement, and it can be divided into two parts.
A
I'm
gonna
go
back
something
tourless
said
there
is
no
algorithm
that
is
going
to
completely
meet
every
single
one
of
these
requirements.
We're
going
to
wind
up
doing
some
kind
of
engineering
trade-off
on
which
ones
are
more
important
where
we
can
compromise
and
why.
B
If we fully rely on, you know, in-time mechanisms, then we do need to define this dejittering function for the network on the ingress and egress PE, and have corresponding additional new network packet headers, right? So it involves, I think, writing text that we haven't done so far. I'd be happy to do that, but then I think it's much clearer to compare the complexity of both of these solutions.
B
As I tried to explain, we can discuss how to phrase the requirement. Doing it on a per-hop basis is just an expression of the ability to provide, you know, network-size-independent end-to-end jitter without requiring an additional ingress and egress PE dejittering function. So that's kind of a network operator deployment benefit: he doesn't need that additional function.
C
So
if
you
have,
if
you
want
to
break
down
the
network
into
hopes
you,
you
will
likely
ends
up
without
having
that
benefit
of
that
property
pay
burst
only
ones.
So
I
would
suggest
to
look
the
network
as
a
single
whole
thing,
rather
than
hope,
I
hope,
aggregated
devices.
This
might
suggestion.
Oh.
B
I
completely
agree
and
I
was
making
exactly
that
argument
when
I
was
talking
about
the
support
for
the
aperiotic
flows
and
not
incuring
large
buffering
on
a
perhap
basis,
but
with
tcqf
csqf
having
shallow
buffers
on
every
hop
and
then
in
curing
the
burst
delay
only
once
on
the
Ingress
PE
as
an
Ingress
P
function
for
that
benefit,
so
I
think
the
argument
goes
either
way
and
I
think
we
just
need
to
be
clearly
expressing
the
options.
A
More
more
to
come,
it
sounds
like
the
the
Jitter
aspects
of
the
requirements.
Draft
are
going
to
need
some
we're
going
to
get
some
significant
revision.
Zhendong.
E
Sorry. Anyway, third page: page three, the amendments to the evaluation sheet.
E
Yes, yes, and I have a question about the first bullet item here. It says a new column to indicate whether the requirement is per-hop or ingress. I'm not sure if we can indicate whether a requirement is ingress.
E
My
understanding
a
requirement
applies
to
alternate
data
plane
as
a
whole
regard,
regardless
of
Ingress
over
hope,
maybe
solution.
A
solution
can
be
can
be
performed,
which
may
mean
every
every
node
in
a
network
is
required
to
to
to
to
to
to
to
to
to
to
implement
this
solution,
or
something
like
that
so
I'm.
B
Let me rephrase the expression. I think the column is meant to indicate whether a proposal wants to support this through a per-hop function or a PE function, right? So, okay, right: if it's a PE function, we know that it doesn't impact the complexity of every P node. Typically, the forwarding speed required for per-hop functions is a factor of 10 or more higher and faster than for the PE functions, right? Even if you have a ring and every router is P and PE.
B
Only
a
tenth
or
so
of
the
overall
traffic
needs
to
deal
with
PE
functions
right,
so
that
makes
a
big
difference
and
I
think
that
helps
the
evaluation
and
I
think.
B
It also helps us more to understand that, just because a hop-by-hop mechanism doesn't do something, we can add it to the mechanism as an edge function. I think we talked about specifically two options here: one is to shape bursts on ingress to get higher utilization, or to reshape traffic flows on egress to eradicate jitter. So those seem to be, for utilization and jitter management, the two edge functions that either algorithm needs. One algorithm does seem to need the ingress shaping so that we get higher utilization with lower latency, ultimately, and the other algorithms, the in-time ones, need shaping on egress to eradicate jitter; but ultimately that may still lead to higher end-to-end latency. So those are the tough comparisons.
B
For the evaluation, the columns would indicate whether the requirement is resolved in the proposed mechanism as a per-hop behavior, or as an ingress (or, I think in the future, an egress) PE mechanism.
A
This is P and PE. If you're looking for an analogy, you might take a look at DiffServ, which draws a very strong distinction between what is done at the network edge and what is done in the network core. The per-hop roughly is the network core, and what Toerless is calling ingress (and it really means ingress or egress, I agree) is edge processing.
E
Okay,
yes,
then,
the
then
the
it's
not
for
requirement
is
it.
It
applies
to
solution,
correct
right.
E
Yeah, then I should just move this P and PE column to just prior to the notes, not as part of the requirements. Okay.
A
Okay, Toerless: do you want to take the last about 10 minutes to quickly make the comments you wanted to make on ECQF?
B
Yeah
but
probably
you
know,
given
the
audio
quality
that
we
have
probably
just
the
starter,
for
what
we
need
to
take
to
the
list
to
get
a
good
discussion.
I
fear.
B
Yes. So, you know, I was looking through ECQF, and maybe I'm misunderstanding something, which is fine, but the whole point really is that ECQF does not eliminate any of the requirements on clock accuracy and synchronization that did exist in CQF, because the arrival time based on the receiver clock is used to assign the cycle, and that is the core limitation.
B
When
you
do
not
have
a
packet
header
now,
you
need
to
be
worried
that
there
is
no
difference
between
what
the
send
dressing
thinks
is
a
valid
sending
time
for
the
cycle
and
what
the
receiver
thinks
is
a
valid
reception
time
for
the
Target
cycle
and
as
soon
as
there
is
a
Jitter
between
these
two,
you
start
to
assigning
packets
to
the
wrong
cycle,
and
you
can
only
eliminate
that
by
increasing
the
debt
time
again
so
you're,
not
using
some
percentage
of
the
sending
time
that
you
actually
have
to
send
packets
and
that
you
know
increase
of
that.
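The misassignment described here can be illustrated with a toy model; this is my simplification of the argument, not ECQF pseudo-code:

```python
# Toy model: ECQF-style cycle assignment from the arrival timestamp on the
# receiver clock. If the receiver's clock view differs by more than the
# guard ("dead time") left in the cycle, a late packet is binned into the
# next cycle. Carrying the cycle id in the packet header avoids this
# dependency on clock accuracy entirely.

def rx_cycle(arrival_ts: float, cycle_t: float) -> int:
    return int(arrival_ts // cycle_t)

CYCLE = 10.0    # arbitrary time units
send_ts = 19.5  # late in sender cycle 1

print(rx_cycle(send_ts + 0.0, CYCLE))  # 1: clocks agree, correct cycle
print(rx_cycle(send_ts + 1.0, CYCLE))  # 2: one unit of offset, wrong cycle
```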
B
That increase of dead time is, of course, what we never want to have, and it is obviously very easily avoided by indicating the cycle in the packet header, which is the core difference between our solution and the TSN solutions, right? And of course, then comes also the problem that as soon as we go faster than the 10 gigabit per second that TSN is really targeting...
B
When
we
go
to
400
gigabits
or
so
then
we
we
deal
with
a
much
larger
increase
in
variability
of
any
type
of
clock
measurement,
also
within
a
node
right.
So
the
the
routers
that
we're
looking
at
are
really
networks
of
smaller
routers
right.
Each
Ingress
and
egress
line
card
can
be
separate,
router
fairly
asynchronously
connected
to
through
a
fabric,
and
there
can
be
a
lot
of
variations
in
propagation
latency
through
the
Ingress
line
card
and
all
the
algorithms
we're
looking
into
ideally
only
need
to
work
localized
in
an
egress
line
card.
B
And
so
it
would
be
ideal
if
we
do
not
need
to
to
capture
and
synchronize
the
clock
across
the
whole
router,
but
but
only
so
there,
there
really
a
bunch
of
details
and
I
think
we're
trying
to
get
closer
and
closer
of
trying
to
understand
ex
and
express
them,
but
that
that's
as
good
as
I
think
we've
gotten
so
far
in
expressing
that
the
you
know
assumption
that
the
clock
on
on
arrival
of
a
packet
can
be
accurate
enough
to
to
to
to
derive
the
cycle.
B
number from it is really, you know, not working for higher-speed and larger-scale networks. And of course, also to be reminded of CSQF: by means of giving the network controller the ability to indicate one out of potentially 10 possible receiving cycles, it has a much broader way to also shape potential bursts and let the controller decide that. So that's yet another core benefit, and we tried to indicate that a little bit through the evaluation.
A
All right, thank you very much, everybody, for taking the time to attend. I hope this has been useful. I think the most immediate thing that needs attention is the requirements draft: a follow-up based on some things that have been discovered in doing these initial evaluations.