From YouTube: IETF-DETNET-20230719-1200
Description
DETNET meeting session at IETF
2023/07/19 1200
https://datatracker.ietf.org/meeting//proceedings/
A: Okay, well, people are joining. Can I get a volunteer or two to head over to HedgeDoc and try to take some brief notes on the discussion?
A: Okay, still looking for a volunteer or two to take notes; otherwise you'd be stuck with me taking very, very brief notes. I just need someone over in the HedgeDoc for the session, making an occasional note of what's going on in the discussion. That would really help. I think we'll go ahead and get started.
A: A few more clicks than I was expecting. This is the Note Well slide. This is an IETF meeting, so the contents of this slide apply, and you're expected to be familiar with it.
A: We're going to try to do items one and two in about 45 minutes each. I have a conflict at 9:30, and unfortunately neither of the DetNet chairs is able to attend, so we're only going to have about an hour and a half this morning, and I will leave the meeting open. It might run until 10 o'clock, but I'm not sure whether that's going to work.
A: Okay, any comments or questions? Anybody else want to try to bash the agenda?
A: Okay, process-oriented topics. In the proceedings of the last couple of interim meetings we have some evaluation slides that take a look at the TSN mechanisms. What I'm planning to do for the meeting next week is basically grab those slides, put a few text slides in front of them to indicate what we've been doing in the interim meetings, and that's what will go into the main meeting presentation. If anybody has an evaluation slide for one of the proposals, we could do a little bit of discussion of that now. But I don't think we'll start doing serious evaluations of the proposals until after the IETF meetings next week.
D: You know, it's high-level enough and important enough that I think it should be a separate line item, a separate point in the evaluations, so that we can evaluate it independently of latency. That, I think, is my main concern. Thanks.
A: I would suggest you send an email to the list, because that needs to be reflected in the requirements draft, since we're going to do the technical evaluation primarily against the contents of the requirements draft. So if you think the requirements draft doesn't have enough discussion of, or requirements around, jitter, please send a note to the list indicating what you think needs to be changed and where in the requirements draft it might go.
A: Do any of the authors of the proposals want to be brave, put up an evaluation slide for their proposal, and see what we think of it? If you have such a slide, we could take a look at it now. We will want such slides on all the proposals at some point after the IETF meetings next week.
A: I mean, this wasn't a requirement for this meeting; this is an opportunity. If somebody has such a slide, we'll be happy to look at it. It'll be a requirement for proposals that we consider at some point after the meetings next week, but it's not a requirement now. Okay.
F: I thought: do you think we have time to discuss the evaluation of the TSN mechanisms? They have a last item, the ATS solution, that we have not discussed the last time.
A: Okay, if you have the ATS slide that you would like to put up, that would be a good thing to do for maybe 10 or 15 minutes.
F: Okay, we had a good discussion about the first item.

F: Okay, we look at the first item, and that is tolerating the lack of time synchronization. For ATS this basically received a yes: compared to the traditional shaper per flow, it just maintains a single shaper instance per incoming port plus traffic class, so the shaping function is not dependent on time synchronization. So for the first item the evaluation is yes.
A: I don't think that's a problem. In essence, the primary requirement here was to tolerate a lack of clock synchronization; as long as the time bases at each node are high quality, there should be a very small amount of frequency drift or mismatch. Okay, go ahead.
D: I think the answer is that it does allow clock drift, and I think that's actually the good distinguishing aspect here.
D: What's the most accurate way to say it? Let me just take a note down that we should check in the requirements document whether we have the most accurate way to describe this behavior.
A: Yeah, that would be good. I would guess that with high-quality time sources the degree of frequency mismatch will be small enough as to be negligible. I agree, Toerless; thank you for taking the note that the requirements draft ought to cover this. I mean, if the clocks are way out of frequency match, if they're off by 25, weird things are going to happen.
D: Yeah, but David, that isn't exactly the real point, right? The real point is that for the requirements that are synchronous, the clocks cannot arbitrarily drift between each other. For those we have the so-called MTIE, the maximum time interval error: the clocks can never drift more than some amount apart, some millisecond, some microsecond, whatever the factor for the mechanism is. So, for example, in TCQF we said we can be a factor of 100 higher in MTIE than with CQF from TSN.
A
Agree
and
what
I'm
saying
here
is
that
the
the
notion
of
what
you
said
our
can
drift
slowly
over
time?
That
topic
needs
to
be
captured
in
requirements.
D: Thank you. Okay, I don't think that contradicts what I was saying. If the clock frequencies of these clocks are too much off from each other, then that simply impacts the calculation of how much traffic can achieve the same latency. So in my understanding we can have arbitrarily drifting clocks, except that at that point you need to take the impact into account. We did have, from Mohammadpour and Jean-Yves, the first version of a network calculus to calculate the impact of clock drift, and I don't think at that point in time we were far enough advanced in our thinking to accept that into the working group, because I don't think most people understood it. So I think it's possible; it's not good, but I think it's a separate aspect from clock synchronization.
A: Okay, so it sounds like the requirements draft may need some attention on clock drift, as well as what Toerless mentioned earlier about better treatment of jitter. Okay. That was an interesting first line; let's try the second line.
F: Just like the traditional regulation function, ATS is not affected by the link propagation delay; that is noted. Are there any comments for this slide?
F: So for the next item: accommodate high link speed.

G: Yes. For 3.4, large number of flows: as you mentioned, it has to maintain flow state.
A: You know, I think you're right, although it sounds like a concern that matches up well with the "partial" on the next line, which is that the more state you have to maintain, the harder it is to get to a large number of flows. I agree with you that higher link speed just makes it worse, but I think it's mostly about scaling to large numbers of flows.
D: David, a process-order question: is anybody sharing the slide? I don't see it being shared.
D: Sorry. So, technically, the main issue of the flow interleavers in ATS is not the line rate but the packet-processing-speed read-write cycle of the interleaver variables. You don't only need to retrieve the state, you need to write it back fast enough that for the fastest possible next packet you are already reading the updated data again. That is the high-speed problem.
D: Of course, everything becomes more expensive with the number of flows as well. So basically, look at the cost of the high-speed RAM that you need. There is some amount of state that you may be able to put into the network processing engine itself, which may be done, for example, with these link interface counters that you have. But if you go to a number of flows which exceeds thousands or so, then you need to look at the next level of cost. Everything is possible; it's just not affordable.
G: Plus one more thing: we only have a very limited number of clock cycles to look up the flow table, so I'm not sure anybody would be free from the limitation of high link speed. If the link speed goes up above 100 gigabit, yes, every candidate can have a problem. Thank you.
A: Yeah, I was having similar thoughts, which is that at higher link speeds your time to process each packet is obviously reduced by the link speed. That's going to affect all of the proposals, and the evaluation should be focused on which proposals fare less well because they have to do more processing per packet than others that perhaps have to do less. What might be useful in an evaluation is a statement about what happens as link speed gets faster.
F: Sorry, I didn't know how to unmute in the program. I agree with the discussion about the higher link speed; we just need more description about the evaluation on this slide. So for the next item, large number of flows: okay, this item was just discussed, and the evaluation is partial.
F: Are there any comments for this slide? We have just discussed it.
G: Yeah, the regulators do not serve packets in a work-conserving manner, so in terms of the average delay there is some cost, for sure; but in terms of the maximum delay, the worst-case delay, it is proven that the head-of-line blocking does not affect it.
F: On "prevent flow fluctuation from disrupting service": the regulation function has some functions for this. Looking at the evaluation, I just wanted to hear your opinion on whether this should be considered. So it does not affect the worst-case delay; yes, I agree with this point.
F: So, okay, we have just discussed through 3.5; the evaluation is partial.
F: If there are not any other comments, we go to the next item. The next, let's see, is "scalable to a large number of hops".
F: For 3.7, I think, and I'm not sure, maybe I'm mistaken, that the delay upper bound is mainly determined by the queuing subsystem, maybe not provided by ATS; ATS is just a regulation function. So this is something you may need to update.
A: If you want to send me a revised slide deck with the update, I'll put it into the meeting materials for this meeting.
A: Yes, I'll make a note; let me make a note to myself. Thanks. I mean, these actually were pulled from the preceding proceedings, but I'll upload them to this one as well.
F: I just reconsidered this item. I remember there is actually some paper discussing the worst-case latency provided by ATS. Yes, ATS has its own worst-case latency formula, and it is basically inversely proportional to the reserved bandwidth, yeah, sure.
F: So you believe the text provided in 3.7 needs a note, to be updated.
A: And as a response to the earlier question: I will put these slides, which were in the materials for the last meeting, into the materials for this meeting, so people can find them.
A: Toerless, if I can get your slides up, are you ready to talk about gLBF?
D: Yeah, maybe I can share the slides.

A: That might actually work better, yeah. Let me let you share the slides; that way, if I have to duck out of here, you're still in control.
D: Okay, so, right, these slides also say where they are on GitHub. Let's get directly to it, as we don't have that much time. Just to be clear about how we think this should be positioned: from our perspective, our main goal for what to get deployed first is still what we've been showing in prior meetings, TCQF, cyclic queuing, because that has been validated in high-speed implementation, so we have a much safer understanding about it.
D: So gLBF is, from our perspective, a proposed second-generation solution. It's incremental and integrative, so it keeps the benefits of the prior solution and it adds the benefits of TSN ATS. The gap, the reason we think it might take longer to adopt, is that it depends on new header fields, and getting new packet headers through the IETF is always a big pain, and we don't have any high-speed implementation validation.
D: So this is basically coming out of more explorative research, which we call latency based forwarding, which I have the references to at the end and in the draft. That work was exploring what we can do with packet headers signaling the latency goals in the packet, minimum and maximum, and then doing the queuing against those. That's all very cool and great stuff, except that it was so flexible that we couldn't figure out, so far, any calculus for the bounded latency.
D: So that's when we went back and looked at the calculi that are available, and that's exactly the TSN ATS calculus coming from the urgency-based scheduling research paper. So ultimately we reuse the damper mechanism from 30 years ago, even though we didn't know it by that name, and specifically with that TSN ATS approach. And the use cases, of course, large-scale networks, are not necessarily limited to that.
D: It can also work equally well in small-scale networks: cloud PLC, remote driving, machine and robot control, outsourcing a mobile device's compute to the cloud when you don't have the compute power locally. Ultimately, I think the industry is starting to develop this as control-as-a-service. So overall, beyond the large-scale networks, those are the applications we're thinking of.
D: There might even be additional chapters for a base version of our use case document, but that's another story. Okay, let's understand the basic behavior of dampers and the problem statement. We have two routers in a row. In router one we have some queuing and scheduling, and we can calculate the bounded latency for that, so we're happy to provide a deterministic latency service. Unfortunately, if we don't do something specific, we will not be able to do that anymore on the second hop, because we have what we call burst collision.
D: Multiple flows occur on router one, they get delayed in router one, and when they are then rescheduled in router two, we will have longer latency than we expect. This problem adds up across multiple hops, and that's basically why we're starting to have all these convoluted algorithms we're talking about: without them we can't, in linear fashion, calculate the latency and guarantee it.
D: And here is a simple, hopefully visually appealing way to show this. We have one green flow that we're interested in; it regularly sends a burst of three packets, and then on each of the other three interfaces we have one simultaneously occurring, competing blue packet.
D: So at the point in time I'm showing here, hopefully with my mouse pointer, you can see we have four packets in parallel, and when we look at how that shows up on the outgoing interface, then, by pure bad luck, the three blue packets got into the outgoing interface queue first, and only then come the three green packets from the green flow that we're interested in.
D: And if we now look at it, the two bursts of the green flow are closer together, and effectively that flow's maximum burst is now doubled, and that is something we couldn't have predicted in our latency calculus. So on the next hop we will have more buffer use and more latency for that flow, and we will have lost our guarantee on the bounded latency.
D: So now let's quickly revisit the algebra for the calculus. In the most simple method we describe the flow behavior as a rate in bits per second, and the burst of a flow is given as a total number of bits; the condition is that the total traffic of the flow can, over any period of time, not be larger than the rate times that period plus one burst.
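The rate-plus-burst constraint just described can be sketched as a small check; this is illustrative only (the names and the brute-force interval scan are mine, not from the draft):

```python
# Leaky-bucket arrival-curve check: over any interval, a flow may send at
# most rate * interval + burst bits.

def conforms(packets, rate_bps, burst_bits):
    """packets: list of (arrival_time_s, size_bits), sorted by time.
    True if every interval satisfies the (rate, burst) constraint."""
    for i in range(len(packets)):
        total = 0
        for j in range(i, len(packets)):
            total += packets[j][1]
            interval = packets[j][0] - packets[i][0]
            if total > rate_bps * interval + burst_bits:
                return False
    return True

# A flow sending three 1000-bit packets back-to-back fits burst = 3000 bits:
flow = [(0.0, 1000), (0.0, 1000), (0.0, 1000), (1.0, 1000)]
print(conforms(flow, rate_bps=1000, burst_bits=3000))   # True
print(conforms(flow, rate_bps=1000, burst_bits=2000))   # False
```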
D: And when you have this very simple description of flows, in the way that UBS does it, we end up with a very simple calculus for the per-hop bounded latency, which is simply the aggregate of the bursts of all the flows divided by the serving rate of the outgoing interface. So you're not interested in the rates of the flows anymore; they need to be admitted, so the outgoing interface rate of course needs to be larger, but for all intents and purposes...
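As a minimal sketch of that UBS-style bound (illustrative names; real TSN ATS bounds add packet-size and scheduler terms):

```python
# Per-hop queuing-delay bound when all admitted flows burst simultaneously:
# bound = (sum of admitted flows' bursts) / outgoing link rate.

def per_hop_bound_s(bursts_bits, link_rate_bps):
    """Worst-case queuing delay in seconds."""
    return sum(bursts_bits) / link_rate_bps

# Three flows, each with a 12,000-bit burst, on a 1 Gb/s interface:
bound = per_hop_bound_s([12_000, 12_000, 12_000], 1_000_000_000)
print(bound)  # 3.6e-05 -> 36 microseconds
```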
D: We are only looking at the sum of the bursts of the flows divided by the outgoing interface rate as the determining factor for the queuing latency on that hop. The classical solution, TSN ATS, solves the problem by doing, on receive on the second router, that per-flow shaping, or as it is now called, the interleaved regulator. On the bottom we're seeing the impact of this: we have here, in red, the three packets that we already saw on the prior slide; they get delayed because of competing traffic, and then the further bursts of the same flow also get delayed.
D: But now it is because of the per-flow shaper on router number two that they get delayed, and that way we see, visually, nicely, that the flow never exceeds its maximum burst rate. And let's assume at some point in time the flow decides not to send a burst; at that point the per-flow shaper backlog goes down again, and so afterwards we no longer have the per-flow-shaper-induced delay in the flow. That's number five on the bottom.
D: So the provisioning issues: all the state needs to be pushed down, which violates large-scale PE design paradigms and creates churn on the control plane; and we cannot use source routing, which we like to use in large-scale networks, like segment routing in the IETF, because we have the per-hop per-flow cross-packet processing; and then also what I mentioned already today and before, the read-write cycles of this shaper state. These actually are the problems of the interleaved regulators of TSN ATS, and good old RFC 2212 from the mid-90s, IntServ, is even worse.
D: So here is basically what I like to show as how we like large-scale service provider networks to look. We have the ingress and egress so-called provider edge routers, we have the core routers, and then we have our controller plane up there, and when we use TSN ATS we have all this signaling.
D: That signaling needs to go between the controller plane and every hop for every flow, which is exactly this per-flow state table that needs to be updated every time a new flow in TSN ATS comes along; and then, of course, when packets go through, state in this flow table on every hop needs to be updated at line-rate speed. So we don't like that, and it doesn't scale.
D: You know, setting up the flows and making sure that when the sender is not complying with what was admitted, those excess packets aren't allowed in: for this ingress behavior I listed all these different options that we can accept here, and I think that goes a little bit beyond the core topic today, so I'm not going to discuss all the points here. So: dampers.
D: How are we getting rid of the per-hop per-flow state? We're pretty much relying on having the calculus in the first-hop router, which gives us a maximum limit for the queuing and scheduling; and actually we also need to include the transmission over the interface to the next router. So we have the calculus to do that.
D: We take a timestamp before the queuing in router R1, that's the red T1; we remember that time, and once the packet leaves the queuing and scheduling we insert into the packet metadata a target time T, which is T1 plus the maximum that we know from our calculus. Then the packet arrives at router 2, which obviously is at a point in time equal to or earlier than T1 plus Max.
D: So in router 2 we take timestamp T2 and then we delay the packet by T minus T2, and that actually means that at this white dot, which is just a point of reference that we don't need to measure, the time will be exactly the clock time T1 plus Max; and at that point in time we have re-established the same sequence in time of the packets of all the flows that we had in router R1.
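The synchronized damper step above can be sketched in a few lines (illustrative names, synchronized clocks and an assumed per-hop bound):

```python
# R1 stamps target time T = T1 + MAX into the packet metadata;
# R2 holds the packet until that target time on the shared clock.

MAX_S = 0.002  # per-hop bound from the calculus (assumed value)

def r1_stamp(t1_enqueue_s):
    """Target time T carried in the packet metadata."""
    return t1_enqueue_s + MAX_S

def r2_damper_delay(target_t_s, t2_arrival_s):
    """Hold time on R2: T - T2 (never negative)."""
    return max(0.0, target_t_s - t2_arrival_s)

# A packet enqueued at t = 10.000 s that arrives at R2 at t = 10.0015 s:
t = r1_stamp(10.000)
print(round(r2_damper_delay(t, 10.0015), 6))  # 0.0005 -> released at T1 + MAX
```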
D: Now, if we look over to TSN ATS: in TSN ATS we only need to delay the packets when necessary, so during bursts and as needed afterwards; but when there is no competing traffic, packets will be delivered much earlier than the maximum latency for the hop. With the dampers, packets will always be delivered, hop by hop, with their maximum guaranteed latency. So isn't it better to deliver them as fast as possible? And there is a point that gLBF makes, and I think a lot of other algorithms coming from synchronous delivery.
D: The point is that it is actually beneficial to deliver packets exactly at their guaranteed latency. For synchronous delivery, control loops, and media play-out it eliminates the play-out buffers, and it eliminates the need for clocks; and you can never guarantee a shorter latency anyhow. So it's very questionable how many applications actually benefit from random fast arrival that is not guaranteed, which is what TSN ATS and other similar algorithms are offering.
D: So here is the kind of picture I already showed some time back, comparing typical control-loop applications where you have one central element on the bottom called the programmable logic controller. Then you may have this remotely driven car: let's say the PLC is in the cloud and the sensors and actuators are in the car, and so all these sensors and actuators don't need to have their own clock and synchronization.
D: These wonderfully red-circled additional components are what we would need to put into the solution, especially in very low-cost sensors and actuators: PTP clock synchronization based on a local clock, play-out buffers, resynchronization. That is the added cost to applications if we do something like TSN with so-called in-time delivery, where the packets may arrive earlier, as opposed to synchronous, on-time delivery, which is what gLBF offers.
D: So let's look at gLBF now. The first thing to fix up versus the basic concept was that the scheme as described on the first slide still requires clock synchronization, and we really don't like clock synchronization in high-speed, wide-area networks.
D: Clock synchronization is great and potentially gives other benefits, but ideally we should always have the option to not require it when not needed, and we can actually do this with gLBF, because we can slightly fix up the algorithm; that fixed-up algorithm is shown here. After the queuing and scheduling, we measure the time on R1, the sender, as well, so we now know the total time spent in queuing and scheduling on R1.
D: So we simply take that delta and subtract it from the maximum that we know, and that is the missing delay, the latency D, that we know the packet now needs to experience. So we're not sending an absolute clock timestamp in router R1's metadata to router 2; we're sending the desired delay that we want router 2 to apply. Router 2 basically gets that, takes its local timestamp from a local, non-synchronized clock, and applies the latency.
D: Sorry, this needs to be T2 plus D; so router 2 simply delays based on its local clock, and then we again get the equivalent of T1 on router 1 plus the maximum. Now, the calculation error that is shown here in red is exactly what needs to be thought of, and the most important part of that is actually simply the serialization latency; that can be captured by calculation on wired networks, and that's exactly what we're doing, shown in further slides in the details.
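The asynchronous variant just described can be sketched as follows; serialization compensation is left out here, and the names and numbers are illustrative:

```python
# Instead of an absolute target time, R1 writes the *residual* delay
# D = MAX - (time actually spent in R1's queuing) into the packet,
# so R2 only needs its own unsynchronized local clock.

MAX_S = 0.002  # per-hop bound from the calculus (assumed value)

def r1_residual_delay(t1_s, t_out_s):
    """D carried in metadata: bound minus time spent queuing on R1."""
    return MAX_S - (t_out_s - t1_s)

def r2_release_time(t2_local_s, d_s):
    """Release time measured purely on R2's own clock."""
    return t2_local_s + d_s

d = r1_residual_delay(t1_s=0.0, t_out_s=0.0004)  # spent 0.4 ms in R1
print(round(d * 1000, 3))                         # 1.6 -> 1.6 ms left to damp
```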
D: So this here is now adding all the actual stages of interest. The main issue that was missing on the prior one is that serialization: by pure speed of having to serialize 1500 bytes, as opposed to a small 50-byte packet, you obviously incur a higher latency to router 2. But obviously we can take that into the calculation.
D: And so here are the updated calculations. We simply need to introduce, in white, a T-reference, which is, let's say, the first bit of the packet as it is being serialized out on the link. We're not interested here in compensating for speed of light; however long the link is, we're not interested in it, so we just consider these routers to be back to back. We're interested in T-ref as the time when the first bit of the packet is seen on the link between router R1 and R2.
D: So in any case, T-ref can perfectly well be calculated, and router R1 can now calculate the desired delay on router 2 by taking all of that into account: the maximum from the calculus, which is the maximum latency through the queue, plus the latency of the largest possible packet over the link; and from that, for the actual packet, subtract the difference between the reference time and the T1 time that was measured before the queuing.
D: Equally, when the packet arrives on router 2, router 2 also needs to take the packet size and the serialization speed of the link into account to recalculate T-reference, and then basically the release is that T-reference plus the delay in the packet. So this way we have what we call asynchronous dampers, and that's one of the first things in gLBF, kind of the core thing, to make it work without clock synchronization in wired networks.
D: If we do have, for example, radio links, where the propagation over the interface, over the link, can be highly variable because of physical or link-layer retransmission or reflections, and those errors amount to something we're worried about, then on that link we can simply use gLBF with synchronous transmission.
D: At that point we just need to have, for that link, instead of a relative delay, the absolute timestamp in there, and the receiver needs to know that. So we could apply this to radio links, but I think that's a RAW problem, so we can get to that later when we get to RAW issues. For wireline there are no such issues, so the asynchronous model will work perfectly.
D: So, great, are we done? Yeah, it's great, but we're not done. This is not good enough for standardization and not complete enough for high-speed implementation. So let's see what we're missing here.
D: The first thing is that we were just assuming some arbitrary queuing and scheduling of the packets for which somebody knows some form of calculus. But if we start to say, hey, every vendor could do their own queuing and scheduling, we just need to add the dampers, then we can't build a standard, vendor-independent controller plane, because the controller plane needs to do complex calculation and take path selection and latency control into account, and it can't do that if every hop has some proprietary latency calculus.
D: And so that's why in gLBF we chose to use the TSN ATS calculus, which is not only well known, published, and mathematically proven, but has also gotten good experience from TSN ATS. And we have the added benefit that we can very likely reuse almost everything from a TSN ATS controller plane as far as calculation of latency and path selection are concerned; all these things are considered, and so gLBF could become a plug-in replacement in the network for such an existing or shared controller-plane implementation for TSN ATS.
D: So now this is how gLBF completely looks; as far as the model is concerned, this is the last slide on gLBF. As we see here, we have replaced the one single block of queuing and scheduling with the queuing and scheduling block of TSN ATS, which is a set of up to, I think, eight FIFO queues followed by strict-priority queuing. And then you can easily understand, from what I said before, that the latency of a particular flow through the router here is determined by the total maximum number of bytes, which is the sum of the bursts of the flows that are admitted in the priority-one FIFO; and when we go to priority two, it's the sum of FIFO one and FIFO two, and so on.
D: So there is a very clear and simple calculus to calculate the maximum bounded latency, and that is simply the sum of the lengths of the FIFOs of the priority of the packet plus the higher priorities. And then basically the calculations here are again the same as before, because they're not impacted.
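A minimal sketch of that per-priority bound, under the numbering used above (priority 0 highest; a packet waits for its own FIFO's admitted bursts plus those of the higher-priority FIFOs). Names and values are illustrative:

```python
# Per-priority delay bound: cumulative admitted bursts of this and all
# higher-priority FIFOs, divided by the outgoing link rate. These eight
# precomputed values are the only per-hop state gLBF needs.

def glbf_bounds_s(burst_sums_bits, link_rate_bps):
    """burst_sums_bits[p] = sum of admitted bursts at priority p
    (p = 0 is highest). Returns one delay bound per priority."""
    bounds, cumulative = [], 0
    for bits in burst_sums_bits:
        cumulative += bits
        bounds.append(cumulative / link_rate_bps)
    return bounds

# Two priorities with 10 kb of admitted bursts each, on a 1 Gb/s link:
print(glbf_bounds_s([10_000, 10_000], 1_000_000_000))  # [1e-05, 2e-05]
```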
D: Per-hop, a different priority is, for example, captured by the per-hop behavior sets that we use to steer the packets; and then the state in the router is simply the eight maximum numbers that we pre-calculate and pre-populate from the controller plane. So for each of the priorities we need the maximum value, so that we can do the per-packet calculation and insert the packet delay when we send out the packet.
D: So this is everything in one slide, probably too busy to understand the first time around, but the slides are uploaded. It's simply eight values that need to be populated for the outgoing interface, the maximum bounded latency pre-calculated for each of the eight priorities; and then these yellow boxes are the calculations that need to be done per packet, and we think that works at very high speed: an in-cache calculation with eight variables, the eight priorities, that are always in cache.
D: But there is an issue with high-speed implementation, and this also very much follows the ideas that were explained in the UBS research paper for TSN ATS in terms of implementation. If you look at the point between the dampening and the scheduling, you can, at that point in time, have many packets; remember the original picture that we had, all these packets colliding at the same point in time.
D: These would have to be forwarded in exactly zero time between damper and queuing; otherwise we would add delay in the implementation, which we would need to be able to calculate, and that's very difficult. So the rule of thumb is that you don't want multi-stage queuing, one stage in the dampening and one in the following priority queuing.
D: You want to bring that together into a single stage, and that's the interesting challenge. In TSN ATS it was described how that is done, and so the interesting challenge was how to do that in gLBF.
D
So the first, you know, likely ideal long-term implementation is to use so-called PIFOs, which are push-in first-out queues — we're also calling them timed PIFOs.
D
So PIFOs are queues where you're saying that the order of the packets in the PIFO is determined by their rank, and we use as the rank the target sending time, which is the time that is being calculated for the damper. And then, basically, when these packets are ready, we're using strict priority dequeuing — timed, in the sense that we're removing the packets only after they have reached their target time. And so this is something where it isn't entirely clear how fast it can be built.
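A minimal software sketch of the timed PIFO just described — packets ranked by their damper-computed target sending time, eligible for dequeue only once that time is reached. A binary heap stands in for the insertion sort that hardware PIFOs implement natively; all names are illustrative.

```python
import heapq

# Minimal software sketch of a "timed PIFO": packets are ranked by their
# target sending time (computed by the damper), and a packet becomes
# eligible for dequeuing only once its target time has been reached.

class TimedPifo:
    def __init__(self):
        self._heap = []   # (rank, seq, packet); seq breaks ties FIFO-style
        self._seq = 0

    def push(self, target_time: float, packet) -> None:
        heapq.heappush(self._heap, (target_time, self._seq, packet))
        self._seq += 1

    def pop_eligible(self, now: float):
        """Return the lowest-ranked packet whose target time has passed, else None."""
        if self._heap and self._heap[0][0] <= now:
            return heapq.heappop(self._heap)[2]
        return None

q = TimedPifo()
q.push(30.0, "p1")
q.push(10.0, "p2")
assert q.pop_eligible(now=5.0) is None    # nothing has reached its target time
assert q.pop_eligible(now=12.0) == "p2"   # earliest rank whose time is reached
```

The hard part in hardware, as the talk notes, is exactly the `push`: inserting by rank at line rate, which is what the FPGA research mentioned next is about.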
D
So I've been tracking what research has been doing on implementations, and they have actually gotten to the point where this year, I think, we'll see the first really good high-speed FPGA implementation of PIFOs that scales better than the number of flows people expect to have in there. The prior implementations weren't very good. But it still hasn't gotten to the point where we can actually use clock timestamps as the rank of the PIFO, so there is still a little bit left to be done.
D
I'm very positive that this can happen easily in the next few years, yeah. So this is what I was talking about here, in terms of, you know, the state of research and prototype implementations, and I'll need to start discussing the gaps that still exist on the research side for this option. So here is an option that can be built today, and as we'll see, it comes with an interesting twist.
D
So this actually follows very much the high-speed implementation of TSN ATS. What we're doing here is, instead of using PIFOs, we are using FIFOs, arranged so that whenever we're enqueuing packets into one of the FIFOs, the packets need to come out in the same order. So we don't need to do that complex insertion sort that troubles PIFO implementations, makes them complex and expensive, and is the subject of the good research that we're getting to. And so in TSN ATS—
D
Exactly the same thing was done in TSN ATS. They have FIFOs per interface and per priority, so that they can achieve exactly the same thing — except that in TSN ATS, packets are dequeued based on the per-flow shaper, the interleaved regulator, and we're simply dequeuing them based on time. So we don't have that per-flow state anymore. But what we need to do here, to make sure that packets are always in order, is to have one FIFO per incoming interface, per priority and per previous-hop priority. So that's the prio-prime.
D
That's — so we basically need, you know, some number of additional queues, right: prio-prime times prio, so it's 64 times — or eight times more FIFOs than we would need in the TSN ATS solution — to do this, right. And so with that number of FIFOs, and the queuing based on that, this is, you know, single-stage, and it can be done today with that number of FIFOs.
D
If we're looking at the number of FIFOs we already have in high-speed routers, I'm not concerned about that being feasible. But the interesting challenge, of course, now is that we need to insert the packet based on the prior-hop priority, so we need to be able to extract that from the packet header. So that's kind of — you know, we have a workaround implementation for the high-speed way, and we need to be able to still maintain that prior-hop priority. It's not an issue in the SRv6 case.
D
It's
just
a
question:
if,
if
that's
a
good
encoding
to
to
look
that
up,
because
you
always
maintain
every
hubs
sits
in
that.
But
if
we
come
up
with
another
header,
then
it
might
be
another.
You
know
three-bit
field,
that
we
would
need
to
have
for
this.
D
So, and this is basically what I'm saying here: validation. We did do a simulation — an ad hoc simulation implementation for a research paper — and it was exactly replicating the minimum case in which we can show the problem of burst collisions. So we have three routers, each of them receiving three flows.
D
So
what
we'll
get
is
on
each
of
the
outgoing
interfaces
of
router
one
two
and
three:
we
will
have
packets,
where
the
flows
burst
have
become
larger
than
they
should
be,
but
the
latency
through
the
router
one,
two
and
three
of
course
is
perfect,
has
been
pre-calculated.
But
when
these
you
know,
flows
now
arrive
into
the
router
4.
Then
at
that
point
in
time.
If
we
wouldn't
do
the
in
our
implementation,
python
based
glbf,
but
only
a
fifo
without
the
damper,
then
we
will
see
that
package
of
the
flows
are
exceeding
Dell
latency.
D
The black bars are the latency of some packets of the one flow we looked at, which are exceeding the pre-calculated limit of the calculus. And then we apply the dampers, and, as you'll see, the packets all perfectly stay within the latency bounds on that router. Yeah — so, packet metadata encapsulation.
D
If we want to implement synchronous mode, we would have the TD, which is a timestamp that could also be implemented in 20-bit resolution, because it can wrap around — synchronized, we do the calculus math within a one-millisecond window, so the timestamp doesn't need to be—
D
You know, 128 bits or something like that. And then the three-bit priority; and of course, when we do the FIFO implementation, we would need to have the prior-hop priority, if that's needed, and then, of course, the sequence of priorities when we do that. So, benefit summary — we're getting to the summary of the solution, right.
D
So if we compare that with our current, you know, proposed first-generation solutions, TCQF/CSQF: we have somewhat lower cycle times, right, because the cycle time there is on the order of, let's say, 10–20 microseconds or larger. So we would be more accurate. How important is that? I think the main benefit is that it can be asynchronous, right.
D
We don't need any PTP clock synchronization, and we can also support jittery links — but at the cost of then having clock synchronization across the radio link, right. It is TCQF backward compatible, right: we can set up a TCQF — sorry, we can set up gLBF — to behave like TCQF.
D
We simply need to make the buffers on each hop — the amount of admitted traffic that we do — exactly the same size, and then we'll very much get the same behavior as TCQF, right: single priority, buffer space exactly the same as the cycle buffer. And we can likely make the same argument against CSQF; that would probably require more slide and picture work than I had time for this time, to set this up with multiple priorities.
D
So, but we do think that it is a nice evolution: from, you know, first-generation implementations with TCQF/CSQF, and then, when the hardware for gLBF is available, evolve into this and gain the benefit of not having to do clock synchronization — and then the added benefits in terms of higher flexibility: we can have the buffer sizes on the different hops be different, and therefore, potentially, achieve more flexible admission-control options.
D
What's the benefit over TSN ATS, right? So, obviously, it is also backward compatible insofar as it maintains the calculus of that solution. So, as I already said, it could be a plug-in for the same controller plane. It is equally asynchronous, and we do have on-time packet delivery, so we don't have the jitter of TSN ATS — remember the slide with the control-loop environments — something which makes applications more difficult.
D
Okay, so overall, what's the takeaway? gLBF is an adoption of the old damper ideas. It enables per-flow stateless operation, required for large-scale networks; it has minimal or no jitter, and minimal or no playout-buffer requirements; and the time for it seems to be right. It seems to be feasible to implement at high speed with the FIFO and PIFO options, but that doesn't exist today, so it will take somewhat longer than other options like TCQF.
D
The specification contributes the per-hop asynchronous and/or synchronous operation, the use of the UBS calculus, the proposals for high-speed implementation, and then the combined benefits of TSN ATS and TSN CQF. So it's designed for large-scale networks, but it should likely also be very beneficial to smaller-scale networks like the existing ones, or even when people build it into low-cost, you know, let's say in-car systems. Okay — our references, and that's it. Thank you very much.
G
And I do agree with the basic idea of the dampening.
G
B
D
You have a lot of traffic that enters the ring, and that's exactly what creates the hard-to-calculate latency. And you have a large number of hops — in these rings you may have 20 or more hops in large-scale environments. And so that's why I do think it's hard to think of hardware where you can build it and say, hey, this one isn't doing the per-hop dampening here, and it makes the box cheaper, and there is a guaranteed place where you can deploy it, right.
D
So,
given
how
we
start
thinking
of
okay,
we're
selling
the
boxes,
it
needs
to
be
able
to
do
the
dampening
anyhow.
It's
it's
one
kind
of
implementation
fits
all
places.
Then
comes
the
question.
Okay,
do
I
gain
something
by
not
enabling
the
feature
on
a
particular
hop
and
I
think
it
doesn't
because,
as
soon
as
we
damp
on
a
particular
hop
we
have,
we
are
again
to
the
very
simple
calculus
that
we
have,
which
is
the
latency,
is
just
the
sum
of
the
buffer
sizes
and
I.
Do
think.
D
It
also,
therefore
keeps
it
a
minimum
right.
So
if
we
go
back
to
the
validation
slide
right,
so
if
I
would
keep-
or
is
it
here
right
if
if
I
would
keep
through
router
four,
if,
if
I
wouldn't
do
the
dampening
on
router
4
now
on
the
next
top,
I
would
have
to
try
to
figure
out
what
is
the
bloody
calculus
to
to
capture
the
edit
latency
that
I
have,
in
this
case
kind
of
some
something
like
2800
instead
of
2200
microseconds
for
the
worst
case
flow
right?
D
Why
would
I
get
into
the
trouble
of
doing
that?
What's?
What's
the
benefit
right,
I
need
to
be
able
to
have
in
the
heart
with
a
dampener
everywhere.
I,
don't
expect
that
to
be
an
added
cost
in
you
know
forwarding
packets,
reducing
the
performance,
so
you
know
I,
don't
see
the
benefit
of
not
doing
it.
I
only
see
problems
occurring
with
it.
D
G
What I want to say is that you still have a forwarder, or a queue, with the damper, and the major part of what latency is guaranteed is affected by the forwarder. So if you use just a FIFO, the bound becomes very large, and if you have a more elaborate, or sophisticated, scheduler in front of the damper, then the worst-case latency will be much smaller.
G
So it would be a very good idea to have a nice scheduler — a sophisticated scheduler — in front of the damper, not just a strict priority queue. That's my comment. So my idea is to have a very high-speed network that serves packets as soon as possible, and then just dampens at the network boundary. I think that is much better.
D
Yeah,
but
you
know
I
I
think
so
I
mean
we
saw
if,
if
we,
if
we
have,
if,
if
we
have
the
really
important
ring
topologies
for
metropolitan
areas,
then
we
cannot
calculate
across
multiple
hops,
the
bounded
latency.
If
we're
not
doing
per
hop,
you
know
either
interleaf
Regulators,
ATS
or
dampers
right
or
Cycles
right
any
of
these
per
hop
things
to
make
the
perhap
latency
easily
calculated
in
our
calculus
right.
C
E
G
D
No, I think let's continue that on the list. Thanks very much for the feedback — I think we need to continue that discussion. Okay.
C
Thank you for your presentation. I have a few clarification questions. One of the questions is on one of the slides with the timing models: I think I heard the term "serialization speed"?
D
C
Oh okay, same thing as link speed. Okay, thank you. A second question is on the second or third page here from the back: you mentioned CSQF. What is CSQF?
D
Oh, this is the, you know, proposal with the cycles, which is similar to TCQF, but instead of doing the cycle mapping through a mapping table on the router, the cycle for every hop is indicated in the packet header — for example, through the segment routing header, where it comes easily.
D
I will have a comparison of those in the IETF 117 presentation, yep.
A
Alexander, we can't hear you. Do you have another mute you need to clear?
F
Okay, I just—
F
I just got a response from my colleague and sent it to you, related to the question that was just asked.
A
D
A
B
D
No, no — these are two alternative options, right. So the first option here is: in the PIFO case, we simply need one PIFO per priority, right. So that looks very nice and easy — eight PIFOs. And then, if we don't want to have PIFOs, because we don't yet have the high-speed implementations, we need to have FIFOs: we have one FIFO for the first incoming interface and the first priority and the first prior-hop priority.
D
Then we have another FIFO for incoming interface one, priority two, prior-hop priority — and so on. So, let's say, if you have eight priorities on every hop, it's eight times eight times the number of incoming interfaces. And this is very similar to the way that the TSN ATS FIFOs work: in TSN ATS, you have a FIFO for every incoming interface and every priority. So the new trick that we do for gLBF to make FIFOs work is to also factor in the prior-hop priority here.
B
D
Exactly, exactly. The only thing that happens is that you have the strict priority dequeuing, and it looks at the rank of each packet, which is the pre-calculated TD — the pre-calculation was in the other gLBF slide — so it picks the packet with the earliest TD that has already been reached, right.
D
So you are at some clock — you have a local clock — and you're comparing the TDs at the heads of all the FIFOs, and the first TD that is now reached, of the highest priority, is the FIFO from which you're dequeuing the packet. So it's a simple parallel comparison of these, you know, ranks, as we call them in this land, and that's what's being dequeued. That's proven to be very simple to implement in parallel in every FPGA. And I think it's a good point that you asked this, so I should, you know, redo a slide to show that detail in the slide itself.
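The head-comparison dequeue just described can be sketched in software as follows — hardware would do the comparison across all FIFO heads in parallel. The FIFO keying and all names are illustrative.

```python
from collections import deque

# Software sketch of the timed strict-priority dequeue described above:
# compare the target sending time (TD) at the head of every FIFO against the
# local clock; among the FIFOs whose head TD has been reached, dequeue from
# the one with the highest priority (lowest prio number), breaking ties by
# earliest TD.

def dequeue(fifos: dict, now: float):
    """fifos maps (in_if, prio, prev_prio) -> deque of (td, pkt)."""
    best_key, best = None, None
    for key, q in fifos.items():
        if not q:
            continue
        td, _ = q[0]
        if td <= now:                 # head packet has reached its TD
            cand = (key[1], td)       # (priority, td): lexicographically lower wins
            if best is None or cand < best:
                best, best_key = cand, key
    return fifos[best_key].popleft()[1] if best_key else None

fifos = {
    (0, 1, 0): deque([(12.0, "a")]),  # priority 1, eligible at t=12
    (0, 3, 2): deque([(5.0, "b")]),   # priority 3, eligible at t=5
}
assert dequeue(fifos, now=4.0) is None    # nothing eligible yet
assert dequeue(fifos, now=20.0) == "a"    # both eligible; higher priority wins
```

Since each FIFO is in rank order by construction, only the heads ever need comparing — that is what makes the hardware parallel comparison cheap.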
B
Thanks. On the very last one, I see there are actually two points where you do the measurements: one is behind the timeline T and the other is behind the timeline T2. It reminded me of some previous slides. So why do we need, especially for the later one, after T2 — what are we measuring for here?
D
Oh yeah. So, basically, the thing that we need — first of all, "measurement" simply means you're having a locally free-running clock and, basically, you know, in your forwarding engine you're taking that clock value and doing the simple math with it, right. So "measurement" is a little bit of an overwhelming term for taking the local clock value. And you need to take the local clock value at T for received packets, and then, for forwarding them further out, you need to take reading number T2, right.
D
The whole point: what you want is the difference between T and T2 — that is the time that the packet has actually spent, and you never know that in advance, because the time that the packet spends due to competing traffic is something you cannot predict, and that is exactly what we need to measure, right. So that difference—
D
That's
why
we
need
to
you
know,
measure
you
know
remember
when
the
packet
passes
the
point
t
the
the
clock
there
and
then
we
subtract
that
at
the
point
T2
and
that
way,
we
know
the
time
that
the
packet
internally
has
spent.
However
many
microseconds
it
has
spent
there
and
that's
basically,
then
part
of
the
calculation
when
the
packet
is
being
sent
out.
C
So this — sorry to jump in — this is in order to avoid the synchronization between neighboring nodes, right? Yes? Okay, okay, but—
A
D
G
D
The whole point — and that was explained in the UBS research paper, where, as I said, they do the very same scheme — is that, when you receive packets, you can only put those packets into a single FIFO if they will be removed in the same order in which they arrive, right. You don't want to have the problem that you receive a packet, you calculate when it gets out, and basically you need to insert it somewhere in the queue — in the middle of the queue, right.
D
That's why you need PIFOs. But you can always replace a PIFO with a set of FIFOs when you figure out — hey, wait a second — that the packet belongs to a subset of other packets that will be kept in order, so a new packet that arrives simply needs to be put at the end of that FIFO, and then I don't need to insert it somewhere in the middle. And so this is what we do with the scheme, and the only intelligence we need beyond—
D
G
Well, my question was very simple: different flows may want to get different maximum latencies. So my question was: do they get separate, or different, maximum latencies?
D
The latency calculus of gLBF is still the UBS calculus. So with respect to calculating the latency of a particular flow's packets, it solely depends on which of the priorities it gets on that hop. So let's say each of the eight priorities has a maximum buffer size of, let's say, 10 microseconds. Then a packet in priority one, the highest priority, will have a bounded queuing latency through that hop of 10 microseconds. In priority two it will have a maximum latency of 20 microseconds, which is the, you know, latency of any priority-one packet plus—
D
You
know
the
latency
of
priority
two
right,
so
each
is
is
10
microseconds,
so
every
priority
one
packet
will
be
served
before
priority
two,
so
10,
microsecond
plus
10
microsecond,
makes
20
microseconds
priority
same
thing
again:
10
plus
10,
plus
10
30
microseconds
for
priority
three.
So
that's
that's
a
very
simple
UBS
calculus
and
exactly
the
same
calculus
is
what
glbf
gives
you.
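The per-priority bound just walked through — each priority's per-hop latency is the cumulative sum of the buffer sizes of its own and all higher priorities — can be written out in a few lines. The uniform 10 µs buffer size is the example from the talk; the names are illustrative.

```python
from itertools import accumulate

# The per-priority, per-hop bound walked through above: a packet's worst-case
# queuing latency on a hop is the cumulative sum of the buffer sizes of its
# own and all higher priorities. Uniform 10 us buffers, as in the example.

buffer_us = [10] * 8                     # max buffer size per priority, in us
bound_us = list(accumulate(buffer_us))   # [10, 20, 30, ..., 80]

def hop_bound_us(prio: int) -> int:
    """Worst-case per-hop queuing latency for priority `prio` (1 = highest)."""
    return bound_us[prio - 1]

assert hop_bound_us(1) == 10   # highest priority
assert hop_bound_us(2) == 20   # priority-1 buffer drains first, then its own
assert hop_bound_us(3) == 30
```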
D
G
Thank you. Well, do you have any mathematical proof?
D
F
Yes, I just have the same question. I think I understood the damping slightly differently: according to the damping function, does each packet of the same priority have the same worst-case latency? Perhaps that's the reason, as I understand it.
D
As you see, the maximum latency through the hop is different based on the priority that the packet has on this hop. So if we use the UBS — so this is basically why we need, in the packet header, some metadata to indicate the priority for this hop, right. So a packet comes in here, and in this "measure and dampen" box we will look up the priority, and from the priority we will look up the maximum value, and — sorry, this—
F
Yes, my question is: for all packets belonging to the same priority, do they experience the same worst-case latency per hop? For example, for priority one the worst-case latency may be, for example, 10 microseconds, so all packets belonging to the same priority one would experience the same delay per hop — that is, 10 microseconds.
F
D
Okay — and the interesting aspect, of course, is that when you do have a flexible scheme where you can give a packet a different priority on each hop, you can very nicely fine-tune the end-to-end latency by, for example, saying, hey, you know, I want to have something that's in between the end-to-end latencies of priorities two and three. So you kind of, you know, give the packet priority two on one hop, priority three on the next hop, priority two, three, two, three — so you're kind of dithering between per-hop priorities to create the end-to-end latency.
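The dithering idea can be sketched with the same 10 µs-per-priority example: the end-to-end bound is the sum of per-hop bounds, and alternating priorities 2 and 3 across hops lands the bound between the all-priority-2 and all-priority-3 values. The uniform buffer sizes and names are illustrative assumptions.

```python
# Sketch of the priority-dithering idea above, reusing the 10 us-per-priority
# example: the end-to-end bound is the sum of per-hop bounds, and alternating
# priorities 2 and 3 across hops yields a bound between the all-priority-2
# and all-priority-3 values.

HOP_BOUND_US = {1: 10, 2: 20, 3: 30}   # per-hop queuing bound by priority

def e2e_bound_us(per_hop_prios) -> int:
    return sum(HOP_BOUND_US[p] for p in per_hop_prios)

hops = 6
all_p2 = e2e_bound_us([2] * hops)            # 120 us
all_p3 = e2e_bound_us([3] * hops)            # 180 us
dither = e2e_bound_us([2, 3] * (hops // 2))  # 150 us, in between

assert all_p2 < dither < all_p3
```

This only works because the damper makes each per-hop bound exact rather than a worst case, so the end-to-end figure is an on-time delivery target, not just an upper bound.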