Description
IEEE802.1_TimeSensitiveNetworking TUTORIAL at IETF99
2017/07/16 1345
https://datatracker.ietf.org/meeting/99/proceedings/
C
So what we have today is: after some introduction we dive into the details of TSN, the TSN features like reliability, deterministic latency, and resource management. And after the tutorial, after the summary and questions and answers, we have a brief demonstration of the reliability feature that was recently finished and is almost published, which is also one of the major work items in the DetNet data plane, on how to provide this reliability. So this proof of concept includes both.
C
Actually there is quite wide use-case interest; many areas find TSN interesting. So there are a number of potential markets that can leverage TSN, and this is not a full list. Of course, one of the key ones is industrial automation: in Industry 4.0, TSN is kind of the new networking technology that is planned to be used. Automotive, for in-vehicle networking, is very keen on having Ethernet in the car controlled by TSN tools, and you see many others, including 5G, that can also leverage TSN.
C
There is work ongoing on how to provide Ethernet transport for the fronthaul interface, which is part of 5G. So TSN is actually one of the task groups in the IEEE 802.1 working group, which is part of the 802 LAN/MAN Standards Committee. That committee develops LAN standards and mainly focuses on the data link and physical layers; 802.1, as illustrated in the figure, sits on top of the 802 scope.
C
This green field, do I have a laser pointer here? So 802.1 sits here, and TSN is one of the task groups there. 802.1 is the one that is responsible for the 802 architecture and for the interworking among the different 802 LANs; it also provides security and is also responsible for the overall management and the protocol layers above the MAC.
C
TSN actually started as Audio/Video Bridging, so you may hear the AVB abbreviation. It was started in 2005 in order to address the professional audio and video market, as reflected by the name, including consumer electronics and actually automotive infotainment; AVB is already used in automotive. There is the Avnu Alliance, which is an associated group that provides conformance and marketing for AVB. It got very interesting for other use cases, so the scope of the work has been extended in the task group, as has the name.
C
OK, so what is TSN? What are we after? We are after providing guaranteed data transport with bounded low latency, low delay variation, and extremely low loss. So TSN in fact includes four add-ons to generic networking, or we can call them four pillars. Synchronization is one of them; it's very important and provides the grounds for many other tools.
C
Queuing tools; reliability, which is crucial for mission-critical applications, and we have dedicated standard tools for reliability like frame replication and elimination, and we will see details on this shortly. This is the high-level overview. As even the name of the group suggests, we are after low latency, for which multiple shapers and queuing mechanisms have already been developed and work is still ongoing, together with resource management, which is essential in order to provide zero congestion loss. That's what we are after.
D
We're targeting real-time applications, and by real-time I mean something happens and you have to respond to it, or something horrible will happen, like two parts of a machine banging into each other or the plane crashing. So that means what we care about is worst-case latency. Most of the applications, not all of them by any means, but many of the applications are control loops: I measure something, I respond to it; I measure something, I respond to it and cause an action.
D
Control theory depends very strongly on what the period of your control loop is, and so getting the information there before you need it doesn't help anything, because I have to do the thing at the time it's scheduled to be done. So all that matters is that you get there soon enough.
D
Average, mean, and best-case latencies are completely irrelevant, so that's not what we commonly think of as optimizing, because I'm not optimizing, for example, buffer occupancy. Two fundamental ways to bound your latency: throw away anything that's late, or grossly over-provision the network with lots of intensive engineering and testing. Or provide zero congestion loss; we chose the latter solution. So what am I talking about? I've got several hops, and at each hop there are some queues.
D
The given is a constant input rate, and I should flag here that we can't solve everybody's problems; there's one kind of problem that we are interested in solving, and that's constant-bitrate streams. If you have a widely varying bitrate stream, we can't help you. It's got to be relatively constant, and it has to last long enough that it's worth your effort to make a reservation for resources before you start using it.
D
So assuming you've satisfied that, given that you have a constant input rate, you want a constant output rate; otherwise you're filling up or you're draining. So in the long term the output has to equal the input, and we find that the latency is then necessarily bounded if you're not throwing anything away. So how do we get zero congestion loss? What we do is, at every hop, the packets per interval in equals the packets per interval out, and the data rate in equals the data rate out.
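To make that accounting concrete, here is a minimal Python sketch, not from the tutorial, of a single hop carrying a constant-bit-rate flow. The function name, parameters, and numbers are illustrative assumptions; the point is only that when packets out per interval match packets in per interval, the backlog, and hence the queuing delay, stays bounded.

```python
# Minimal illustrative sketch (not from the tutorial): one hop carrying a
# constant-bit-rate flow. If the hop serves at least as many packets per
# interval as arrive, the queue depth, and therefore the queuing delay,
# never grows beyond the initial burst plus one interval's worth of arrivals.

def simulate_hop(intervals, arrivals_per_interval, service_per_interval, initial_burst=0):
    queue_depth = initial_burst
    worst_depth = queue_depth
    for _ in range(intervals):
        queue_depth += arrivals_per_interval                 # packets in during this interval
        served = min(queue_depth, service_per_interval)      # packets out during this interval
        queue_depth -= served
        worst_depth = max(worst_depth, queue_depth)
    return worst_depth

if __name__ == "__main__":
    # Output rate equals input rate: the worst-case backlog stays bounded (here 3 + 4 = 7).
    print(simulate_hop(intervals=1000, arrivals_per_interval=4,
                       service_per_interval=4, initial_burst=3))
    # Output rate below input rate: the backlog keeps growing interval after interval.
    print(simulate_hop(intervals=1000, arrivals_per_interval=4,
                       service_per_interval=3, initial_burst=0))
```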
D
We use a queuing and shaping discipline that minimizes the interference between flows and also provides predictable gap and burst behavior, so that we can figure out the worst case of the bursts versus gaps at two adjacent hops. That tells us how many packets we have to buffer in each node to ensure that we never miss an opportunity to transmit, so that when it's my flow's turn to transmit, I have a packet there. That means I have to have a buffer of a few packets.
D
So we want queuing algorithms that will give us the smallest number of packets, as that gives us the best latency, but it's got to be predictable; that's the number one thing. And then there are certain known delay variations, for example: it may take a varying amount of time to get from the input port to the output queue, and it may take a slightly varying amount of time to get from the output queue, having been selected for output, until it actually hits the wire. There are lots of variations in there.
D
You have to have a little bit of extra buffer for the variations, because a variation in delivery time means you're storing data; the only way you can have a variation in delivery time is that you're storing data, and when that time varies you wind up dumping the data somewhere, so there has to be a place for it. So having computed the buffer space, we can get the latency. Now, to give you an idea of why we're doing this, consider the usual best-effort kind of service.
D
This doesn't have a... okay. With the usual best-effort kind of service, the more buffers you allocate, the fewer you drop. Where is it, that one? Thank you. Okay: the more buffers you allocate, the fewer packets you drop. If you look at the latency for an average packet, you find there's a minimum; it's got to get to where it's going, so there's a minimum latency. Most packets don't take too long, because the network is typically not congested, but if the network is a bit congested, some packets can take a longer time.
D
Some packets can take a really long time; if there's a topology change, it gets ridiculously long. And the variation in latency, sorry, most packets occur at roughly the same time near that peak, but you can have very long latencies. So if I'm trying to engineer my network, I say this is pretty good, but this is my cutoff: I want this to be my cutoff, and I say I don't want anything to take any longer than that to get to the end.
D
So what we do instead is we try to come up with a scheme such that we know how many buffers it takes, so that I'll never lose one and I never exceed that number of buffers. Even so, I've still got some residual packet loss, because boxes can fail, wires can fail, cosmic rays can hit the packet.
D
You can try to set the timing on your transmitters so that they don't interfere too much; there are lots of things you can do. People do this today, a lot, and they wind up having to spend a very long time testing, especially the corner cases: especially what happens when I stop these three machines and the other two machines are still running. You can make it work, but with TSN it's trivial to engineer, because you ask it, will that work? Yes or no, because the algorithms we use give you a yes-or-no answer.
D
Can you add this stream? It works even when you have the hard-to-test corner cases: shutting off all but one stream does not cause that one stream to suddenly burst or change the way it works. Which makes it cheaper, because your people costs go way down, and because you can use the same network for the TSN traffic and for ordinary traffic.
D
Sometimes you send the packets on both paths. Sometimes you don't, but you notice that it failed and you switch over. This is kind of like that, except what we do is we serial-number every packet and we receive the packets from both streams, or all three streams, or however many you have, and by looking at the serial number we throw away the ones we don't want.
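Here is a minimal Python sketch of that idea, my own illustration rather than the 802.1CB pseudocode: every packet carries a sequence number, copies arrive on two or more paths, and the receiver keeps the first copy of each sequence number and discards the rest. The class name, history length, and example paths are assumptions made up for the sketch.

```python
# Illustrative sketch of sequence-number based duplicate elimination
# (the idea behind frame replication and elimination; not the 802.1CB pseudocode).

class DuplicateEliminator:
    def __init__(self, history=64):
        self.history = history          # how many recent sequence numbers to remember
        self.seen = set()
        self.order = []                 # FIFO of remembered sequence numbers

    def accept(self, seq):
        """Return True if this is the first copy of `seq`, False for a duplicate."""
        if seq in self.seen:
            return False
        self.seen.add(seq)
        self.order.append(seq)
        if len(self.order) > self.history:
            self.seen.discard(self.order.pop(0))
        return True

if __name__ == "__main__":
    elim = DuplicateEliminator()
    path_a = [1, 2, 3, 5]               # packet 4 lost on path A
    path_b = [1, 2, 4, 5]               # packet 3 lost on path B
    delivered = [s for s in sorted(path_a + path_b) if elim.accept(s)]
    print(delivered)                    # -> [1, 2, 3, 4, 5]: the application sees no loss
```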
D
So there is no detect-the-failure cycle and respond-to-the-failure-with-some-kind-of-action; that doesn't happen. We're always taking both paths, and that means it would be unusual to lose even one packet, even if something goes down, even if something goes intermittent. We've all had the experience of a link that starts to degrade: you start to lose more and more packets from it, and at some point, after losing a bunch of packets, you decide it's out of its service parameters, it's giving you too many errors, and now you switch over; in that whole process you have been losing packets.
D
If you look at a system with lots of parts, especially a real-time system with packets flying everywhere, it's extremely, extremely hard to analyze all the failure modes. And these things go into airplanes; they go into automobiles, where such analysis is very, very important, because people can die if you don't do it.
D
For every frame, we can identify which flow it belongs to. We can mark it the usual red, green, yellow. We can also have timed gates that say this port is open for packets of this type, and now it's closed. So we can insist that the brake sensor not only transmits only the brake-sensor packets, but that it only transmits them at the times when it's supposed to, and that it transmits only the one.
D
Okay. Now a brake sensor that runs off at the mouth and starts transmitting continuously can't screw up the rest of the car. There are various ways to deal with that; maybe you turn it off. We have, in our standard, various ways to deal with that: maybe you drop the offending packet, maybe you cut off the offending port.
D
One bad thing about that is that maybe you programmed it wrong and didn't fix that in testing. But on the other hand, when it comes to running off at the mouth, what is it transmitting? Well, that's a good question; that's a lot of failure modes. If it's transmitting too many packets when it shouldn't, then you know something's wrong, and I guarantee your failure analysis did include the case where that thing dies.
D
So if it does anything weird, you shut it down; your failure analysis has covered that, and that's a good thing when you're trying to figure out what's broken. So we can protect against bandwidth violations. These decisions can be per stream or per priority, the gate can be operated on a time schedule, and all of the devices can be synchronized, so the time schedules are running all the time.
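A minimal sketch of that per-stream gate idea, purely illustrative and not the 802.1Qci state machines: each identified stream has a gate that is only open during its scheduled windows within a repeating cycle, and a frame arriving while its gate is closed is dropped (a real bridge could instead disable the offending port). The stream names, cycle length, and window values are assumptions for the example.

```python
# Illustrative per-stream time gate: a frame is accepted only if it arrives
# inside one of its stream's scheduled windows (times in microseconds within
# a repeating cycle). A sketch of the idea, not the 802.1Qci mechanisms.

CYCLE_US = 1000  # length of the repeating schedule cycle

# stream id -> list of (open, close) windows within the cycle
GATE_SCHEDULE = {
    "brake_sensor": [(0, 100)],        # may transmit only in the first 100 us
    "infotainment": [(100, 900)],
}

def gate_open(stream_id, arrival_time_us):
    t = arrival_time_us % CYCLE_US
    return any(open_t <= t < close_t for open_t, close_t in GATE_SCHEDULE.get(stream_id, []))

def ingress_filter(stream_id, arrival_time_us):
    if gate_open(stream_id, arrival_time_us):
        return "accept"
    return "drop"        # or disable the offending port, per local policy

if __name__ == "__main__":
    print(ingress_filter("brake_sensor", 2050))   # 50 us into the cycle -> accept
    print(ingress_filter("brake_sensor", 2500))   # 500 us into the cycle -> drop
```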
C
OK, so after the reliability aspects, which include protection at the ingress, let's see what happens in the other box, at the output port: the queuing mechanisms, in order to provide deterministic latency. I guess you are all familiar with strict priority queuing; it was standardized back in 1998.
C
So that was the basic queuing mechanism we had, and then weighted queues were added, with simple hooks, in order not to over-specify the details. And this is the first mechanism that was actually specified by the AVB task group for audio/video bridging streams at the time: it is called the credit-based shaper. I will explain on the next slide why the name. So, the shaped queues...
C
Oops. The shaped queues are these; that is the marking for the shaped queues in these diagrams. The shaped queues have higher priority than any other queues, and this mechanism still guarantees bandwidth for the highest priority that is not shaped, like priority 7 in this example. The credit-based shaper is similar to a typical rate-limiting shaper, but with very nice properties.
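To show the behavior being described, here is a small Python sketch of a credit-based shaper; it is an illustration of the behavior, not the normative 802.1Q algorithm, and the slopes, frame times, and function name are assumptions chosen for the example. Credit drains at send_slope while the shaped queue transmits and recovers at idle_slope otherwise, and a frame may only start when credit is at least zero, which spreads a burst out at the reserved rate.

```python
# Behavioral sketch of a credit-based shaper (an illustration, not the
# normative 802.1Q algorithm). Credit is spent at send_slope while a frame
# is transmitted and recovers at idle_slope afterwards; the shaped queue may
# only start a frame when its credit is at least zero.

def credit_based_shaper(frame_tx_times, idle_slope, send_slope):
    """frame_tx_times: per-frame transmission time; returns each frame's start time."""
    credit = 0.0
    now = 0.0
    starts = []
    for tx_time in frame_tx_times:
        if credit < 0:                       # wait until credit climbs back to zero
            now += -credit / idle_slope
            credit = 0.0
        starts.append(now)
        credit += send_slope * tx_time       # send_slope is negative, so credit drains
        now += tx_time
    return starts

if __name__ == "__main__":
    # Three back-to-back 10-unit frames with 25% reserved bandwidth:
    # idle_slope = +0.25, send_slope = idle_slope - 1 = -0.75.
    print(credit_based_shaper([10, 10, 10], idle_slope=0.25, send_slope=-0.75))
    # -> [0.0, 40.0, 80.0]: one frame per 40 time units, i.e. the 25% reserved rate.
```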
C
So that's the AVB queuing, if you have heard that term, and what I'm explaining from now on was specified as TSN; well, it's all one package together today. This is called scheduled traffic, which introduces time-based scheduling to the queues. It is there to reduce the delay variation for CBR streams, which are periodic with known timing.
C
So it gives a time schedule, and of course you need to know the time, so time synchronization is needed. You can establish time slots, or you can do quite fancy things with this tool, the time-gated queues. For example, the time gates are the basic tool we use for cyclic queuing and forwarding, which in essence uses double buffers to establish cycles in the network, and the goal is that each packet spends exactly one cycle at each hop. This example shows two pairs, so queues four and five are one pair.
C
So what happens is that the frames you receive are collected for a certain time period in one of the queues, and then they are transmitted while the collection happens in the other queue, so that the two queues are served in an alternating fashion. That's what provides the cyclic queuing, and in order to achieve this alternating operation, oops, we use the time gates in front of the queues. So actually, we program the time gates according to the cycle time we want to achieve, as I described above.
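A tiny Python sketch of that double-buffer (ping-pong) idea, purely illustrative: during each cycle one buffer collects arriving frames while the other is drained, and the roles swap at the cycle boundary, so each frame spends exactly one cycle at the hop. The function name and the example frames are assumptions for the sketch.

```python
# Illustrative sketch of the double-buffer (ping-pong) idea behind cyclic
# queuing and forwarding: frames received during cycle N are sent during
# cycle N + 1, so every frame spends exactly one cycle time at this hop.

def cyclic_queuing(arrivals_by_cycle):
    """arrivals_by_cycle: frames arriving in each cycle; returns frames sent per cycle."""
    collect, transmit = [], []                   # the two queues of the double buffer
    sent_by_cycle = []
    for arrivals in arrivals_by_cycle + [[]]:    # one extra cycle to flush the last buffer
        sent_by_cycle.append(list(transmit))     # drain what was collected last cycle
        transmit.clear()
        collect.extend(arrivals)                 # gather this cycle's arrivals in the other queue
        transmit, collect = collect, transmit    # swap roles at the cycle boundary
    return sent_by_cycle

if __name__ == "__main__":
    print(cyclic_queuing([["f1", "f2"], ["f3"], []]))
    # -> [[], ['f1', 'f2'], ['f3'], []]: each frame leaves one cycle after it arrived.
```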
C
Actually, all of this, everything up to this point, is finished work, published or close to becoming final standards. The next slide is ongoing work. It is called asynchronous traffic shaping, and it aims to provide zero congestion loss without time synchronization. What happens is that we aim to smooth the traffic pattern by shaping per hop, and give priority to urgent traffic over more relaxed traffic. This figure illustrates it.
C
So what you see here, these are the classic queues. You can see it as a kind of hierarchical queuing: up front, before those, we use the ingress filtering and policing tools we have specified, which provide lots of possibilities. One of them is to apply asynchronous traffic shaping. That's where the decision is made about which flow or which packet is more urgent than the other, and after making that decision they get into the regular eight queues.
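As a rough illustration of that kind of per-flow decision made before packets reach the ordinary queues, here is a minimal Python sketch of a per-flow regulator that computes an eligibility time from a committed rate, so an over-eager flow is smoothed out rather than bursting into the shared queues. This is only a sketch of the idea, not the asynchronous traffic shaping algorithm from the standard; the class name, rate, and packet sizes are assumptions.

```python
# Illustrative per-flow eligibility check in the spirit of the per-flow
# decision described above (not the standardized ATS algorithm). Each flow
# gets an eligibility time computed from its committed rate before its
# packets are handed to the egress queues.

class FlowRegulator:
    def __init__(self, rate_bytes_per_s):
        self.rate = rate_bytes_per_s
        self.next_eligible = 0.0     # earliest time the flow's next packet may be released

    def release_time(self, arrival_time, length_bytes):
        """When may this packet be handed to the egress queues?"""
        release = max(arrival_time, self.next_eligible)
        self.next_eligible = release + length_bytes / self.rate
        return release

if __name__ == "__main__":
    reg = FlowRegulator(rate_bytes_per_s=1000)
    # Three 100-byte packets arriving back to back get spaced 0.1 s apart.
    print([reg.release_time(t, 100) for t in (0.0, 0.0, 0.0)])   # -> [0.0, 0.1, 0.2]
```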
E
So when we have the scheduled traffic, that scheduled traffic is like rocks: during the transmission time, those rocks can't move, they always happen in that spot. Now we would like to handle the non-scheduled traffic, the traditional network traffic, as efficiently as possible, and use the bandwidth between the rocks to send as much of that traffic as we can. Now, if we just leave things as they are and send them, we'll have the situation shown here.
E
That's void. Okay, so here we've sent packet one in this gap, and then we have packet two to be sent, but it's a little too large to fit in the remaining gap, so this bandwidth is wasted. Then it doesn't fit in this gap either, and so the bandwidth is again wasted. And finally, here, we get a gap big enough to send packet two. That's without preemption.
E
When we add preemption, we let the traditional traffic act like sand between the rocks. So in this case it's okay that there's not enough room to finish sending packet two: we can start it here and use the bandwidth, and then, when the scheduled rock finishes, we can finish sending packet two and start sending three, and again continue it after the next scheduled rock. So that's why we did it: so we could efficiently use the bandwidth while having the scheduled traffic in there and the traditional traffic sharing the same links.
E
So we also put in something called hold and release. Now, it's not needed all the time. Preemption doesn't happen the minute you say "I want to preempt the packet that's going." For one thing, in order to be compatible with all the physical layers out there, and have them not have to know that preemption was going on, we wanted the fragments that we create to look as much like regular packets to the physical layer as possible, and that included keeping the minimum size of 64 bytes.
E
If it's only 64 octets, then we can't preempt it, because we have to be able to divide it into two pieces. Actually, it has to be 128 in order to preempt it, because we have to be able to get to 64, and then the extra CRC we add counts toward that, which is why it comes out to being 124. And in many use cases we can stand that amount of delay.
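Here is a small sketch of the size arithmetic just mentioned: a frame in flight can only be preempted if both the fragment already on the wire and the remainder can still be valid minimum-size fragments. The 64-octet figure comes from the discussion above; the function and the byte counts in the examples are illustrative assumptions, not a restatement of the 802.3br rules.

```python
# Illustrative check of when a frame in flight can still be preempted: both
# the fragment already sent and the remaining fragment must each be at least
# a minimum fragment size (64 octets in the discussion above).

MIN_FRAGMENT = 64

def can_preempt(frame_length, bytes_already_sent):
    first_fragment_ok = bytes_already_sent >= MIN_FRAGMENT
    remainder_ok = (frame_length - bytes_already_sent) >= MIN_FRAGMENT
    return first_fragment_ok and remainder_ok

if __name__ == "__main__":
    print(can_preempt(frame_length=64, bytes_already_sent=32))    # False: too short to split
    print(can_preempt(frame_length=200, bytes_already_sent=40))   # False: first part still too small
    print(can_preempt(frame_length=200, bytes_already_sent=100))  # True: both parts >= 64 octets
```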
E
We don't need to do anything special: when we want to send an express packet, a packet that can preempt the preemptable traffic, we just send it and live with the fact that that doesn't happen instantaneously, that we have to get to the end of the current packet, or get to the point where we preempt and have an inter-frame gap. But for the most tight timing, where we want to really minimize the amount of jitter on the scheduled traffic, we have the hold and release.
E
So we know the scheduled rock is coming up. Basically, we have a guard band of time before that happens in which we send a primitive called hold to the MAC, and the MAC will then go ahead and preempt, even though we aren't giving it the scheduled packet yet. It will go ahead and preempt, so that then, when the scheduled packet arrives, we will transmit it right away.
E
We have the MAC merge sublayer, which is a shim sublayer that basically takes the output of these two and does the actual preemption when necessary. And then below that we have the PHY, which is a regular PHY and doesn't have to know anything about the preemption going on. I'm not going to go into the details of this; these are the structures and formats we made for handling it, and basically the principle here was to make the fragments look, again, as much like a regular packet as possible.
E
So the physical layer doesn't see anything other than what it expects to see, and also, to the extent possible consistent with that first goal, to minimize the amount of bandwidth taken when you fragment a packet. So basically the encapsulation is designed so that if you're not fragmenting, there's no extra bandwidth needed: any time you don't preempt a packet, it takes the same amount of time on the wire as it would if there were no preemption on the link at all. And then when we do preempt, we have some extra bytes.
E
We need those for protecting the data integrity, but we keep them down to a minimum. Basically, the data integrity goal is to keep the same data integrity that we have without preemption operating. So we make sure that there are some bytes in there that keep us from reassembling fragments incorrectly into a Franken-packet, a packet that had a missing fragment or had fragments from parts of two packets, so that we keep the same level of protection as the Ethernet CRC gives us on unpreempted traffic.
C
Okay, so after these, let's take a look at the fourth pillar, the resources. What do we have on that front in TSN? We have a document, Qcc, that describes the TSN configuration and the three approaches we can have in TSN. One of them is the fully distributed model, which actually means you use the distributed protocols for resource reservation. Another is the centralized network / distributed user model, where the bridges, the network nodes, are under the control of a central entity, called in this document the centralized network configuration entity, but the end stations, listeners and talkers, are not under its control; there is some protocol exchange or some information exchange between the end stations and the network over a user/network interface. This document provides an information model and YANG for TSN configuration.
C
As I mentioned, we have a standard protocol, SRP, for the reservation of resources. This is a distributed protocol that advertises the streams, registers the paths for the streams, even calculates the worst-case latency end to end, and establishes the domain, the AVB domain as it is called, that can be used by the streams that have the same characteristics or setups in the bridges, and ultimately, of course, reserves the bandwidth for the streams.
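A minimal sketch of the kind of yes/no admission decision this enables: a reservation succeeds only if every hop on the path still has enough unreserved bandwidth for the stream, otherwise the whole request is rejected. This illustrates the idea, not the MSRP protocol machinery; the port names and bandwidth numbers are assumptions.

```python
# Illustrative stream admission along a path: each hop either has enough
# unreserved bandwidth for the new stream or the reservation fails as a whole.
# A sketch of the yes/no answer mentioned earlier, not the MSRP protocol itself.

def reserve_stream(path_ports, available_mbps, stream_mbps):
    """path_ports: ordered list of port names; available_mbps: remaining bandwidth per port."""
    if any(available_mbps[p] < stream_mbps for p in path_ports):
        return False                          # reject: some hop cannot carry the stream
    for p in path_ports:
        available_mbps[p] -= stream_mbps      # commit the reservation on every hop
    return True

if __name__ == "__main__":
    remaining = {"bridge1:p2": 40.0, "bridge2:p1": 10.0}
    print(reserve_stream(["bridge1:p2", "bridge2:p1"], remaining, stream_mbps=8.0))   # True
    print(reserve_stream(["bridge1:p2", "bridge2:p1"], remaining, stream_mbps=8.0))   # False: bridge2:p1 exhausted
```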
C
So this is what we planned to talk about, and we have some more stuff going on and available that we have no time to talk about; just to flash it: time synchronization, which I mentioned in the beginning, is very important. For example, for the time-gated queues you have to know the time to program your schedule, or for the cyclic queuing, and we have a standard for that.
D
But you start all over again if you shut down and start over again, very much like that. The other is time-based, and the time-based ones, with synchronized clocks, can deliver the packet within a fixed time window that's the same for all flows coming out of that port, or it can be down to a nanosecond.
H
Two questions. The first one has to do with the source and the destination, which, if I got it right, are excluded, and many times, I don't know what the assumption is for what type of devices those are, but many times a significant amount of jitter and latency can occur at both ends. So how is that actually addressed here in the overall scheme?
C
So one of the use cases is industrial automation, and in those cases the sources and destinations are controllers and actuators, and the control flow is a CBR flow for them, for the control loop between those. So that is very periodic, and that's why we can have these time-based mechanisms in the queues. Okay.
H
Still, at the wire level I'm going to have that interference no matter what. And so, is that something that you are interested in having the IETF work with you on, or what are the areas that you're looking at? Because IEEE obviously is limited at some point, and the question is exactly as you said: if I were to build a product that has that, I am obviously not going to be satisfied with the number eight. So how are we dealing with that?
E
So realize that the per-stream part, the per-stream filtering and queuing, can be operating, and that's not limited to the eight queues; that's looking at the stream characteristics to figure out which stream the flow is in. So yeah, the final getting onto the network is in the priority queues, but there's also this per-stream level.
E
So, now, on to DetNet. The purpose of DetNet is: obviously there are a lot of networks that are not purely layer 2 networks, and we still want to provide these kinds of capabilities over layer 3 networks as well as over layer 2 networks. So the purpose of DetNet is to provide the capabilities to handle this kind of traffic, and to set up the reservations and the flows and such, so that we get these behaviors over layer 3 networks and not just over layer 2.
C
Yeah, thank you, and I think what is coming is actually DetNet as well. As I already mentioned, in the DetNet data plane one of the hot topics was how to provide this frame or packet replication and elimination for reliability. So what we have here is a proof-of-concept demo showing both the layer 2 and layer 3 data planes and the packet, or frame, replication and elimination feature for reliability. So we focus on the reliability aspect.
C
In this case, at layer 2 it is frame replication and elimination for reliability, so the 802.1CB specification provides you the mechanism; it actually includes pseudocode and provides the layer 2 data plane details. And the DetNet document called the data plane solution, linked here, provides the layer 3 data plane aspect, and in that document it is called the packet replication and elimination function.
C
So what you can see here is remote control of a balancing robot. There is a LEGO robot down there, and the control logic, the balancing logic, has been moved away, actually to that laptop computer. We have implemented the replication and elimination function on PCs; actually they are the two tower PCs.
C
When we take a look at the layer 2 data plane, the controller sends VLAN-tagged traffic to the edge node, and this CB mechanism adds the so-called R-tag, which includes the sequence number for the replication and elimination. At layer 3, if the layer 2 network is used to provide a TSN service, we have the same end host, and then at the UNI we have the same format in the case of an MPLS data plane.
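To complement the elimination sketch earlier, here is a minimal illustration of the replication side: the edge node stamps each frame with a sequence number (carried in a redundancy tag in the layer 2 case) and sends one copy over each path. The field layout, class name, and path names are stand-ins, not the actual R-tag or MPLS encoding.

```python
# Illustrative replication side: stamp each outgoing frame with a sequence
# number and send one copy on every configured path. The tag layout here is
# a stand-in, not the actual 802.1CB R-tag or the MPLS encoding.

import itertools

class Replicator:
    def __init__(self, paths):
        self.paths = paths                       # e.g. two disjoint paths
        self.seq = itertools.count()

    def send(self, payload):
        seq = next(self.seq)
        tagged = {"seq": seq, "payload": payload}               # R-tag-like sequence number
        return [(path, dict(tagged)) for path in self.paths]    # one copy per path

if __name__ == "__main__":
    rep = Replicator(paths=["path_a", "path_b"])
    for copy in rep.send("brake command #1"):
        print(copy)
    # -> ('path_a', {'seq': 0, ...}) and ('path_b', {'seq': 0, ...})
```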
C
This is just a Wireshark capture on one of the wires. So let's see the demo part. The first scenario is when we have a link failure: if we use a protection switching mechanism with the working tunnel / protection tunnel scheme, then we can get 50-millisecond failover; there are many, many mechanisms for that, and the application will be impacted. When we turn on the replication and elimination, then actually the packet loss is eliminated. So, let's see.
I
It may be unusual to the IETF, but it's a very classical metric for applications in industrial control. Oops. The number of losses in a row is very critical: usually they will let you lose a packet, but if you lose three or four packets in a row, they will stop the production line, and if they have to stop the production line like twice in a year, your hardware is on the curb.
C
Yes, so let's see the other failure scenario, which we could call link flapping, when the link goes up and goes down. If it is twenty milliseconds, as illustrated in the figure, then the monitoring mechanism you use for the classic protection switching doesn't even detect it, but, as was just explained, that can be critical for a control loop; it's very sensitive. So it's not even that easy. Let's start it.