From YouTube: IETF102-DETNET-20180716-0930
Description
DETNET meeting session at IETF102
2018/07/16 0930
https://datatracker.ietf.org/meeting/102/proceedings/
A: Hopefully we're all in the right place. This is a little bit of an experiment: we're running from the IETF-provided Chromebook. Hopefully that'll go smoothly, but if it doesn't, just bear with us. I'm Lou Berger, and I have the honors here. Next to me — we have a couple of changes on the secretary side. As we discussed at the previous meeting, Ethan Grossman volunteered to be a secretary, so he's here, and we really appreciate his help. Jouni was our secretary from the beginning. He had a position change which means that he's no longer regularly coming, and that has prevented him from continuing in that role. We appreciate his service to us; he's now being faded out — actually, he's officially no longer secretary, so it's just Ethan. Thank you. Our slides and agenda are in the usual place.
A: If you look at the agenda, you'll see right at the top there's a link to take you over to the Etherpad. Please jump on that and help us out; I'll mention that a couple of times. Since it is the first session of this meeting, it's good to note our Note Well. It changed at the last meeting; there's been no change this meeting. The key thing to take away is that anything you say here in the working group, and anything you say on the lists, publicly becomes part of our permanent record.
A: That's available to all. Please feel free to look at that link — ietf.org, about, Note Well — if you're not familiar with it; it's a good thing to know, since it's sort of the ground rules by which we operate. We have video recording, we also have audio and Jabber, and we've been doing joint note-taking on the Etherpad. Please jump on that link; it's really helpful to get as many people as possible contributing to that.
A: Also, if you do speak at a mic and want to make sure that your name is captured properly, as well as that your comment is captured appropriately, you can immediately go in and make sure that the notes are right. That's really helpful, particularly on the names. And I do remind everyone to speak their name clearly when they come to the mic, so that we can get it into our minutes.
A: We follow a process that other working groups follow as well, which is that before adopting a new working group document, as well as before progressing to publication requests, we do a public IPR call and ask that all authors and contributors respond, and do so in a timely fashion. If you don't respond, it slows the process down.
A: So where do we stand in terms of our documents? We have two documents that are post last call: use cases and problem statement. The use cases document is ready to go — I'm the shepherd on it — and the main reason I haven't hit "submit" is that I'd like to time it to go forward with the architecture document, so I'm just waiting for that. On the problem statement, I think there was one more change. Yes.
A: Certainly we expect these documents to move forward in pretty short order, and well before the next IETF. On the architecture document, we had some discussions and ended up with enough technical changes that we wanted to do a second last call. Because of the timing of the meeting, we did an extended last call — a three-week last call — that now ends on Friday. This is on the agenda today, and they'll review the changes, as well as any planned additional changes. We have several working group documents on the agenda.
A: You'll note that the solution documents — the data plane solution documents — are, I believe, still -00 documents; they came in as working group documents in their first rev. The reason for that is that they're a split of an existing working group document. Our normal process is that we do polls for -00 documents; we didn't, because it was a breakup of an original one.
A: The only document that we have as a working group document that's not on the agenda is the security document. There was an update sent to the list, and basically it's waiting to be aligned with the solution documents. It would be unfortunate to publish a document and then have a change in the actual solutions that we define force us to rev it.
A: We don't have a YANG model yet. We do have an individual YANG model that was discussed at the last IETF, and it's on the agenda for this IETF; we had some pretty good support for it at the last meeting. We're hoping that at the end of this meeting we'll talk about adoption and the room will feel it's in good shape — but if it's not, obviously we should wait until those issues are addressed. So this is an important discussion that we're going to have.
B: The November meeting is in a unique situation: we have IETF 103 first, and after that the IEEE 802 plenary meeting the following week, right after the IETF — and there are multiple activities relevant for both groups. In case you did not know, there is a coordination team between the two SDOs; we meet regularly, and we want to leverage this opportunity to use the weekend in between the two meetings for joint workshops.
B: It seems that most people are available on Sunday, November 11th; that's our target date for the workshop — the Sunday closer to the IETF meeting starting on Monday. We are arranging the meeting rooms. We had the IEEE plenary meeting last week and have been working with the relevant people on the meeting room; it looks promising that the IEEE can cover the meeting rooms. There will be a registration website, so that we are aware of who is coming and can prepare badges and so on.
A: Yeah, this is a really great opportunity for us to better synchronize the efforts that are happening here in the IETF and in TSN — in 802.1, with TSN — as well as the IEC work, so it's a really nice opportunity to take advantage of. We think there are approximately 30, maybe 40, people that expressed interest in it.
A: We had a Doodle poll that went out on our list as well as the IEEE list, so we think it'll be a very good information exchange and help both efforts move forward — so stay tuned for additional information. Oh, and I should say we really have to thank the IEEE, because they're the ones who are going to be providing the space without charge — well, we hope, we expect, without charge to any of the participants — and we really appreciate that.
B: Perfect. So we had an initial working group last call, and there have been a number of changes, so we actually have an extended working group last call going on. The initial last call was on -05, and the current draft is -06, on which the last call is still open; as Lou mentioned, it closes this Friday.
B: This slide summarizes the main changes from the initial last call; I will not go into the fine details. One of the main changes due to the initial last call is that we have introduced a new QoS parameter, which we call the maximum allowed misordering. So this is the area of out-of-order delivery.
B: This has been discussed previously, and touched upon in other documents like the flow information model: some applications cannot tolerate any kind of out-of-order delivery at all, but there are some applications that have a certain level of tolerance, and what we want to do with the maximum allowed misordering is to capture that. This is related to the delay variation.
B: Out-of-order delivery causes jitter as well. So what can cause out-of-order delivery? If one uses restoration techniques, then of course any change in the topology, and the resulting update of the forwarding paths, can cause it; or, if one uses service protection with pre-established explicit routes, then the change between the explicit routes can cause out-of-order delivery. Actually, the tools that we can apply to overcome out-of-order delivery — to mitigate the misordering or to reorder packets — are similar to what we can do for jitter.
B: For jitter, for example, we can have a playout buffer that reorders packets. I already mentioned service protection; there have been changes in the service protection description as well. Here the idea was to make the description more generic. In previous versions, the service protection text was very focused on packet replication and elimination, but this is the architecture document, which should give the higher-layer architectural concepts and not describe the details of the technical solution — so it has been brought to a bit higher level.
B: The service protection term itself is there to address packet loss due to equipment failure, media access failure, memory faults, and so on. So there is congestion protection, to address all kinds of queuing-type aspects, and service protection is for equipment failures.
B: When we apply a service protection technique, we most often distribute the data over multiple paths — typically disjoint or maximally disjoint paths — and, as I mentioned before, a number of techniques already exist for service protection, for example linear protection and so on. Depending on the requirements of the data flows, or of the DetNet service, there are multiple choices, and one can choose which technique is used on top of the existing techniques.
B: The architecture document describes, at a high level, a new technique which we call packet replication and elimination. This can be implemented — as described in detail by the data plane documents, like the MPLS document — in order to provide 1+1, hitless-protection types of service protection. So the architecture document describes the high-level functions, and the solution details are described in the solution documents, like the MPLS data plane solution. As for the high-level functions of packet replication and elimination, we distinguish three different functions.
B: The first one is the replication function, which replicates packets to be sent along the disjoint paths. On the other side, we have the packet elimination function, to keep only one of the copies of the packets of a flow, send it to the destination, and eliminate the duplicates. And the third one is the packet ordering function, to reorder packets in order to meet the maximum allowed misordering QoS parameter I described at the beginning. So these are the abbreviations we use for those, and all together we use a collective term.
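The three functions just described can be sketched in a few lines of Python. This is a minimal illustration, not the mechanism from any DetNet draft: it assumes each packet of a flow carries a sequence number (as the data plane solutions provide), and all names and the bounded-history policy are made up.

```python
# Illustrative sketch of the elimination (PEF) and ordering (POF) functions
# described above. Assumes sequence-numbered packets; not from the draft.

class EliminationFunction:
    """Forward the first copy of each sequence number, drop duplicates."""
    def __init__(self, history=1024):
        self.seen = set()      # sequence numbers already forwarded
        self.history = history

    def accept(self, seq):
        if seq in self.seen:
            return False       # duplicate copy from another path: eliminate
        self.seen.add(seq)
        if len(self.seen) > self.history:
            self.seen.remove(min(self.seen))  # bound state (simplistic)
        return True            # first copy: forward toward destination

def ordering_function(packets):
    """Reorder buffered packets by sequence number before delivery."""
    return sorted(packets, key=lambda p: p["seq"])

pef = EliminationFunction()
arrivals = [1, 2, 2, 3, 1, 4]   # copies arriving over disjoint paths
delivered = [s for s in arrivals if pef.accept(s)]
# delivered == [1, 2, 3, 4]
```

The replication function is the trivial counterpart: send a copy of each packet on every member path.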
B: We call them PREOF, covering all these functions with one abbreviation. I need to highlight here that the order in which these functions are applied within a node is out of scope — that's implementation specific. Every implementation has the freedom to choose which order is applied within a particular node.
B: There have been updates to the text related to explicit routes. As I mentioned before, out-of-order delivery can be a side effect of distributing the flows over multiple paths, and especially if someone uses restoration: any change in the topology that the restoration method reacts to can cause out-of-order delivery.
B: There are two comments, or two aspects, that we are discussing on the list — there is no updated document revision yet; we are trying to conclude what changes to make. One of the comments was related to active protection: that the architecture document should capture that active protection methods require additional bandwidth. At first sight it seems natural, or obvious — yes, if you want to protect your traffic, then you need to have the bandwidth for that — but it seems that it's good to capture it in the document.
B: So the text you see on the screen is the proposed text we will most likely add to the corresponding paragraph, and there may be smaller changes in other paragraphs to capture this. The second one is that sections 4.4.2 and 4.4.3 talk about the control plane, and the discussion on the email list is to clarify some aspects. To me it's just clarification, and a choice of the proper acronyms to be used, but we need to —
A: Yeah — the abbreviation that was used is a well-known acronym according to the RFC Editor, as was noticed, so it's good to avoid that. The proposal I think is on the table — and to be clear, I proposed it — is, rather than change the name, let's just call it "CP" or "CP entity" or something like that. It's not used in a whole lot of places.
A: And the discussion that was on the list is that that implies too much a physical box, while from an architecture standpoint the control plane can be fully centralized on a box, or fully distributed across all the network components — you know, the devices. It seems a little constrictive. Yeah, well —
F: I was concerned by the term "entity", actually, because it seems to relate to — it might be my wrong English — but as I read it, it seems to be a single thing, like a device or a single function. And actually it's a number of functions, and they may be distributed: each function may be distributed, or at least this function may be implemented here and that function may be implemented there.
B: Okay, thank you. So this is an update on the flow information model. Before going into the details, I need to highlight that the group has been mainly working on finalizing the architecture and putting the MPLS and the IP data plane solution drafts on the table in solid form — and this was a significant effort across these three documents from the previous IETF to this one.
B: So there has not been much progress made on the flow information draft itself yet. What we have done so far is analyze what changes we foresee, and what work is to be done, as a consequence of the updates in these other documents — the architecture and the two data plane solution documents. So this presentation is just to list the changes we foresee and what we have on our to-do list due to the changes in those other documents. As I just explained for the architecture:
B: There is this new attribute, the maximum allowed misordering. In the flow information model we have this attribute, but it's a binary attribute: in the current -01 version, the flow either does not care — so it can tolerate any kind of out-of-order delivery — or it has zero tolerance of misordering. I did not mention before that zero is a valid value for the maximum allowed misordering; it means that the particular flow cannot tolerate any kind of out-of-order delivery.
B: So the plan here is to expand this attribute into a more refined one, to be able to capture the maximum allowed misordering. We have been discussing with a couple of contributors how to do it. Theoretically, there are two approaches we can take: we can use a maximum-number-of-packets type of parameter to capture this, or we could have a time-related parameter. As I mentioned before —
B: The maximum allowed misordering is related to the jitter, and we have the timing parameters captured in the other attributes already. So, at least for some of us, the view — or the proposal — is to capture the maximum allowed misordering as a number of packets. Pascal Thubert has put initial text on the table for this, which I've fine-tuned a little bit.
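One plausible way to count misordering as a number of packets is to count each packet that arrives after a higher sequence number has already been delivered. This is only an illustration — the exact metric for "maximum allowed misordering" was still under discussion on the list at the time — and the function names are made up:

```python
# Illustrative only: one candidate definition of a packet-count
# misordering metric; not the definition from the draft.

def count_misordered(seq_numbers):
    """A packet counts as misordered if a higher sequence number
    was already delivered before it arrived."""
    highest = None
    misordered = 0
    for s in seq_numbers:
        if highest is not None and s < highest:
            misordered += 1      # this packet was overtaken
        else:
            highest = s
    return misordered

def conforms(seq_numbers, max_allowed_misordering):
    # max_allowed_misordering == 0 means zero tolerance, as noted above
    return count_misordered(seq_numbers) <= max_allowed_misordering

print(count_misordered([1, 2, 4, 3, 5]))   # 1: packet 3 arrived after 4
print(conforms([1, 2, 4, 3, 5], 0))        # False: zero tolerance
```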
B
Further
discussion
will
be
done
in
the
in
the
list
and,
of
course,
you
will
see
the
updates
in
the
next
draft,
which
is
for
comments
and
discussions.
We
need
further
refinements
in
the
attributes.
You
have
seen
a
discussion
on
on
the
jitter.
For
example,
I
don't
have
a
particular
proposal
on
the
table
for
that
yet,
but
this
is
coming
and
actually
it
was
pointed
out
that
we
should
improve
the
consistency
within
within
the
draft
for
the
different
views
we
are
capturing.
B: On the other side, the network provides a service, so for the network it is the service parameters that stand for the network view, and we have the interface in between the two — the user-network interface, with signaling and so on. These parameters have to match on the two sides, so we are able to revise this and make the necessary changes to make it absolutely consistent.
B
The
other
thing
I
mentioned
before
that
we
need,
when
revising
the
flow
information
draft,
we
need
to
keep
in
a
mind
to
provide
a
consistency
amond
among
the
other
drafts
or
with
the
other
drafts,
like
the
architecture.
The
data
plane
solutions
and
we
need
to
care
special
care
is
needed
towards
the
end
draft,
because
the
yank
in
draft
that
is,
provides
the
data
matter
model
for
the
flow
information,
so
that
that
is
clearly
a
synchronization
between
two
two
graphs
is
needed,
and
we
will
come
back
to
that
during
the
presentation
as
well.
B: That's true. So, for example — and this is what I was thinking of — just taking a look at the topology: the basis could be a traffic-engineered topology, but on top of that we need to know which DetNet functions — like the replication, elimination and so on — each node is capable of implementing, or things like that. So we need some augmentation to add on top of traffic engineering, and that's the purpose here as well: on top of traffic engineering, there are the DetNet service-related and flow-related attributes.
A: Excuse me, just before you go away: also, ACTN is very much focused on control architecture, and right now the current document is more focused on the data plane and the data plane implications. We really haven't had discussions about how that's going to be controlled, and that's something we should start talking about in this working group — but we have to not get distracted from delivering on our core deliverables.
A: So once we have a YANG document, and once we've made good progress on that, we should start having the discussion of what the control techniques are — how we want to control DetNet, and its relationship to the existing toolkit — and ACTN certainly fits well. Thank you. I have one from Jabber; this is from Greg Mirsky: "We may discuss the out-of-order performance metric with the IPPM working group." Okay.
A: All right, we're going to try to have a remote presentation right now. Well — you should be able to enter the room.
J: Is that better? Yes? Okay, then let's go to the next slide, which just shows some history of how these two data plane documents came about. Previously we had a single document dealing with both IP and MPLS PSN networks — how to provide the DetNet data plane over them — and there was plenty of discussion.
J: We also had some individual drafts describing the results of those discussions, and at the last meeting we decided: let's split the document into parts, and let's have dedicated documents for the IP and MPLS data planes, in order to describe all the details. These drafts are still initial versions — version -00. They capture the major concepts, but there are still open items in them.
J: So all contributions are, of course, welcome, and we also have to work a little bit on the language, in order to have the RFC 2119 conformance statements in the document. So let's look at what is inside, and let's jump first to the IP DetNet data plane. On the next slide we have summarized the basic characteristics of the IP data plane solution: this is for IP hosts and routers that provide the DetNet service to IP-encapsulated data.
J: No sequencing information is added or inspected — this is why you can see, displayed here, that in the IP DetNet solution it is the transport layer that provides the congestion-control-related functionality you would expect. Of course, if you need service protection, you cannot do it end-to-end, but you can do it on a per-subnet or per-link basis, using technologies like MPLS or TSN, which will also be highlighted on the following slides.
J: In such a case, the end systems are connected to edge nodes, and these edge nodes are located at the boundary of the DetNet domain. They act as DetNet service proxies for the end applications, initiating and terminating the DetNet service for what are DetNet-unaware IP flows. For the flow identification, again, in this case you can do it based on the existing header information, as we will see on the next slide.
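Classifying a DetNet-unaware IP flow from existing header fields can be sketched as a simple tuple lookup. This is only an illustration of the idea: the exact set of header fields is defined by the IP data plane draft, and the field choice, addresses and flow names below are made up.

```python
# Hedged sketch: an edge node classifying an IP flow purely from
# existing header fields (an illustrative 6-tuple including DSCP).

from typing import NamedTuple, Optional

class FlowKey(NamedTuple):
    src: str
    dst: str
    proto: int
    sport: int
    dport: int
    dscp: int

# Provisioned mapping from header tuple to DetNet flow (made-up values).
flow_table = {
    FlowKey("10.0.0.1", "10.0.1.1", 17, 5000, 6000, 46): "detnet-flow-1",
}

def classify(pkt: dict) -> Optional[str]:
    key = FlowKey(pkt["src"], pkt["dst"], pkt["proto"],
                  pkt["sport"], pkt["dport"], pkt["dscp"])
    return flow_table.get(key)   # None => not a DetNet flow

pkt = dict(src="10.0.0.1", dst="10.0.1.1", proto=17,
           sport=5000, dport=6000, dscp=46)
print(classify(pkt))             # detnet-flow-1
```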
J: There is need for some further discussion and further work; here we have just listed those topics. On some of them we already have some text, and for some of them we still have just placeholders in the data plane document — networks with multiple technology segments, or aggregation; this is something for further discussion and for adding new text.
J: Now that we have the major concepts for the IP data plane, further work and discussion can be started on management and control considerations, on the encapsulation procedures, and on how to map IP DetNet flows to TSN — IEEE 802.1 — subnetworks. We are also missing the conformance language at some points, so this is something that also has to be added and updated in the document.
J: So this is about operating and providing the DetNet service, and we have also specified the DetNet encapsulation — we have been calling it the DetNet service sub-layer of the data plane. It provides the DetNet service layer and the DetNet transport layer functionality.
J: And the transport layer provides the congestion protection, and it is practically supported by existing MPLS traffic-engineering encapsulation techniques. Let's jump to the next slide, which shows some of the scenarios. We have two types of end systems, distinguishing, for example, layer-2 end systems.
J: Next slide, please. This shows DetNet IP over an MPLS network, and this is the scenario carried over from the IP data plane document. In this case a DetNet IP end system is sending the flow into the network, and it will be transported over the DetNet service sub-layer and appropriately controlled, traffic-engineered MPLS paths. So here —
J: So for a DetNet flow you have to have two identifiers — two parameters. One is to identify the flow, and this is encoded in the S-label; this is how I know which flow we are speaking about. And the DetNet control word contains the sequencing information — we always must add the sequencing information.
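The two pieces just described — an S-label identifying the flow and a DetNet control word (d-CW) carrying the sequence number — can be sketched as byte-level encoding. The label-entry layout below follows the generic MPLS conventions; the d-CW layout (first nibble zero, then a sequence number) is illustrative, and the exact format is defined by the MPLS data plane draft. The label value is made up.

```python
import struct

# Illustrative encoding of an S-label plus DetNet control word (d-CW).
# Generic MPLS field sizes; the precise d-CW layout is in the draft.

def mpls_label_entry(label, tc=0, s=1, ttl=64):
    # 20-bit label | 3-bit TC | 1-bit bottom-of-stack | 8-bit TTL
    return struct.pack("!I", (label << 12) | (tc << 9) | (s << 8) | ttl)

def detnet_control_word(seq):
    # First nibble 0000 distinguishes data from OAM (cf. the ACH
    # discussion later); remaining 28 bits carry the sequence number here.
    return struct.pack("!I", seq & 0x0FFFFFFF)

s_label = 10042                    # hypothetical flow label value
packet = mpls_label_entry(s_label) + detnet_control_word(7) + b"payload"
print(packet[:4].hex(), packet[4:8].hex())   # 0273a140 00000007
```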
A: I think what Yaakov is saying is, "I understand what a control word is; this is precisely the pseudowire stack." And I think the point is: what in a normal pseudowire would be the pseudowire control word is here the DetNet control word, and the S-label would be a context MPLS label — it's not a pseudowire label, but a context MPLS label — and I think that's the point that the question is on.
C: That's Jeff. I think we're sort of getting hung up on names, and we should focus on the functionality; we can sort the names out at another time if we want. There are some differences between the way pseudowires work and the way DetNet works, I believe, in terms of the added functionality of DetNet — so I'm fine with them being called something else. If the consensus is to merge the names, I'm also fine with that, but I think it's a perfectly fine set of names.
J: We feel that it is very important to have flow aggregation, because that will provide the possibility to improve the scalability of the DetNet solution, and Stewart has already documented three methods of aggregation. One is that we aggregate at the LSP level — at the DetNet transport layer; we can also make aggregation of —
J: So, as you have heard during the architecture presentation, we now have the PREOF abbreviation for the replication, elimination and ordering functions. This is also described in the MPLS data plane document — for the edge nodes and for the relay nodes, how these functionalities work. And, as we —
A: His comment is: if we call it a DetNet pseudowire, then it should use pseudowire terminology and use the pseudowire control word — it's almost there. I think there was a discussion among the authors about whether or not to use pseudowire terminology — actually we had it several times, even in the working group — and I think we decided that we would avoid the term pseudowire, because we didn't want to constrain —
A: — the way that the control word and the DetNet services are managed. For example, they could be managed using EVPN approaches, which have a control word but are not a pseudowire from some people's perspective; from other people's perspective it is exactly a pseudowire. I don't know if the authors want to make a comment on that, but I think that's the answer to Yaakov's question.
C: Yes, Stewart here. I think it kind of depended on whether you started at the architecture and worked your way down — carrying the architecture's specification and naming down — or whether you jumped to the bottom and said: in one particular case (and remember, it is only one particular case) we're going to do this with a pseudowire construct, and we should use pseudowire terminology.
C
So,
depending
on
whether
you
start
from
the
top
of
the
bottom,
you
ended
up
with
a
different
terminology:
we've
chosen
to
start
at
the
top
and
propagate
the
particular
property.
Eight,
the
terminology
down
I,
believe
it's
a
perfectly
fine
way
of
describing
it
and
I
have
no
particular
hang-up
over
whether
we
which
of
the
terminologies
we
use,
but
we
kind
of
decided
to
stick
with
the
architecture,
one
and
propagate
it
down.
A: I'm going to ask this now, while we have Balázs in the room, so to speak, and then we'll ask it again when Greg's presenting his OAM draft: where do we think OAM related to the MPLS control plane — sorry, MPLS data plane — belongs? Does it belong in the data plane document, or does it belong in a separate OAM document? And I think Greg is bringing up a good point: it might have some implications on the definition of the encapsulation if you don't take OAM into account.
C: This is Stewart. I'm not sure I understand Greg's point. The design is consistent with the most basic version of the pseudowire control word, and if the OAM is done using the ACH mechanism, it is still consistent with that — you just set the first nibble from four zeros to three zeros and a one. So I'm not sure I understand Greg's concern.
C: I would put it in a companion document, making sure that the data plane document has enough hooks in it that you can do the OAM indicator. So clearly, we should make sure that it's possible to specify how you know it is OAM, the characteristics of the OAM following the same path, etc. — but the detailed OAM design is going to be a big piece of work in its own right and really belongs in its own document. Okay.
C: I'm not sure I agree with Stewart — I mean, I'm not sure I agree with Yaakov. The particular control word design with the zero indicator only applies to a certain class of pseudowires; you can have whatever design you want. The only piece that's important, in the most basic design, is that you have the first nibble set. So it sounds like we need a longer discussion, perhaps on the list. Yeah.
A: It captures each of the details separately. It doesn't have all the conformance requirements yet, but it has all the narrative needed to understand what the solution is. Now would be a great time, for those who have not yet read the document — or haven't read the data plane solution document recently — to go read it, and make sure that it's clear how the data plane is supposed to operate, and —
A
Of
the
of
a
proposed
standard
to
have
all
the
performance
statements,
the
the
fundamental
pieces,
the
fun
of
those
funtom
of
definitions
are
just
going
to
relate
back
to
the
text.
That's
already
there.
So
now
is
a
really
good
time
to
read.
The
document
provide
comments.
If
you
very
recently
a
blush
anything
else
that.
G: Hi, I'm Norm Finn, one of the authors of the bounded latency draft. I'm not going to talk about the whole draft; I'm going to talk mainly about the new work — the new material that's been introduced by Jean-Yves Le Boudec and Ehsan Mohammadpour of the École Polytechnique in Lausanne. We had a presentation from Jean-Yves about his network calculus, and this is critical for the aspects of DetNet which are the upper bound on latency and the zero congestion loss. A reminder for the new folks:
G: There is a standard in progress in IEEE 802.1 called asynchronous traffic shaping, ATS, and what it talks about is a second layer of queues. Now, this is perfectly compatible with, and a natural extension of, what we've done for years in terms of per-flow queues. That's easy — we've known since the IntServ days in the '90s that per-flow queues can give good QoS. What's novel about these queues is that you can share any number of flows on one queue, as long as those flows come in the same port and go out the same port.
G: If you have a number of flows that come in on one port and go out on one port — that share the same route through the router or bridge or whatever — then you can put all of those flows in one queue. You still need a separate shaper per flow, but as the packets come to the front of the queue you can say "which flow is this?" and apply the correct shaper — and you get by with fewer queues in the box.
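The head-of-line idea Norm describes — one shared FIFO, with a per-flow token-bucket regulator consulted only for the packet at the front — can be sketched roughly as follows. This is a hedged illustration, not the ATS state machine from 802.1Qcr; the class names and parameters are made up.

```python
import collections

# Rough sketch: many flows share one FIFO; only the head packet is
# checked against its own flow's token-bucket shaper.

class TokenBucket:
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, 0.0

    def eligible_at(self, now, size):
        # refill tokens, then report when this packet may depart
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            return now
        return now + (size - self.tokens) / self.rate

class InterleavedRegulator:
    def __init__(self, shapers):
        self.fifo = collections.deque()
        self.shapers = shapers           # one shaper per flow, shared queue

    def push(self, pkt):
        self.fifo.append(pkt)

    def pop(self, now):
        if not self.fifo:
            return None
        head = self.fifo[0]              # only the head packet is examined
        shaper = self.shapers[head["flow"]]
        if shaper.eligible_at(now, head["size"]) <= now:
            self.fifo.popleft()
            shaper.tokens -= head["size"]
            return head
        return None                      # head not yet eligible; FIFO waits
```

The key point is that the queue count scales with ports, not flows, while each flow still sees its own shaping.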
G: The key is the second point here: adding the interleaved regulator means that your computation of end-to-end latency for a flow is linear in the number of hops in the flow. In general, if you're trying to figure out the latency of a flow through the network, you have to analyze the impact that every other flow that crosses its path has on the flow you're analyzing — and to see what that impact is, you have to analyze the impact of other flows on the flows that affect the flow you're trying to compute.
G: There's a slide coming up that will make this a little clearer. First of all, I want to show how the regulators work. The idea is that flows that come in from the same port at the previous hop are combined at the output queues. You have the flow regulators — one regulator per input port; that would be the red and the green: two different input ports go through two different regulators.
G: They then come out as the yellow stream. When it gets to the next hop, the stuff going to one port takes the green path; the stuff that goes to the other port takes the red path — and again the inputs are combined: everything from one port goes through one regulator, and the regulators are combined. This shows how this works at every hop.
G: When you're computing the total delay, you add up the delay at each hop. Of course, the trick is that if you compute the worst-case delay for every hop and simply add them together, you get a very pessimistic estimate of the total end-to-end delay. The reason is that the worst-case delay at one hop can only occur if you've drained stuff from previous hops and filled up the queue at this hop — but if you've drained stuff from previous hops, that means it moved faster through the previous hops.
G
You can't possibly have the worst case at every hop, and when you compute what the worst case is, that's where you come back to this slide, which says that essentially the regulators are free: the regulators enable the computation to be linear and cost you nothing in end-to-end worst-case delay. This is an extremely valuable property.
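The property described above — that with interleaved regulators the end-to-end bound is simply the sum of per-hop bounds, linear in the number of hops — can be illustrated with a toy calculation. The per-hop delay bounds and hop count here are invented for illustration; real bounds come from a network-calculus analysis of the shapers and link rates.

```python
# Toy illustration: with an interleaved regulator at each hop, the
# end-to-end worst-case delay bound for a flow is just the sum of the
# per-hop bounds (linear in hop count), because the regulator provably
# adds no worst-case delay of its own.

def end_to_end_bound(per_hop_bounds_us):
    """Sum per-hop worst-case delay bounds (microseconds)."""
    return sum(per_hop_bounds_us)

# Invented per-hop bounds for a 4-hop path, in microseconds.
hops = [120.0, 95.0, 95.0, 140.0]
print(end_to_end_bound(hops))  # 450.0
```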
G
Here are the pointers to the papers that make this clear. I will mention one other thing that's going on: a new project has been approved for 802.1 TSN, which is a document entitled "Quality of Service Provision by Network Systems." I quoted the scope here, which is the contract with the IEEE Standards Association for what the document is supposed to do.
G
802.1DC will explain that if you're building a network relay device — and I don't care what address family you're working on to forward your packets, I don't care whether it's a router or a label switch or a NAT box or a firewall or anything else — if packets are coming in some ports and going out other ports, here's…
G
…what to look at in 802.1Q to tell you how to build a device that can claim: "I conform to the QoS provisions of 802.1Q; I implement the queuing mechanisms defined in 802.1Q so that I can get the same classes of service with my box, which is not a bridge." That's what the purpose of this document will be. I hope that it would be a reference from some of the…
F
On your regulators — it seems to me that it works as long as all the flows that go out the same queue are all regulated, right? Say it again: the way you presented the regulators, I understand that it works as long as all the flows that go out the same queue are all regulated, right. You can't inject traffic which is not— yes.
G
H
…told us that. So this is cool. I think that, obviously, there's a little bit more about what happens in larger routers internally, in terms of, you know, the expectation of having a blocking-free fabric, for example, so that anything from input port R to output port B never, you know, gets unpredictable. There's…
There's.
G
H
G
H
H
H
G
H
H
G
A
G
A
G
G
A
A
H
H
I think, as was said before, there is a difference with respect to how the IETF treats this stuff: whether or not there is something observable between the nodes, right. So the way I understand it is that this proposed solution from Norm can actually be internal behavior and does not introduce anything new on the wire, other…
A
Discussion. So the first question is: how many people in the room are interested in prescribing an implementation — a specific implementation inside a piece of equipment? Please show hands. So I would say that's very few. How many are interested in, or think it's useful to have, an informational document that represents one possible implementation of a device that would support DetNet services?
A
H
So the problem is that, I think, you know, if we all want the requirements document for the bounded latency, right, to be a core of the working group, then not having an opinion about how to do it is basically escaping the answer, right. So that's — no, I mean, the problem is that customers who want to do this really need some guidance, right. So you're putting cool requirements in there and saying that it can be done, and then we're kind of hand-waving how to do it, right.
H
A
O
O
Paul Congdon, Tallac Networks. You asked the question about a queuing specification. Personally, I'd be in favor of a queuing specification in 802.1, in particular the DC document, and I'd like to see an informational document that references that. Okay.
P
Thank you. Two questions — two technical questions. The first one is about the accuracy — I'm from Huawei, sorry — the accuracy of the scheduling, you know. So the key point of this technique is the interleaving between the packets, right, so the time-controlled scheduling is the key point: whether it is accurate or not determines whether you will miss your deadline or miss your time to schedule. So…
P
G
Techniques that we've talked about in 802, and the stuff that Le Boudec and Mohammadpour have presented, treat inaccuracies like that as uncertainties in forwarding delay and uncertainties in queuing delay, and those uncertainties are factored into the computation of the worst-case latency — so the number you get from the computation is slightly pessimistic, assuming worst-case inaccuracies, where the implementer says "this is my worst-case inaccuracy."
P
You know, you have to support the DetNet flows, I'd say, together with the best-effort traffic. But if we want to reuse the resources of the interface, after you send a DetNet packet you may have to send another best-effort packet in that case; but you have to control the interleaving between the DetNet packets. So my question is: if you have best-effort packets between your DetNet packets, it may…
P
G
…you do the next DetNet packet. There's no requirement that you interleave DetNet with each one. If you're using time scheduling — and we talk about reusing unused reserved bandwidth — if you're doing time scheduling, for example, you can enable both DetNet and best effort during the same window and you give the DetNet higher priority. So if you have DetNet traffic, it goes; if you have no DetNet traffic, then the best effort can go. So…
G
P
G
The resources — it depends. If the interval between the two DetNet packets is smaller than the minimum size of a preemption fragment, then that's true. But the biggest reason for preemption — for those who are unfamiliar, preemption is a new 802.3 standard that allows you to interrupt the transmission of a low-priority packet, transmit a high-priority packet, or five high-priority packets, and then resume transmission of the interrupted packet. The biggest reason for that is so that you can fit best-effort traffic between the…
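The fragment-size constraint mentioned here can be sketched as a simple check. The 64-octet minimum fragment size matches Ethernet's minimum frame, per 802.3br frame preemption; the link speed and gap values below are invented for illustration.

```python
# Sketch: can a best-effort frame be squeezed (via 802.3br frame
# preemption) into the gap between two scheduled DetNet packets?
# Preemption only helps if each transmitted fragment can be at least
# the minimum fragment size (64 octets, like a minimum Ethernet frame).

MIN_FRAGMENT_OCTETS = 64

def octets_transmittable(gap_seconds, link_bps):
    """How many whole octets fit in the gap at the given link rate."""
    return int(gap_seconds * link_bps / 8)

def can_start_preemptible_frame(gap_seconds, link_bps):
    """A preemptible fragment is only worth starting if at least a
    minimum-size fragment fits before the next DetNet packet is due."""
    return octets_transmittable(gap_seconds, link_bps) >= MIN_FRAGMENT_OCTETS

# Invented example: 1 Gb/s link, 1 microsecond gap -> 125 octets fit.
print(can_start_preemptible_frame(1e-6, 1e9))  # True
```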
G
A
A
D
A
To the last question: we had a couple of comments, from both Greg and Yaakov; the essence is really two points. The first is Yaakov saying that an informational implementation document would be very useful — yes, but informational, yeah — and Greg agreed. The other comment, actually earlier from Greg, basically was saying: if it's standard, there's actually another working group that standardizes queuing techniques, and if you really want to standardize it, perhaps think about going there — that's the AQM work. Okay, good.
G
The way I'm leaning right now is that we need an RFC here that's standards track, that says these are the characteristics that are required of any queuing mechanism in order to meet our goals — and perhaps an informational RFC that says: by the way, here are some techniques that meet the goals.
A
A
D
G
A
…and see if it has sufficient description for how to do flow identification, at least in the unaggregated case. In the next steps, both documents talked about the need to do more work on aggregation; so at least the unaggregated, individual-flow-identification case should be covered. Please take a look and make sure it's sufficient.
D
A
M
M
…but it's not deterministic latency. We also have a solutions paper on the security requirements. But to my knowledge there is not yet a native DetNet data-plane solution for deterministic latency. We rely on the TSN solution for end-to-end bounded latency; but remember that TSN is designed for the local area, and for those applications it's good enough. That's really where this requirement is coming from. Next one.
M
So here is actually the first requirement; I call it stitching different TSN domains with bounded latency. If we assume that we have several TSN domains that interconnect, we basically have two options. One is to basically have, let's say, a greenfield implementation of an end-to-end TSN-enabled network with strong time synchronization — wow.
M
This, of course, provides — using TSN methodology — bounded latency end to end. Option two is to regard the TSN domains as islands with a wide-area DetNet network in between. If these two TSN domains are not actually synchronized, then you will have to have a DetNet connection between these two domains which natively provides the capability of deterministic performance between them.
M
That was the new status of bounded latency. But what we have observed in terms of deterministic services — we talk about, for example, virtual reality, augmented reality, or even holographic-communication types of things — is that the consumer and the generator of the traffic may be located at very different locations. So we expect…
M
M
…that's more frequent than the scenario in a local-area network. So is it feasible to have a Qch-type methodology that recalculates every time a new flow is injected into the network, or do we have a mechanism where convergence can be faster, instead of scaling with the frequency of new-service establishment? And the third requirement is: can we have a certain level of tolerance of end-to-end time synchronization?
M
Time synchronization: for example, in 4G we have synchronized the whole network, because it is basically a TDM technology. However, it is very costly — you need the equipment to support it end to end. Another option is, for example, Synchronous-Ethernet-type synchronization technology, and that is very hard: if you have an end-to-end service that runs over a heterogeneous network, you are basically out of control of whether it is capable of conveying the time-synchronization signal. So is that possible, as they are…
M
M
Yes — the fourth one: given the variety of DetNet services, we expect that the resource-reservation state will be a big number as well. So is there any better mechanism by which we can aggregate, to some extent, at some point of a DetNet node, to basically reduce the state we need to maintain for resource reservations? Of course there are trade-offs, because aggregation gives up some of the finer granularity of the resource reservation; but there will be some studies to be made. And the last…
M
…in 802.1Qch it is required that a DetNet packet transmitted from node to node have a transmission delay much smaller than the basic cycle of the cyclic queuing and forwarding period. This is required by the TSN method in Qch, which means, if we do the calculation, the jitter performance actually scales with the cycle period — which means, if you have a really very long transmission delay, you will be obliged to increase capital T to fit this requirement in Qch.
M
So if we have a very long link, or very large latency because of your transmission media, this basically jeopardizes your jitter performance as well. Can we find something that gets rid of this connection between your delta T and your cycle period? This is the tolerance-of-transmission-latency requirement, which is the last requirement in the document as well. Next slide, please.
M
Thank you. As a conclusion: there is not yet a native DetNet-layer solution for deterministic latency. We're not saying that we can't deal with deterministic latency — TSN is a great solution — but when we think about WAN scale there might be some problems, and the solution could be based on Qch; but it has to be scalable with respect to the cycle-period requirement and time synchronization.
B
Comments for completeness: the TSN overlay is one of the cases, but TSN is also considered as a subnet possibility to be used to meet the requirements, so it is depicted in a number of figures in the architecture document. Another thing is that there is an asynchronous solution for TSN as well; as the previous presentation pointed out, the asynchronous traffic shaper does not require time synchronization.
M
H
So, okay, that's my screen. Okay, so this is a proposed solution for the requirements that were just presented. The goals are really to have a solution with what we call tightly bounded delay, where the range of arrival times of individual packets can be defined to be bounded within a really small delta between a minimum and a maximum delay. The delay itself, of course, can be large through a network, right — so, a typical edge network.
H
So here is kind of the overview picture of how this looks in the network. We've got our senders, and we've got per-flow state to basically do the equivalent of what they're calling in TSN the gate functionality, to schedule and shape the packets. But then, across the core of the network, we only have aggregated state and not per-flow state, and that is really the core piece of creating scalability for potentially millions of flows through a big service-provider network that wants to provide deterministic services. Next slide.
H
That's me, okay. So here is basically a quick comparison of what we call our large-scale deterministic network — scalable deterministic forwarding — data plane. It shares the cyclic forwarding insofar as we have a bunch of queues — we think that, in the best cases, we get away with three queues — that are cyclically scheduled on output; and when we're receiving packets on input…
H
…we know basically what queue we're mapping them into. If you look on the left-hand side, that's the TSN solution, and that's where these slots are tightly synchronized between input and output, so the propagation delay of links between the nodes can only be a small portion of the actual cycle time; whereas in our solution the propagation delay of the links can be arbitrarily large, because we're basically doing explicit mapping.
H
So something is being sent in a cycle X, and we're actually including the cycle as information in the packets, as we're seeing on the next slide, and that is basically being mapped to the cycle Y in which it is sent — so to speak, as soon as possible after having received all the packets for it, right. And that basically means that, hop by hop, we're not incurring any additional jitter; we're just incurring additional delay.
H
Right, so this does it in more detail, showing the cycle mapping. Given the limits on time that we were given, I'll leave that up for reading; this, I think, is the best representation of the overall process. You're seeing a node A that is sending packets, indicating the cycles they're sent from through a label — we're thinking that we can get away with two bits in whatever headers we're having in the network — and on the right-hand side we're seeing a small set of possible…
H
…encapsulations where we can put these two bits, to just indicate three values for the cycle. So it takes an arbitrary amount of time to get to node B; node B sees it, picks up the label and then, basically through the mapping, puts it out into the appropriate cycle and sends it out again. Next slide.
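The two-bit cycle label and per-hop mapping described above can be sketched as follows. This is a toy model, not the draft's actual encoding: the mapping table and the three cycle values (a 2-bit label cycling 0, 1, 2) are assumptions for illustration.

```python
# Toy sketch of cycle-label mapping: the sender stamps each packet with
# the 2-bit cycle it was sent in (three values in use: 0, 1, 2,
# wrapping around). The receiver keeps a per-neighbor mapping from
# "sender's cycle" to "my outgoing cycle"; every packet carrying
# sender-cycle X goes out in the same local cycle Y, so the link's
# (possibly large) propagation delay adds delay but no jitter.

NUM_CYCLES = 3  # three values encoded in two bits

def next_cycle(c):
    """Advance a cycle label, wrapping 2 -> 0."""
    return (c + 1) % NUM_CYCLES

def build_mapping(offset):
    """Map each incoming cycle label to a fixed outgoing cycle.
    `offset` absorbs propagation delay plus processing margin,
    measured in whole cycles (an assumption of this toy model)."""
    return {c: (c + offset) % NUM_CYCLES for c in range(NUM_CYCLES)}

mapping = build_mapping(offset=2)
# All packets stamped with cycle 0 by node A leave node B in cycle 2.
print([mapping[c] for c in range(NUM_CYCLES)])  # [2, 0, 1]
```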
N
H
So yeah, this is basically the more interesting part, where we think we're about 50% through defining the solution. The first question, of course, is: what amount of synchronization do I need between the devices? And I think that's true whether we're using the cyclic queuing or any other solution, right. If you basically start creating packets 20 or 30 percent faster, because your clock is running 20 or 30 percent faster, you won't have a working solution.
H
So there is a certain amount of frequency synchronization required between any set of devices through which packets for a deterministic network are going to run. In our case, what we can do is deal with larger deviations in frequency synchronization — one clock runs a few percent faster than the other — by basically increasing the number of cycles by which we're pushing packets up front, and that would then allow us to correct for the frequency-synchronization mismatch.
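The margin described here can be sketched as a back-of-the-envelope calculation: given a worst-case clock offset in parts per million and an interval over which the drift accumulates, how many extra cycles of lead are needed to push packets up front? All numbers below are invented for illustration.

```python
import math

# Sketch: how many whole cycles of margin absorb a given frequency
# mismatch? If two clocks differ by `ppm` parts per million, then over
# `interval_s` seconds they drift apart by interval_s * ppm * 1e-6
# seconds; dividing by the cycle time and rounding up gives the cycles
# of lead needed. All numbers are invented for illustration.

def cycles_of_margin(ppm, interval_s, cycle_s):
    drift_s = interval_s * ppm * 1e-6
    return math.ceil(drift_s / cycle_s)

# 100 ppm mismatch, accumulated over 1 s, with 20 us cycles:
print(cycles_of_margin(100, 1.0, 20e-6))  # 5
```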
H
The more interesting stuff, probably, is that we're going to have link-delay variations. I'm not sure about all the cases and networks, but we've got these wonderful hanging wires on poles that are going to be longer and shorter over the period of 24 hours, and that can be as much as a 10 percent change in propagation delay — and that's basically also something very easily managed by this solution.
H
So I'll skip the control plane — yeah, five minutes — so, summary here again: no per-flow state in the core of the network, using the principles that we know from TSN solutions; adding a label field with just the number of bits required to represent cycles; and then having on the edge, on the sender side specifically, as necessary, the same type of gating state as we know from TSN.
G
Yeah, Norman Finn. Thanks, Curtis. As far as the two buffers versus three — there are several aspects of this. I think there's a certain amount of characterizing 802.1Qch, the cyclic queuing and forwarding thing, in a very narrow sense; it's better than that. You're absolutely right about two buffers versus three buffers — I'll send the pointer to the list; we've pointed out that this has been discussed.
G
A
H
A
H
G
A
Sounds like it would be really good for you guys — the authors of those last two documents — to work offline and see if you can come up with a consolidated document, or maybe a recommendation on the best way to move it forward. But I do think that what I've heard is there's definitely more interest in the informational approach.
H
A
D
This is useful, and I will introduce the DetNet configuration YANG model update. First I will introduce the structure change in the new version. In the old version we defined the topology YANG model and then added a static-configuration YANG model; in the new version the topology YANG model is unchanged, and we split the static-configuration YANG model into two models:
D
a device YANG model, which is flow-independent and common for all the flows, and a flow-configuration YANG model for the flow-dependent configuration, which is the output of the path computation. After the data-plane solution is stabilized, we want to define, in the next step, three more YANG models. In the DetNet flow-configuration YANG model…
D
…we have three configuration instances: the DetNet service proxy instance, the DetNet service instance, and the DetNet transit instance. These three instances correspond to the definitions in the architecture draft. The DetNet service proxy instance is for the DetNet edge-node configuration, which can map client flows, or application flows, to the DetNet flows in a particular domain.
D
The DetNet service instance is for the DetNet relay-node configuration; it can enable or disable a particular function of the DetNet service, and it also configures the service path connecting multiple DetNet segments. The DetNet transit instance is for the DetNet transit-node configuration, which can build up a transit tunnel between DetNet service instances and also configure the QoS parameters. In the following slides…
D
…the in-segment includes the functions defined in the architecture and data-plane drafts. When the in-segment is on an ingress edge node, the sequence-number generation function should be enabled; and if the segment is on a DetNet relay node or egress node, the parameters will include the incoming interface and the flow identification at this node. The out-segment includes the outgoing interface and the flow identification for the next relay or egress node; it also includes the DetNet transit instance.
D
The picture on the right is the tree of our YANG model, and we think more parameters will be needed for the DetNet service functions, such as for replication and elimination. We define different mapping-relationship models to do the configuration. As the picture on the right shows, if packet elimination is enabled for a particular flow, then in that service instance multiple in-segments will map to one single out-segment; and even — in the next picture…
D
…the parameter is limited by the buffer size, and a further function is defined for the elimination. The flow identification is already covered by the in-segment content, and there are three methods of generating a sequence number: copy, translation, or regeneration. We also have some considerations for dealing with flow aggregation. In the current solution there are three methods of doing flow aggregation: aggregation at the underlying layer, aggregating the DetNet flows as a new DetNet flow, or simple aggregation at the DetNet layer. All three methods can be supported by the current configuration…
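The three sequence-number handling methods named above (copy, translation, regeneration) could be sketched roughly as below. The function names, the translation rule, and the 28-bit modulus are illustrative assumptions of this sketch, not anything defined by the draft.

```python
# Rough sketch of the three sequence-number generation methods named
# above for a DetNet relay: copy the incoming number, translate it
# into a different numbering space, or regenerate from a local counter.
# Names, the translation rule, and the modulus are illustrative only.

def seq_copy(incoming_seq):
    """Copy: reuse the incoming sequence number unchanged."""
    return incoming_seq

def seq_translate(incoming_seq, offset, modulus=2**28):
    """Translate: shift into another numbering space (toy rule)."""
    return (incoming_seq + offset) % modulus

def make_seq_regenerator():
    """Regenerate: ignore the incoming number, use a local counter."""
    counter = {"next": 0}
    def regen(_incoming_seq):
        n = counter["next"]
        counter["next"] = n + 1
        return n
    return regen

regen = make_seq_regenerator()
print(seq_copy(41), seq_translate(41, 100), regen(41), regen(7))
# 41 141 0 1
```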
D
…YANG model — and I won't go into details because of the time. The DetNet service YANG model is a new model we added in the current version of the draft, and the three functions of replication, elimination, and ordering are different from the function parameters we just introduced, because these functions are a capability of the device for all the flows, not for a particular flow; and it is to be decided whether these functions will be implemented in a device or on a particular interface, sorry. And we also need some configuration…
A
D
A
A little higher — so, a few people. We have in our charter to have a YANG model covering our work, and it seems that this document is emerging as the sole candidate. So I think the question that we really have to ask isn't whether we should adopt it or not; it's a question of when. The document is still maturing; there are pieces that are still going, and I know I personally have some technical questions. But I also think it would be reasonable to use this as a foundation for work.
A
It would also be reasonable to wait, to let it mature a little bit, and we'd like to ask the working group which of those two they would like. Do we want to adopt now, with it being a bit rough technically, and then have more changes as it progresses in the working group? Or do we want to wait until it's a little more mature before adopting? Those are really the two questions we're gonna ask.
A
H
A
Do we want to wait for the document to mature a bit before adopting, or move to adopting now? So those are the questions, and we're gonna poll the room right now on that. The first question is: how many think we should wait before adopting this document — show of hands. So you want to see it mature a little bit more before adopting — a couple of hands there, and not even really strong hands; they didn't shoot up.
A
How many think that it's time to adopt, and we can proceed and mature the document in the working group? Again, not a lot of hands, but clearly there's more interest on that side. So I think we'll take it to the list and look to do an adoption poll and see what comes out there. And to your question of whether or not we had…
A
H
I mean, I think this can take a long time as an IETF working-group document. I think I would like to give the authors the safety that they're working on something that the working group really, desperately needs; that would be my reason to adopt earlier. But, as a counterpart, we want the authors to be active. Okay — so with that we're gonna move to Greg. Thank—
H
A
H
H
R
I will do my best. Okay, let's go — next slide, thank you. So this is just a reminder for people outside this working group, if they want to know what it is; and even though the replication and elimination functions are optional, that doesn't mean that an implementation won't support them.
R
So let's go to the next slide, which concentrates more on what's being suggested as the MPLS data-plane encapsulation for DetNet. A quick reminder: the sequence number has a first nibble of zero, to differentiate it after the label, and the associated-channel header has a first nibble of one; the use of the GAL, for now, is for further study. So let's look at the proposed MPLS encapsulation for DetNet OAM. Next slide, please.
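The first-nibble discriminator described above (0 for the word carrying the sequence number, 1 for an associated-channel header) can be sketched as a simple encoder and classifier. This is an illustrative sketch of that idea, not the draft's exact bit layout; the 28-bit sequence-number field is an assumption of this sketch.

```python
# Sketch of the first-nibble discriminator described above: a 32-bit
# word following the MPLS label stack starts with nibble 0 when it
# carries the DetNet sequence number, and nibble 1 when it is an
# Associated Channel Header. The 28-bit sequence field is an
# assumption of this sketch.

SEQ_BITS = 28

def make_control_word(seq):
    """Pack a word: first nibble 0, then the sequence number."""
    assert 0 <= seq < (1 << SEQ_BITS)
    return seq  # top nibble is already 0 for a 28-bit value

def first_nibble(word32):
    return (word32 >> 28) & 0xF

def classify(word32):
    """Tell a DetNet control word apart from an ACH by its first nibble."""
    n = first_nibble(word32)
    return {0: "detnet-control-word", 1: "ach"}.get(n, "unknown")

cw = make_control_word(seq=12345)
print(classify(cw), classify(0x10000000))  # detnet-control-word ach
```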
R
So this is the topology that has been used as an example of a DetNet flow with the packet replication and elimination sub-functions. The replicating nodes are R1 and R4, and nodes R2 and E2 are eliminating nodes; the numbers don't represent labels — they just represent the path and copy number that arrives. Next slide, please.
R
So the conclusion — the recommendation — would be that the sequence information has to be part of the DetNet encapsulation, not of the transport encapsulation, because the proposed solution uses the pseudowire — can we go a slide back? — the point is using the pseudowire as a transport, not as a service.
R
The document also talks about hybrid methods, using alternate-marking methods in some other aspects; and again, this is only the first scope. So the next step that will be taken is to extend the DetNet requirements for OAM, consider active and hybrid methods to be used, take on board the comments and, at some point, look for working-group adoption. Yes.
B
So we discussed, in the previous discussions, that we want to have a separate OAM document, and this seems to be a good start, with a set of requirements; but I think the document should be developed further, and contributions will be welcomed, as you suggest. With a more mature document, we can start working-group adoption a bit later, yeah.
R
Yes — again, what I wanted to stress is that — well, I'm the only author for now, but I welcome anybody who wants to contribute — the proposed MPLS encapsulation limits and restricts active OAM. So basically, there need to be some adjustments to the MPLS encapsulation so that active OAM can flow in band with the data flows. Greg.
A
N
This version is mainly aligned with the newest architecture and certain working-group drafts. We are using the PRF as the packet-replication function and the PEF as the packet-elimination function, kept aligned, and we have converged to a single solution; we don't care about the internal implementation at each node.
N
This mechanism in DetNet can be realized as this slide shows. The PRF will replicate the marked packets in two directions — clockwise and counter-clockwise, both directions — and the leaf nodes will have both a PRF and a PEF, so the packets can be replicated in the same direction while another copy is sent to the PEF, so elimination can be done. Furthermore, we can also add a POF that does the ordering of packets that arrive out of order; so everything can be done in these nodes.
N
If there are interconnected networks, the same idea applies. We can see from this picture that the interconnection nodes, such as I1, will have both a PRF and a PEF, so they can replicate the packets in some directions and also perform replication and elimination, so only one copy will be sent onward. So this is the same idea.
N
N
A
A few people — how many have read the document? Notably more, okay. Thank you. We'll look forward to hearing how it develops relative to the actual data-plane solutions. Thank you. And with that — actually, now I just wanted to point out something I didn't point out before, because he wasn't in the room: we have David Black here, who's our technical advisor from the transport area. We talked about him last time, and we really appreciate him contributing — and you have 30 seconds to say something.
A
Please go read the IP document and tell us what you think. All right, thank you all very much. Keep in mind — I know it's early to be thinking about Bangkok, but at IETF 103 we're going to have that special meeting on Sunday; keep that in mind as you think of your travel planning. Tentatively, right now, it's ending at 5:00 p.m. on Sunday; we'll start at 9:00 and have a break in the middle. It may move a little bit —
A
you know, half an hour, an hour — but certainly not beyond 5:00 p.m. And the reason why it's Sunday is because, if we did it on Friday or Saturday, we basically had no participation from 802; so it's unfortunate for us that that's the meeting where we're running this experiment of ending early, but it is what it is. If we want to have the meeting, it unfortunately has to be Sunday. I can't—