From YouTube: IETF104-DETNET-20190327-0900
Description
DETNET meeting session at IETF104
2019/03/27 0900
https://datatracker.ietf.org/meeting/104/proceedings/
A: Funny thing is, I just happened to pull out the agenda, and I pulled out the one that said "DetNet agenda, Prague, Congress Hall 1," and I realized it was the wrong day. So we were in this room two years ago. Well, welcome back. I'm Lou Berger; here's János Farkas; Ethan Grossman, our secretary, is sitting up front. Our agenda and slides are available in the standard place, as is our working group information.
A: This is the IETF. If you didn't notice, anything you say and do here is governed by our contribution rules, BCP 78 and 79. If you're unfamiliar with them, please go look them up. The slides also have the link at the bottom to the Note Well, which covers what you see here: anything that you say here in the session at the mic, while standing, becomes part of our permanent record and is subject to the IETF contribution rules.
A: Please also help out by joining in on Etherpad and help with our collaborative note-taking. It's particularly important, if you make comments at the mic, to go check to make sure that your name is there properly and that your comment is appropriately captured. So please jump on. You actually don't need to put the question-mark part in; if you just put in "notes ietf 104 detnet," that'll get you to the right place. There's also a link to it on the tools page, which is a little easier to find.
A: This is the agenda that was posted. The only changes we have to it, and they're actually not reflected on this slide, are speakers. Unfortunately Norm Finn can't be with us today; we hope to see him in Montreal, so we have someone else presenting, and we have a change of presenters on the flow information draft.
A: As always, the mailing list is there for a few things. It is where we establish consensus for the working group, so anything that we say here, any decisions that we take here, aren't in fact decisions until they are confirmed on the list. Also, it is the right place to introduce a draft; this is not the right place to introduce a draft to the working group.
A: You should do it on the list and get some good discussion on the list. If you have something, a new contribution, it's far more likely that you'll make progress with that contribution if it's first discussed on the list. And to authors, we'd like to remind you about authorship: we really appreciate your contributions, and having private discussions to help move the ball forward is fantastic, but no changes agreed to by the authors are final until they're agreed to by the working group.
A: So it's great that you have agreements among the authors, but that's not the valuable place; the valuable place is the working group, and the list is the right place to confirm that. We do have an IPR disclosure process to make sure that we have appropriate disclosures, and we check that at sort of two wickets: one is when the document is adopted by the working group, and the second is before we go to last call.
A: These IPR polls are directed to the authors and contributors named in the draft, but they're really for everyone in the working group, because everyone has an opportunity to contribute. Just because you're listed on the draft doesn't mean you own the draft; the working group owns the drafts, so it's your draft. So please contribute, and if you contribute, please do make sure you conform with the IETF IPR rules.
A
So
where
do
we
stand
on
our
different
documents?
We
have
two
documents
with
the
RFC
editor.
This
is
fantastic.
These
are
our
first
set
of
documents
that
are
finally
making
out
too
into
the
RFC
editor
queue
and
will
be
our
first
documents
published
as
a
working
group.
It
it's
taken
us
a
little
longer
than
we'd
wanted
and
the
way
it
hoped
for,
but
we
are
starting
to
see
things,
make
it
into
the
IFC
and
make
it
into
the
RFC
attitude
and
will
soon
have
RFC.
A: We have a number of working group documents on the agenda, so we'll be hearing about those shortly. Not on the agenda is the security document. We've been holding that up to make sure it's well aligned with the solutions, so that if we find something while defining the solutions we don't have to reopen the security document. You'll hear later that we hope to have solutions documents popping out of the working group and being submitted with publication requests to the IESG by Montreal.
A: The one thing to think about is where we go next. We're not quite ready to talk about that; we need some documents to actually make it into the IESG. But we've had a number of discussions about whether we go to the control plane or start bringing in other topics; there's one we'll get to later about what things are in scope or out of scope. But if you have other topics you think are appropriate to bring into the DetNet working group, perhaps Montreal is a good place to start thinking about that.
A: You can bring things in; we'll discuss them and figure out where to go, and of course those discussions happen with the ADs in the room, Deborah, as well as our advisers such as David; you'll see David Black sitting right here. So I think that's enough of that. With that, we're ready to move on: the next presenter is going to talk about the architecture.
C: Actually, there were two groups of review comments. On version 8 we got the reviews from the various areas of the IETF, and then on version 11 the IESG review comments. The links in the slide lead you to two ways of showing the diff between the two versions, version 8 and version 12, if you are interested in the details. The order of the updates described in this presentation is based on what had the bigger effect on the document.
C: The first category of updates was to clean up the document in order to avoid confusion with terminology that is used in the transport area, like the word "transport" itself, "congestion control," and so on. One of the big steps was renaming the former transport sub-layer to the forwarding sub-layer. The figure is copied from the draft: we have divided the DetNet layer into two sub-layers, and the lower one is the forwarding sub-layer, the upper one is the service sub-layer.
C
Another
big
terminology
change
that
had
an
effect
on
the
draft
was
to
use
the
use
of
resource
allocation
instead
of
the
former
congestion
protection.
So
these
both
these
changes,
as
I
said
it's
to
avoid
confusion
with
the
terminology
of
the
transport
area.
This
text
in
the
bottom
is
copied
from
the
draft.
The
main
techniques
were
using
in
that
nut
with
this
update
is
the
resource
allocation
service
protection
and
the
explicit
routes.
C
There
were
further
clarification
is
related
to
congestion
control,
so
we
had
to
talk
about
okay,
what,
if
some
lava
flows
and
the
congestion
control
as
RFC
2914,
and
we
have
added
text
after
discussions
on
the
list
too,
to
address
and
clarify
these
aspects.
Actually,
this
presentation
is
populated
with
links
to
emails.
C
I
tried
to
capture,
of
course,
I
could
not
capture
everything,
because
it
was
multiple
internet
threats,
but
if
you
are
interested,
that's
that's
where
the
links
lead
you
and
further
clarification
was
added
related
to
congestion
control,
that
there
is
no
expectation
for
the
death
metaphors
to
be
to
be
responsive
to
the
congestion,
control
or
explicit
congestion
notification.
So
this
is
the
new
phrasing
in
the
draft.
C
How
we
make
it
clear
that
we
don't
expect
congestion
control
to
be
used
for
data
to
flows,
I
copied
the
principles
we
had
behind
those
updates,
because
it
was
multiple
rounds.
First
first,
we
got
rid
of
the
terms,
transport
and
congestion
control,
and
then
we
had
to
add
it
back,
and
for
that
we
didn't
want
to
introduce
any
new
term.
We
wanted
to
minimize
it.
You
I
think
back
to
congestion
like
the
term
we
have
removed
previously
and
we
removed
phrases
like
throttling
that
were
not
not
clear.
A: Quick interruption, do you mind going back one? There's something really important here. There's a ripple effect, and tell me if I'm jumping the gun: we end up having to, at the sort of service-interface level, understand whether or not the application traffic supports congestion control, and that has a ripple effect down to our flow model and eventually our YANG model.
A
That's
something
that
we
hadn't
really
had
before
was
awareness
of
something
it
related
to
the
application,
and
the
reason
for
that
is
in
the
as
part
of
addressing
this
and
I.
Think
you
may
hit.
That
later
is.
Is
that
there's
some
extra
text
and
some
requirements
that
what
we
have
to
do
for
traffic
that
is
not
progestin
controlled?
D: I'm kind of worried about that, because people used to throw that rock at us in the pseudowire working group, and we pointed out that the amount of traffic that went in the open that was not congestion-aware was trivial, and we managed to avoid doing it, because in the cases where it is congestion-causing, I suppose, or non-elastic, it's in a controlled environment. I hope we're not going to require a sort of full analysis of all the applications before they can be deployed on this network.

C: No, no.
C: We are for a single administrative domain, so it's a kind of controlled environment. Plus we have the text to mandate rate limiting and the use of shapers and so on, and that's all you need; you don't have to know every flow or anything like that. It is these rate-limiting tools that you use to have protection against misbehavior and the like at the input to the DetNet domain.
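The rate limiting at the DetNet domain input that C describes can be sketched as a token-bucket policer. This is a minimal illustration only; the class and parameter names are ours, not taken from any DetNet draft:

```python
from dataclasses import dataclass

@dataclass
class TokenBucketPolicer:
    """Admit a packet only if the flow stays within its reserved rate."""
    rate_bps: float       # committed rate in bits per second
    burst_bits: float     # bucket depth: largest burst admitted at once
    tokens: float = 0.0
    last_time: float = 0.0

    def admit(self, packet_bits: int, now: float) -> bool:
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.burst_bits,
                          self.tokens + (now - self.last_time) * self.rate_bps)
        self.last_time = now
        if packet_bits <= self.tokens:
            self.tokens -= packet_bits
            return True           # conforming: gets DetNet resources
        return False              # non-conforming: drop, or treat as best-effort
```

A policer like this at the domain edge is what lets the domain protect itself without per-application analysis: only each flow's aggregate arrival rate matters, not whether the application implements congestion control.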
D: Why does it have to be a single domain, as opposed to a set of domains that understand what they're doing? Are we following our charter, or maybe we need to revisit that? Because it seems unreasonable that you can't split your domain into multiple components.

A: Maybe we can add that extension in.
C: Okay, back to the slides. Another big change, as was also mentioned, is that we have actually rewritten the security considerations section based on the review comments we got from the security AD, and thanks for all the help on that. I think it is an improvement; there is a lot there about what, from an architecture perspective, we are after. A somewhat related update was to the privacy considerations section, where the clarification was to make it clear that DetNet doesn't bring in any new privacy considerations.
C: So how do we make sure that bandwidth remains for non-DetNet flows and the like, and provide protection against misbehaving devices or misbehaving flows, whether or not they are DetNet flows? As I mentioned before, the key thing here is the rate-limiting and shaping functions we have at the input of the DetNet domain for DetNet flows, and there was other text added for the protection of the downstream hop after the DetNet domain.
C
So
these
were
the
the
updates
related
to
malfunctioning
misbehavior
and
compared
to
the
previous
one.
There
were
smaller
updates
on
clarifications
related
to
definitions
and
terms.
Definitions
have
been
updated
like
the
app
flow,
the
definite
flow
itself.
These
are
the
most
important
ones.
That's
why
I
copied
them
here
and
we
have
multiple
types
of
death
net
nodes
and
in
version
8,
it
was
not
clear.
We
have
clarified
actually
the
definition
plus
the
use
of
these.
At
these
terms
to
the
document
and
correspondingly
we
added
an
umbrella
term.
C
That
did
yet
not
note
if
we
don't
want
to
go
specify
whether
it's
transit
note
or
relay
another.
What
specific
aspect
so
this
this
was
a
smaller
updates
to
add
the
document
and
the
deadness
sub
layers.
I
mentioned
the
terminology,
change
and
actually,
in
addition
to
that,
the
definition
was
extended
to
make
it
clear
upfront
in
the
definition
that
that
nut
has
two
sub
layers
and
the
reader
doesn't
have
to
jump
up
and
read
that
that
section
with
the
figure
I
copied.
But
it's
obvious
from
the
definition
that
okay,
we
have
two
sub
layers.
C
One
of
them
is
the
forwarding.
One
of
them
is
the
service.
Earlier
there
was
a
reordering
of
the
definition
to
make
them
in
lexicographic
order,
and
there
were
comments
to
clarify
the
scope.
This
was
the
on
resonate
like
as
I
mentioned,
that,
based
on
the
Charter,
they
are
first
single
administrative
domain.
D
As
soon
as
we've
done
pseudo
wires,
which
were
designed
for
a
single
administrative
domain,
we
had
to
do
multi
segment
pseudo
wires,
which
are
definitely
not
a
single
administrative
terrain,
and
I
can
imagine
that
such
things
will
be
needed
in
debt
net
as
well.
So
I
think.
The
important
thing
is
that
it's
a
it's
it's
one
or
more
controlled
domains,
not
a
single
administrative
domain,
which
has
its
own
set
of
restrictions.
D: Yes, yes. So, for example, in multi-segment pseudowire, one of the reasons for introducing it was so that the labels could be managed by each domain separately. So I think we would be served much better if we pointed to the real problem here, which is that this is about control and oversight of the deployment, and not about a restriction to a single administration.
A
So
I
think
it's
worthwhile
to
go.
Look
at
the
text.
The
actual
text
of
the
document
and
see,
what's
in
the
body
of
the
text,
see
what's
in
the
different
consider
section,
I'm,
pretty
sure
that
the
only
place
that
uses
the
word
single
administrative
domain
is
when
we're
talking
about
security
and
the
rest
of
the
the
architecture.
Talks
about
I
think
the
exact
quote:
it's
not
for
large
groups
of
domains
such
as
the
internet,
so
I
think
we
can
probably
all
agree
that
debt
net
is
not
for
the
large
groups.
A
E: Deborah, speaking with both hats, because this comes up all the time now, as AD and also as an operator participating in the many groups: this is really a hot-button item, the domain. What is the domain? And you could even say a DetNet domain crosses multiple administrative domains. As was pointed out, the most important thing is that it's a controlled domain, right? Or actually, as you were saying, sub-networks may not be DetNet-aware and you can still get across them. So I would say so.
C: I'm realizing I need to apologize; it may be my fault for using the term "single administrative domain." The text we have on the screen is the text we have in the body of the document, and this is the text we have in the charter. The text I copied from the introduction here was copied from the charter into the document, and it doesn't say "single administrative domain"; it says a group under administrative control. That's what it says.
D: Sorry, Stewart here, who keeps forgetting to identify himself. I think that the text under the introduction, extended with that scoping of single administrative control or within a closed group of administrative control, is very reasonable and sort of sets out what we need to do, and maybe we should make sure that that scope is reflected everywhere there is a constraint.
C: The security section we need to double check, because actually that was the one that went through a major rewrite based on the review comments, and we may not have noticed that we should have been more careful in that part. But we did pay attention in the abstract and in the introduction, up front, to be in line with the charter, and to be very careful with the charter text as it is. So this is the text.
C: Okay, thank you for that feedback. There was also some discussion and clarification on priority queuing as it relates to DetNet; I just thought to mention it because there were multiple emails with multiple people, and a number of references have been added based on comments. I listed them here, and there are further smaller updates.
A: An excellent point, and the place where that will impact is the YANG models that are being developed. So first of all, we have to add a way to configure, to indicate, this; we don't have that right now. We can make it optional with a default value, and, sorry, I guess the default is false: that the flow is not congestion-control-responsive.
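The default Lou describes (a flow assumed not congestion-responsive unless configured otherwise) could be modeled like this sketch; the field name is ours, not taken from the actual DetNet YANG drafts:

```python
from dataclasses import dataclass

@dataclass
class FlowEntry:
    """Hypothetical per-flow entry with the optional indication discussed."""
    name: str
    # Optional leaf; per the discussion the default is False: a DetNet flow
    # is assumed NOT to respond to congestion control / ECN unless stated.
    congestion_responsive: bool = False
```

With this shape, existing configurations that never mention the leaf keep their current behavior, which is why an optional leaf with a default is a backward-compatible addition.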
H: So, let's summarize. In these slides I will go through the changes in both the IP and MPLS data plane solutions, but before going to that, we had some discussion regarding the document structure, and this is where we would like to ask for some feedback from the group. And of course I will finish with the next-steps slide, in order to show what improvements we intend to make in order to reach the WG last call goal and finalize the documents.
H: So before going to the other changes, let's see a question where we need some input from the working group. During the discussion of the different DetNet scenarios, we have identified several scenarios in the document: some of them were where an app-flow is mapped to the DetNet data plane, and some scenarios were focused purely on the DetNet data plane itself.
H: MPLS is carried over specific technologies, so a similar analogy can be used for the DetNet data plane as well; however, some other organizations are specifying solutions. So this is the question: in the current approach we have two documents, and on the structure we need some feedback. This is what we have summarized in this slide: where we are currently and what the proposal would be for the further improvement of the documents. So currently we have two documents.
H
One
is
dealing
with
the
IP
data
plane
solution
and
one
with,
but
what
is
dealing
with
the
MPLS
data
plane
solution
and
they
are
the
drawing
in
the
upper
right
corner.
This
is
just
trying
to
highlight
what
is
there
inside
those
documents,
so
we
have
core
data
plane,
related
sections
during
the
data
type
II
and
data
temporalis.
We
have
sections
dealing
with
map,
of
course
it
a
plane
to
underlying
transport,
so
TSN
IP
MPLS
and
in
the
MPLS
document.
We
also
have
some
text
covering
TSM
interconnect
over
MPLS
scenario.
H
So
if
we
intend
to
take
the
building
block
approach
and
and
follow
how
MPLS
related
our
seas
are
defined,
then
the
proposal
would
be
to
split
these
two
documents
in
maybe
seven
different
documents.
The
first
two
would
concentrate
only
on
the
death
net,
IP
data
plane
and
Imperius
data
plane
and
the
maybe
also
framework
related
stuff.
The
next
four
would
deal
with
her
death.
Not
over
different
sub
networks
can
work,
so
you
are
interconnecting
to
death
net
notes
over
over
sub
network,
and
the
seventh
document
would
deal
hard.
H: Regarding the framework, this is something that might be published as a separate informational document, so I think this can be decided during the last calls of the documents. And if we look at the current document structure, it really supports the split: we have dedicated chapters dealing with these topics. So we also think that, after the split, some of the documents will be practically finished and can go to WG last call, as shown on the next slide.
H: The two documents which would deal with how a TSN sub-network can be used for DetNet IP or DetNet MPLS networks are something where we need some further clarification and discussion, so we put a question mark there. And the TSN-over-DetNet-MPLS document needs some further improvement; that is where we need some more material to cover.
A: Certainly the items three and four, sorry, three, four, five, and six, would need to talk about requirements on the underlying layers for providing the queuing mechanisms. Generally, this group doesn't define queuing mechanisms, but we do define how to use a queuing mechanism, or that an implementation is required to use a queuing mechanism. Think back to how IntServ has been defined, how DiffServ has been defined: the actual mechanisms within the node to achieve the service aren't defined or required by the IETF.
G: This working group claims to be building a single overall technology. There are some examples that I can cite that are at smaller scope: for example, in transport we wound up having to write, after the fact, a guide to all the various TCP specs. I might suggest that being proactive on this one, before you encounter confused implementers, would be a good move.
J: My name is Michael Scharf, and I'm the reviewer of the architecture document. When I read the architecture document, I actually noticed that you have quite a bunch of different architectural solutions, and in the architecture document you mix all of them, and that actually made the review pretty difficult, because of all these different solutions. So this, to me, somehow speaks in favor of separating the different ways you can build DetNet into separate documents, so I think that may be a good idea.
J: For the things, for example, that we had to discuss, you have seen that I had to distinguish clearly between some things that apply in the MPLS case and other things that might not apply in the MPLS case, and some of these things are probably easier to sort out in such a split architecture.
D: Also, to answer David's question: some of the glue that you'll want will presumably come from the framework component of the first two documents, which will set up some of the structure and some of the information that you need as a forward reference. And second, to the second part: when we've got these together and can see whether any more of it needs to come together, then is the time to write the summary document, but not until we get a bit further along the way.
E: The first thing, as your AD: I think it's really good that you're thinking about this now, because some of the other technologies just creaked along and then came about as they were, and this is very daunting when you first see it. You can think that, well, two documents are going to become seven, but looking at it very carefully, I think it does look much clearer and better separated. And always, as you know, think about your users and what they're going to be looking for.
E
The
only
thing
I
would
say
is
that
when
I
mean
you,
the
user
won't
maybe-
or
they
will
see
these
titles
there
and
that
can
I
think
maybe
be
daunting,
so
I
think
just
try.
Maybe
you
can
have
the
seven
documents,
but
try
maybe
carefully
to
think
better
the
titles,
because,
right
now
it
just
looks
like
that.
You
throw
them
together,
you
know
and
putting
debt
net
either
before
or
after
in
the
middle
and
it
it
can
be
very
confusing.
E
F: Andy Malis. Just as one of the people who've been writing text and contributing to the existing two very, very large documents: reading through them is a chore, and I think that this will go a long way to making the work more approachable for people who haven't been following it day in and day out as well.
H: So, thank you very much for the feedback, and maybe let's look at the details before changing the data plane documents. Let's start with the IP solution document. We had some terminology cleanup here, and there is new content in chapter five on management and control considerations. Of course it is out of scope in this document to define the control plane for DetNet, but these are, from the DetNet data plane discussions, the implications and considerations that should be noted for management and control.
H: This is just the starting list; when we later discuss the management and control plane, this list can be extended. The content is about flow identification and aggregation, explicit routes, congestion loss and jitter reduction, and bidirectional traffic, and then in chapter 5.5 we have summarized the relevant control plane requirements from the data plane definition.
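The flow identification mentioned in that list is header-based for DetNet IP; a minimal sketch keyed on the usual IP 6-tuple might look like the following (function and field names are ours, for illustration only):

```python
from typing import NamedTuple, Optional, Dict

class FlowKey(NamedTuple):
    """6-tuple commonly used to recognize an IP flow at a DetNet edge."""
    src: str
    dst: str
    proto: int    # IP protocol number, e.g. 17 for UDP
    sport: int
    dport: int
    dscp: int

def classify(pkt: Dict, table: Dict[FlowKey, str]) -> Optional[str]:
    """Return the configured DetNet service for a packet, or None."""
    key = FlowKey(pkt["src"], pkt["dst"], pkt["proto"],
                  pkt["sport"], pkt["dport"], pkt["dscp"])
    return table.get(key)
```

Aggregation then amounts to mapping several such keys onto the same service entry, which is why identification and aggregation appear together in the list above.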
H
We
have
a
new
chapter
chapter
7.
It
is
described
in
the
IP
over
that
MPLS.
That
I
would
say
that
these
tags,
which
was
practically
copy
pasted
from
the
MPLS
solution
document,
was
the
trigger
for
us
to
think
about
the
new
structure
and
what
we
have
discussed
in
the
previous
some
minutes
had
to
have
to
have
a
better
structure
for
these
documents.
H
We
helped
update
on
chapter
8.
Chapter
8
is
dealing
with
mapping,
IP
flows
to
DSN,
and
after
all,
here
the
procedures
were
updated
and
also
the
management
and
control
implications,
and
this
text
is
describing
the
scenario
where
that
net
node
is
treated
from
the
TSN
sub
Network
perspective
as
a
TSN
aware,
talker
or
listener.
H
Regarding
for
their
work
in
the
IP
document
for
the
OEM,
which
is
chapter
4.4,
we
have
no
content
currently,
but
there
is
also
discussion
about
OAM
related
stuff.
There
is
also
individual
draft,
so
this
is
something,
but
what
would
be
good
to
cover
in
a
separate
document
in
which
is
dedicated
to
OEM,
and
we
have
also
some
further
work.
Some
clarification
we
did
on
on
the
TSN
as
a
sub
Network
chapter.
So
this
is
something
but
what
should
be
improved
or
or
should
be
changed
in
the
in
the
content
and,
of
course,
the
document
structure.
H
As
we
have
discussed,
we
do
not
intend
to
make
those
changes
in
in
the
existing
document,
but
in
the
in
the
documents
which
we
are
resulting
after
the
split
and,
of
course
a
general
auditoria.
Cleanup
is
also
something
what
is
needed
for
the
document.
So
these
are
all
the
changes
regarding
the
IP
document
and
let's
look
to
the
MPLS
document,
what
was
updated
there?
So
there
were
again
a
terminology
cleanup.
H
There
were
also
many
editorial
changes,
so
thanks
for
the
native
speakers
review
that
improved
the
text
pretty
much
so
we
have
editorial
changes
in
chapter
1,
chapter
4
or
5
candidates
for
this
restructuring.
So
this
is
what
is
identified
their.
We
have
changes
in
the
chapter
six,
and
that
is
a
important
change
in
the
document
for
the
mpls
data
pain,
encapsulation,
the
encapsulation
regarding
the
death
net
encapsulation.
H
It
includes
both
the
s,
labor
and
f
labor,
and
that
was
a
result
of
the
discussion
hard,
as
Labor's
should
be
allocated
whether
they
should
be
allocated
for
a
berth
platform,
labor
space
like
in
case
of
soda
wires-
or
they
can
be
allocated
differently
from
that,
and
it
is
implementation
specific.
Your
decision,
how
you
are
implementing
the
this
and
how
you
are
allocating
the
rest
Labor's.
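The encapsulation being described stacks a DetNet control word and the S-label beneath one or more F-labels; a rough sketch of building that stack (list form, outermost entry first; the helper name is ours):

```python
def build_detnet_mpls_stack(payload: bytes, s_label: int,
                            f_labels: list, seq_num: int) -> list:
    """Sketch of the DetNet MPLS encapsulation: payload, then the DetNet
    control word carrying the sequence number, then the service label
    (S-label), then forwarding labels (F-labels) pushed on top."""
    stack = [("payload", payload),
             ("control-word", seq_num),
             ("S-label", s_label)]
    for label in f_labels:          # each push goes further toward the top
        stack.append(("F-label", label))
    return list(reversed(stack))    # wire order: outermost (top) label first
```

Whether the S-label values come from a per-platform space, as with pseudowires, or are allocated some other way, is exactly the implementation choice mentioned above; the stack shape is the same either way.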
H: The content was filled in, and in chapter nine, which deals with MPLS operating over a DetNet IP network, in addition to the F-labels being part of the DetNet encapsulation, the UDP and IP headers are also part of the DetNet encapsulation.
H
Similarly,
as
it
was
shown
on
the
previous
slide,
we
are
quite
happy
with
the
MPLS
data
plane
document
content.
So
most
of
this
stuff
is,
we
think,
almost
ready
for
forever
loop,
Glasgow
Glasgow.
So
after
making
the
document
structure
change,
this
is
something,
but
both
can
go
for
for
this
value,
plus
for
okay,
just
slide
to
summarize
the
next
step.
H: Just as a reminder, regarding the service, flow, and configuration, we have defined two documents to deal with that. In this document we are dealing with the flow information model, which describes the characteristics of DetNet flows. It includes, in detail, all relevant aspects of a flow that are needed for the network to support the flow properly between the source and the destination.
H
The
service
information
model
which
describe
the
characteristics
of
the
service
being
provided
for
a
data
flow
over
the
network
and
it
can
be
treated
as
a
network
operator
independent
information
model.
So
these
two
models
are
there
in
the
intended
to
be
there.
In
the
flow
information
model
draft
and
the
third
model,
which
is
related
to
configuration
data
model,
this
is
where
we
have
the
young
drafts
and
they
describe
in
detail
the
settings
required
on
network
nodes
to
serve
a
data
flow
prepared
just
based
on
the
latest
architecture,
draft.
H: When we are speaking about DetNet flows, we have some further terminology as well. When we have protection, replication and elimination, inside the network, then we have the DetNet compound flow and the DetNet member flows, and we have also defined, for data plane aggregation, that when aggregation is done it results in a new aggregated DetNet flow.
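Service protection over member flows, as described above, reduces to sequence-numbered replication plus duplicate elimination; a minimal sketch (class and function names are ours; real implementations bound the sequence-number history to a window):

```python
class DuplicateEliminator:
    """Passes the first copy of each sequence number and drops later copies
    arriving over other member flows of the same compound flow."""
    def __init__(self):
        self.seen = set()    # real implementations keep a bounded window

    def receive(self, seq: int, payload: bytes):
        if seq in self.seen:
            return None      # duplicate from another member flow: eliminate
        self.seen.add(seq)
        return payload       # first copy wins

def replicate(seq: int, payload: bytes, member_flows: list) -> None:
    """Replication point: send the same numbered packet on every member flow."""
    for flow in member_flows:
        flow.append((seq, payload))
```

The compound flow is what the application sees end to end; the member flows are the per-path copies that exist only between the replication and elimination points.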
H: So when we are looking at an end-to-end service, we can define four sets of information groups. One describes the app-flow; then the app-flow also has some service requirements, so what parameters should be provided when the connectivity is there for that flow. The app-flows are mapped at the edge node to a DetNet flow, and this DetNet flow is served with the DetNet service attributes within the DetNet domain.
H: So the intention is to have this structure replicated in the document, and, as I said, we think the attributes are already there. They have been quite stable since version 1 of the document, and we think that maybe 99% of them are ready. So maybe some fine-tuning is needed, or maybe we have a missing attribute somewhere, so any feedback on that attribute list is also welcomed.
L: Okay, good morning everyone. I will introduce our work on the DetNet configuration YANG model, what we have done since the last IETF and what was in the plan. We have talked a lot about how complex the architecture document and the data plane documents are; I have to say that the configuration model is also complex, and it is really tough work to make it simple and easy to use. So please join us if you are interested in how to configure a DetNet network and make it work.
L: In a real network, that is. Here is a brief history of this document. The first version, version 0, was accepted as a working group document after a previous IETF; in the first working group version the DetNet topology was defined separately, and in the second, newest, version the document has been updated.
L: Okay, I think it's quite necessary, and we can see the structure; continue. Actually, the attributes defined in this document, we think, almost cover all the attributes that are needed to configure a DetNet network. That is one side; on the other side, how to organize these attributes to make them simple and easy to use is another topic. So in this presentation we want to list some options for how to organize the attributes, and this is option one.
L
It
is
what
it
is
look
like
in
the
current
version
of
the
draft.
We
can
see
that
the
configuration
of
the
attributes
is
organized
based
on
the
role
of
the
tenant
node
that
are
defined
in
the
architecture
draft.
There
is
transient
note
for
cueing
and
a
filtering
and
rule
a
note
for
service
protection
and
ingress.
Note
us
note:
do
some
mapping
between
different
and
encapsulations
and
the
young
models
have
different
data
plan?
Solutions
are
also
supposed
to
be
defined
independently.
We
will
have
I
have
done
and
yes,
young
moto
I
have
done
I
p.m.
L
model,
L: And we can see that there will be a lot of data plane drafts in the future, as we have already discussed. Perhaps this will make it really complex to do the configuration this way. So we consider that maybe this structure is complex and makes it difficult to do the mapping between different encapsulations, and we want to make it more flexible. So we have another option: all the attributes of the different DetNet nodes are defined in the same structure.
L
But
we
we
are
afraid
that
this
may
be
go
to
another
extreme,
that
we
makes
the
structure
too
simple
and
it
is
difficult
to
use
it
because
all
the
attributes
from
sub
layer
different
sub
layers.
They
we
put
it
together
and
maybe
well
we
configure
network
it
will
make
it
very
hard
to
understand
which
attributes
should
we
use.
So
we
try
to
learn
something
what
has
been
done
in
IETF.
This
is
the
structure
of
ITF
unchastity
media
model.
L
We noticed that there is an in-segment, which is actually the incoming label, and there is an out-segment, which is the outgoing label, and there is an option for the label operation: pop, push or swap. So maybe we can make the DetNet configuration similar to the MPLS static YANG model, with some modification. The resulting structure is shown here: there is an in-segment, there is an out-segment, and there are operations for the DetNet node, and the in-segment and out-segment are not only labels.
L
They can include the different DetNet encapsulations, including MPLS, IP and maybe SRv6 in the future. And we also define new operations: rather than the label operations swap, pop or push, there will be operations for service protection and congestion protection (maybe we have to change that term to resource reservation). This structure will support the DetNet functions, and it can also support flow aggregation, making it simple to do the configuration.
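[Ed.: the in-segment/out-segment structure with generalized operations can be pictured with a small data-model sketch. All names here are illustrative assumptions, not the actual YANG tree from any draft.]

```python
from dataclasses import dataclass
from typing import List

# Illustrative sketch of the option discussed above: an in-segment and
# out-segments that can carry different DetNet encapsulations, plus a
# node operation that is no longer limited to MPLS push/pop/swap.
# All names are hypothetical, not taken from the draft.

@dataclass
class Segment:
    encapsulation: str   # "mpls" | "ip" | maybe "srv6" in the future
    identifier: str      # e.g. a label, or an IP flow identification key

@dataclass
class DetnetEntry:
    in_segment: Segment
    out_segments: List[Segment]   # more than one enables replication
    operation: str                # e.g. "service-protection",
                                  # "resource-reservation", "forward"

# a relay node replicating one incoming MPLS flow onto two out-segments
entry = DetnetEntry(
    in_segment=Segment("mpls", "label:100"),
    out_segments=[Segment("mpls", "label:200"), Segment("mpls", "label:201")],
    operation="service-protection",
)
print(entry.operation, len(entry.out_segments))  # service-protection 2
```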
L
So here are some options we list in the slides, and we need some feedback from the working group: which structure shall we choose for the next version? And there is another legacy question: whether the DetNet queues are in or out of the scope of the working group, that is, whether we should include the queuing algorithm configuration (shown in gray in this picture) in the YANG model. More comments and contributions are always welcome.
A
The flow information document... I think those details don't have to ripple down into here. So I think the reorganization away from, as you've identified, the organization on a node basis, which really is cumbersome and doesn't work very well, makes a lot of sense. But I think you might want to wait and synchronize with the flow information draft and align with
A
the architecture and the flow information, with the notion of the service and forwarding sub-layers. Once you're there, then you could start thinking about what's going on in the data plane document: we have a core piece, and then we have other pieces that sort of get bundled on top of that, and maybe that turns into augmentations.
A
Maybe that turns into pointers to other things. And when you start thinking about it, also look at the fact that we have a lot of work going on in TEAS, which parallels this, and we don't want to repeat all that work. So the question is how to integrate with it: do we augment the TEAS models, or do we figure out how to reference the TEAS models? So, I think, to summarize:
A
These are still early, and there's, I think, a fair bit of synchronization that has to happen, and that will help clarify which option we end up with. I think moving to two and three, those are definitely good steps; whether it's two or three is, I think, maybe a little early to decide, at least from my perspective. Yes.
N
Can you hear me okay? This is our latest update on the bounded latency draft. So, a reminder for the new attendees: the goal of this draft is to provide an upper bound on end-to-end latency, not just an average delay calculation, and also to provide zero congestion loss, by providing enough buffer size inside each node; and we find it nice to prove these two requirements mathematically.
N
Well, there are major changes from draft 2 to 3. In Bangkok we decided to make this document informational, and we have made it so. In section 3 we added a new part to address dynamic and static flow creation; as we discussed in Bangkok, there were some requirements to dynamically add or remove flows from the network.
N
We added section 6.4 to update the delay bound calculations for per-class queuing with asynchronous traffic shaping, and the flow admission control is added in this section as well. We simplified some parts so they can be followed easily by people who are reading, and removed some of the excess mathematics from this section.
N
Well, in section 3.1, two types of flow creation are discussed. The main point is that we have two types of flow creation. In the static one, we have all the information regarding all the flows at the same time, and we do admission control for all of them or none of them; this was addressed in draft 2.
N
What is done in draft 3 is to address dynamic flow creation, meaning that if a flow is entering the network or leaving the network, how do we manage it? We found that there are some requirements of this kind to be added to the draft. So what we are going to discuss is this dynamic flow admission: the per-class dynamic flow admission decision process.
N
What we have is a node where we define two types of capacity parameters: the total rate that is allocated and the total burstiness. For each flow entering the network there are two parameters as well: the rate and the burstiness of this flow. There are two counters within each node, the cumulative rate and the cumulative burstiness, and these should stay below the rate and burstiness capacities of the node.
N
So when we want to have a flow admitted, we should guarantee that in all the nodes this flow is traversing, these two inequalities hold: the rate that the flow wants, plus the cumulative rate, should be less than the rate capacity of this node, and the same should hold for the burstiness. If one of these inequalities does not hold in one of the nodes that this flow is traversing, then we cannot accept this flow, because we cannot guarantee its requirements.
N
Well, as soon as we accept this flow, we should update these counters, and this can be done by simply adding the rate and the burstiness of this flow to the corresponding counters; and as soon as the flow leaves, we should subtract these two parameters from the cumulative rate and burstiness.
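[Ed.: a minimal sketch of the admission procedure just described, with hypothetical names, not code from the draft: check both inequalities at every node on the path, and update the counters on accept and release.]

```python
# Sketch of the per-class dynamic admission check described above.
# Each node keeps two counters (cumulative rate, cumulative burstiness)
# against two capacities; a flow is admitted only if both inequalities
# hold at every node on its path. Names are illustrative, not from the draft.

class Node:
    def __init__(self, rate_capacity, burst_capacity):
        self.rate_capacity = rate_capacity
        self.burst_capacity = burst_capacity
        self.cum_rate = 0.0      # sum of admitted flows' rates
        self.cum_burst = 0.0     # sum of admitted flows' burstiness

    def fits(self, rate, burst):
        return (self.cum_rate + rate <= self.rate_capacity and
                self.cum_burst + burst <= self.burst_capacity)

def admit(path, rate, burst):
    """Admit the flow only if every node on the path can take it."""
    if all(node.fits(rate, burst) for node in path):
        for node in path:        # flow accepted: bump both counters
            node.cum_rate += rate
            node.cum_burst += burst
        return True
    return False                 # any violated inequality means reject

def release(path, rate, burst):
    """Flow leaves: subtract its parameters from the counters."""
    for node in path:
        node.cum_rate -= rate
        node.cum_burst -= burst

path = [Node(100e6, 10_000), Node(50e6, 8_000)]
assert admit(path, rate=40e6, burst=4_000)       # fits at both nodes
assert not admit(path, rate=20e6, burst=5_000)   # burst would exceed node 2
release(path, rate=40e6, burst=4_000)
```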
N
Are we able to calculate an end-to-end latency bound for that? Well, this is the model of the relay node inside DetNet: we have the combination of a regulator and a queuing subsystem. For the parameters to calculate the latency bound, we take the combination of the queuing subsystem and the regulator at the next node.
N
Thanks to the regulation-for-free property, we are able to calculate a per-hop delay bound, where the parameters you can see in the formula are fixed and known: the flow's burstiness, the delay and rate terms of the queuing subsystem, and C, the transmission time. So we are able to calculate a delay bound over one hop, and simply by summing these per-hop delay bounds we are able to compute an end-to-end delay bound for this specific flow.
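[Ed.: the summation step can be illustrated with a small network-calculus-style sketch. The exact formula is in the draft; the parameter names and the simple latency/rate form here are our own assumptions.]

```python
# Illustrative sketch of summing per-hop delay bounds into an
# end-to-end bound, in the spirit of the approach above: each hop
# contributes a fixed latency term, a burst over service-rate term,
# and a transmission time. Parameter names are ours, not the draft's.

def per_hop_bound(latency_term, service_rate, burst, link_rate, max_pkt):
    # rate-latency service curve (T + b/R), plus store-and-forward
    # transmission of the largest packet on the output link
    return latency_term + burst / service_rate + max_pkt * 8 / link_rate

def end_to_end_bound(hops, burst):
    """hops: list of (latency_term, service_rate, link_rate, max_pkt)."""
    return sum(per_hop_bound(T, R, burst, C, L) for T, R, C, L in hops)

# two identical hops: T=50 us, 10 Mb/s service rate, 1 Gb/s link, 1500 B packets
hops = [(50e-6, 10e6, 1e9, 1500)] * 2
bound = end_to_end_bound(hops, burst=10_000)  # 10 kbit of burst
print(f"{bound * 1e3:.3f} ms")  # 2.124 ms
```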
N
Well, there are some updates we are planning to go through. One is to improve this delay bound calculation; we think we are able to do that, for both the dynamic and the static flow creation problems. There is also some work on CQF: since this work is intentionally targeting formally proven delay bounds, we would like to have the formal delay calculation for CQF as well, rather than treating the CQF delay analysis only superficially.
A
It's about the same; I can't tell if it's more or less, but it's about the same. How many think that this is a reasonable starting point for a working group document on the topic? Maybe a little more than those who raised their hand before, which is perfectly okay, but always entertaining. How many think we should not adopt this document?
O
The reason I put this here for discussion is that in China Mobile, when we talk about evolving our network to support TSN, or DetNet, or PLC in the industrial Internet, we found that it's very important to understand how a layer-3, long-distance, large-scale network can support deterministic networking, or support bounded latency. So this is really an outline of the problems we understand we're going to face if we bring DetNet with TSN to a large-scale network. So, in DetNet,
O
now, as I understand it, the data plane takes TSN technology as the forwarding technology to basically enable the bounded latency. But when you deploy the TSN network, for example, we have a use case of connecting two isolated TSN islands; then the actual connection between the TSN islands will introduce some trouble for maintaining the bounded latency. So we have noted:
O
Yeah, and just to be clear: it's not a mess as such, but it's one use case. So we have several schemes proposed for bounded latency; you can find the drafts. The first two are basically CQF-based, then there is the one that I believe is in the next presentation, and then we have the segment routing one.
O
TSNs are synchronized. If we need to transmit data between TSN islands, then layer 3 will introduce jitter into your latency, and that basically needs to be taken care of by using a synchronized network to bring the synchronized TSNs together; that's number one. Number two is the propagation delay introduced by long links, long connectivity, because it will violate the time window that the TSN algorithms, or TSN mechanisms, require.
O
If you calculate the latency based on that state, then it is really computation-greedy. So when we are facing these massive dynamic flows, we probably need to consider a mechanism. This is not a solution draft, but it poses the question we need to consider: how are we going to deal with these massive dynamic flows?
O
So this is really a simple draft; it's not very long, and we feel that more requirements regarding large-scale deployment are going to come from different use cases. So we're going to have more offline discussion with the authors of those solution drafts (not implementation drafts), and we believe that maybe there is no single solution that can address all of these requirements, but a combination of them, or a different choice among them, may address these problems. Okay, that's all from me. Thank you very much.
A
But we can't do anything about that here, because that's a technology that's owned by the IEEE. So bringing the issues to the IEEE and trying to work them there would be great. Here, all we can do is make sure that we have proper bounding and constraints on what implementers do in using the underlying technologies. So I think you could restate this document, or translate this document, into requirements for the scalability of DetNet.
A
That would be out of our scope. So, we talked about using different technologies: we could, let's say, operate over an MPLS point-to-point link where there is some local queuing algorithm that a vendor has figured out meets the requirements of the DetNet service, and that would be fine, as long as it meets the real operational requirements. So if you think there are some operational requirements, from your network's experience or from your personal experience, those are really useful to capture.
G
Lou, I wonder if you're overstating the context of TSN. I can read the criticisms in this talk in two ways. One is: TSN doesn't solve all the problems that it could solve, and IEEE could fix it. The other way to read it is: TSN was designed to solve some problems, there are some interesting scenarios for DetNet that fall outside of TSN, and perhaps something ought to be done elsewhere, outside of TSN, to go address those. And I'd encourage consideration of both perspectives.
I
Yang, thank you for this draft. I agree, and I encourage you to present the requirements, because that will be useful in trying to improve the situation. I think if you keep an eye on IEEE at the next meeting, at 802.1, you'll find recent presentations pointing out that the cyclic methods that 802.1 has are actually more powerful than most of the people in 802.1 realized, and that scalability in particular will improve. And I appreciate your comments on the requirements.
C
I wanted to come back to your synchronization slide, just for clarification again; can you go back to that slide? So, in DetNet we don't dive into synchronization details, but we do say, in the architecture for instance, along the principle that we don't want to reinvent the wheel, that the idea is to use existing time synchronization or frequency synchronization techniques as needed in certain deployments. So, for example, in this case it could be that the DetNet domain interconnecting the two TSN domains implements a time
O
Just one comment: if it is a long link or a large-scale deployment, like what we are doing in the TD-LTE backhaul network, then synchronization, yes, it's doable, but it brings a lot of work as well. I agree with you, but it's really hard, because we need a solution for non-synchronized networks with bounded latency. That's basically why we triggered this here.
P
Hello everyone, I'm Christina from Huawei, and I'm going to present the large-scale deterministic network draft. Let's look at the requirements for this work. We have a requirement from DetNet: both the DetNet charter and the architecture state that DetNet needs to provide bounded delay guarantees. We also noticed that the usually utilized traffic shaping works well in providing the underlay; however, there is no guarantee of an upper bound on the delay.
P
Normally, the link speed will be more than 100 gigabits per second, so DetNet needs a forwarding and queuing option that is available at that speed. We also have other requirements, including easily calculated delay and jitter, and a DetNet mechanism that is not subject to link jitter but follows a cycle mapping. The goal of this work is to bring the information about large-scale DetNet into a problem statement document, and to introduce requirements and a framework that are independent of specific forwarding plane solutions.
P
It should be generically applicable to the DetNet scenario, and used as justification and reference for normative work. In which working group that work happens, we will figure out later, but for example: the queuing model in TSVWG or DetNet; and for the forwarding plane encoding, the IPv4, IPv6, TCP or UDP extensions in TSVWG, the IPv6 extension header in 6MAN, the SR encoding in SPRING, and MPLS-specific encoding in the MPLS working group.
P
This is a classic scenario in DetNet: we use a DetNet domain to connect two TSN islands together. As the P nodes, DetNet shall provide queue management, like shaping or scheduling, to provide congestion control. So our work is aiming at a scalable queuing solution for this scenario. We noticed that there is a kind of solution called cyclic forwarding that has been proven to work well in TSN and that can provide bounded latency and jitter.
P
It's a kind of synchronized forwarding scheme without per-flow queuing state on every hop: synchronized nodes forward packets synchronously across all hops, using a single synchronized cycle. But this kind of mechanism has some challenges. The first one is the nanosecond-level time synchronization requirement among all the nodes. The second one is the limited physical size, due to the link propagation delay relative to the cycle time: the larger the network, the smaller the percentage of traffic that can be synchronized, due to the long link propagation delay.
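[Ed.: to make that size limitation concrete, here is a rough back-of-the-envelope check. The simple model and the numbers are our own assumptions, not taken from the draft: under synchronized cyclic forwarding, a cycle's packets plus the link propagation delay have to land inside the receiver's matching cycle window.]

```python
# Rough illustration of the physical-size limit of synchronized cyclic
# forwarding: the link propagation delay (plus any guard time) must fit
# within the cycle time, so long links break the scheme. The model and
# the numbers are our own assumptions, not from the draft.

FIBER_SPEED = 2e8  # metres per second, approx. speed of light in fibre

def fits_in_cycle(cycle_time_s, link_length_m, guard_s=0.0):
    propagation_s = link_length_m / FIBER_SPEED
    return propagation_s + guard_s <= cycle_time_s

# a 10 microsecond cycle tolerates a ~1 km link...
assert fits_in_cycle(10e-6, 1_000)
# ...but not a 100 km metro link (500 us of propagation delay)
assert not fits_in_cycle(10e-6, 100_000)
```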
P
In order to overcome these challenges, we propose large-scale deterministic network cyclic forwarding. The basic idea is to carry a cycle identifier in the packet. This is a kind of non-synchronized forwarding, where a cycle ends after all the packets of that cycle have arrived. What we have achieved is that we still keep the key benefit of traditional cyclic forwarding, that it's easy to calculate the end-to-end delay and jitter, but we eliminate the physical size restriction and support arbitrary link propagation delay. We also eliminate the need for time synchronization.
P
We have made some changes to the draft since the last IETF meeting. The first one is that we added a new figure to illustrate that common IP and MPLS forwarding with priority queuing cannot guarantee bounded latency and jitter, due to micro-bursts and micro-burst accumulation. If we have no control over packet behavior, then in the worst case packets arrive at a single point simultaneously; the packets then wait in the queue and produce micro-bursts, and micro-burst accumulation after several hops, and then there is no way to guarantee bounded latency and jitter anymore.
P
We also added descriptions of two models that implement the proposed cyclic forwarding mechanism: a swap model and a stack model. In the swap model, a packet carries a cycle identifier, and at each hop the cycle identifier is swapped to a new one according to the cycle mapping. In the stack model, a packet carries a stack of cycle identifiers, one for each hop, like in SRv6; thus each hop maps the packet based on the next cycle identifier. Okay.
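[Ed.: the two models can be sketched in a few lines. Identifiers and table shapes here are our own illustrative assumptions, not from the draft.]

```python
# Illustrative sketch of the two cycle-identifier models described
# above. Swap model: each hop rewrites the carried cycle ID through its
# local cycle-mapping table. Stack model: the ingress imposes one cycle
# ID per hop, SR-style, and each hop consumes the top entry.

def forward_swap(cycle_id, mappings):
    """mappings: one {incoming_cycle: outgoing_cycle} dict per hop."""
    for table in mappings:
        cycle_id = table[cycle_id]   # identifier swapped at every hop
    return cycle_id

def forward_stack(stack):
    """stack: list of per-hop cycle IDs imposed at the ingress."""
    consumed = []
    while stack:
        consumed.append(stack.pop(0))   # each hop pops its own entry
    return consumed

# three hops, each shifting the sending cycle by one (modulo 4 cycles)
maps = [{c: (c + 1) % 4 for c in range(4)} for _ in range(3)]
assert forward_swap(0, maps) == 3
assert forward_stack([0, 1, 2]) == [0, 1, 2]
```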
In summary, we want to mature the requirements and the solution.
P
We are still working on the draft, to verify that the working group agrees that the requirements are valuable, and to ask for working group adoption before the next IETF meeting. We will do another revision resulting from the feedback we received, and we are saying that this is not the only queuing model useful for DetNet, but this model is very important.
A
So I'm a little unclear on the document. When reading it, I see that it's informational, and then it goes into a lot of queuing details, saying that there are issues with TSN. Are you looking to provide comments on the TSN queuing mechanism? Are you looking to define a new queuing mechanism? Where are you going with the document?
Q
So, Toerless Eckert as a co-author: what we try to describe in the document and in the slides is a scheme for how to break the deadlock between, you know, which working group is doing what in response to which other working groups. So we wouldn't want to define a queuing mechanism; I think what we want defined in TSVWG is the proper behavior, which is the externally visible result of the queuing behavior, because we typically take the approach in the IETF of trying to avoid defining internal behavior.
Q
That's also been said on a slide in the deck: the PHB would be in TSVWG. But, you know, our best understanding was that for TSVWG to basically accept such work, it needs some other working group to show interest in it, which is exactly what this document is meant to express by being an informational DetNet working group document. It's basically a document expressing the interest of DetNet in having other working groups create standardized aspects of the solution.
D
Yeah, I agree with what Toerless is saying: this document sets up the problem and gives the context of DetNet. Then we need to get some work from the transport area to validate the per-hop behavior, and then we need to go to other groups in order to specify the marking in the packet itself. So there are three pieces of work that will have to start somewhere, and this seems to be the best place to start.
A
I completely understand how the previous document sets up the requirements. I don't understand how going to this level of detail is setting up a requirement; it seems to be setting up a solution. And I think the right place to define the solution, the queuing mechanism that meets the requirement, is in the TSV working group.
G
David Black, wearing entirely too many hats: transport area advisor to the DetNet working group, TSVWG working group chair, and one of the original architects of DiffServ, which we seem to be talking about here. Could you go back to the slide on microbursts? Because, okay, with what we have on this slide, you'll notice there isn't the word "queue" anywhere in it. Nonetheless, what I see here is a behavioral spec that could probably be the foundation for a PHB.
G
What it basically starts from is that we have a time-sequenced stream, and we want to ensure that it doesn't pile up and turn into uneven microbursts. I believe that can be done by referencing some notion of a time interval, without the nuts and bolts of exactly how the queues work inside, and that kind of careful teasing apart of things can get us to a PHB spec that doesn't tell the people doing silicon how to build the queues, I think.
G
We'd have to have a longer discussion about that one, because I think I've heard from the authors in other places that their interest is more in the DiffServ framework. Nonetheless, I understand the comment. The crucial point I want to make is that I believe this chart provides the basis for the on-the-wire, observable behavioral spec that gets you away from explaining exactly how the queues work in the silicon.
A
That would be fine, and then we would remove all the queuing behavior discussion from this document, and then we could have something that would be useful here. Where we start getting into the queuing behaviors, I think, is the place where we've gone past what we should be talking about in DetNet.
Q
So, maybe two points. The first point: if we still haven't gotten the point across, we still need to improve the text or the slides, right? When you're saying IntServ, that is per-flow; it's exactly the point of not having per-flow state in the midpoint P nodes. So there is only state for the class, which basically means not having to scale up with more and more flows, right?
A
The queuing pieces are really down in the guts and outside the scope here. But the point that you've just made, if that's really the main focus, or I thought I heard you say it's your main focus (I could be wrong, but I thought I heard you say the main focus) is to get away from the aggregate model: that's very new, and that's cool, that's great to bring in. But let's make that the focus of the discussion.
Q
By the way, on the whole IETF mindset of trying to completely eliminate discussing anything internal to the router: can I invite you to the router architecture discussion at 3 p.m., where basically that's exactly a big point of complaint. I don't know if, for example, any of the TSN specifications would have been able to be implemented correctly if IEEE had taken the same stance.
Q
No, no, we're not stepping on their toes. The only thing that we're saying is that we're fine with any structural approach in the documents, in terms of making an appendix or something like that; but without having any idea of how somebody else could have implemented it, it's very hard to bring in a lot of the QoS technologies. I think even the people who wrote the PHBs always had the basics in the back of their mind, and put in as much documentation and as many examples as they could find, right? So, and I think,
A
in the days of IntServ, I was there building solutions and queuing mechanisms, but the IETF chose not to go down the path of standardizing them. I certainly agree with you: the implementers always have an idea of how you're going to implement it; it's just a matter of what we standardize, and where, and in what forum. But the notion of doing a different form of aggregation is an important idea, and to me that's really where we should be focusing the comments on this document.
N
So in TSN it makes sense, because you don't have a lot of nodes and you can make them synchronized on the same T. But if you have a large-scale network, I find it hard to have the same T; and if it is not the same T, then there is another issue: how can you guarantee that from one ingress port there is just one packet from one cycle?
N
Maybe there are more, maybe not just two, because I noticed in your document that you assume that at most two packets are coming from the previous node, while if the parameter T is different, then for sure it's more than that, right? Yeah. And so how is that going to be addressed at large scale?
N
I'm not talking about ATS; I'm talking about this LDN approach, right? Maybe there are some challenges in ATS, but what I'm concerned about is that the delay calculation provided in this document is not accurate. I mean, there might be some challenges; I would prefer to have the results first in some papers, where they can be proven somehow, and the results reported in a draft, rather than having the mathematical proof inside the draft. Maybe it's more interesting to have the results.
A
I think this is a great conversation to have offline, given the time constraints; but I do think it's good to spend a few more minutes with Stuart and David to see if we can get to some resolution on where we go next. I will point out that we're over time on this slot, which means we're eating into the next one. But given that we've talked about this a few times, it seems like it's the right use of the meeting time.
D
Three things. First off, I don't think... are you listening, though? First, look, I think it's grossly misleading to suggest that we're treading on the IEEE's toes. As you suggested earlier, this is not the intent here. The intent here is to solve this entirely at the packet layer, which is not the way they do it, so we don't get anywhere near what they're doing; we want an independent solution.
A
The thing that I've taken away from this is that the main point of the document is to introduce a new form of scaling mechanism into our architecture, and I'd really like us to focus on that and nail that down. And if that means a separate document, that's fine too: you can have a mechanism document and then one that covers the architectural implications, because that architectural implication is a huge one.
A
If you want to include it, we need to bring that in and agree to it there. Right now, the only way we have for scaling is aggregating flows, and I thought I heard, again, I might have misunderstood, so I'm happy to be corrected either in session or out of session, but if it really is the case that this is something different from what we have defined for aggregate flows, that's really important, I believe.
G
Lou, I think you hit the nail on the head with that very last comment. Now wearing a TSVWG hat: TSVWG is quite happy to do work for other working groups; we've done so many times. I remember the amazing adventure I went through to get us off the critical path of WebRTC, so we wouldn't be responsible for Cluster 238. So we have lots of track record doing this.
G
However, there needs to be a cooperative relationship with the group that is going to use the work, and hence a decision needs to be made here that this work is interesting, along with the high-level scope and scoping requirements. We can work out the details of exactly what the forwarding behavior through the node needs to be to solve the problem, but the problem is going to be defined here, right?
A
And I think the previous document seems to be defining the problem; this document seems to be more about the solution, but with the addition of this new concept of handling scaling without using flow aggregation. And again, I think it's been made clear, to me, that that's something we don't have; and I say "to me" because I want to give an opportunity for other people to stand up and say I'm wrong, and that we really do have it.
Q
In what you've been saying, I hadn't heard the word "requirements". Last time around, at IETF 103, when we were discussing this, the feedback was more that we need to describe the requirements. So is it fair to say that the document itself should also relate to, you know, as we tried to do on the first slide, how we think there are requirements to solve this problem, if...
A
I have Norm in the queue.
N
Yeah, my point: the bounded latency draft does point out that some specific queuing mechanisms, among them at least one of the IEEE mechanisms, do not require any per-flow state in any node along the way, and so do allow dynamic, rapid additions and deletions of traffic.
G
And when I looked at this stuff before the meeting, my sense was that to the extent there was a functional gap somewhere that this group ought to be looking at, it was in network scope: things like, can you extend the time synchronization to arbitrarily large networks; going back to the earlier point about administrative domains, are you going to get uniformly tight time synchronization across administrative domains; what happens with high-jitter links; that kind of thing. Yeah.
A
And those are places where we definitely want to have solutions, so we can translate those requirements down to the TSV working group and have that group work on solutions. So: we have run out of time, which means we are skipping the last two slots, which were individual documents. We do encourage you to take a look at them.
A
One of them was on SR, and we did want to talk about that, and in particular mention that we chairs need to work with our AD and talk to the SR chairs, the SPRING chairs, to see where that work might make the most sense, or where that discussion should take place. So with that, we are out of time. We thank you all for contributing and participating, and we definitely apologize to those last two presenters, and encourage you to go.