From YouTube: IETF-DETNET-20230621-1200
Description
DETNET meeting session at IETF
2023/06/21 1200
https://datatracker.ietf.org/meeting//proceedings/
A: Okay, with luck, that's confirmation that you're in the right place.
A: Okay, I've gone and pasted the agenda into the notes page. Would anyone like to volunteer to try to take notes there, since I've not been doing very well in some of the recent meetings?
A: Okay, this is an IETF meeting. This is the Note Well slide. You're expected to be familiar with it, because it applies to your participation in this meeting.
A: The goal is not to get through all of them, but to discuss one or two of them in depth, about the same way that we discussed CQF in depth at the last meeting, both to understand what the mechanism is and, equally importantly, to in essence debug the requirements draft.
A
So
we
understand
what
it
means
for
a
mechanism
to
meet
or
or
not
meet
the
very
sections
of
the
requirements
draft
after
that,
Antoine
is
going
to
do
a
presentation
on
bounded
on
bound
on
his
bound
delay.
Q
draft.
B: Just to repeat, if folks here haven't read the mailing list: I've tried to see how we can capture the work of these meetings on the wiki a little bit better.
B
So
let
me
know
if
that's
useful,
but
what
else
we
can
do
and
and
and
add
other
mechanisms
that
you
think
are
should
be
in
the
list
and
then
also
we,
we
merge
the
two
cqf
derived
mechanisms
that
we've
been
presenting
that
and
and
yeah
sorry
I
sent
that
to
the
mailing
list.
Thanks.
A: Okay, great. And we know from discussion in a past meeting that CSQF uses the same mechanism in the data plane, the same scheduling and queuing mechanism on the node, but has a different control mechanism and passes different information between the nodes. But I don't know yet what we ought to do about drafts to recognize that.
B: Yeah, I don't know yet what the authors there would like to do, but I certainly would love to try to propose some comparison, to, I think, add further explanation about the pros and cons of either approach.
A: Hang on here. Okay, Xiaofu, you're going to lead a discussion on this.
A: Oh, hang on a minute. I need to negotiate with Meetecho just a minute here.
A: All right, I think you're going to have to yell at us. All right, let's see.
A: Okay, and you're coming through much better now. You were very hard to hear earlier; something just improved. Please keep doing that.
D: Okay, this one, yeah. So this is the evaluation according to the requirements. Let me begin from the first item. For the item 'tolerate time asynchrony', the evaluation is No, because the terminals and the network devices need to achieve nanosecond-accuracy clock synchronization across the network, to ensure that the GCL times of all outgoing ports are aligned across the whole network. For this item, I'm not sure if anyone has some more comments.
D: I think this setting is obvious, so I will go to the next item. For the item 'support large single-hop propagation latency', it is Yes, because the link delay is naturally considered during the calculation, that is, in determining the TAS transmission time window position, and the GCL is independently stored on each node.
A: Let's go quickly; they sound like easy calls. Any comments or disagreement with 3.1 or 3.2?
D: Okay, if there are no comments about these two items: for the next slide, 'accommodate the higher link speed', the evaluation is Partial. A higher link speed requires finer timer control; that means a smaller time granularity of the GCL is required, which may exceed the capacity of the device.
D
Some
more
detail
of
the
disciplinations
that
the
the
GCL
operation
is
based
on
the
flows,
so
so
for
a
specific,
a
specific
cycle
of
the
G
cello.
They
may
contain
too
much
items
due
to
the
higher
linger
speed
under
the
time.
Interval
is
smaller,
so
it
can
contain
too
much
items.
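Xiaofu's point, that a faster link shrinks the per-frame slot and so multiplies the number of gate-control-list entries a cycle may need, can be sketched with rough arithmetic. The 1 ms gating cycle and full-size frames are illustrative assumptions, not numbers from the slides:

```python
# Rough illustration (assumed numbers): at higher link speeds, the slot
# needed to serialize one frame shrinks, so a GCL covering a fixed cycle
# needs finer granularity and potentially many more entries.

FRAME_BITS = 1500 * 8          # one full-size Ethernet frame
CYCLE_NS = 1_000_000           # assume a fixed 1 ms gating cycle

for speed_gbps in (1, 10, 100):
    slot_ns = FRAME_BITS / speed_gbps          # ns to serialize one frame
    entries = CYCLE_NS / slot_ns               # slots per cycle, one entry per frame
    print(f"{speed_gbps:>3} Gbit/s: slot = {slot_ns:>7.0f} ns, "
          f"about {entries:>6.0f} potential GCL slots per cycle")
```

At 1 Gbit/s a frame occupies a 12 microsecond slot; at 100 Gbit/s it is 120 ns, roughly a hundred times more slots per cycle for the hardware to hold.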
F: Yes, hello. A very generic question to the presenter: when you say Time-Aware Shaper, do you refer to actually IEEE 802.1Qbv?
F: It sounds like you refer to a particular use of what IEEE 802.1Qbv defines, which is the enhancements for scheduled traffic, because you said gate control lists and so on. But the enhancements for scheduled traffic provide you some form of programmable machine, so it can be used in one way or another. For example, you can, even with some other extensions, create a CQF-like behavior with the enhancements for scheduled traffic.
F: So do you refer to a classic TDMA use of the enhancements for scheduled traffic in your evaluation here? It sounds a bit like that to me. Or do you have some other communication scheme in mind?
D: So, if I can explain clearly: this slide on TAS is based on the classical use; it does not contain other enhancements.
F: I think it's just understood that you're using TAS, or 802.1Qbv, which is the project name, P802.1Qbv, 'enhancements for scheduled traffic'. So the standard does not contain TAS as such, and it can be used in various ways. But, for example, in 3.7 you say delay jitter is ultra-low, and it really depends on how you use the gate control lists. My question is therefore: do you assume 802.1Qbv is used in a time-division multiplexing manner?
A: I believe he already said yes to that question.

F: Oh okay, I didn't hear this.

A: Yeah, I'm pretty sure he said yes to that question. And beyond that, we had a separate discussion at the prior meeting about CQF. So if TAS were used to implement CQF, I would suggest referring to the evaluation of CQF.
F: Not in a standardized way, though. This classic TDMA is, I think, well, it's not really normatively standardized.
D: Okay. So, for the next item, 'be scalable to a large number of flows', the evaluation is No: the per-flow calculation in the control plane can be a hard problem, and the GCL may contain too many entries due to too many flows.
A: Actually, I have a question there, but my question is to Ping, on the requirements draft, and I'm going to apologize, because I probably should know this. Ping, how does the requirements draft treat individual-flow scheduling versus traffic-class scheduling?
A: All right, it sounds like that might be something that would be useful to add, because, if I understand what Xiaofu has written, he said that TAS will not scale to a large number of flows, because the control-plane problem is very bad, but TAS per traffic class will scale to a large number of flows by virtue of traffic classes. And I guess we need something in the requirements draft to talk about the use of traffic classes versus individual flows. Does that make sense?
H: Yes, but I have a question, or concern, about evaluating TAS, because in my mind, as far as I know, I don't see any extension to TAS for use in a large-scale network. If I'm not wrong, I think in most people's minds it can't be used in a large-scale network, and there's no trial or further work on that. So we can just give that inference, I think.
A: I don't think there's a problem there. I mean, for example, there are two clear No's in the evaluation column, which suggest that TAS in a large-scale network is not going to work. However, one of the purposes of this exercise is to better understand the requirements draft, and that's why I'm raising this here, because I would not be at all surprised to see traffic class versus individual flows turn up in the evaluation of some of the new mechanisms that are appropriate for large-scale networks.
H: Well, yeah. Yes, we can give some explanation, but I just wanted to express my little concern about it. Okay.
A: And the concern is completely valid. In encouraging this evaluation exercise, I'm not believing that any of the TSN mechanisms are going to completely support super-large-scale networks, but it does serve the purpose of understanding where the gaps are, and where some refinement of the requirements draft will help us work through the new mechanisms.
B: I'm a little bit surprised at bringing in traffic classes. I'm not quite sure that we've done that in the past, or as goals, right? I think when I was talking, for example, about TCQF, I was just saying that the scaling behavior that we achieve is similar to what we have with traffic classes, but that was just meant as a comparison, not as, you know, specifically saying that here there is a DetNet idea for traffic classes, which I don't think we have.
D: Toerless, I didn't completely understand the question, but the figure here actually contains eight traffic classes.
D
For
the
top
of
the
class,
it
will
also
included
the
TS
contain
the
gclo
to
assert
the
state
of
for
the
for
for
easy
traffic
class
to
assemble
the
traffic.
B: Isn't that rather what I would be calling, what I think I've also seen in other papers being called, priorities? In terms of that, an individual flow belongs also to a traffic class, and then the behavior, the latency, throughput or whatever it is, for different traffic classes is given different parameters there. So it's not independent of the per-flow behavior, but it's an additional parameter of each flow.
D: Okay, I think the TAS mechanism actually operates based on eight traffic classes, and multiple flows are assigned to a specific queue. For example, flow 1 to flow N are assigned to the queue with a specific traffic class, but flow 1 and flow N may be transmitted at different times, so they arrive at the queue at different times. I don't know whether my answer has answered Toerless's question.
B: Okay, but isn't it true that, okay, so in the control plane we need to capture per-flow information and calculate, per flow, which traffic class each flow is given, and then in the forwarding plane, like in, you know, the derived solutions like CQF, TCQF and so on, we only have the different traffic classes.
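The split described here, per-flow state confined to the control plane while the forwarding plane only ever sees class queues, can be sketched minimally. All names below are illustrative, not taken from any draft:

```python
# Minimal sketch (illustrative names): the control plane keeps per-flow
# records and assigns each flow a traffic class; the forwarding plane
# only ever sees the eight class queues, never individual flows.

NUM_CLASSES = 8

control_plane = {}                      # flow_id -> assigned traffic class

def admit(flow_id: str, required_class: int) -> int:
    """Record per-flow state and map the flow onto a class queue."""
    assert 0 <= required_class < NUM_CLASSES
    control_plane[flow_id] = required_class
    return required_class

def forward(packet_class: int, class_queues: list) -> None:
    """Forwarding plane: enqueue by class only, no per-flow lookup."""
    class_queues[packet_class].append("pkt")

queues = [[] for _ in range(NUM_CLASSES)]
cls = admit("flow-1", 6)
forward(cls, queues)
print(len(queues[6]))   # 1 packet sits in the class-6 queue
```

The point of the sketch is only the asymmetry: `control_plane` grows with the number of flows, while the forwarding state is fixed at eight queues.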
A: Okay, all right. So I think the evaluation is reasonable, and I think we do need to say, as mentioned earlier, we need to say something in the requirements draft about the use of traffic classes and flow aggregation for scalability. Okay, let's go ahead and move on to high utilization. Thank you.
D: For the item 'flow fluctuation will not disrupt service', the evaluation is No.
D: So, for the next item, 'be scalable to a large number of hops with complex topology', the evaluation is Partial.
A: Let's go on to the next one and see what we will see.
A: Okay, Xiaofu: do we want to talk about the credit-based shaper evaluation?
D: Okay, this is the credit-based shaper. The standard referred to is IEEE 802.1Qav. For the first item, 'tolerate time asynchrony': Yes, because it does not rely on time synchronization or frequency synchronization.
D: More buffer space is required to reserve more service capacity. Accordingly, it's almost impossible to cause a packet storm from a single interfering flow, because the link speed is high. So, in this case, it may not be necessary to combine it with ATS.
D
So
for
for
the
next
one
be
scalable
to
the
large
number
of
flows,
partial.
D: Okay. For the next item, 'tolerate high utilization', the evaluation is Yes; it is still the pre-configuration of bandwidth limits for each traffic class.
F: A question on item 3.4, or rather 3.4 sub-item two. Just for my clarity, 3.4 sub-item two: when you say 'pre-configuration of bandwidth limits', do you mean setting an administrative idleSlope value for the associated credit-based shaper?
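For readers unfamiliar with the idleSlope parameter being asked about, here is a hedged sketch of the 802.1Qav credit rule: credit accrues at idleSlope while the class waits, drains at sendSlope while it transmits, and transmission is allowed only when credit is non-negative. The rates are illustrative, and this is a simplification of the standard's transmission-selection algorithm:

```python
# Hedged sketch of the 802.1Qav credit-based shaper (illustrative rates,
# simplified from the standard): the administrative idleSlope is exactly
# the pre-configured bandwidth limit of the class.

PORT_RATE = 1_000_000_000            # 1 Gbit/s port
IDLE_SLOPE = 100_000_000             # admin idleSlope: reserve 10% for the class
SEND_SLOPE = IDLE_SLOPE - PORT_RATE  # negative slope while transmitting

def credit_after(credit_bits: float, seconds: float, transmitting: bool) -> float:
    """Evolve the class credit over one interval."""
    slope = SEND_SLOPE if transmitting else IDLE_SLOPE
    return credit_bits + slope * seconds

credit = 0.0
credit = credit_after(credit, 12e-6, transmitting=True)   # send one 1500 B frame
assert credit < 0                  # the class must now wait...
credit = credit_after(credit, 108e-6, transmitting=False) # ...while credit recovers
print(round(credit))               # back to 0: a 10% long-run share
```

The long-run share works out to idleSlope divided by port rate (here 12 microseconds sending per 120 microsecond period), which is why configuring idleSlope is configuring the bandwidth limit.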
F: I don't know, well, I can't remember if it's the title, but the contribution there referred to an item, or, in the bottom line, reference number one.
D: Yes, it is a paper discussing combining ATS and CBS to work together.
F: Yeah, okay. Just to give you a hint: if it's the paper where CBS and ATS are entertained for the same traffic class in the same bridge port or end-station port, this combination is nothing that is specified by IEEE 802.1Q, so this mechanism, this combination, is not standardized.
D: Yeah, I think the combination of ATS and CBS is useful for some scenarios, because the problem actually exists; the paper has also discussed this combination to avoid this issue.
D
Yeah,
okay
I
will
go
ahead
for
the
next
item.
Okay,
I
have
done
so,
for
the
next
item
will
win
the
flow
flood
creation.
Floorboard
is
blockchain,
so
it's
the
information
is
yes.
D
Each
service
flow
of
a
Class,
A
or
Class
B
is
a
permitted
based
on
the
bundled
silver
issue
under
the
total
amount
of
bandwider
civilization.
That's
more
that
you
can
see
that
the
three
configuration
simulator
and
mode
exceed
the
worst
latency.
I: Anyway, to make it more clear: should each item of the table be the same as this one, or the previous one prepared by ejo, or can we list the items we think are very crucial to evaluate?
A: I think I'd want to see a chart like this for each proposed mechanism, and then I don't think there'd be a problem with another chart pointing out interesting characteristics of the proposal that aren't captured here. Would that be okay?
A: Yes, although I would ask that the more interesting aspects be limited to a single slide, because we're going to have to do compare-and-contrast among the new mechanisms to figure out which ones solve which problems and what to take forward.
A
Perlis,
or
is
you,
what
do
you
want
to
comment
on
cqf
or
sorry,
tcqf.
B: The mic button is always your friend. No, no, I think at most I'm struggling with whether this list that we have in the requirements draft is complete enough to do a useful comparison. So I think this...
B: The CBS one is particularly interesting, and I have to sit a little bit longer on it to see what may be missing in terms of what, you know, the new proposals that we have overall are improving on. But certainly I think it must be possible to start with this.
A: Okay, sounds good. And if you see something that ought to be improved in the requirements draft, please send a note to the list. I'm sure Ping is very interested.
B: But you do know that I had proposed at the last IETF meeting a draft which had a more detailed list of evaluation criteria, right, without necessarily making each of these criteria a requirement, and I think that that's an easier starting point than coming up with these evaluation judgments. That's, I think, what I'm a little bit struggling with.
A: Let's see, I need to choose words carefully here. I'm looking to make progress with some initial evaluation that is tractable and comprehensible across the variety of mechanisms we're going to be looking at. So at the moment, let's see, put another way: I greatly appreciate the depth of the draft you wrote to our list, but at this point I want to make sure that we've encompassed the breadth of the proposals, perhaps before diving into that depth.
H: Some points here. One question: if we only use CBS, can it guarantee latency? For example, if we see the results of the chart, some of the evaluation results were Yes and some of them were Partial, so it will give the impression that it may be used in the large-scale network, but it may have its own problems, such as the latency guarantee and also some efficiency issues. I think that's one of the points. And the second one is about the requirement 3.4.
H: 'Tolerate high utilization' is a relatively new point that was added into the text, and I think in these slides the evaluation method is a little different from the previous one, for TAS, and I need to explain that.
H
For
this
point
it
means
if
there
is
no
more
bandwidth,
can
be
used
into
for
the
and
then
that
flows,
how
it
can,
what
more
work
can
be
done
to
to
schedule
a
schedule,
the
the
the
capability
and
now
I
think
maybe
there's
a
little
problem
for
the
requirement
to
draft
for
this
point.
So,
oh,
it's
led
to
the
results
that,
for
example,
for
for
this
one.
H: Yes, and for this point it says, if the flow doesn't send packets in accordance with its reservation... so I mean it's different to evaluate for this point. But maybe that point has its own problem, and it's not so major at this stage, and I will think more about this point, for this requirement.
H
So
just
two
parts,
one
for
the
CBS
itself
and
the
second
one
is
for
the
requirement.
0.4.
H: Okay, okay, sorry, I understand what you mean. You use the same evaluation method, but I really don't get the requirement of this one; I don't mean the requirement is incorrect. Now, you mean to reach a high utilization of the bandwidth, right? But now in the text it says, for example, more than 70 percent, or just up to 100, and how to treat more flows then?
A
I
think
one
of
the
concerns
here
is
that
Jesus
discussion
of
dead
time
in
cqf
would
detract
from
meeting
this
requirement.
I
think
that
would
be
an
example.
H
Yes,
I
know
that
I
write
the
right
in
the
profile
text,
but
character
comes
in
that
if
it's
fired
to
other
methods.
H
For
example,
artistic
solves
the
problem
of
SQL,
so
we
we
propose
the
requirements
but
I'm
not
sure
if
others
proposed
matter
because
they
may
don't
have
the
original
matter.
H
For
okay,
if
we
use
the
TC
graph
and
it
reached
the
100
percent
of
the
bandwidth
and
how
to
further
how
to
further
optimize
itself.
H
I'm
not
so
clear
about
this
point
on
me.
The
new
proposed
the
requirement
is
correct
of
our
two
others,
because
magnesiums
and
we
think
we
can.
We
can
take
more
time
to
really
discuss
about
it
with
a.
B: With respect to the requirements, looking here at CBS: I think the one big thing we're missing is a requirement about jitter that is lower than, you know, the maximum latency minus the physical link propagation latency. Because I think that's the key differentiation, of course, between the current asynchronous mechanisms, like CBS, and the synchronous mechanisms, which is most of the others.
B: So I think that's something we need to add in the requirements and the evaluations. And then, going back to the evaluation overall here: I think the problem that we have with this, and it's very good that these evaluations were done here, is that they were done on the basis that CBS and TAS are mechanisms by themselves.
B
I.
Think
not
not
really.
Mechanisms
in
in
the
way
I
think
I
would
have
imagined
mechanisms,
but
rather
the
building
blocks
right
so
in
terms
of
tasks
being
building
blocks
that
can
be
used
for
tdma
or
for
cqf
and
CBS
being
building
blocks
for
and
I
think.
That's
the
question:
what
what
what
are
the
profiles
of
that
other
than
APS
right?
A: Yeah, there's a separate slide on ATS that I've avoided going to, mostly for time-management reasons, and also because I thought that after we got through three of these examples, we could have a discussion about whether we have an evaluation framework that can be used for the new proposals.
B
But
then
to
be
fair,
should
we
remove
the
option
of
mentioning
ATS
here
in
in
three
four
one
and
simply
say
no
there,
so
that
you
know
we
don't
mix
up
things
when
they
really
aren't
meant
to
be.
You
know
at
least
even
a
profile
right,
so.
A: I'm not sure it changes to No, but it might reduce the scope of Partial.
B: Yeah, and I think it'd also impact 3.7, right? If we remove any shaping, then 3.7 will also be even more difficult. But in any case, however we adopt the evaluation, right, as Johannes said: if ATS is a separate spec, and the reshaping of ATS is really, you know, not within the scope of CBS proper, then we should assume the CBS evaluation without it.
H: Yeah, so, sorry, what was the question? Okay.
H: So my point is: I wish that every co-author of this draft can really review it and get to the consensus that it's really stable now, yeah.
A: Okay, let's target about two weeks. I'm thinking of July 5th, which is the Wednesday in about two weeks; it's just after the US holiday weekend. And if we can get it done before then, so much the better.
A: Thank you. Okay, Xiaofu, we're going to skip... I'm sorry, go ahead.
D: I just found... do you think we want to add a new requirement to discuss the higher link speed working with the lower link speed? Because in the real network, from access, aggregation and backbone networks, the link speeds are different.
H: Sorry, I don't understand why we talk about lower link speed. Lower than what? Okay, versus the higher link speed compared to a local network, I think.
D
Because
the
traffic
engineer
passed
the
underground,
the
the
possible
the
access
and
the
work
that
are
located
in
the
network
and
the
bug
bone
Network
I
in
this
networks,
the
link
speed
are
different.
So
the
proposal
mechanism
should
support
the
the
end
to
work
between
the
higher
linger
speed
under
the
low
ending
speed.
H: Well, yes, I get your point. First of all, when we proposed this requirement, we just compared maybe a local network to a backbone large-scale network, so it has the higher link speed. And in fact, as you say, when we consider the end-to-end link, the link speeds are different.
H
One
of
the
matters
I
think
we
can
change
the
name,
for
example,
accommodate
different
links
with,
but
it
will
be
a
little
have
some
relation
to
3.8
I.
Think.
A: Yeah, I can hear you. Giannis, can you share the slides via Meetecho?
K: Great. Can you see the slides properly?
K: So, I'm very grateful to the working group for giving me the opportunity to present a draft that we published a few weeks ago. This draft is entitled 'Enforcing end-to-end delay bounds via queue resizing'. This is joint work with my colleagues, including Sebastian, who are listed in the draft, and I will take the opportunity of the presentation today to present you this draft. I will use a very original agenda, as I will follow the structure of the document.
K: So you know where we stand at three given points in the presentation. With no further ado, I will go to the introduction. So I guess people in the IETF DetNet working group are already aware that there are some use cases for deterministic networking and for bounded latency over large-scale networks, because otherwise you would not have the discussion we had today. One of the use cases that we see in the consumer market for bounded delay without the jitter constraint is online gaming.
K: In fact, in online gaming, in massively multiplayer games, the developer of the multiplayer platform is interested in having a solution which enforces deadlines for the actions that players, that peers, do, and they don't really care about the jitter as long as the deadline for actions is met, because the duration of some actions that the avatars of the players perform in the video game can be used to smooth out the jitter that comes from variation in the delay of incoming packets.
K: So if we look at the chart that is shown on the right, which is taken from 3GPP TS 23.501, we are targeting use cases that are located at the top of the table.
K: Those use cases have a specific packet delay budget, but they can accommodate some bursts, so jitter, as long as it's controlled over an averaging window, and this differentiates those use cases from typical industrial TSN or deterministic networking use cases, for instance discrete automation, intelligent transport systems or high-voltage electricity distribution, which are located at the very bottom of the chart.
K: So the thing that you need to take from this introduction is that, with the mechanism I'm going to present to you today, we are only looking at enforcing a latency bound over the network; we are not interested in maintaining a jitter bound.
K: So the main draft idea: we cover the gap between, on one side, TSN or DetNet IP technologies, which are used to enforce those latency and jitter guarantees at a cost, sometimes centralized management, time synchronization in small-scale networks, complex queue management, complex traffic-shaping mechanisms, and, on the other side, more traditional quality-of-service enforcement mechanisms that enforce latency properties on average, for instance active queue management, where you can vary the queue in order to meet some delay targets on average, but not in absolute terms. So we are in between those two families of solutions.
K: Okay, so: bounding the delay at the switches. First I need to show you the rationale behind our idea to perform end-to-end delay enforcement, end-to-end delay bounds, by acting on the queue depths. In fact, from network calculus we know that if queues are served first in, first out, and the buffer has a capacity that is given by B_k, the delay can be expressed as t0 plus the capacity over the committed information rate.
K: If we have a fixed committed information rate, and this is a simplification, so in the following we take the committed information rate as a static value, then by changing the buffer capacity we can act on the worst-case delay. So, for instance, here, if we increase the capacity of the queue, the worst-case delay is increased; if we want to decrease the delay, we have to decrease the buffer capacity.
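The rationale just stated, a FIFO queue of capacity B served at a committed information rate R has a worst-case queueing delay bounded by t0 + B / R, so shrinking B tightens the bound, can be sketched numerically. The t0 and rate values below are illustrative, not from the slides:

```python
# Sketch of the slide's bound: with FIFO service at a fixed committed
# information rate R, worst-case delay <= t0 + B / R, so a node can
# steer its delay bound purely by resizing the buffer capacity B.

def worst_case_delay(t0_s: float, capacity_bits: float, cir_bps: float) -> float:
    return t0_s + capacity_bits / cir_bps

CIR = 10_000_000           # 10 Mbit/s committed rate (illustrative)
T0 = 0.001                 # fixed per-node term (illustrative)

big = worst_case_delay(T0, 500_000, CIR)    # 500 kbit buffer
small = worst_case_delay(T0, 100_000, CIR)  # resized down to 100 kbit
print(round(big, 3), round(small, 3))       # 0.051 0.011: smaller queue, tighter bound
```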
K: So we took off from this idea and looked at a typical architecture for switches, and if you look at the base architecture, and you look at the packet forwarding engine, you can abstract the packet forwarding engine as a set of queues, which can be of variable capacity, that are served by a scheduler, and the scheduler can adopt a set of disciplines for serving the different queues. One of those disciplines is deterministic round robin, meaning that each queue is served for a specific period of time over a period.
K: So if we combine those variable buffer capacities and a deterministic round-robin scheduling discipline, then we can express the maximum sojourn time of packets in a queue with the formula that is given at the bottom of the slide here. So the maximum delay is a constant t0, plus a factor that depends on the capacity of the queue. So now we have presented how adapting the buffer capacity can help enforce the delay.
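Under deterministic round robin, each queue gets a fixed slice of the service period, so a per-queue bound in the spirit of the slide keeps the t0-plus-capacity-over-rate shape, with the rate replaced by the queue's guaranteed share. The share model and the numbers here are my own illustrative assumptions, not the draft's exact formula:

```python
# Hedged sketch: with deterministic round robin, a queue served for
# slot_s seconds out of every period_s at line rate R has a guaranteed
# rate of R * slot_s / period_s, and the sojourn bound keeps the
# t0 + capacity / rate form.

def sojourn_bound(t0_s, capacity_bits, line_rate_bps, slot_s, period_s):
    guaranteed_rate = line_rate_bps * slot_s / period_s
    return t0_s + capacity_bits / guaranteed_rate

# Illustrative numbers: 1 Gbit/s line, queue served 1 ms out of every 10 ms.
bound = sojourn_bound(0.0, 100_000, 1_000_000_000, 0.001, 0.010)
print(round(bound, 6))   # 100 kbit at an effective 100 Mbit/s: 0.001 s
```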
K: So we start from a system in which the queues have a variable capacity, and those queues have two occupancy thresholds: a nearly-full occupancy threshold, for instance when 80 percent of the queue is occupied, and a minimal occupancy threshold, for instance when 20 percent of the queue is occupied. Then we operate a system like this: in each equipment you have a set of queues, and in each queue we have some reservations of capacity to serve flows for which the node is committed to respect a maximum delay at the node.
K: Whenever a request for capacity allocation arrives at the node, either there is capacity in a queue such that the node can respect a delay contract compatible with the end-to-end delay reservation that is requested, and that's fine, we just place a temporary reservation and we are good to go, or we are in one of two problematic cases, and we look at the thresholds. The first problematic case is that we can't serve the reservation for a flow because the delay that the flow needs respected is too low.
K
On
the
other
hand,
if
a
reservation
for
a
flow
cannot
be
accepted
because
too
much
capacity
is
requested,
then
we
look
at
the
queues
for
which
the
maximum
the
nearly
full
occupancy
threshold
is
this
past,
and
we
see
whether
the
we
can
increase
the
capacity
of
the
queue
while
respecting
the
minimal
delay
contract
for
the
reservations
that
are
already
allocated
in
YouTube.
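The admission logic just described can be sketched as code. The field names, thresholds and numbers are illustrative paraphrases of the talk, not the draft's exact algorithm: a queue may shrink to meet a tighter delay demand when it is nearly empty, or grow to fit more capacity when nearly full, as long as every delay contract already given still holds:

```python
# Hedged sketch of the described admission control (illustrative names):
# shrink a nearly-empty queue to meet a tighter delay, grow a full one
# only while the tightest existing delay contract is still respected.

from dataclasses import dataclass

@dataclass
class Queue:
    capacity_bits: float
    occupied_bits: float
    cir_bps: float
    tightest_contract_s: float       # lowest max-delay promised so far
    NEARLY_EMPTY = 0.2               # minimal occupancy threshold

    def delay_bound(self) -> float:
        return self.capacity_bits / self.cir_bps

    def admit(self, demand_bits: float, max_delay_s: float) -> bool:
        if max_delay_s < self.delay_bound():
            # Case 1: demand is tighter than the queue's bound; shrink,
            # but only if the queue is nearly empty.
            if self.occupied_bits <= self.NEARLY_EMPTY * self.capacity_bits:
                self.capacity_bits = max_delay_s * self.cir_bps
            else:
                return False
        if self.occupied_bits + demand_bits > self.capacity_bits:
            # Case 2: not enough room; grow, but never past the
            # tightest delay contract already given.
            wanted = self.occupied_bits + demand_bits
            if wanted / self.cir_bps > self.tightest_contract_s:
                return False
            self.capacity_bits = wanted
        self.occupied_bits += demand_bits
        self.tightest_contract_s = min(self.tightest_contract_s, max_delay_s)
        return True

# Mirrors the 40 ms demand on a queue currently bounded at 50 ms.
q = Queue(capacity_bits=500_000, occupied_bits=50_000,
          cir_bps=10_000_000, tightest_contract_s=0.050)
print(q.admit(demand_bits=20_000, max_delay_s=0.040), round(q.capacity_bits))
```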
K: So I made a schema to present those mechanisms. This is the adaptation of the queue to meet the delay constraint. Here we see that this queue is occupied below the minimal occupancy threshold. We see a demand arriving here for an end-to-end delay of 40 milliseconds. The queue can serve an end-to-end delay of 50 milliseconds, and no contract is bound to the 50 milliseconds, so we are going to reduce the capacity in order to meet the maximum end-to-end delay for the queue and accommodate the reservation that is requested.
K: On the other hand, if now we have a queue that is nearly fully occupied by a set of reservations, and we see a demand arriving for capacity that exceeds the capacity of the queue, we look at the minimal contract for the ongoing reservations in the queue here, and we compare it with the maximum end-to-end delay. For instance, here there is a difference of 10 milliseconds; we can increase the capacity of the queue while meeting the minimum contract, and accommodate the reservation that we received. Okay. So this queuing adaptation system is maybe very simple.
It's
far
less
complex
and
queuing
systems
that
are
involved
in
that
are
used
in
IEEE
TSM
initiatives,
but
the
the
advantage
is
that
they
can
be
used
in
more
simple
devices
and
don't
require
any
synchronization.
K: So I will present the signaling mechanism we put in place, taking as an example the network that is presented here. We imagine that node A wants to use a reservation protocol to send a resource-reservation request to node F, for a flow for which the maximum end-to-end delay requirement is 85 milliseconds and the maximum capacity, including the burst of the flow, is 2 megabits per second. So I'm going to present the signaling mechanism, which differs from RSVP in two ways.
K
First,
we
allows
the
exploration
of
multiple
passengers
with
the
option
procedure.
To
the
best
of
our
knowledge,
we
tried
harder
to
look
whether
a
recipe.
K: We looked for documents defining the behavior of RSVP in a multi-path environment, and we didn't find any; I would be very happy if you have a pointer to such a document somewhere. And in our mechanism, we allow either the destination or the source to take the decision about the path to take for the reservation. So compared to RSVP this is different, because in RSVP, when you send a resource-reservation message to the destination, the decision is not made there.
K
Is
on
the
decision,
only
is
responsible
for
acknowledging
the
reservation
and
confirming
so,
let's
start
so
a
shapes,
a
request
and
sends
it
to
app
through
its
two
ongoing
nodes,
so
A
to
B
and
a
to
c.
The
message
here
has
a
set
of
parameters
compared
to
the
message
format
that
I
presented
in
the
draft.
The
message
format
that
I
present
here
is
slightly
reduced
for
the
sake
of
PFT
presentability,
so
most
information,
the
most
important
information
in
this
message
are
the
maximum
end-to-end
delay.
K: 85 milliseconds; the end-to-end delay commitment, which is the contribution to the end-to-end delay of the nodes that have been crossed by the message; the capacity requirement, which is 2 megabits per second; and a record-route field, which records the route of the nodes that have been crossed by the message.
K: Then this message reaches B and C. Here we see that, for instance, if I take the behavior of node B: compared with the message that you saw in the previous slide, B has allocated a temporary reservation in its queue Q1, for which the maximum delay is 20 milliseconds, and before relaying the request to its own outgoing interfaces toward F, which are B to D and B to E,
K
it has added the maximum delay of the queue for which it made the temporary reservation to the end-to-end delay commitment. So we know that B takes 20 milliseconds out of the 85-millisecond maximum end-to-end delay credit, and it has added its identifier to the record route. So now we know that those messages went through A and B, and that a 20-millisecond delay is taken out of the 85-millisecond delay credit. The same goes for C. So now the messages arrive at D and E.
K
Here, D only relays one message, because the maximum delay for its queue q1 is 40 milliseconds, and the difference between the maximum end-to-end delay and the delay commitment of the message from C to D is 35 milliseconds, so there is no way D can honor this reservation. But the other message can go through, so it is relayed to F with the maximum end-to-end delay, the delay commitment and the capacity, and now we have two alternatives.
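The per-node relaying rule described above (commit the queue's worst-case delay, prune when the remaining budget is exhausted) can be sketched as follows. This is a minimal illustration, not code from the draft; all field names, the function name and the node structure are hypothetical, and the numbers come from the example on the slides.

```python
def relay_request(msg, node):
    """Relay a signaling request only if this node can honor the delay budget."""
    remaining = msg["max_e2e_delay_ms"] - msg["delay_commitment_ms"]
    queue_delay = node["queue_max_delay_ms"]   # e.g. 20 ms for B, 40 ms for D
    if queue_delay > remaining:
        return []                              # prune: budget exhausted (D's case)
    # Temporary reservation: this node's worst-case queue delay is committed.
    relayed = []
    for next_hop in node["outgoing"]:
        fwd = dict(msg)                        # copy; each branch gets its own message
        fwd["delay_commitment_ms"] = msg["delay_commitment_ms"] + queue_delay
        fwd["record_route"] = msg["record_route"] + [node["id"]]
        relayed.append((next_hop, fwd))
    return relayed

# B (20 ms worst-case queue delay) relays A's request toward D and E.
request = {"max_e2e_delay_ms": 85, "delay_commitment_ms": 0,
           "capacity_mbps": 2, "record_route": ["A"]}
node_b = {"id": "B", "queue_max_delay_ms": 20, "outgoing": ["D", "E"]}
forwarded = relay_request(request, node_b)
```

With these values, each forwarded copy carries a 20 ms commitment and the route ["A", "B"]; a message arriving at D with only 35 ms of budget left against D's 40 ms queue is pruned.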
K
First, we make our protocol behave like RSVP. It means that the destination is acknowledging and fixing the path that is reserved. So out of the three signaling request messages that have been received, F chooses one message (for instance, the message for which the difference between the end-to-end delay commitment and the maximum end-to-end delay is the largest) and creates a reply with the maximum end-to-end delay, the end-to-end delay commitment it has received, the capacity and the explicit route.
K
This message is relayed along the route given in the reply message back to A, and after A receives the message, the flow can be exchanged following this route, with respect for the end-to-end delay commitment. Another method would be to let the source choose the path that is going to be taken. In this behavior, we ask F to reflect all the requests it has received, with replies stating the maximum end-to-end delay, the end-to-end delay commitment that was received, and the route that was taken by the message.
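In the RSVP-like mode just described, the destination's choice among the received requests can be sketched like this. The selection criterion is the one named above (largest remaining slack); the commitment values and field names are made up for illustration, not taken from the draft.

```python
def choose_request(requests):
    """Pick the request with the largest remaining slack, i.e. the maximum
    of (maximum end-to-end delay - end-to-end delay commitment)."""
    return max(requests,
               key=lambda m: m["max_e2e_delay_ms"] - m["delay_commitment_ms"])

# Three requests arriving at F over different paths (illustrative values).
arrivals = [
    {"record_route": ["A", "B", "D", "F"], "max_e2e_delay_ms": 85, "delay_commitment_ms": 60},
    {"record_route": ["A", "B", "E", "F"], "max_e2e_delay_ms": 85, "delay_commitment_ms": 45},
    {"record_route": ["A", "C", "E", "F"], "max_e2e_delay_ms": 85, "delay_commitment_ms": 70},
]
best = choose_request(arrivals)
# The reply echoes the chosen commitment, and the recorded route is reversed
# to walk back toward the source A as the explicit route.
reply = {"explicit_route": list(reversed(best["record_route"])),
         "delay_commitment_ms": best["delay_commitment_ms"]}
```

Here the path through B and E wins, since 85 minus 45 leaves the most slack.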
K
Okay, so if you have followed this protocol, you know that the information that needs to be transported in those messages is very close to what needs to be transported in RSVP messages, and I
K
agree with that. But the issue we had, when designing the formatting of the information we needed in this protocol with RSVP, is that if we want to stick to already existing RSVP objects, the encoding of the information can be tedious: it requires some computation at the various nodes and is not really direct. We looked for an RSVP object that was suitable for deterministic networking.
K
We encountered a document, draft frozen.netracy ptsn, that expired a few months ago, and we wonder whether it would be appropriate to have a more direct encoding of the information that we use in our signaling protocol in future versions. But for the current draft we tried to stick to the RSVP formatting as it stands today.
K
We found a limitation in the SESSION object regarding the identification of flows, because in RSVP the identification of flows is based on five-tuples. So if you want to send IP traffic while respecting an end-to-end delay bound that is not tied to a specific transport protocol, this may be a problem for the identification of the flow. The record route list that is carried in the request message can be represented as a RECORD_ROUTE object in RSVP.
K
We found that the best option we had was to convey the maximum delay with a standard TSPEC or FLOWSPEC object, depending on the message, with the general token bucket TSpec parameter and the guaranteed-service RSpec parameter. I will show this message later. And the delay commitment by the nodes is carried by an ADSPEC object, including a set of default general parameters, with a fragment carrying guaranteed-service parameters.
K
So I will present the formatting that we have for those two parameters. First, the maximum end-to-end delay. In the RSVP RFCs we found that there is a formula giving the end-to-end delay in a situation in which p, the peak data rate, little r, the token bucket rate, and big R, the service rate, are all equal.
K
So here the maximum delay is given by S, the slack term, plus b, the token bucket size, divided by R. So the end-to-end delay requirement can be given by r, b, p, R and S, and the capacity is given by the peak data rate p. So we can use the token bucket TSpec and guaranteed-service RSpec parameters. Here is the TSpec, and here is the RSpec, to convey this information, but it is not a direct read for the router.
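As a worked check of the special case above, when the peak rate p, token bucket rate r and service rate R are equal, the bound reduces to D = S + b/R, so the delay requirement and capacity can be packed into the TSpec and RSpec fields. A hedged numerical sketch with made-up values, not ones from the draft, and with the per-hop error terms of the full guaranteed-service formula omitted:

```python
def delay_bound_ms(slack_s_ms, bucket_b_bits, rate_r_bps):
    """Guaranteed-service delay bound in the special case p = r = R
    discussed above: D = S + b/R.  The C and D error terms of the full
    RFC 2212 formula are omitted for simplicity."""
    return slack_s_ms + 1000.0 * bucket_b_bits / rate_r_bps

# Example: a 2 Mbit/s rate (the flow's capacity) and a 10 kbit burst.
# 80 ms of slack plus 5 ms of burst drain gives an 85 ms bound.
d = delay_bound_ms(slack_s_ms=80.0, bucket_b_bits=10_000, rate_r_bps=2_000_000)
```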
K
It needs to do some computation, so we think it could be simpler. The ADSPEC object is a bit more complex; you have far more parameters, but the end-to-end delay commitment can be carried by setting the minimum path latency, shown here as an undetermined value, as specified by RFC 2215.
K
We think that this encoding is a bit complex, but, as I mentioned earlier, we tried to stick to the typical RSVP message encoding. If this mechanism gets traction, we feel that there would be a potential optimization with a thinner RSVP object carrying the information. So I hope that this presentation triggered your interest in this document. I would be very interested in hearing from you about your interest in this draft and in the next steps.
K
Regarding the DetNet mechanism and the requirements for large-scale networks: we are going to do the exercise that was presented earlier, to tell how the mechanism we present in this draft can respect the requirements presented in the requirements draft. Regarding the follow-up, we are wondering whether there is interest in this document.
K
Maybe it belongs in this draft; maybe the group thinks that there is potential for separating those RSVP protocol mechanisms and making them appear in a separate draft. And also, we think that there is interest in having a cleaner RSVP object to carry information about deterministic network flows, and we would be motivated to work in this direction with the group.
K
So thanks a lot for your attention. I don't know if we have time remaining for questions, but I would be very happy to get your feedback about the document, either today during the meeting or afterwards, directly by email or on the mailing list. Thank you.
B
Thanks a lot, interesting presentation. So maybe at a high level: I think the whole RSVP and control plane aspects might be better brought up to the working group at large and not this team.
B
So, unfortunately, David, our master of ceremonies for this set of meetings, is gone, but Janos may join in as well. I think we primarily want to focus on what happens in the forwarding plane, without that necessarily being tied to a specific version of the control plane, and so I had a hard time figuring out which type of changes in the forwarding plane
B
you would want to make, let's say as opposed to RFC 2212, which would be the forwarding plane for guaranteed service, which is what the IETF originally did in conjunction with RSVP. But just let me add two more points, because that is what I would like to have an answer on. In general, with what you were talking about: RSVP itself does not specify where the RSVP messages are routed, in terms of multi-path or what the path is.
B
The path is supposed to be exactly the same path that the traffic would take. So to do anything that you want to do, you first need to have a mechanism by which you can steer the traffic across different paths, and in native IP multi-path you can't do that explicitly, right? If you use IP multi-path, it means that traffic, for example for different ports to the same destination, or to different IP addresses of the same destination, would go over different paths. So you need to rely on that.
B
The explicit route object, for example, is from RSVP-TE, so that doesn't work with plain IP. So there are all types of issues, I think, that you would need to resolve going forward. But, you know, steering the traffic to get different paths with different bandwidth might actually be part of a better solution for doing the DetNet forwarding plane. But I think it's highly unlikely that we would want to do anything for steering other than what the IETF has already done.
B
K
B
I'm sorry, I'm too old to still learn that. So I think, in general, the problem I have with RSVP is that we already saw, more than 10 or 12 years ago or longer, that we have scaling issues with RSVP in the network, which is why we went to segment routing as a way to not do RSVP signaling in networks. So I'm not sure that RSVP signaling for individual flows would ever be a good fit for large-scale DetNet deployments.
E
And regarding the tasks that are supposed to be done by RSVP: those can also be done by, let's say, a centralized controller. Anyway, that's control and management plane, but I do have one question, or one clarification, regarding packet forwarding.
K
Yes, in fact, this is what I mentioned when I presented the issue we have with the identification of the flow in the way RSVP does it. In RSVP you can identify a flow by the typical five-tuple: IP source, IP destination, source port, destination port and protocol. In RSVP, if I remember well (I need to check), there is a possibility to use the flow label in the IP header to help with this identification.
K
But if we use the flow label, we need to make sure that over the path the flow label is not tampered with. So I think that if we want to identify a flow that is a deterministic flow, and not a transport-layer flow, we need to stick to IP source and IP destination, and that has limitations. But, as I mentioned, we wanted to stick to the format of RSVP, to test whether it was possible and to probe interest in having other mechanisms in the working group.
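The flow-identification limitation discussed here can be illustrated with a small sketch: a five-tuple key works for TCP or UDP traffic, but traffic that is not tied to a specific transport protocol falls back to a coarser source/destination key. This is an entirely hypothetical helper, not part of the draft:

```python
def flow_key(pkt, transport_aware=True):
    """Return a classification key for a packet (a dict of header fields):
    the classic five-tuple when the transport ports are usable, otherwise
    a two-tuple of the IP addresses."""
    if transport_aware and pkt.get("proto") in ("TCP", "UDP"):
        return (pkt["src"], pkt["dst"], pkt["proto"], pkt["sport"], pkt["dport"])
    return (pkt["src"], pkt["dst"])   # coarse fallback: may merge distinct flows

udp_pkt = {"src": "2001:db8::1", "dst": "2001:db8::2",
           "proto": "UDP", "sport": 5000, "dport": 6000}
sctp_pkt = {"src": "2001:db8::1", "dst": "2001:db8::2", "proto": "SCTP"}
k1 = flow_key(udp_pkt)    # full five-tuple
k2 = flow_key(sctp_pkt)   # two-tuple only: the limitation discussed above
```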
E
Okay,
actually,
my
question
is
not
quite
related
to
RSVP
or
any
any
other
control
plan
protocol.
My
question
is
about
data
plan
Behavior
anyway,
you
need
kind
of
per
flow
information
yeah
in
the
in
the
in
the
node
right
to
identify.
Yes,.
K
Yeah, in fact, the reservations are identified by the flow ID that is carried by the message. So we need to have this flow ID to know how to map the flow to the proper queue and make sure that...
K
C
Yeah, I actually wanted to make much the same comments as Toerless, on pretty much each point: that RSVP has issues, as he explained, and that in the first place you should bring the questions on any use of RSVP for DetNet to the working group at large. But this work is more about data plane enhancements, not control plane.
K
Okay, point taken. Maybe, if it's appropriate, I can present specifically on the reservation protocol aspects in the wider DetNet working group rather than in this specific open working group.
C
D
Oh, I'll try it again. Can you hear me? Yes.
D
Okay, so, sorry. My question is that, yes, according to the discussion, this proposal is mainly about the control plane. So for the data plane, I don't know what the details of the data plane mechanism are, or whether it even defines per-flow reservations. This is my first question.
K
Okay, yeah, indeed, I agree that, the way things are presented, this is really focused on the control plane aspects, because of the signaling, etc. On the data plane aspects,
K
the goal we had in mind was that the way the packets are managed at the equipment can be very simple, while we have something that is enforcing the end-to-end delay requirements. Because we see lots of quite complex work with regard to the queuing mechanisms, the queuing discipline and the scheduling, which sometimes require coordination between nodes, we wanted to have something very simple on those aspects.
K
But I agree that, for now, the work that we presented is mostly control plane. So I have taken your remark, and also Janos's and Toerless's, which go in the same direction, and in future versions of the document we will be more precise on data plane aspects.
D
Okay, thank you. So my second question is that, according to your slides, the formula is actually simple, yeah, but I am still confused: maybe just changing the buffer capacity is not enough for bounded latency, because the buffer capacity of a node is just an attribute of the queue, but it is the size of the whole aggregate of flows that can be permitted to release into the network. That is my second question, I think, yeah.
K
Indeed, if you look at the way queues are managed in the DetNet RFCs, you have a more detailed model for the computation of the contribution of every node to the end-to-end delay. You have a part that is related to the packet processing time by the equipment,
K
the time that it stays in the queue, the time that the scheduler takes the packet and puts it out on the outgoing interface, and the propagation time on the link between the departing node and the arriving node. In our model, we tried to simplify:
K
we have a constant factor that accounts for the processing time on the node and for the scheduler and outgoing-interface time, and we made the bound depend only on the buffer capacity, because we are in a situation in which the queue is served with a FIFO discipline and the scheduler over the various queues follows a deterministic round-robin policy.
K
So if we use a more complex scheduler, for instance introducing priorities or that kind of thing, and the queue can be a FIFO with preemption, to place some urgent packets at the beginning of the queue, we need to have a more complex model for the computation of the end-to-end delay depending on the buffer capacity. But we are confident that we can still have a bound depending on the buffer capacity, and that this mechanism can be used.
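A rough sketch of the per-node bound under the FIFO plus deterministic round-robin model described above. All constants and parameter names are assumptions for illustration; the draft's actual model may differ:

```python
def per_node_delay_bound_ms(buffer_bits, link_rate_bps, n_queues,
                            const_overhead_ms=0.1, propagation_ms=0.05):
    """Worst-case per-node contribution to the end-to-end delay:
    a constant term for processing and scheduling, a queue-drain term that
    depends only on the buffer capacity (a FIFO queue served a 1/n_queues
    share of the link by a deterministic round robin), and propagation."""
    drain_ms = 1000.0 * buffer_bits * n_queues / link_rate_bps
    return const_overhead_ms + drain_ms + propagation_ms

# Example: a 100 kbit buffer on a 10 Mbit/s link shared by two queues,
# so a full buffer drains in 20 ms of this queue's service share.
d = per_node_delay_bound_ms(buffer_bits=100_000, link_rate_bps=10_000_000,
                            n_queues=2)
```

Under this model the bound grows linearly with the buffer capacity, which is what lets a very simple data plane still yield a deterministic per-node delay.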
D
Okay, thank you. Maybe I can provide a paper that may be related to your proposal, just on the use of FIFO to get a bounded latency.
C
A very short time is left, so I would like to give Kashinath the word. Make your question quick, yeah.
G
Just a quick question; thanks for the presentation, it's very interesting. As far as I can recall, RSVP has the overhead of the regular refresh messages, Path and Resv, in opposite directions. In your work, did you consider using those messages, or exploiting those messages, to send additional information in order to maintain the guarantees on the path?
K
In fact, those messages that are used in RSVP to maintain a current reservation are limited by the objects that are specified for RSVP messages. And in those interim messages, they all follow the flow: they are sent by intermediate nodes to the destination and, if needed, the destination feeds back information to the source.
K
So this back-and-forth flow is, to us, introducing delay if you want to react to changes very quickly in RSVP. And the formatting of RSVP was done to follow the QoS work that was done several years ago; but for DetNet, and especially for this mechanism, if you want to keep a very simple data format, those objects are very tedious to use.
K
Yeah, yeah, in fact, we tried to track those changes, but we stuck to the RSVP objects that are attributed on the IANA page for RSVP. So, once again, we may be wrong, and if you have ideas for formats, for other object formats that we might use, I would be very happy to hear from you. But to the best of our knowledge, and from the objects that have been published on the IANA page for RSVP, this is the best we could do.
C
Okay, thank you very much, everyone, for the presentations and the good discussion. We are running two minutes over time, so I suggest we close this call. As David mentioned, the next one is two weeks from now, on July the fourth, and we can continue the discussions on the list until the next meeting.