From YouTube: IETF110-IPPM-20210312-1600
Description: IPPM meeting session at IETF 110, 2021/03/12 16:00
https://datatracker.ietf.org/meeting/110/proceedings/
C
While I start through the administrative slides: well, we have a moment while we're waiting for Martin. [Name] and I are the co-chairs, so obviously this is our second meeting this week. This is the Note Well; if you went to the first meeting, you already saw it. I assume you're familiar with it by now. Anyway.
A
He didn't quit the meeting. Yes, exactly, it's rough.

C
Here's our agenda for today, although, as I mentioned before, we're going to start with Tal and some of the conversations we had earlier this week; about 15 minutes on that at the beginning, and you can see the rest. We have quite a number of slides, so I'd like to get started.
C
So we're going to revisit the IOAM direct export, and, Tal, we'll go for it.

F
Thanks. Can you hear me? Can you see me?

C
Yes to both, yep.

F
Okay, so we're going to give a short update about these two drafts, the flags draft and the direct exporting draft, and we're going to start with the issue we discussed earlier this week, which is the open security issue. This issue is related to both drafts. Next slide.
F
In either of these cases it may affect performance in the network, so what we did in both drafts was, in addition to the conventional security considerations section, we also added a performance considerations section, where we discussed the implications of the potential for amplifying traffic. In the security considerations section of each of these drafts we discussed amplification attacks, as well as the other security issues, and we also suggested potential mitigation solutions.
F
We also recently added a description of the pathological use cases that Martin described earlier this week, but we're going to say a few more words about that, and we actually intend to add more text to the drafts about it. So we'll talk about that in a second, regarding mitigations and what we have so far.
F
Another issue is that all the looped-back messages are truncated: we only send back the header, which includes the IOAM data; we don't send back the entire packet, which reduces the effect of amplification. And also, specifically in loopback, we're using only a single IOAM data field.
F
That's just one, instead of the conventional trace option, which may include a large number of data fields. Next slide, please. This is related to what we discussed earlier this week, so thanks again, Martin, for raising these issues; we certainly agree that we need to address them a bit more in the draft.
F
So what we suggest to do in both drafts is to add more text which describes the threat here, describes the potential attacks. But in terms of mitigations, what we suggest to do, first of all, and this was discussed a bit in Martin's presentation, is to define probability bounds.
F
That means that at the IOAM encapsulating node, which forwards data traffic, we want to apply these functions, either DEX or loopback, only to a small fraction of the traffic. Once we define that they're applied only to a small fraction of the traffic, these specific pathological amplification attacks should be reduced instead of amplified. So that's one thing. The second thing that we also mentioned earlier this week was that, specifically in this context, we want to define stronger restrictions on the domain, and this is something that should be really emphasized in the draft in this context. Next slide, please.
F
So what we're going to do as the next step is update these two drafts, specifically regarding the issue we just discussed, and once we do that we'll go back to the working group and make sure that we've done the job right this time. In general, any comments related to amplification attacks would be welcome at this point.
F
If people have specific text suggestions, or any further thoughts that we haven't discussed so far, we'd be happy to hear them. It's also important for us to understand whether what I just described in the last two minutes makes sense in terms of how to address the amplification attack, because if people feel that what we've described makes sense, then we're going to go ahead and apply it to the drafts.
A
Yeah, let me include myself, just to comment as an interested party. So, two questions. One: on the last slide you mentioned having a stronger restriction to a domain. Practically...
F
Make sure that you don't have these management loops, which may cause amplification. I think that's one of the most important things we want to add beyond what we already have, which is that IOAM should be applied in confined administrative domains.
A
Okay, yeah, that sounds like good advice. I think we should say that; I don't know if it is a complete mitigation, because people may sometimes be ignorant of the exact details of their deployment. And the other question I had last time, at the end of the session: we were bringing up the fact that things like IPFIX do their exporting either time-based or at the end of flows, rather than immediately. Is that something that we can consider here, since that seems to take a lot of inspiration from prior art in the solution space?
F
I think what we can do, what I suggest to do, is point to some of these IPFIX drafts, which define how to do sampling. There's at least one RFC issued by the IPFIX working group that defines how to do flow selection, and there are also a couple of RFCs from the PSAMP working group that define how to do sampling. Time-based was just one of the possible ways to do it; there was also one-of-N packets, or M-of-N packets, and probabilistic sampling, all these different things. So what I suggest to do is basically point to these other RFCs as a possible way to do it.
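The one-of-N style packet selection mentioned here, in the spirit of the PSAMP techniques (RFC 5475 defines systematic count-based sampling among others), might look like this minimal sketch; the function name and interface are hypothetical.

```python
# Hypothetical sketch of systematic count-based (1-out-of-N) sampling
# as discussed: rather than exporting on every packet, select only
# every n-th packet for export.
def sample_one_in_n(packets, n):
    """Yield every n-th packet (1-out-of-N systematic sampling)."""
    for i, pkt in enumerate(packets):
        if i % n == 0:  # deterministic count-based trigger
            yield pkt

# Select 1 out of every 10 packets from a stream of 100:
selected = list(sample_one_in_n(range(100), 10))
print(selected)  # -> [0, 10, 20, ..., 90]
```

Time-based, M-of-N, and probabilistic variants differ only in the trigger condition inside the loop.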
A
Having a reference there would be good. I would also challenge us to think: there's a difference between sampling and when you do an export, because sampling is about what data you're choosing to collect, and then there's the question of when you actually perform the export on sampled data. I think some of the things that Martin was pointing to earlier are really about the latter.
F
Yeah, one thing to emphasize here is that in direct exporting it's very important that, as you do the exporting, you want all the nodes to export the data for the same packet; you basically want synchronized data. So we have to make sure that the exporting is deterministic, so that if the first node exports data, the second and third and so on export data for the same packet, and we don't get exported data from different packets from each of the IOAM nodes.
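One way to get the deterministic, cross-node agreement Tal describes, so that every IOAM node independently exports data for the same packets, is to hash invariant packet fields; this is only an illustrative sketch, not the mechanism from the draft, and the names and the 1-in-1000 fraction are assumptions.

```python
import hashlib

# Hypothetical sketch of deterministic export selection: every IOAM
# node applies the same hash to packet fields that do not change in
# transit, so all nodes pick the *same* packets without coordination.
def should_export(flow_tuple, packet_id, fraction_denominator=1000):
    """Deterministically select ~1/denominator of packets, identically
    on every node, from invariant fields."""
    key = f"{flow_tuple}:{packet_id}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return digest % fraction_denominator == 0

flow = ("10.0.0.1", "10.0.0.2", 17, 5000, 6000)
# Two "nodes" running the same rule agree on every single packet:
node_a = [should_export(flow, i) for i in range(10_000)]
node_b = [should_export(flow, i) for i in range(10_000)]
print(node_a == node_b, sum(node_a))
```

Note that an unkeyed hash like this is exactly what enables the crafted-packet concern Martin raises next: an attacker who knows the rule can construct packets that always trigger. Mixing a domain-wide secret key into the hash would keep nodes in agreement while making the selection unpredictable to outsiders.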
G
Yeah, thanks for the quick follow-up to the previous presentation. I think we're a little fortunate that we had this break in the agenda right where we did. I appreciate the progress that you've made here; I think it's more or less on the right track. I have a couple of concerns, and obviously we haven't seen exactly what you're going to do, and the details are going to be important, but I'm a little concerned about what you said about having to be deterministic, because it does allow...
G
I mean, I was concerned mostly about sort of accidental explosions, but there's of course an attack vector here too, and if it is possible to craft packets that trigger the deterministic logic for sending DEX, then we have a smaller problem, but we still have a problem. And then the unrelated point is about the domain stuff: I'm a little skeptical of the "you just gotta look carefully at your topology" approach, because I think the point...
G
There was a vague bullet about maybe needing to be more restrictive about the domain, and the kind of thing I had in mind was maybe something like: it cannot be an overlay, you have to actually have the underlying physical equipment; or, alternatively, the target for DEX has to be on a path you control. The storage target for your DEX packets has to be over equipment you control, rather than out to the cloud. I don't know what's workable here; you guys know.
H
Yeah, I just want to add to Tommy and Martin. Regarding sampling, I think what you've been talking about is how often you request direct export, right? But this is also directly related to the nodes: every node that does the direct export also needs to restrict how often it does it. They have to rate-limit.
H
They have to rate-limit how often they can do it. If the requester puts direct export on every packet, a node that does the IOAM export should not do it for every packet. So I think that's something to consider here, and to give advice on: how you rate-limit direct-export traffic not only in your network as a whole, but also on each separate node, to make sure that you don't overload the network or any single host.
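The per-node rate limit suggested here could be a simple token bucket in front of the export function; this is a minimal sketch, and the class, parameters and rates are illustrative assumptions, not from the draft.

```python
# Hypothetical sketch of a per-node export rate limit: even if the
# encapsulating node requests direct export on every packet, a transit
# node caps its own export rate with a token bucket and simply skips
# the export (while still forwarding the data packet) when exhausted.
class ExportRateLimiter:
    def __init__(self, rate_per_s, burst):
        self.rate = rate_per_s       # tokens refilled per second
        self.burst = burst           # bucket capacity
        self.tokens = float(burst)
        self.last = 0.0

    def allow_export(self, now):
        # Refill proportionally to elapsed time, capped at burst size.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # drop the export, never the data packet

limiter = ExportRateLimiter(rate_per_s=100, burst=10)
# 10,000 export requests within one second are capped near
# burst + rate_per_s exports:
exported = sum(limiter.allow_export(now=i / 10_000) for i in range(10_000))
print(exported)
```

The same shape works at two scopes: one bucket per node, plus a network-wide budget enforced at the collector.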
H
Another thing you could do, kind of what Martin was pointing at, is to make sure you only export to known nodes: you have a list of nodes where export is allowed, and you really restrict this to make it safer.
H
But, to be honest, I'm still not convinced. I understand that this can be very useful, but I still don't think it's a good design, because you can so easily get into a position where you open yourself up to attacks and overload hosts and networks and so on. So if we can find a different design that provides more or less similar information, I would be all in for that.
A
Great, okay. I tried to take down all these comments in the notes, so please do refer to those as you're revising. Sure.
F
Yeah, so just a very short update about the two drafts, if we have time; assuming we do, we have a couple of minutes for that.
F
Now, these two open issues: we discussed them both in the last IETF meeting, and we talked about them again in the design team, but unfortunately we can't seem to make a decision. Since we have a short time, I'm not going to describe the open issues again, and I'm not sure all the relevant people are even here at this point. I think we're looking for maybe a suggestion from the chairs about how to resolve these two open issues.
D
Thanks, I'm starting out here, hoping everyone can hear me.
D
I can see the slides; we're on the first one. We're giving a presentation several iterations down the road here: you can see the last one was -02, where now we're at -07. But if you've been following the mailing lists, you've been aware of all the work that Rüdiger and Len and I have been doing to resolve comments with Martin and Magnus and the other ADs, who provided really good feedback.
D
So let's go to the next slide, please. All right, here's where we're headed today: we want to discuss any remaining issues and trigger any concluding reviews, an IPPM two-week review that would follow, I guess, once we've got a draft that we agree on, and then reach approval very soon. That's our hope, because we've been doing this for quite a while now, with lots of iterations. Next slide, please.
D
All right, so a quick background for everyone. We're trying to measure the maximum IP-layer capacity, and we're doing that with UDP packets.
D
The timeline you see there shows a fast ramp-up, and the time across the x-axis is divided up into measurement intervals. We basically test the capacity, comparing the measurements against loss and delay-variation or delay-range thresholds, and eventually we find a test interval with the maximum capacity; that would be the result of the test.
D
You can see the same thing on the left in the packet streams, and there you can see the feedback from the receiver, where it's reporting out on the measurements; it's doing that periodically.
D
In our case, 50-millisecond intervals. After the load packets have been proceeding from sender to receiver, periodically there's a status message sent back with the measurements, and you can see, right smack in the middle, there's a load adjustment where the packet pacing changes and the actual sent rate goes quite a bit lower for a while.
D
So that's the kind of thing that happens in this feedback loop. It's a diagnostic test, so it's a load adjustment algorithm as part of that diagnostic test, and we intend it for no more than that. This is how it would work on a sunny day, and you can see the various intervals here: a sub-interval includes many trial intervals, with many feedback messages, and a full test interval includes many sub-intervals.
D
So our status is: we got lots of comments, as I mentioned. We have a new applicability subsection, mixed in right with the scope and purpose, and we're adding restrictions for the metric, the method and the load adjustment algorithm. We've restricted it to access measurement, according to the rate measurement problem statement that we standardized in RFC 7497 here in the IPPM working group, and we've got additional requirements on the load adjustment algorithm, which you can see there: it must only be used in the application of diagnostic operations, and measurements must only be used consistent with the security considerations, which are in the draft as well.
D
They all affect the operation of the load rate adjustment algorithm. In that table there are the parameter names, the default values for those parameters, the range over which we have tested them and know them to work, and also an expected safe range.
D
Where, as Martin said, you guys are the experts: if you want to expand the range and put in a larger range, that's up to you. So we have those; we actually have four columns there, and the last one needs to be interpreted in that way.
D
Some new parameters that we've added are the feedback message timer and the disconnect timeout; those are new in the text. But another thing we've added, which really nails down the load rate adjustment algorithm at the end of the day, is pseudocode in an appendix, and this is a very precise description of the interactions, which begin at the sender.
D
Every time a feedback message is received, with measurements of sequence errors, like loss or reordering, and the delay ranges that we're measuring for the variation, we proceed through the load adjustment algorithm and either increase the rate fast, hold steady at the current rate, decrease fast, or decrease more slowly. And that's simply the way it works.
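The increase-fast / hold / decrease-fast / decrease-slowly decision described here can be sketched as follows. The thresholds, step sizes and function shape are made-up illustrations; the draft's actual pseudocode appendix defines the real rules.

```python
# Hypothetical sketch of the per-feedback-message decision: on each
# received feedback message the sender picks one of four actions based
# on sequence errors (loss/reordering) and the measured delay range.
# All numeric thresholds and multipliers below are illustrative.
def adjust_rate(rate, seq_errors, delay_range_ms,
                low_delay_ms=30, high_delay_ms=90):
    if seq_errors == 0 and delay_range_ms < low_delay_ms:
        return rate * 2.0          # increase fast: no impairment seen
    if seq_errors == 0 and delay_range_ms < high_delay_ms:
        return rate                # hold steady at the current rate
    if seq_errors > 0 and delay_range_ms >= high_delay_ms:
        return rate * 0.5          # decrease fast: clear congestion
    return rate * 0.9              # decrease slowly: mild impairment

# Feed a short sequence of (seq_errors, delay_range_ms) feedbacks:
rate = 1.0
for fb in [(0, 5), (0, 5), (0, 50), (3, 120), (0, 40)]:
    rate = adjust_rate(rate, *fb)
print(rate)  # -> 2.0 after two doublings, one halving, two holds
```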
D
It's part of a diagnostic measurement. So, as a result of IESG comments, we found that we had some SHOULDs in sections 5.3 and 6.3, pertaining to the number of intervals, which really didn't make sense: there wasn't a case you could make where the number of these sub-intervals needed to be set to anything other than the conditions we had there.
D
So we converted those SHOULDs to a MUST. And we've had a running-code section in the draft since about last September, when the running code was released; that updates sort of automatically, with the latest release last week. So, next slide, please.
D
Good, okay. So we've got some key parameters here. I'm going to repeat the header of the table every time it makes sense, so you can read that for reference. Now, I talked about the feedback interval: the default value is 50 milliseconds; for the tested range of values, obviously we've used 50 milliseconds, but we've also used 20 or 100.
D
We felt that the expected safe range would be as low as 5 milliseconds or as high as 250 milliseconds, but the larger values may slow the rate of advancing the sender's rate, basically the slope of the sender's rate, and on very high-rate access that may cause you not to achieve the highest rate in the allotted test duration.
D
So you have to be careful with that interplay there. And then we've got the two timeouts: we're currently proposing default values of 500 milliseconds for the timeout on not receiving feedback messages, and another one of one second on failing to receive the load packet messages. I've got these last two illustrated.
D
So this is a case where, for whatever reason, the load packets, the load PDUs, get dropped somewhere in the middle of the connection.
D
The operation of the timeout is explained on the right-hand side here. This is a measurement at the receiver; it's a single-point measurement, and it's really an inter-packet time measurement: the load packet timeout shall be reset to the configured value each time a load packet is received.
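The load packet timeout described here, reset to the configured value on each received load packet, behaves like a simple watchdog timer. A minimal sketch, with hypothetical class and method names:

```python
# Hypothetical sketch of the receiver-side load packet timeout: a
# single-point inter-packet watchdog, reset to the configured value
# each time a load packet arrives, expiring if the gap grows too long.
class LoadPacketWatchdog:
    def __init__(self, timeout_s=1.0):   # draft's proposed default: 1 s
        self.timeout_s = timeout_s
        self.deadline = None

    def on_load_packet(self, now):
        # Reset to the configured value on every received load packet.
        self.deadline = now + self.timeout_s

    def expired(self, now):
        return self.deadline is not None and now > self.deadline

wd = LoadPacketWatchdog(timeout_s=1.0)
for t in [0.0, 0.05, 0.10, 0.15]:   # load packets arriving normally
    wd.on_load_packet(t)
ok_at_02 = wd.expired(0.20)          # False: within 1 s of last packet
dead_at_15 = wd.expired(1.50)        # True: silence exceeded 1 s
print(ok_at_02, dead_at_15)
```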
D
Yeah, there we go. So now the channel in the reverse direction is failing and the feedback messages are getting dropped, and we have another timer there with a similar construction. When we lose enough of those, the sender will stop the test, and you see that by the cessation of the load packets, and these can work together.
D
When one side stops, the other side eventually gets that information and stops as well. It's a little bit more crude described this way than it might be. Another opportunity here would be, for example, if the feedback messages don't arrive: obviously we don't make any changes to the...
D
We normally don't make any changes to the sending load rate, but we could actually begin to reduce the rate when we expected to find a feedback message at, let's say, 50-millisecond intervals and didn't see it. We would continue to reduce the rate from the sender, and then finally terminate the test according to the timeout.
D
This is an idea that Len came up with, and it's kind of a side process to the load adjustment algorithm, where you would be acting on the lack of feedback instead of on the input of feedback messages.
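Len's idea of acting on the lack of feedback could look like this sketch: back off on every silent expected-feedback interval, and stop once a disconnect timeout's worth of silence has accumulated. The 0.7 backoff factor and function shape are illustrative assumptions; the intervals and the 500 ms timeout follow the defaults mentioned in the talk.

```python
# Hypothetical sketch of the "side process": the sender reduces its
# rate each time an expected feedback message (every 50 ms) fails to
# arrive, and terminates after the disconnect timeout of silence.
def sender_on_tick(rate, silent_intervals,
                   interval_s=0.050, disconnect_timeout_s=0.500,
                   backoff=0.7):
    """One expected-feedback interval elapsed with no feedback."""
    silent_intervals += 1
    if silent_intervals * interval_s >= disconnect_timeout_s:
        return 0.0, silent_intervals, True     # terminate the test
    return rate * backoff, silent_intervals, False  # back off, keep going

rate, silent, stopped = 100.0, 0, False
while not stopped:
    rate, silent, stopped = sender_on_tick(rate, silent)
print(silent, rate)  # stops after 10 silent 50 ms intervals, rate 0.0
```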
D
So that's an aside. Right now we've got these simple timeouts, and I think it might make sense to pause here a little bit for comment.
K
Yeah, I was actually in the process of trying to comment on the latest email here, and I start to understand better now what you're trying to do here. But, for example, this timeout, the feedback message timeout: how does that relate to the start of transmission? Because you're saying you want to disregard...
K
You already get a packet timeout out of this in some sense, and that's fine once you've at least seen one feedback message. But the initial retransmission timer, or whatever equivalent you would have of the classical one in TCP, etc.: how long are you waiting there?
D
We're using a static value at the moment, and it might make sense to look at the startup of the protocol to begin to answer that. In other words, if I understood this, Magnus, you're asking what you do to initialize everything here, and the answer is that we start out with static timeouts, because we're looking at inter-packet arrival times; we're not sensitive to round-trip delay in this part of the timeout.
D
Right. So, in response, one thing I would ask, in the context of static timeouts, is that you provide the values that you're most comfortable with; then we can talk about those. And actually, Ian, if you back up one slide, please: this is number six in this example.
D
You've got a feedback message in blue at the top; that's going to contain valid measurements for an entire sub-interval, or, I guess, a feedback interval. The next feedback message, which fires off 50 milliseconds later, is going to contain a partial measurement, but measurements nonetheless; it's going to contain lots of packet loss, by the way.
K
Loss... I mean, yeah. The second feedback message doesn't know it's loss; it's actually not going to detect it for this arrow. It's just going to report that it received nine packets, and that's what it received, because the 10th, 11th and maybe 12th that it might have seen: it doesn't know that it should have seen them unless it can actually react, because it's not a sequence-number gap, it's potentially delay, and it can't really measure or react to that.
D
Well, so then let me continue. What happens with the next feedback message is that you get essentially an empty message 50 milliseconds later: it says, I haven't received any packets, so I'm not able to measure anything, and there's clearly a problem between the sender and the receiver.
D
So although it doesn't happen maybe right away based on loss, like I said, it does happen 50 milliseconds later, where you get a status message back that contains no measurements. And what we're talking about as a possibility there is: that message could be used to reduce the sending rate; it could be used to take advantage of some other things.
D
And yes, that's been our goal all along, and I think it's reasonable, because that's the way we've always written methods and metrics here in IETF IPPM.
K
Those are the aspects of that. But here you're getting into a load algorithm that describes something, and the load algorithm's behavior is of course a question of how much these things impact it, and that's where I think it might not have received a sufficient amount of review. I mean, I know that you tested and implemented it, etcetera, but as we're realizing while we continue discussing it...
K
There are a lot of assumptions there about that load algorithm and how it behaves, and so I'm trying to understand, from my perspective, where the right place is to draw the line: okay, this is what you specify here now, and what do you have to consider and deal with in a protocol application of this particular thing?
D
And of course, as we've said, but I'll say this just once to remind the group: our level of metric and method, quote-unquote, "interoperability" is the ability to produce equivalent results, and you probably don't need all the details in the load adjustment algorithm nailed down for one specific algorithm, as long as two algorithms produce the same maximum value for that technology.
K
Yeah, but from my perspective the additional aspect of this, which you get into, is: is this, so to say, in quotes, "safe to use"? Yes, it's intended to measure, but at the same time it shouldn't totally bring things down. I mean, it's trying to find the capacity, and that's part of the challenge here, I think, in the measurement: trying to say, okay, we need to find the load threshold where we're building queue sufficiently that we think, okay, this is actually the true capacity.
D
The load algorithm does... I mean, there are plenty of standards that could be used right now, OWAMP and TWAMP, OWAMP in particular, that could drive the network into congestion if you chose the right sending rate, and they don't do anything until the test ends. And that's an IETF standards-track protocol.
D
So we're doing a lot more than that, and it feels like we're getting done for it. We really are out to protect the network, but, you know, we're also trying to measure maximum capacity, as I said is our goal, in a very short duration.
K
So I think the fundamental thing here is to bring down the scope clearly enough that this is not intended for more than a few cooperating administrative domains, really somewhere where you can complain if something goes wrong and it's leaking or whatever. It's not cross-Internet, and I, from my perspective, would like that to be more explicit; that's what I asked for.
N
For me it's okay that, if the receiver doesn't receive some packets, or, on the other hand, the sender doesn't receive the feedback packets, it's okay to stop the test. Perhaps we have to discuss how to set up the limits of the timers with regard to the RTT and the bandwidth of the link.
D
That's for sure; we don't need an existence proof of that. Thank you, Ignacio.
A
Absolutely, yeah. So I put myself in the queue; I just want to respond to Magnus a bit. I don't think I share the same concern around this. I think you have this definition of a measurement that is specifically trying to get the capacity.
A
I don't think what we want to do here is reinvent a fully fledged transport protocol, like QUIC or TCP, that needs to do all the things; we're trying to figure out the capacity of the network, and it has this particular use. I think that very much is the kind of bread and butter of what we do with metrics and methods here.
A
Martin, you're responsible for this; now that we've also been changing over, is Zahed going to be taking over the review part of this that Magnus was doing? How are we moving this forward? Because I think we need to.
G
Thanks for the segue. So, yes: Magnus, no longer an AD, is, I think, you know, a participant. He has objections to the draft and we're going to go through the normal working group last call process.
G
Once that is done, and that's for the chairs to adjudicate, when you declare consensus on the document as is, I will take it back, and unfortunately, because of expired terms and expired IESG reviews, we're going to have to re-ballot it. We were at nine out of ten, and four votes dropped away.
G
So it goes back on the ballot, and we'll see what happens there. That's the process view. I would invite Zahed to come up here and share his thoughts: if he has deep concerns about the document, it might be good to address that now, instead of waiting for his ballot.
E
I have to express what I see. I have been part of the discussion that's happening in the email, and I have seen the discussion today, and it seems like there are some valid concerns put up by Magnus, which have been addressed so far. So, yeah, I hear what the working group has to say about it, and then I'll convey that to the authors and to Martin. Yes, that's right; I kind of see this as progressing.
E
I don't see that this needs to be... The only concern I had is that there are a lot of changes on the draft right now that the working group has to go through, and that is basically happening right now. So, yeah, that's fine. Okay, thank you.
O
All right, thanks. One thing that might help this, perhaps, is a little bit more discussion of what the goal of the rate, or load rate, adjustment algorithm is. If you're trying to measure the capacity of a path, the way to do that is to saturate the path with traffic, push all the other applications out of the way, and measure it.
O
If the goal is to be friendly to other users of the link, of the path, then you have to take into account all of the things like coexistence with the other congestion controls that are running, because there is no such thing as "the capacity of the path" if it's in use by other applications: you make capacity by pushing other users down in their sending rate. And that's a piece I think, for me, I'm not understanding about the goal of this algorithm.
O
Is it to be friendly and compete just like other congestion controllers, in some, you know, TCP-friendly way? Is it intended to be a lesser effort and back off, like LEDBAT, and only measure capacity if it's totally unutilized? Or is it trying to push everything else out of the way and measure the full capacity of the path?
D
It's trying to measure the full capacity of the path, Greg, and if we push all the other traffic down to almost nothing for a few moments during our short measurement duration, that's likely what we're going to have to do.
D
Obviously that's not friendly, but at the same time the person, the client, trying to make this measurement is interested in measuring the maximum. So if they're trying to do that while the family has 10 video streams going, they're not helping themselves, right? So let's put it in context.
D
This is a diagnostic measurement that you don't run very often, but it's going to run up against some technological limit, and that's what we are trying to find. At the same time, it's not going to kill anything that's in progress: it's going to come, it's going to hit hard, and it's going to go away.
K
We get parameters, etcetera, but these parameters have not been reviewed by the working group, and I'm struggling with trying to understand the full implications of them; that's where I currently am, and why I'm trying to respond to the email. And you can't put it on me to select the values. I can identify certain values, like too short an FT will not work particularly well with technology...
K
...that has schedulers, for example; you can identify certain aspects where it's not good. And we may get stuck on maybe some other things, but yeah. So let's continue, mainly discussing a bit more on the email, but I really, really would like the rest of the working group to actually take a serious look at what these values are, whether they're sensible, and what they mean, and whether you understand what they mean, because I'm not certain that all of these definitions are in place yet.
A
...that we're comfortable with all the changes. But I think let's do that as part of the normal process, and then, once we're comfortable, we'll move it along again.
D
Yeah, just to mention here that we're looking for feedback on security features for the actual test protocol. And that's plenty; you know, this is running code, so we'd be happy if you ran it, and we'd be happy if you provided feedback on it. Thanks very much.
P
Thanks. This is Rüdiger; I'm with Deutsche Telekom, and this is the connectivity monitoring draft. It depends on having deployed segment-routed subpaths, and the draft is about an overlay which you construct; you then use kind-of tomography methods to calculate some metrics.
P
The draft was moved to working-group status, and there's a version -00, which was published at around Christmas.
P
The draft had timed out at that stage, and I only had time to provide text describing more of the prose, but didn't have time to add the metrics. So I promised to publish a 01 version containing at least part of the metrics. I was a bit sloppy with the metrics during the individual-draft phase, and there are still a few metrics to add here, so that part is still incomplete, and the whole metrics section is likely to benefit from review.
P
This figure is an old one which I added here; I do not want to go through it in detail. The upper part shows one measurement loop, and they will all look like that: you have one loopback on a monitored path, and then you have one downstream and one upstream path. The lower part shows the overlay which results for each monitored path, which here is the connection between two routers.
P
You have a blue measurement loop, a green one and a red one, and without going into details, that is the design. This setup is required for all of this to work, and you can do it by applying segment routing. You cannot do it with standard routing — at least I don't think so. Then please go to the next slide, to start with the new material.
P
First of all, I introduced subpath i and measurement loop i, so that the measurement which is done in a round trip there — the blue one — has the same index. I decided the red one gets the index "downstream" and the green one the index "upstream". I also thought I'd call the one side the hub and the other side the spoke.
P
All of that should help in understanding how the metrics work, because these indices will pop up again when you come to the metrics relating to the monitored subpath i. It's probably somewhat complex, so I hope all this helps make the metrics somewhat clearer.
P
All right, that is the explanatory part, so let's move on to the metrics — the next slide, please, and that's the last one. First of all, there's a basic metric statistic: the segment-routed path periodic mean delay, which is captured — or recommended to be captured — during low-load hours.
P
These are singletons, so there is very little tolerance. If you look at the design, it is there to check whether there is a loss of connectivity and rerouting, or packet loss, or whether there is congestion — and congestion will add a little more than a few microseconds, at least in the networks where I am active. Then, based on that baseline, there are some more metrics. First of all, you can calculate a round-trip delay per monitored subpath i, and I included that equation to show what I meant by
P
choosing the indices: you have six measurement loops, and three of them pass the interface of the subpath which is monitored, subpath i. One of them is the one with the loopback, which I denominated i2. As you see here, you take three times the measured round-trip delay of that measurement loop, then add once the upstream loop and once the downstream loop; from that sum you subtract the delays of all the other measurement loops, which never pass this interface, and divide by four.
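The arithmetic just described can be sketched in a few lines. This is a minimal illustration, assuming six per-loop delay measurements for one monitored subpath: a loopback loop, an upstream and a downstream loop, and three loops that never pass the monitored interface. The function name and the sample values are illustrative, not taken from the draft.

```python
def subpath_rtt(d_loop, d_up, d_down, d_others):
    """Round-trip delay of monitored subpath i, per the weighting described:
    3x the loopback loop, plus the upstream and downstream loops, minus the
    three loops that never pass the monitored interface, divided by four."""
    assert len(d_others) == 3, "six loops total: three pass the interface, three do not"
    return (3 * d_loop + d_up + d_down - sum(d_others)) / 4

# Example with made-up millisecond delays:
print(subpath_rtt(10.0, 8.0, 9.0, [6.0, 7.0, 5.0]))  # → 7.25
```

As noted in the talk, the same formula drops straight into a spreadsheet cell or a small program, one evaluation per monitored link.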
P
I felt that to be a way to describe what happens in a human-readable form, and to use the indexing to set up a formula which can still be used — also to optimize it and then program it. Even if you just create the formula on an Excel sheet, it will work that way for each link.
P
There is more text in the draft, and I tried not to reinvent something which was invented by others a long time ago. You parameterize it by defining the cumulative sum at an instant t as the bigger value of zero or the prior value of the cumulative sum plus the actual singleton x_t, from which you subtract the mean value which you captured
P
while you had no congestion, and you further subtract a value k_i, which is related to the standard delay variation. That is all taken from — I think I found a reference at NIST which explains how you set the values — and I think it's best to say it depends on what you are after, how you pick these values in detail.
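The recursion just described is the classic one-sided CUSUM change-point detector. A minimal sketch follows; the baseline mean `mu`, the allowance `k` and the decision threshold `h` are illustrative assumptions, not values from the draft.

```python
def cusum(samples, mu, k, h):
    """One-sided CUSUM: S_t = max(0, S_{t-1} + x_t - mu - k).

    mu is the mean delay captured while uncongested, k the allowance
    (related to the standard delay variation), h the decision threshold.
    Returns the index of the first sample where S_t exceeds h, or None.
    """
    s = 0.0
    for t, x in enumerate(samples):
        s = max(0.0, s + x - mu - k)
        if s > h:
            return t
    return None

# Baseline delay around 5; a sustained shift to 9 accumulates and trips it.
print(cusum([5, 5, 5, 9, 9, 9, 9], mu=5.0, k=1.0, h=8.0))  # → 5
```

The NIST/SPC literature mentioned in the talk gives the standard guidance for choosing k and h as multiples of the observed standard deviation.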
P
The section about the metrics isn't complete so far. It still contains something to read, and I will continue to add information to it, and also explain how to use the change-point detection to create the other metrics, which are the loss of connectivity and
P
the congestion, and the interface where the congestion occurs. It's just about creating text which is correct in a mathematical sense and readable for humans.
P
Yes, but as I am describing here — and this was just to foster confidence in the method — one thing is that you usually apply segment routing only within a controlled domain, where you own the network and know its characteristics. The second thing, as I point out: the example refers to a 95th-percentile value; it's not the average.
P
The average would have been smaller. It's just to say: if you capture the mean, and you capture it in an unloaded network, even the 95th percentile gives you a very low standard deviation in delay. So if you run into bad luck and capture a bad interval, it's no worse than that.
P
If you run consecutive measurements — as I mentioned, 240 samples — and you wait for three or four samples, as you proposed, then you will likely see even lower numbers here. That is how it should work. I think it will depend on the operator what they prefer, but I agree you shouldn't pick the very first measurement.
A
Yeah — and I guess, for people: there's an audio slider at the bottom of Meetecho, so you can turn up the volume on your end; feel free to do that. Okay, go ahead now.
S
Okay, next slide, we can start. Explicit flow measurement employs very few marking bits inside the header of each packet for loss and delay measurement; it is a protocol-independent technique. The metrics described in this draft are round-trip time (RTT), packet loss and one-way marked loss.
S
The first mechanism that used this explicit technique was the spin bit, which is already standardized inside the QUIC protocol.
S
The spin bit has some limitations: packet loss can cause wrong estimates of RTT if the edges are lost; reordering of packets can create false, very short periods; and there can also be holes in the traffic when there is an application delay. So if the traffic in one direction is stopped because the application is waiting, there can be a delay in the reflection — an application delay.
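The spin-bit idea described above can be sketched from the observer's point of view: the observer times the edges (value flips) it sees, and the spacing between consecutive edges approximates one RTT. This is a toy illustration — the packet stream and timestamps are simulated, and it deliberately ignores the loss/reordering pitfalls just listed.

```python
def spin_rtt_samples(packets):
    """packets: iterable of (timestamp, spin_bit) as seen by an on-path observer.

    Each flip of the spin bit marks the start of a new spin period; the
    time between consecutive flips approximates one end-to-end RTT.
    """
    samples = []
    last_bit, last_edge = None, None
    for ts, bit in packets:
        if last_bit is not None and bit != last_bit:
            if last_edge is not None:
                samples.append(ts - last_edge)
            last_edge = ts
        last_bit = bit
    return samples

# Spin bit flipping every 100 time units → RTT estimates of ~100.
pkts = [(0, 0), (40, 0), (100, 1), (140, 1), (200, 0), (300, 1)]
print(spin_rtt_samples(pkts))  # → [100, 100]
```

A lost or reordered edge would shift or split a period, which is exactly why the draft discusses additional marking bits.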
S
The formula is two times the measured RTT plus 100 milliseconds; our experimentation shows that this works quite well. The RTT can be calculated using the three packets that we showed at the start of the session, or it can be measured the first time using a very big T_max, like one second — or using the spin bit, if present. Next slide.
S
There are two independent measurements. The spin bit is always calculated, but can have some errors in case of impairments like out-of-sequence packets or application delay. The delay bit is more precise, but there is the possibility of having fewer samples, fewer measurements: in case of loss of the sample packet, or in case it is discarded by the client or the server because the application delay is more than one millisecond, we don't have the measurement; and there is also sometimes jitter that reduces the number of samples.
S
The next slide is the same as last time — the comparison between the measurements of packet loss — and the appendix of this presentation we can skip, because we already explained it in the presentation at the last IETF meeting, in the IPPM session. Okay, next slide.
S
So if we have three bits that we can use for explicit flow marking, there are many possibilities; we outline five of them.
S
So we can conclude that explicit flow measurements are gaining interest for encrypted transport protocols. There have been discussions in the QUIC working group, the transport working group and also the IPPM working group. There are implementations by four companies in the IETF context, and there has been a thread on the IPPM mailing list. We merged the previous drafts to have only one draft that explains all the measurements in this kind of technology, and so we ask for working group adoption. Thank you.
Q
You introduce a second marking bit to achieve more accurate packet loss and delay measurement. Have you looked at the individual draft — unfortunately it's currently expired, but you can find it — on compact marking? You'll find it under the author's last name. It allows using a single marking bit to do accurate delay and packet loss measurement.
Q
That's a good question. I don't think that there is any distinction in using the compact marking, whether it's connection-oriented or connectionless, but yeah — let's take a look at this proposal from that angle and see if a single bit can be effectively used for accurate packet loss and delay measurement.
S
I think that with a single bit, the problem is that in protocols like QUIC, which are encrypted, it is impossible to mark a bit along the path; we have the marking inside the protocol. So in this case we are defining this kind of methodology. This is quite different from, let me say, the classical marking methodology, which marks the bit on the…
G
Martin here. Yeah, just very briefly: this seems like pretty solid technical work, and as an individual I'm fairly interested in these kinds of things making their way into protocols. But as a matter of adoption, a little concern is that this is not really on the radar of QUIC, much, or any other transport protocol; and as a thing with a very rich set of potential adoption candidates, I have a little concern that this doesn't have a very clear path to…
C
All right. I was going to add — both as an individual and also as a member of the QUIC working group — that there is also the question of where the privacy review, which will likely end up being necessary for this to be deployed,
C
would end up living — whether it would end up living in QUIC or IPPM. I don't think that's been decided yet, but I think that will probably also end up being a little bit of something to keep on everyone's radar.
A
Yeah, that makes sense. I do think, you know, this is something that people have been talking about for a while; there is interest.
A
I believe this has been discussed on the QUIC list previously — this general area — and I think that if we did a call for adoption, we should certainly call out to QUIC, let people know there, and get their feedback. Does that make sense? Cool, yes. I guess, if we go ahead with that plan — informing QUIC and asking this group for adoption — does anyone object to us looking at adopting this work?
G
I have to think a little bit about exactly what precise conditions I would place on this to not object — and I also have to think about what hat I'm wearing as I'm making this objection. But, I mean, just seeing some energy, at least as an experiment, in one of these protocol working groups — whether it be QUIC or a different one — would be gratifying, to show this work has an actual deployment path.
C
I think we should pose the question. Oh — I have a quick question for the authors, actually. There are at least two, and maybe more, of the authors on this draft who have substantial server deployments; are any of them planning on providing that kind of deployment experience? I'm thinking of Dmitri and Igor, of LiteSpeed and Akamai respectively, but there may be others on that list. So I don't know — I was just wondering what you had in mind in IPPM. Was it to define a method or a protocol? I mean, because we have both around: in transport we have TFRC, which has no protocol particularly, but it is a method, and then we plant it into a protocol to use it. Is that what we're thinking about here, or is it a protocol instantiation of this?
A
My impression is that it is a method — defining, if you have these many bits, here's how you can use them — but it would not explicitly be defining QUIC adopting it. And I think that does bring up the question of, you know, whether someone is going to adopt the method; but at the very least, having this work done here would allow future versions of QUIC, or versions of other protocols, to reference this and use the mechanism without having to do the measurement methodology from scratch.
C
That is kind of the direction that Tommy and I were going in, and that is the direction we also heard from the QUIC chairs, who would in some ways be happy to see this progress and become more mature before we try to slap it onto the protocol — sorry, that was probably pejorative, I apologize — but, you know, by giving it more time in the lab, as well as with experiments, by the time we do apply it to a real protocol.
A
Okay, I think that's good. We as the chairs will figure out the next steps, and we'll take some action on this in the near future.
Q
Okay, yeah, great, thank you. Okay, so yeah, let's go to the next slide.
Q
Okay, so, updates since our discussion at IETF 109: the hybrid two-step draft now has a new IOAM trace option type, plus a request for IANA allocation.
Q
Also: the protocol operation at the intermediate node on reception of the follow-up packet; the use of the defined authentication; updated security considerations; and, integrating HTS with IOAM, updates to the IOAM registries. Let's go to the next slide.
Q
IOAM currently has trace options — end-to-end and hop-by-hop — and in addition it defines the direct-export mode for collecting telemetry information.
Q
Hybrid two-step can be used as one of the methods to collect and transport IOAM information: the IOAM packet acts as a trigger packet, and the collection and transport of the IOAM data is done using the hybrid two-step protocol. For that, we are proposing to allocate an option type for hybrid two-step in the IOAM option-type registry.
Q
Next, authentication. The HTS authentication uses the TLV format: it has one of the TLV type values allocated by IANA, followed by the type of HMAC and the length of the TLV. That provides an extensible and flexible mechanism so that, at some point, more advanced methods can be deployed; but what's currently defined uses HMAC-SHA-256, truncated to 16 octets, with protection from replay attacks.
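A minimal illustration of the integrity mechanism as described — an HMAC-SHA-256 tag truncated to 16 octets over the exported data. The key, payload layout and verification flow here are assumptions for the sketch, not the draft's wire format, and this shows only the tag computation, not the replay protection.

```python
import hashlib
import hmac

TAG_LEN = 16  # truncated to 16 octets, as described above

def make_tag(key: bytes, exported_data: bytes) -> bytes:
    """HMAC-SHA-256 over the exported telemetry, truncated to 16 octets."""
    return hmac.new(key, exported_data, hashlib.sha256).digest()[:TAG_LEN]

def verify_tag(key: bytes, exported_data: bytes, received_tag: bytes) -> bool:
    """Constant-time comparison of the recomputed tag against the received one."""
    return hmac.compare_digest(make_tag(key, exported_data), received_tag)

key = b"per-node-shared-secret"        # illustrative key
data = b"node=42|telemetry-payload"    # illustrative exported data
t = make_tag(key, data)
print(len(t), verify_tag(key, data, t))  # → 16 True
```

Because each exporting node computes its own tag, a collector can verify each node's IOAM data independently, which matches the property claimed in the next part of the talk.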
Q
What that allows is that each node that exports information using the hybrid two-step method can protect its IOAM data independently of other nodes.
Q
The next update was to clarify the operation of the intermediate node. Here you see the mechanism of how HTS works: there is a trigger packet, and the capable nodes originate the follow-up packet in which the data is collected. Obviously, the amount of information or the number of nodes may cause the first follow-up packet to be filled; that is handled by the intermediate node generating another follow-up packet.
Q
This flowchart just captures what's being updated — in the document it is in text format, because we cannot include a flowchart in an easy way; ASCII art really takes a lot of space.
Q
It's all a reflection of the text update in the document. If you have any questions, we can look at how the mechanics work, or questions can be brought up on the mailing list.
Q
Okay, next slide: the security considerations. Hybrid two-step nodes belong in a trusted domain, and integrity protection can be achieved by using the authentication sub-TLV, so that each node protects the collected telemetry information.
Q
Next slide. We appreciate the comments and suggestions — and thanks to Frank for his suggestion to give some examples of how the follow-up packet can be associated with its data flow for some encapsulations — because our goal is to have the discussion of encapsulations in separate documents.
A
Okay, I don't see anyone stepping up.
A
Yeah — so, just a time check here: we don't have all that long left, and clearly we're not going to get to all of the lightning talks that we have left in the agenda.
Q
I can squeeze my next presentation into probably five to seven minutes, and then we'll leave time for the SR PM discussion.
Q
Okay: error performance measurement in packet-switched networks. Let's proceed. In the network slicing discussions, what we discuss extensively is service level objectives, and these service level objectives define whether the network slice is considered to be available or unavailable for the service.
Q
The SLOs are expressed as packet loss ratio, latency and jitter, and obviously there is path continuity in addition. The other interesting work being done, in the Broadband Forum, is quality attenuation — another attempt to combine several performance metrics in one ΔQ that characterizes the experience of the application. Next slide.
Q
We have different OAM protocols that independently detect defects and measure performance. A defect is the inability to communicate, and a packet defect is one hundred percent packet loss; at the same time, packet loss can be viewed as infinite delay of the packet. So error performance is a quantitative characterization of the network condition between the endpoints, whether it's packet loss or packet delay. Next slide. From constant-bit-rate — for example, TDM —
Q
technology, we know that there is work on error performance measurement based on the guaranteed presence of the data; but that's not the case with statistical multiplexing in packet-switched networks, because we don't have an expectation, a guarantee, that the signal will arrive at a certain interval at a certain time.
Q
So for that we need to use active OAM test packets — specifically constructed test packets that are transmitted periodically, at a rate well known to the remote endpoint of the measurement session.
Q
The dictionary we use: errored interval or second, severely errored interval or second, and error-free interval or second. From those we can determine the periods of availability and unavailability.
Q
If the number of consecutive periods is less than a certain threshold, it doesn't change the state of the path; that provides stability to the identification of an error being present.
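The classification and hysteresis just described can be sketched as follows: per-interval loss ratios are bucketed into error-free, errored or severely errored intervals, and the path only changes availability state after N consecutive intervals of the new kind. All thresholds here are illustrative assumptions, not values from the draft.

```python
def classify(loss_ratio, se_threshold=0.15):
    """Bucket one measurement interval by its loss ratio."""
    if loss_ratio == 0.0:
        return "error-free"
    return "severely-errored" if loss_ratio >= se_threshold else "errored"

def availability(intervals, n=3):
    """Declare the path unavailable after n consecutive severely errored
    intervals, and available again after n consecutive non-severe ones."""
    state, run, states = "available", 0, []
    for r in intervals:
        severe = classify(r) == "severely-errored"
        # A "bad" interval is one pushing toward the opposite state.
        bad = severe if state == "available" else not severe
        run = run + 1 if bad else 0
        if run >= n:
            state = "unavailable" if state == "available" else "available"
            run = 0
        states.append(state)
    return states

print(availability([0.0, 0.2, 0.3, 0.5, 0.0, 0.0, 0.0]))
```

The consecutive-interval counter is what gives the state machine the stability mentioned above: a single bad or good interval never flips the state on its own.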
Q
Comments are welcome — let's have a discussion on the mailing list. Thank you.
A
Awesome — thank you so much. Yeah, please view the document and comment on the list. I don't think we're quite ready for adoption yet. Let's give four minutes here.
R
Okay, sorry, I will be really quick. This is the summary of the main updates from the 07 version to the 08 version.
R
The first one is that we analyzed the use cases for the approaches based on NETCONF, on the IGP, and on the echo request/reply. The conclusions are: number one, NETCONF is most suitable if the IOAM domain is administered by a centralized controller; number two, use of NETCONF/YANG is problematic without a centralized controller, and flooding the IGP domain with IOAM information may be excessive; hence, using an echo-request/reply-based mechanism is reasonable in some cases.
R
The analysis showed that NETCONF/YANG has its limitations when used in an IOAM domain where no centralized controller exists. Firstly, each IOAM encapsulating node needs to implement the NETCONF client, and each IOAM transit node and IOAM decapsulating node needs to implement the NETCONF server.
R
We also improved the security considerations to address received comments. Several methods are suggested for the implementer and operator to use: the first one is authentication of the echo requests and replies that include the IOAM capabilities query; the second one is filtering based on the source address of the echo requests. So we—
A
—are out of time, there we go. Thank you. If anyone has questions, please take them to the list. Okay — next one, quickly.
J
Hi everyone, my name is Rakesh Gandhi, from Cisco Systems, presenting the SR PM extensions draft on behalf of the authors listed. There is a companion SPRING draft which depends on this draft, so it would be good if you can progress this work, to unblock the work in SPRING. The agenda is: requirements and scope, summary, and next steps.
J
Next slide, please. The requirement is in-band performance measurement for links and SR paths. In segment routing, the goal is to avoid control-protocol signaling, as well as maintaining state on the reflector. The scope of this draft is STAMP; the RFC is listed here. Next slide, please.
J
Many thanks to everyone who provided review comments and suggestions. We have updated the draft: aligned the terminology with the STAMP RFC; moved the direct-measurement messages to a new draft, a standalone draft not tied to SR — it's generic; moved the control code to a TLV; and made various editorial changes to address the review comments. Next slide, please.
J
So the draft really defines two TLVs as extensions; the procedures are in the SPRING draft — here it's just the two TLV extensions for STAMP. One is a destination address TLV, which is useful when we have a test packet going to, let's say, a 127/8 destination address, to avoid wrong performance measurement. Next slide, please.
J
The second TLV that's defined is the return path TLV. It has a structure with sub-TLVs, and we can look at the next slide for some of them.
J
There is a control code where, for link measurement, this indicates to the reflector to send the reply back on the incoming link. It applies to all kinds of links; it just means: return the reply on this link. Next slide, please.
J
There is a return address as well: instead of sending the reply back to the session-sender source address, the session-sender could specify a different address, which gives a bit of flexibility for the test packets. Next slide, please.
J
For SR, there is a return-path segment list, so the sender can specify on which SR path the reply should come back. That is a TLV that carries either the segment list or the binding SID of the SR policy. Next slide, please.
J
So that's all — there are just the two TLVs; it's a fairly straightforward extension for STAMP, very useful in segment routing, and if you can progress this, it unblocks the SPRING draft. We welcome your comments and suggestions, and again appreciate all the comments. Thank you.
Q
Yes — thank you, Rakesh, for the updates, and I really appreciate your consideration of the comments. On these two extensions to control the return path: have you considered doing it through the YANG model, as an update to the STAMP YANG model?
J
Yeah — two points. One point is that the goal is to avoid state: there could be tens of thousands of SR policies, each with multiple candidate paths, each with multiple segment lists. You can end up with a lot of test sessions, which would create a lot of state on the reflector side. And the second is that dynamic paths can change in the network, so you end up doing a lot of signaling, which is also not desired in SR.
J
So there are multiple motivations for having these extensions for SR.
Q
Right, I agree with your rationale, but on the other hand, by increasing the size of the test packet you are limiting the ability to measure with a smaller packet — because one of the requirements for performance monitoring is that you want to be able to test packet loss and delay with variable packet sizes.
Q
So you have to include this additional information — because basically it becomes mandatory: if you don't have this information on your session-reflector, it becomes mandatory for each and every test packet.
J
Yeah, the procedure is defined: if you're doing a one-way kind of measurement, where we don't care about the return path — it can be IP — then this TLV is not required. If you want the return path to come back a particular way, then the TLV is required; but again, it can be a binding SID only, and with a binding SID you only have one segment, not the entire label stack. So this way you're not really increasing the size, except for the type, length, value and the binding SID.
A
Okay — so what I would like to do, since we don't have much time, if any, is just a quick show of hands, to see what people want to do with this.
A
So you should now see, in the show-of-hands tool, the little bar graph up at the top.
A
A few seconds… all right. So we have nine people who are in favor of working on this, and five people who aren't. Not a terrible amount of engagement, but at the same time it seems like the majority of people who do have an opinion still want to see something done.
G
Yeah, just on this subject: if any of the non-voters have substantive objections, rather than just not being interested in this work, please take that to the list. This consensus is a lot stronger if these are people who just don't care, rather than people who hate the work for some reason.
A
Thank you, everyone. Sorry to everyone whose talks we couldn't get to, but the slides are available on the datatracker. So if you're interested in any of the drafts, please do look at the slides, read the documents, and comment on the list — that is the primary way to engage. And to everyone: you are now done with your IETF 110 week. Congratulations, you have made it through. Have a good flight back, everyone, have fun sightseeing wherever you are, and we'll see you on the list and next time.