From YouTube: IETF112-IPPM-20211111-1600
IPPM meeting session at IETF112
2021/11/11 1600
https://datatracker.ietf.org/meeting/112/proceedings/
A: All right, so welcome everyone. This is IPPM, IP Performance Measurement. I'm Tommy Pauly, one of your chairs. Marcus here is a new chair joining us, so please welcome Marcus, and thank you, Marcus, for stepping up. Ian is stepping down from his chair. I don't see him here, so we may need to just bid him adieu virtually on the list, but he's been very helpful for the past year and a bit. Okay.
A: …the various processes we have around IPR, as well as around how you participate. And one thing we want to highlight specifically this time is some of the aspects of the code of conduct. This group, I think, for the entire time I've been here, has been very good at interacting well with each other, but it's good to just remember it, and have it at the forefront of our minds as we're interacting in the group.
A: This is a group with a lot of different people from different backgrounds, and the most important thing we can do here is to treat each other with respect, to make sure that we're clear and making it easy for everyone to understand and participate, and to focus on having technical discussions that get at the details, so that we can help progress the work, not getting into anything personal or just trying to stop work. We're trying to find the best solutions for the whole internet.
A: So just keep this in mind in your contributions at the mic and on Jabber, and I'm looking forward to a great meeting. For the meeting management: you are here, you are in Meetecho. Just as a note, when you want to queue, use the little "join queue" hand sign that's up by your name on the side of Meetecho.
A: You can turn on your own video and audio. If you are a presenter, all of the slides are pre-loaded through Meetecho, and next to the hand button there's the file button that allows you to share slides. You'll be able to request sharing slides when someone else is no longer sharing slides.
A: Okay, so on to our agenda. We're going to start with a good amount of time, about an hour, going over our primary documents, the things that we currently have adopted in the working group. We have some new things we adopted since last time around IOAM that we want to spend a bit of time going over, and then in the second half of our meeting we have some new proposed work.
A: So that is some of the background of the discussion that happened there, but all of these had some discussion on the list and interest from the group. The remaining time will be a number of quick elevator pitches. We're trying a new style this time where, rather than trying to limit based on time, we want people to summarize what they're doing within a single slide, so that we can get all of the information up front and hopefully save more time for discussion and feedback, rather than trying to fill up the whole time of the lightning talk with a presentation blasting information at people. All right.
A: And Frank, if you could ask to share slides.
B: …do the session for us this time on IOAM? I just got my third jab yesterday and I'm kind of okay, but I'm not feeling that well, so thanks for that.
D: So here we go; this is going to be a quick update. I hope we'll have more discussion time, since we have 25 minutes allocated for these two drafts. These were the two new drafts which were initiated because of the feedback that was received on the IOAM data draft.
D: Just highlighting the changes that happened after we adopted this work. The first major change was changing it from informational to standards track, because we are defining integrity options which are parallel to the IOAM data options; these are options that could also be defined in other drafts, like the direct export draft. Then, based on the feedback from multiple folks in the working group, we adopted method three, which is a symmetric-key-based signature to protect the integrity of the options, as the proposed method and mode.
D: All the other methods that were discussed earlier were in the appendix, which got removed as part of the adoption and the feedback received during the adoption call. And there were some editorial changes done. For example, we had given examples specific to the options that are in the IOAM data draft, but had not explicitly mentioned options in the direct export draft or any other IOAM drafts that could come up in the future; the data integrity mechanism is quite generic.
D: It should apply to any IOAM options that are defined beyond the data draft; that is fixed now. And some terminology fixes: we were not following the nomenclature used in the data draft, which is now fixed.
D: So we do have additional feedback and some internal discussions among the authors to improve the security sections. We definitely want more input on that, yeah.
D: …AES-256 and SHA-256, plus HMAC-based algorithms, that could be used for carrying out the signature; but we'd appreciate feedback from folks more familiar with what other reasonable combinations of hashing and signature algorithms should make it into that list. More reviews on the integrity method that was adopted are also appreciated. And if anybody wants to try out an implementation of the data integrity option, especially in the kernel or in VPP, where we have the IOAM data options implemented, that would be appreciated.
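As a sketch of what a symmetric-key integrity check of this kind could look like (an HMAC-SHA-256 over an option's bytes; this is an illustration, not the draft's wire format or its exact algorithm list):

```python
import hmac
import hashlib

def sign_option(key: bytes, option_bytes: bytes) -> bytes:
    """Compute a symmetric-key signature (HMAC-SHA-256) over an option's bytes."""
    return hmac.new(key, option_bytes, hashlib.sha256).digest()

def verify_option(key: bytes, option_bytes: bytes, signature: bytes) -> bool:
    """Constant-time check that the option was not modified in transit."""
    return hmac.compare_digest(sign_option(key, option_bytes), signature)
```

Any node holding the shared key can verify that the carried option bytes were not altered; the key distribution and field layout are exactly the open questions the draft discusses.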
A: Actually, I just jumped in the queue quickly with a couple of questions on the integrity draft. Thank you very much for the work on cleaning it up and consolidating down to one option.
A: The two questions I wanted to check on: one was the status of implementing this solution and testing it, to see how it impacts the existing IOAM code bases and deployments. And the other question is: should we, as a working group, be requesting early security-area review, and when do we think we'd be ready to do that, to get that input from the security area?
B: Hit the wrong button. I don't know; I'm not aware that anybody is coding the thing up right now. I'm not sure whether Justin is here, who is driving most of the work on the kernel. Oh yeah, we have Justin here; maybe he can speak up from this perspective on what he's been covering for either the kernel or, yes, colleagues also within the VPP implementation. So I'm not sure where that work is, but I'm not aware of anybody else actively coding.
B: …this thing up, so maybe we can have other people speak up here from that perspective. I would not feel comfortable going for a security review unless we have completed the security section, which is one big leftover in the draft. So that is something that we want to go and fill in. There was…
B: But there is still a little bit of work to be done on the draft, I think, before we want to go for a proper security review. Yeah.
B: Yeah, along with what was also asked: we have two combinations listed right now for what people want to do from a signing perspective and the like. So, what algorithms? We would appreciate, if there are others that people see as high priority, that you speak up on the list, please, so that we have a relatively comprehensive set of things included; so that, at least from a working group perspective, given that this document is really a document that has been shepherded and requested by the working group, we have relatively good representation.
A: Yeah, I would suggest a small set of algorithms; let's just choose the best ones that we need, but not too many in there, because that oftentimes is a bad…
B: …the ones for asymmetric; but if people have strong feelings about other things, then let's get that in. And as I said, I think we could probably give it another round until the next meeting, which will hopefully be in person, and then maybe we will really offer…
D: …revised it based on the feedback that was received; some of the feedback is addressed in the current 00 version of the working group draft. We aligned with limited domains per RFC 8799, and again aligned with the IOAM data draft terminology, which was good feedback that we received.
H: But it's not in the 00 version of the working group draft. Is there any special reason why it is not included?
D: We have not addressed… sorry, go ahead.
H: Okay, good, thank you. Thank you, Tommy. Hello everyone, it's Xiao Min speaking. This presentation is a status update on Echo Request/Reply for Enabled In-situ OAM Capabilities. This is a working group draft and the latest version is 01; the 00 version of this draft was already presented at the last IETF.
H: This is the summary of the two main updates in this revision. Firstly, this revision changes the format of the IOAM capabilities query and response from TLV to container, and the reason is that for ICMPv6, taking example from RFC 8335, the IOAM capabilities query is not a TLV of the echo request but the echo request itself.
H: Also note that in the 6MAN working group, a 00-version individual draft on ICMPv6 IOAM conf state was posted before IETF 112, and this new individual draft was presented in the 6MAN working group on Tuesday.
H: My personal feeling is that I've got positive feedback from there. The internet AD and the 6MAN chairs have no objection to extending ICMPv6 for IOAM capabilities discovery.
H: It can be seen that the container header of ICMPv6 does not include exactly a type and a length; it includes a type, code, checksum, identifier, sequence number, and a number of namespace IDs, so this change is needed. Also note that for MPLS LSP ping, SFC ping, and BIER ping, the container header is also applicable.
H: It can be seen that the object header of ICMPv6 does not include exactly a subtype and a length; it includes a length, class number, and C-type, so this change is needed. Also note that for MPLS ping, SFC ping, and BIER ping, the object header is also applicable.
H: Besides those two main updates, there are also some other mentionable updates. The first one is that a list of namespace IDs MUST be included in an echo request; in the last version, MAY was used here.
H: The third mentionable update is that one new bit is borrowed from the reserved field of the IOAM pre-allocated tracing capabilities object and the incremental tracing capabilities object. It's used to indicate whether the object carries a 16-bit egress interface ID or a 32-bit egress interface ID.
H: So, next steps for this draft: we ask for more reviews and comments, we'll revise the draft to improve it, and then we'll ask for working group last call. Thank you.
A: All right, thank you. Frank, go ahead.
B: …two comments. The first one is: if you go back to slide six, what we seemingly do right now is that we have a kind of opinionated choice of what we want to get included in one of these echo replies that you're transporting back to the requester, which is kind of a bit field that we come up with here, right? It's inspired by the data types that we have in the data draft, but it's not the same. Rather than coming up with kind of new definitions here…
B: I think that would make much more sense, and it would avoid the redefinition of fields, rather than what we have here. So I think aligning that, and coming up with a mapping from the YANG model that would then be encapsulated over ICMP or whatever carrier protocol you want, is the cleaner approach; that's at least my suggestion. And the other point that I have is…
B: …there should be fewer "can"s and more "should"s about what you want to use, so that we have proper security in place; because you probably need at least some form of integrity checking, and even more, right? Because you can retrieve a load of configuration information from the nodes this way, between the requester and the responder.
H: I think I know your comments from previous IETF meetings, but I'm not sure how I can encapsulate the YANG model data into this response message. So is that your first suggestion, to include the YANG model data in the response?
B: …go and build an encoding for the YANG model, similar to what we've done for NETCONF, right, into what you're transporting here, and that would align it.
B: That's the reason: let's not create a specific encapsulation of a network management protocol into ICMP. Rather, we have something generic for managing individual IOAM nodes; let's stay with that, and then you just treat ICMP as a carrier, or BIER as a carrier, or whatever.
H: To my understanding, if we use NETCONF here, then we need to establish a NETCONF kind of connection between the encapsulating node and each transit node, right?
B: …do NETCONF; but given that you want to carry network management information over a channel, you don't want to use NETCONF, you're creating your own transport protocol using ICMP. But I think there is no need to go and redefine what you do from an object's perspective, because that is in the YANG model.
A: Yeah, I think we're a little over time on this item; we're overall ahead on time, but I think this is a great discussion and a good point to drill into. Maybe this is something that, Frank, we can take to the list and just go in depth on the different options about how we can avoid ending up reinventing the wheel for the signaling. I think that would be a good discussion to have there.
M: Okay, you can hear me? The sound is okay? We can start, okay. I can summarize the explicit measurement concept in a single sentence: explicit flow measurement techniques employ a few marking bits inside the header of each packet for loss and delay measurement, protocol-independent, and available even with encryption.
M: Then there is the round-trip packet loss, which is measured using the spin bit and the round-trip loss bit, the T bit. And finally there are two options for the one-way packet loss: the first one is the sQuare bit, which is the already-known technique from RFC 8321, plus the loss event bit as a second bit; the alternative is the sQuare bit, the same bit, with the reflection square.
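The spin bit mechanism mentioned here, where one square-wave period of the bit corresponds to one round trip, could be observed roughly like this; the sketch below is a hypothetical illustration, not code from any implementation named in the session:

```python
def rtt_samples(packets):
    """Estimate RTT from spin-bit transitions seen by an on-path observer.

    `packets` is an iterable of (timestamp, spin_bit) pairs from one direction
    of a flow; each edge (bit flip) marks the start of a new round trip, so the
    time between consecutive edges approximates one RTT.
    """
    samples = []
    last_edge = None
    last_bit = None
    for ts, bit in packets:
        if last_bit is not None and bit != last_bit:  # edge detected
            if last_edge is not None:
                samples.append(ts - last_edge)        # edge-to-edge time ~ RTT
            last_edge = ts
        last_bit = bit
    return samples
```

A passive probe needs only the packet timestamps and the single marking bit, which is the protocol-independence and encryption-friendliness the presenter is describing.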
M: In recent years we ran, together with other companies, for example Ericsson, Huawei, and so on, some experiments about these QUIC measurements. There are some implementations in the field. The first one that we display in this presentation is the implementation on Android mobile phones, which inspired the draft about user device monitoring that is presented in the last part of this meeting. There is the Ericsson implementation, a core network probe that detects the spin bit and the delay marking and makes the measurement; the Orange and Akamai implementation, which is the first one that implemented in the field the packet loss using the Q bit and L bit; and Aachen University had an experimental implementation.
M: Using our devices, we spread some probes on the mobile phones that Telecom Italia uses for some experimentation in our network, and we saw that there aren't marked packets inside QUIC. QUIC is quite widely used, for example by Google applications, but also the Facebook application and many others, but no spin bit is marked. So this mechanism, which is defined as not mandatory in the standard, in RFC 9000, is not implemented in the field, in Italy at least, but I think it is the same all over the world.
M: In the same way, there is no delay added to the real traffic, but the marking is delayed, so it's possible to mask the real value but still have some measurement, either knowing the time that was added or not knowing it; because if the measurements are between an observer and another observer, or between a server and the server, the measurements are correct. The end-to-end measurement is not visible unless you know the additional delay. Okay.
M: The main changes of this draft version: the first great news is that the IPPM working group adopted this draft some days ago, and we updated the draft, adding the new name and introducing a couple of changes that were suggested during the working group adoption call. In the introduction paragraph we underlined the beneficial approach of the methodology described inside the document, as per RFC 1965.
M: We can measure only the loss between the origin, the client or the server, and the measurement point, because we mark blocks of 64 packets, and if the number of packets marked with the same color, 0 or 1, is less than 64, there is a loss between the origin, client or server, and the measurement point. So, to have an end-to-end measurement…
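The counting rule just described (blocks of 64 packets per color; a shorter run of one color seen at the measurement point implies upstream loss) can be sketched as follows; this is an illustrative simplification, not the draft's algorithm:

```python
def upstream_losses(colors, block_size=64):
    """Count packets lost upstream of the measurement point.

    `colors` is the sequence of sQuare-bit values (0/1) observed in order;
    each completed run of one color should contain `block_size` packets,
    so a shorter completed run means packets went missing upstream.
    """
    lost = 0
    run_color, run_len = None, 0
    for c in colors:
        if c == run_color:
            run_len += 1
        else:
            if run_color is not None:
                lost += block_size - run_len  # short run: packets missing
            run_color, run_len = c, 1
    return lost  # the still-open final run is not counted
```

Note that, exactly as the speaker says, this locates loss between the origin and the measurement point only; loss downstream of the probe is invisible to this counter.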
M: …we have to add another bit: the loss event bit, as proposed by Akamai and Orange in their draft that we merged with in this draft, or the Telecom Italia proposal to add the reflection square bit. The loss event bit generates a marked packet for every loss detected by the protocol.
M: There are some differences, because the loss bit, the L bit, is very simple to implement, simpler than the reflection square bit, but it is dependent on the protocol: if the loss is not detected by the protocol, the L bit is not marked. It works well with QUIC, so in our opinion it can be the right proposal for QUIC; it works less well for TCP, because, for example, some packets are not detected as losses.
M: So if a packet is lost but the protocol, TCP, doesn't detect it as lost, it doesn't get measured as lost; so for TCP, in our opinion, it's probably better to use the R bit. But as I said before, this is only a general review of all the possible techniques; the next step will be to select the best bit to mark for each protocol.
M: Next slide. We have many possibilities. If there are only two bits available for explicit measurement, there are two possible options. In our opinion the most significant is the spin bit and the T bit, to have the round-trip time and the round-trip packet loss using only two bits; but the T bit is less precise than using the Q bit and L bit, or the Q bit and R bit, so we propose to use it only if we have just two bits, and only one for the loss.
M: Option B is to use the delay bit, or the masked delay bit, for RTT, and to use the one-way packet loss Q bit for the loss. In this case we don't have the end-to-end measurement for the loss, but only the origin-to-observation-point measurement for the loss, so it's a compromise…
M: …if we have only two bits. If we have three bits, that is the best option for monitoring using explicit measurement; for example in QUIC it's possible (it's not defined now, but it's possible) to have three bits, and there are four options in that case.
M: This can be useful if we don't want a measurement that depends on the protocol, because the L bit signals the losses detected by the protocol; if the protocol doesn't detect the losses, the L bit doesn't signal them. The R bit is independent, so it is more complicated than the L bit, and the other problem is that there is a delay in the measurement, because we have to wait for a square wave of the Q bit to start from the client, reach the server, be converted into the reflected square wave, and come back.
M: So we need two round-trip times plus the duration of the square wave to complete the first measurement. The L bit instead is very quick, because there is a marked packet as soon as the protocol detects a loss; but the R bit is totally independent from the protocol, so even if the protocol doesn't detect the loss, the R bit can detect it.
M: Okay, the last slide; I'll spend one minute on it. We can conclude with some thoughts: there is a draft, there are some implementations, some of them in the field, and the IPPM working group adopted the draft.
M: We presented some sibling drafts. There was another one in this working group about the positioning of the probes; the idea is to position the probe in the client, because that way we can save hardware, using the hardware of the client to also make the measurement, the same advantage that we explained in the presentation.
M: At the end of this meeting another sibling draft is presented, in this session of the CORE working group, about the CoAP protocol. That is another client-server protocol that can have some extensions to carry some additional bits for marking. So we think it can be a good idea to present this in the future, and if the working group is interested, we can try to present an implementation for QUIC, having made a choice of the right bits.
M: …it can be applied both for the R bit and for the traditional spin bit. The idea is to delay the reflection of the packet that carries the edge of the square wave, for the spin bit, or of the delay sample, that is the single marked packet used by the delay bit. So, for example, for the spin bit: when the edge of the square wave that signals the start of the RTT arrives at the client…
M: …the client doesn't reflect the edge of the square wave immediately, but continues to mark the packets with the same bit. So if we are in the "one" period, for example, and the RTT is 10 milliseconds, and the additional delay that we add is another 5 milliseconds: when the square wave reaches the client, the client doesn't flip the edge immediately, but waits an additional 5 milliseconds to flip it. So the server doesn't measure the right RTT, but the right RTT plus 5 milliseconds.
M: This masks only the end-to-end RTT, because if we measure the RTT using a probe between the observer and the server, for example, the server doesn't add anything; it works in the same way as the normal spin bit or normal delay bit. So for the observer-server-observer measurement the RTT is right; it's possible to segment the network and have correct partial measurements, but not the end-to-end measurement. The only measurement that is masked is the round trip between client and observer.
M: Right, and it's impossible to detect… rather than adding a varying time in many ways, we propose to choose a fixed amount for each client, so the position of the client is moved by exactly the same amount of time, the same amount of kilometers, let me say, all the time. So it's impossible to detect whether there is the right distance or a fake distance, because we always add the same amount of time.
N: We'll find it, thanks. Yep, I made the presentation before I updated the draft, so this presentation refers to the older version. Meanwhile, I updated the draft on Monday, and it contains a packet-loss-based specification for a connectivity metric, which is just announced here on this slide. I have also changed the status, meanwhile, from standards track to experimental, because I think that's more suitable.
N: I'm not able to come up with an implementation very soon, so I think that's fair. Now I think I move to the other slide, yep; and that is indicating how the loss-based connectivity metric is going to work.
N: The router sketch, on the upper part, is already part of the other presentations, and there are three measurement loops passing each connection or path to be monitored. Two always go in the same direction, either upstream or downstream, and there's always one in the contra direction; so there are only two measurement paths going per direction, either upstream or downstream. The idea then is: if there is no packet received within one measurement interval on all of the measurement loops per monitored interface…
N
Then
you
decide
on
connectivity
loss
and
the
simple
definition
of
distance
between
two
packets
per
measurement
interval
to
me
seemed
seemed
to
be
to
wait
for
the
measurement
loop,
which
is
producing
the
longest
measurement
loop
delay
and
then
double
that
amount
and
simply
send
packets.
Only
after
that
has
passed
on
each
individual
measurement
loop.
That
means
you
only
send
a
measurement
packet
along
along
the
next
loop
once
you've
received
that
of
the
last
one,
which
is
maybe
not
the
fastest
way
of
operating
it,
but
it's
pretty
reliable.
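The pacing rule just described (double the longest observed loop delay, then send the next round of packets) might be sketched like this; `send` and `wait_for_return` are hypothetical stand-ins for the actual loop transport, not anything from the draft:

```python
import time

def run_measurement_loops(loops, send, wait_for_return, rounds=3):
    """Send one packet per loop per round, pacing by 2x the slowest loop delay.

    `loops` is a list of measurement-loop identifiers; `send(loop)` transmits
    a packet on that loop, and `wait_for_return(loop)` blocks until the packet
    comes back, returning the observed loop delay in seconds.
    """
    interval = 0.0
    for _ in range(rounds):
        delays = []
        for loop in loops:
            time.sleep(interval)        # pace by the previous round's estimate
            send(loop)
            delays.append(wait_for_return(loop))
        interval = 2 * max(delays)      # next round: double the slowest delay
    return interval
```

As the speaker notes, waiting for each packet to return before sending the next is not the fastest scheme, but it keeps the packets on different loops from being confused with one another.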
N: …combination of measurement loops passing this monitored interface. If you are interested in the details, the terminology is adapted to try to illustrate that as well as possible, and you can read how that works; I'd of course be happy about comments. And I'd like to add that the question about timing has been put to me several times, so the intent here is to answer these questions and come up with a scheme for how it works. All right, and that is it from me.
O: Still, I'm not really comfortable with the terminology, because traditionally connectivity is a state in fault management OAM. When we refer to something as a connection, we understand that it's not only the ability to receive test packets that belong to this connection, but also not receiving packets from other connections; so basically the absence of a misconnection state, and the misconnection definition is usually specified at the IETF, since, well…
O: …we are dealing primarily with IP and MPLS. If it's MPLS and not traffic engineered, then we usually refer to the loss of path continuity, and the protocol used to that extent is Bidirectional Forwarding Detection, BFD. One of the properties of BFD is that the session goes down, or the path loss-of-continuity state is declared, when three consecutive test packets are lost, even though the protocol supports doing that on the very first, only one, lost packet.
O: I'm somewhat concerned that in this proposal the loss of connectivity is declared on the very first packet.
O: Because if that's the case, then the next packet will mark the connection as up, and we'll probably have states bouncing up and down interchangeably if we have a flaky connection.
O: On the other hand, the packet loss ratio or some other statistic will indicate that the connection, the path, is not stable; but from the fault management perspective, bouncing states, generating alarms and clearing alarms, would cause a lot of confusion. So maybe that's something that needs further discussion and a closer look, especially at how it relates to fault management OAM. Yeah.
N: Yes, adding some text on the relation to fault management OAM: I probably need to do that. I'm not a BFD expert.
N: The state of lost connectivity is declared after three packets have been lost. If you look at the illustration below, it's that red link, LSR2 to LERi, that is disconnected; that's what the text says. Also in the draft: you start with sending packet F2 index one, and then you send F3 index one, and the expectation would be: if you send a packet at t2, it should have been received at t3.
N: If you send one at t3, it should have been received at t4, and so on. And if you look at the blue one, at time t5, it should have been received at t6. So these are three packets, and if they haven't been received by t8, which is when you would send the next packet on F2, then you declare the state to be disconnected. So yeah, it's three packets; three packets which should have been received some time before t8, when you declare it to be disconnected.
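The declaration rule explained here (three expected packets missing by t8 means disconnected, addressing the one-lost-packet concern) can be sketched, purely as an illustration of the timing logic, as:

```python
def connectivity_state(expected, received, cutoff, threshold=3):
    """Return 'down' if `threshold` consecutive expected packets that were
    due before `cutoff` are missing, else 'up'.

    `expected` maps packet index -> due arrival time; `received` is the set
    of packet indices actually seen. Packets not yet due are not counted.
    """
    missing_streak = 0
    for idx in sorted(expected):
        if expected[idx] > cutoff:
            break                       # not yet due, don't count it
        if idx in received:
            missing_streak = 0          # any arrival resets the streak
        else:
            missing_streak += 1
            if missing_streak >= threshold:
                return "down"
    return "up"
```

With a threshold of three, a single lost packet cannot flip the state, which is the same damping idea BFD uses for its session-down declaration.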
O: Yeah, I'll be happy to discuss it with the BFD folks, because I'm working on it extensively, yeah, sure.
O: Yeah, okay, we can continue the discussion on the list, thanks.
A: Perfect, okay. And last of our adopted documents, we have the SRPM draft.
F: So the agenda is a brief update on revision 02, a highlight of the work happening in other working groups on the STAMP extensions, and next steps.
F: There are very brief, minor updates in revision 02, mostly editorial changes; we converted it into a table format, and we have no open issues at the moment.
F: There are extensions that build on the great work done by IPPM with STAMP: SPRING has two drafts on SR PM, and there is also one in MPLS with a new encapsulation for pseudowires. So we welcome your review comments on those drafts as well.
F: We are just seeking more review comments and suggestions for this draft, and we'll wait for maybe one more IETF before we request last call. So that's all I had; any comments or suggestions?
A: Okay, and now we're going to move on to our proposed work. Actually, before we do that, there is one thing I missed in the slides, which is: we did publish an RFC.
P: Great. Hello everyone, I will present our internet draft, Responsiveness under Working Conditions. As this is the first time I'm presenting it, I'm starting with a little bit of background first.
P: What is it actually that we are trying to solve here, what are we trying to address? The first problem we realized is that there have been 10 and more years of bufferbloat, in the sense that it was made very public more than 10 years ago; solutions quickly started popping up and were being standardized and implemented, but still today bufferbloat is a very common problem. And why is that?
P: A second problem that we realized exists is that there's not really a clear definition of what bufferbloat actually is; if you ask different people, you will get different definitions of bufferbloat. It's also not clear how you can measure it. Is an ICMP ping while doing a file transfer good enough? Do you need to flood the network with UDP traffic and then send a TCP request/response?
P: DNS? Can you use DNS to measure bufferbloat? It's not really clear. And there are some tools that exist, like DSLReports, fast.com, Waveform and so on, that measure bufferbloat; but if you compare one tool with another, they have hugely different methodologies, and when you run them they provide completely different results.
P: So, first of all, we want to measure responsiveness for the end user, and that implies a certain set of things. First of all, a realization about bufferbloat: while typically you think of bufferbloat as a big queue in a bottleneck router that is being filled up, bufferbloat can happen in many different places. It's not only in the bottleneck router; it can be on the server, it can be on the client.
P: It can be in the server's NIC, it can be in the server's TCP implementation, or even in the HTTP implementation; and we have found that in many cases the bufferbloat in HTTP implementations is way, way higher than whatever you can measure on the internet. So sometimes the problem is at a different layer, where you would not expect it. This implies that if we want to measure "bufferbloat", in quotes, because suddenly bufferbloat is no longer just on the bottleneck router but can be anywhere…
P: If you want to measure bufferbloat, it means you need to use the modern protocols that the end users are actually using, which means you need to use, for example, HTTP/2 and HTTP/3, you need to use TLS, and so on. You also want to measure all stages of the connections: you may have a good solution to handle bufferbloat in your TCP stack, but your DNS is still going to go through a shared queue that is completely bloated.
P
So how do we create what we call working conditions? Because in order to measure bufferbloat, in order to measure the responsiveness under working conditions, we need to fill the buffers. Now, filling the buffers can be done, for example, by flooding the network with UDP. That would definitely, very reliably fill the buffers, I can guarantee that. However, it is not a realistic traffic pattern.
P
Nobody does that. And so, in order to measure bufferbloat, we need to strike a balance between a realistic traffic pattern and, at the same time, reliably filling those buffers. So, for example, a good choice would be running several HTTP/2 bulk data transfers. On the other hand, with QUIC coming along, in the future we would probably rather use QUIC instead of HTTP/2, because the buffering can be completely different for QUIC. Now, in order to measure responsiveness, we need to create those working conditions for an extended period of time, because we need to measure repeatedly and get multiple samples to have a stable number. And so we need to gradually add flows and monitor the goodput evolution to create this kind of stable condition where the buffers are roughly full, and a good way to do that is the way we do it.
P
For example, as we describe it in the draft: we start with a certain number of connections. As those connections go through TCP slow start, the goodput is going to ramp up, and it's eventually going to level out. At that point the only way to increase the goodput is by adding more connections, so we add more connections, and again you will be able to see it.
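The ramp-up described above (add connections, watch goodput level out) can be sketched roughly as follows; the 5% plateau threshold, the flow cap, and the function names are illustrative assumptions, not values from the draft:

```python
def saturate(measure_goodput, add_flow, max_flows=16, threshold=0.05):
    """Add bulk-transfer flows until aggregate goodput stops growing,
    i.e. the bottleneck (and its buffers) are being kept full."""
    flows = 1
    add_flow()
    prev = measure_goodput(flows)
    while flows < max_flows:
        flows += 1
        add_flow()
        cur = measure_goodput(flows)
        if cur < prev * (1.0 + threshold):  # goodput levelled out
            return flows, cur
        prev = cur
    return flows, prev
```

With a link that saturates at, say, four flows, the loop stops one flow later, once adding a connection no longer grows the goodput.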
P
So once we have these stable working conditions, we can then go and measure responsiveness, and there are two ways to measure it. The first one is by using those connections that are creating the working conditions: on those bulk data transfers, we can send an additional GET request and measure the latency of those GET requests.
P
This is great because it allows us to expose bad HTTP/2 and TCP implementations on the client and the server side, and it also exposes bad buffering in the network.
P
So, finally, as we are doing all these measurements, we then expose what we call round trips per minute. We use round trips per minute instead of latency because it is more user-friendly, which is one of the goals I mentioned initially: we want it to be "the higher the better", we want it to range from the low tens to a few thousands, and we want it to be a nice analogy to what we call revolutions per minute.
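The RPM number is just the measured latency inverted and scaled to a minute; a minimal sketch (averaging the samples here is a simplification; the draft's actual aggregation across probe types may differ):

```python
def round_trips_per_minute(latencies_s):
    """Convert sampled request latencies (in seconds) to round trips
    per minute: e.g. a 0.5 s round trip is 120 RPM."""
    mean_latency = sum(latencies_s) / len(latencies_s)
    return 60.0 / mean_latency
```

This is what makes tens of milliseconds of idle latency land in the "few thousands" range, while a badly bloated path drops into the low tens.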
P
Now, this methodology is currently a work in progress, and we are very much looking forward to feedback. We currently have an implementation in macOS Monterey and in iOS 15, and we have also published server-side implementations for it on GitHub that I will show in a minute.
P
So we are very much looking forward to collaborations around this effort to properly define a way to measure bufferbloat and responsiveness under working conditions, in the internet draft that we wrote.
P
We are using HTTP/2, and we are very open to other suggestions on how to measure it, because there are many different ways to create those working conditions, and there are many different ways to measure the responsiveness. So this is really a call for contributions, for people to come and join this effort and give suggestions on how to measure responsiveness.
P
We believe that this would be very useful, and we are already seeing how the tool can be used to actually debug bad HTTP implementations in the wild. So we believe that this would be very beneficial for the overall user experience.
E
Thank you. Martin? Hi Christoph, this is really interesting, really interesting. So it seems like there are two separate proposals here: there's a methodology and then there's a metric, so I actually have one question for each. For the methodology: these background connections racing, are they just downloading an arbitrarily large resource to fill the pipe?
P
E
P
So in the draft we also specify what the protocol would be, and what we require is that the server is able to provide one very large file and one very small file, basically just one byte, and to allow uploading data. The small file is used for latency measurements, and because it's just one byte, it would be one packet. So that's basically it, in order to measure latency.
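Timing that one-byte GET is then just a wall-clock measurement around one request/response exchange; in this sketch `fetch` is a placeholder for the real HTTP/2 GET of the small resource:

```python
import time

def probe_latency(fetch):
    """Return elapsed seconds for one request/response exchange.
    `fetch` is any callable that issues the GET of the one-byte
    resource and returns once the response body has been read."""
    start = time.monotonic()
    fetch()
    return time.monotonic() - start
```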
P
When we send a GET request, it's just the time it takes to get the response back. Now, the assumption on a CDN implementation is that this byte would be locally generated, so the CDN wouldn't need to go back to the origin to fetch it. So that kind of latency is not in the picture.
I
Okay, okay, great, yeah. This is very, very interesting.
I
Yeah, I'm getting an echo, hang on. No, anyway, I hope you guys are not getting an echo, but anyway, I think this is very interesting, and I would be very curious, because our PDM is performance and diagnostic metrics for IPv6.
I
That is installed both at the server and client end, and the idea of PDM is to separate server time and client time, because this is a problem that many large enterprises have, and I would be very interested to see what kind of measurements we come out with using your test data. So, as I said, I think this is quite interesting.
A
All right, Lucas.
L
Hello, can you hear me? Okay, okay. So just a quick one: you mentioned maybe being able to detect bad HTTP/2 or QUIC implementations, but just a word of caution that there are many ways to be bad. There are lots of things in these specifications that may be a bit optional, and people choose just not to bother implementing them. So I would just caveat, or try to caveat, what you mean by "this is good" or "this is bad": whether this is for raw throughput or scheduling, these kinds of considerations.
L
It's a very minor point, but I think for nitpickers like me, or people who are really into web performance, where the workload is slightly different, just quantifying "good" or "bad" might mean something.
A
All right, I think we're up on time on this one. Thank you, Christoph, and it'd be good to hear more comments on the list, especially from people who have a lot of deep experience with existing IPPM metrics. And now we have Al sharing a deck.
L
K
Let's see, screen one, I think, is this one? Yes, okay, and allow... and slides... all right, I guess I'll just make this.
K
As big as I can. All right, so thanks everybody for joining in today, and thanks Tommy for the congratulations on the one-way IP capacity measurement, RFC 9097.
K
It was a long road, longer than we thought. So, last time I talked about the protocol. Let's see here: I've got the time covered; that's not going to help.
K
So what I wanted to do is basically talk about some special features of the protocol, since last time we talked about security features and didn't get much traction with that. It's kind of a unique protocol that we've developed to do this. So we're going to talk about what's new in the protocol, and where the functions reside between a server and a client matters. First and foremost, thanks to the reviewers: Jan (that's how he pronounces his name), Lincoln, Rüdiger and Greg. So, on to...
K
...the test activation exchange. The most important point I want to make about this is that you're actually testing connectivity when your client is setting this up, and also opening ports and so forth in the firewalls and translators. So these are important parts of the exchange, and it's always good to record when you've lost connectivity to your server, or between your client and server.
K
That's one of my workshop points. But then we're going to go on, mostly, today, and talk about the test protocol. The test protocol is where we send the test stream PDUs, and the most important part is that we get measurement feedback here. The measurement feedback is exactly what we really want to emphasize, so we're going to carry these colors through: the load PDUs are in black and the feedback
K
PDUs are in blue. All right, I'm not showing anything else here other than that, and of course this is the protocol that we're currently using in running code. So here's the test phase operation; I've got two scenarios side by side.
K
I've got a downlink scenario here where the server is in the network; that's on the top. It's in the role of the sender, and the test controller always resides with the server. That's an important functionality, and it doesn't swap completely when the sending and receiving roles change places. So let's look at this quickly.
K
The sender is deciding what downlink test traffic rates it's going to send, for our classic IP-layer capacity measurement. The receiver receives that traffic and formulates a reverse-path feedback message, which includes loss, delay and reordering measured on the sent rate, and it sends it back every 50 milliseconds. I didn't even include the fact that it measures the rate here, but that's a fairly obvious thing that's happening. So then, every 50 milliseconds, that feedback message arrives at the sender.
K
The test controller acts on it: it runs the load adjustment algorithm and controls the sender rate. It can select a rate in the table with our current logic, and that's kind of what you see depicted here down at the bottom. We ramp up the rate at first, with IP-layer capacity on the y-axis and time on the x-axis, and you can see us searching for the maximum capacity here in the typical RFC 9097 method and metric.
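A toy version of that table-driven control step, reacting to each 50 ms feedback message; the rate table, the loss/delay thresholds, and the one-step moves are invented for illustration and are not RFC 9097's actual (more elaborate) algorithm:

```python
RATE_TABLE_MBPS = [1, 2, 5, 10, 20, 50, 100, 200, 500, 1000]  # hypothetical

def adjust_rate(index, feedback):
    """Return the next index into the rate table given one feedback
    message carrying the receiver's loss and delay observations."""
    if feedback["loss"] == 0 and feedback["delay_ms"] < 5:
        return min(index + 1, len(RATE_TABLE_MBPS) - 1)  # ramp up
    return max(index - 1, 0)  # back off on loss or queueing delay
```

Run once per feedback interval, this produces the ramp-and-search behaviour sketched on the slide.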
K
So we reverse some things and change some of the information that's exchanged when we do an uplink test. Now we've arranged for the sending to be done from the client side, and so the uplink test traffic arrives at the receiver, and now the receiver's test controller function performs the measurements and keeps those measurements to itself. It runs the load adjustment algorithm, and it sends the next rate to be used over the next feedback interval back to the sender. So the reverse path
K
contains mostly the sending rate structure, which is identified in the protocol. This contains the size of packets, the marking of the packets, the rate at which they would be sent, and so forth. And someone's asking me to go full screen, which means I won't be able to see further comments.
K
So, good. What we've basically enabled here, though, is that, beyond the test controller logic that we've currently defined, we have the capability to change that logic. So you could create alternate forms of reaction to the network, or simply change the rates over time, the packet sizes and their burst composition over time, and so forth.
K
So it's very much a complete and easy control of the testing situation. I also wanted to mention that we measure round-trip time on a sample basis, on the basis of these feedback messages, so we always have an estimate of the round-trip time going on.
K
Okay. So, let's see, I've got to hurry through this now. We've got lots of motivation for the new protocol. It's different from a session sender and reflector in that we're definitely not sending packets back every time. It's different from OWAMP in that there's a fetch and sending of results at the end of the session, and the security for OWAMP and TWAMP is kind of old and was described as unusual.
K
It was a little bit challenging way back then. So we've had some good comments, as I mentioned, and one of the things I've decided, which made it kind of easy to decide where we stopped with the -02 update: we're going to stop describing version 8 after this version, and so new updates will result in version 9 or higher versions of the protocol.
K
E
K
A number of those options are described in the current draft. One of the options that Jan asked for was options for the payload: whether it be all ones, all zeros, alternating, or pseudo-random. Pseudo-random helps out with compression links, like on satellite. And then we've got this last one here, which is interface measurements: in the running code right now, we're actually making diagnostic measurements on the interface that observes all the traffic.
K
So, way back here in a downlink test, if we've got test traffic sharing a link with user traffic, we can measure that at the observing interface, and it's kind of a hybrid Type II measurement at that point.
K
Before that, it's purely active. So we've got that going now, and you could make a case for that in the earlier tests as well. So, like I said, next steps: we talked about the security features at IETF 111, and the authors welcome proposals and revisions to the security modes of operation. But today we talked about what I think are the unique features of this protocol proposal.
K
It can do more than measure capacity, and we've taken a look at that; we'd like others to take a look. I had a side conversation with Martin last time: with working group adoption, we could get an early security directorate review, and that would help us solve that aspect for this protocol in a way that might go through the IESG a little easier. So thanks for your attention, and I hope there are some questions.
A
Well, any questions?
K
For example, a hybrid Type II measurement where you're seeking additional downstream load and you want to measure the performance of the passive traffic that's alongside of it, at full capacity, where you may be building up your buffers and so forth.
C
O
Here we go. Okay, great, thank you. Okay, so this is an update on error performance measurement in packet-switched networks; let's go to the next slide, please. Okay, so to recap error performance measurement: we define several metrics for packet-switched networks that reflect conformance to a set of predefined service level objectives, and among these metrics we have availability and unavailability, as a sequence of time intervals that conform or do not conform to the SLO.
O
Since our last presentation we have two new co-authors, Leon and Joel, joining us; welcome. And we added sections on availability of anything-as-a-service and availability as used in mobile communication, and looked at how these two interpretations of availability relate to what we discuss in error performance measurement. Next slide, please.
O
So what is anything-as-a-service? I think we are all well familiar with it. It's a concept of services related to cloud computing and remote access, and almost everything is "anything as a service": security, platform, software, SD-WAN, network as a service.
O
As with everything, it doesn't come for free: it has some benefits, but at the same time it has challenges. The advantages of anything-as-a-service are:
O
It improves the expense model, because there is lower capex, purchasing is on a subscription basis, and more capacity comes from the providers as needed; and it allows the enterprise to adopt new applications faster and to focus the expertise of its IT on the specifics of its main business rather than on support. But at the same time, anything-as-a-service has some potential challenges: because these services depend on remote equipment, on network connectivity, and on the availability of computing resources, they might be constrained by temporary loss of connectivity, problems with resilience, low bandwidth and low computing power. Next slide.
O
For anything-as-a-service using a dedicated environment, availability is understood in terms of the mean time between failures.
O
Obviously, if we look at this equation, we can see that the mean time between failures can be considered a constant for a particular environment.
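The equation on the slide is presumably the classic steady-state availability formula; a minimal sketch, assuming MTBF and MTTR are in the same time unit:

```python
def availability(mtbf, mttr):
    """Steady-state availability A = MTBF / (MTBF + MTTR).  With MTBF
    roughly fixed for a given environment, shrinking MTTR (e.g. via
    redundancy) is the main lever, as noted in the talk."""
    return mtbf / (mtbf + mttr)
```

For example, an MTBF of 999 hours with a one-hour MTTR gives 0.999, i.e. "three nines".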
O
And the only way to improve availability is to minimize the mean time to repair, and that can be achieved by introducing redundancy in the infrastructure, as with web or database services or load balancers. EPM reflects near-real-time availability by measuring the conformance to SLOs and the ability to communicate.
O
So, basically, it's a combination of performance monitoring and fault management, monitoring the continuity of the path from the user to the cloud, using the EPM interpretation of availability.
O
It provides valuable data and more accurate, realistic information than mean time between failures and mean time to repair, because those give us a theoretical estimation of availability, while the error performance measurement methodology gives us real-time or near-real-time information. Next slide, please. So another
O
domain where we can see or hear about availability is mobile communication: mobile voice and data services. The definition of service availability can be found in mobile communication positioning documents, and it's oriented to operators and consumers. It uses the percentage of attempts to access a given service that are successful attempts, but it only reflects whether a connection was established.
O
It does not reflect the quality of the connection. Now, when we see that it's not only voice but mobile broadband, the quality of the connection becomes more important, and that's where, again, the methodology of error performance measurement and its approach to how availability is determined, measured and expressed allows for a better understanding and better quantifying of availability for mobile voice and data communication.
O
E
O
O
And I see Fan.
A
Okay, so now we're on to our quick elevator pitches. I will display the... oh, sorry, Yang, can you come back on? I can display the slides now.
A
All right, and again, for these we just want to cover them in a minute or two, and please focus on: what should this group be interested in? Why should people comment on your draft or read it?
Q
Okay, the title of my talk is one-way delay measurement based on reference delay. The background is that end-to-end one-way delay measurement is very important for SLAs. As in the figure on the left, we need to know the end-to-end one-way delay of a video surveillance service in a 5G network.
Q
A
We're talking... we're out of time. I know you have another slide coming up right after this.
A
I'm going to switch to that. If people have comments, you can get to those on the list. All right, so your next one is again a one-way delay.
Q
As quick as... yes, okay. We propose a new one-way delay measurement method, also based on deterministic networking.
Q
The way it works: we can have a centralized control node, as shown, used to collect network information, and the link from each network element to the centralized control node leverages delay-deterministic links, so that the delays, shown in the figure on the left side and denoted t1, t2 through tn, can be calculated in advance and are relatively stable with low jitter.
Q
As shown on the right side, the network elements M and N are connected to the centralized node by a delay-deterministic network. When traffic flows from network element M to N, they will upload flow information to the centralized control node. The transmission delays of the flow information are tm and tn respectively, and the arrival times of the flow information at the centralized control node are tm-prime and tn-prime respectively.
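The speaker was cut off before stating the final formula, so the arithmetic below is an inference from the setup, not text from the draft: the controller can recover each element's local send time by subtracting the known deterministic link delay from the arrival time, and the difference of those recovered times yields the one-way delay from M to N.

```python
def one_way_delay(tm_prime, tm, tn_prime, tn):
    """Recover each element's local send time at the controller as
    (arrival time - known deterministic link delay), then take the
    difference to get the one-way delay from M to N."""
    send_time_m = tm_prime - tm  # when M emitted its flow report
    send_time_n = tn_prime - tn  # when N emitted its flow report
    return send_time_n - send_time_m
```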
C
A
J
Pitch it to us. Yeah, the background, in a few words, is that we already presented the alternate marking data fields for IPv6, and the relevant document is in IESG evaluation. But during the discussion there were some inputs from the IESG to consider whether the flow monitoring identification field can be extended in order to have more entropy.
J
Considering this motivation, we propose this draft to discuss in IPPM how to generalize the data fields extension and whether it is necessary to extend the flow monitoring identification field. For example, the next header can be used to define this enhanced capability. So this is just a proposal for discussion in IPPM; we are looking for more feedback from the group. We already have some discussion with Greg on the list, and we hope to have more feedback. Okay, thank you.
A
S
Thanks, Al. So, a quick update on the implementation in the Linux kernel. The latest release of the kernel, 5.15, was released a few days ago, and it comes with support for the IOAM
S
pre-allocated trace. The next version, 5.16, which is expected at the end of December this year, will provide tunneling support for in-transit traffic, so that we are compliant with RFC 8200. And actually everything is configurable with the iproute2 tool, which most of you probably know as the `ip` command in the Linux environment.
S
R
Right, so we have a new draft that describes an IOAM profile which is used by the Linux kernel implementation, and this means that by reading this draft you can create an implementation which can interoperate with the Linux kernel implementation. We'd really be happy to get any feedback and comments about that.
A
R
A
...to discuss. All right, next we've got some in-band flow learning.
A
F
Now, slide six... tell me if you have it... yep, if you go to slide six... there it is, yeah, thank you. So this is a very simple, straightforward draft that defines a packet format, a message format, for the session sender and a reflector to measure direct loss.
F
It's inline counter stamping, just like inline timestamping, and it's ASIC-friendly because the counters are at fixed locations. It supports the alternate marking method, as well as 32-bit and 64-bit packet and byte counters, and, if you're doing measurement per DSCP, scoped counters as well. Messages are also defined for the authenticated mode. So it's a fairly straightforward draft, and we welcome your comments and suggestions on this. Thank you.
Q
We will introduce each very quickly. On the left, the collector or the NMS will correlate the information from the coloring point and the monitoring point. The monitoring points may be many, and they need to correlate the information by using the block ID. On the right, if we insert the block ID information at the coloring point, together with the color information, by occupying another specific bit, we can relate the monitoring report at the monitoring point directly to the right measurement value if needed.
Q
G
...based telemetry. This draft has been extensively discussed on the email list, and a major question raised by the chairs and ADs is: is it really worth standardizing this approach, or isn't it already covered by some existing draft? I think it's worth clarifying this point within the family of on-path flow telemetry techniques.
G
There are two types: passport mode and postcard mode. For the passport mode, the IOAM trace is already on the standards track, and in the branch of postcard mode, for instruction-based telemetry, IOAM DEX is also on the standards track. The PBT-M covered in this draft is a standalone method, as shown in the left-side figure, but it also has some unique issues that need to be solved. Indeed, actually,
G
this approach has already been used by the SRv6 OAM draft, but many open issues are unanswered there, and these will be covered by this draft. So we hope that this is worth standardizing in this working group. Thank you very much.
C
A
A
I
It's all really sensitive data, and so we'd love to work with anybody to do that too, and I'll be contacting you offline, because we're getting an implementation.
M
Okay, this is a draft that is a sibling of the previous one. The idea is to put the probe on the user devices, a smartphone, for example, or a PC, in order to have a point where it is possible to make all the measurements without dedicated hardware and software inside the network; or it can also be an additional point that interfaces to the probes in the network. We can have some advantages, for example scalability: user hardware is plentiful, so it's possible to have many measurements.
M
F
Yes, hello, yeah. This is a brief elevator pitch for a new draft concerning high-precision service metrics. The idea is basically to define a new set of metrics that capture whether or not the service that is being delivered complies with the service level objective. For instance, when you have precision services that require a specific end-to-end latency, you would want to capture whether or not, basically, your flow...
F
You would want to capture the number of packets that were violating an objective, and the number of time units during which the objective was violated over the duration of a service. This basically also gives you second-order measures of precision availability: not just availability, but the time during which a service can be accessed with the agreed-on precision. So, anyway.
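A minimal sketch of that kind of SLO-compliance counting (the metric names and the per-packet latency list are illustrative assumptions, not the draft's normative definitions):

```python
def slo_violation_metrics(latencies_ms, slo_ms):
    """Count packets whose latency violates the SLO, and the fraction
    of the flow they represent."""
    violating = [l for l in latencies_ms if l > slo_ms]
    return {"violating_packets": len(violating),
            "violation_ratio": len(violating) / len(latencies_ms)}
```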
F
A
Okay, Greg, did you have a quick question on this, or a comment?
F
Okay, so I did not... let's put this on the mailing list.
A
Yeah, I think it was referring to the earlier presentation, Alexander, that Greg made on the EPM proposal. So it sounds like, you know, read each other's drafts, and let's discuss on the list how we can combine efforts here.
A
Thank you all. It's, as usual, a full session, but I think we made some good progress today. Thank you to the co-chairs, thanks Marcus for running the time, thanks Ian for all your service, we shall miss you, and thanks to everyone who came. Have a good rest of your IETF if you're going to stuff tomorrow, and we will see you next time.