From YouTube: IETF113-IPPM-20220321-0900
Description
IPPM meeting session at IETF113
2022/03/21 0900
https://datatracker.ietf.org/meeting/113/proceedings/
C: All right, I think we'll just go ahead, then. If there are people on site with Meetecho who can help keep tuning things, that would be wonderful; but otherwise, we'll forge ahead.
C: All right, let's get started. So, welcome officially to IETF 113, in Vienna and virtually. It's our first really hybrid meeting, and so we'll all be learning how to do this; let's have a lot of patience and grace with each other. You are in IPPM, IP Performance Measurement.
C: First, if this is one of your first IETF meetings, or if it's your 30th, we do ask you to Note Well all of the different policies for participation and contribution in the IETF.
C: For the meeting management, for people both on site and remote, we are using Meetecho, and it's working the same way it's been working virtually, for the people who are remote and in person. I believe there is a Meetecho mobile app that you can use for queuing.
C: Please do use that. We can see the camera into the room, but it will be useful to manage everything from Meetecho over there.
C: All right, looking at our agenda: we are spending the first chunk on the primary working group documents that we have that are active. We have a bunch of documents that are in later stages; they're with the IESG, or they are in the RFC Editor queue, where we've been very productive, so thank you to everyone in the working group for that. But today we'll go through some of the protocols, starting with some of the more newly adopted protocol work, and then getting some updates on IOAM, as well as explicit flow measurements and SR PM.
C
Then,
after
that,
we
have
some
shorter
talks
about
some
new
proposed
work
that
has
received
either
a
lot
of
work
on
side
meetings
or
discussion
on
the
test,
and
that's
all
we
have
for
today
any
agenda
bashing
or
should
we
launch
right
in.
F: Hi, Tommy. If you can share the slides, that would help, I think. Okay.
F: Good morning, everybody. So we've got the capacity measurement protocol to talk about this morning. We've got a working group draft, one which we've updated; Len Ciavattone, my long-time colleague, and I are working on this together. We're looking for more help and review from the working group. So, next slide, please.
F: We've got basically a simple setup exchange between the client and the server, and a test activation exchange that follows, which covers the optional testing parameters. In this first phase, we've added a server admission control, a kind of programmed bandwidth check, and we think that'll help with managing the server capacity. The rest happens after the setup exchange and test activation exchange take place.
F: We have the test stream, and that still has a feedback path carrying either the measurements or the commanded test rates to use for the next 50 milliseconds or so. So we've got a continued round-trip relationship going on throughout the operation of the protocol, and we actually set bits in the load PDUs to stop the test: stop one and stop two, from the server and client respectively. That's how we basically turn things down at the end of the test duration.
F: So, thanks for working group adoption. Some new folks joined the working group based on the work in the open source project, and it was good to have their input and review all along the way. In fact, they made good suggestions, like the suggestion to include a randomized payload option, and also how that might be implemented. So we've tested the performance of that, and the code suggests very little compressibility of packet payloads, so that's good. In fact, we got surprisingly low rates in some of the tests where, you know, much higher rates were claimed.
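The compressibility claim can be sanity-checked with a quick sketch. zlib is assumed here as the compressor and the payload size is illustrative; the actual udpst code may test this differently.

```python
import os
import zlib

def compress_ratio(payload: bytes) -> float:
    """Compressed size divided by original size; lower means more compressible."""
    return len(zlib.compress(payload, 9)) / len(payload)

PAYLOAD_SIZE = 1400  # illustrative UDP test-packet payload size

random_payload = os.urandom(PAYLOAD_SIZE)  # the randomized payload option
zero_payload = bytes(PAYLOAD_SIZE)         # a trivially compressible payload

# Random payloads show essentially no compressibility, while all-zeros
# payloads shrink to almost nothing, which is why compression along the
# path could otherwise inflate measured rates.
print(f"random: {compress_ratio(random_payload):.3f}")
print(f"zeros:  {compress_ratio(zero_payload):.3f}")
```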
F
And,
as
I
mentioned,
the
server
can
set
a
bandwidth
limit
for
admission
control.
That's
great!
When
you
have
you
know
some
clients
trying
to
test
five
gigabit
per
second
services
and
some
clients
trying
to
test
25
megabit
per
second
services.
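The admission idea, as described, amounts to the server refusing a test whose requested rate would oversubscribe a configured limit. A toy sketch follows; the class, names, and numbers are illustrative, not from the draft.

```python
class AdmissionControl:
    """Toy bandwidth-based admission check, loosely modeled on the server
    admission control described in the talk."""

    def __init__(self, capacity_mbps: float):
        self.capacity_mbps = capacity_mbps
        self.allocated_mbps = 0.0

    def try_admit(self, requested_mbps: float) -> bool:
        """Admit a test only if it fits under the configured limit."""
        if self.allocated_mbps + requested_mbps > self.capacity_mbps:
            return False  # would oversubscribe the server
        self.allocated_mbps += requested_mbps
        return True

    def release(self, mbps: float) -> None:
        """Return bandwidth when a test finishes."""
        self.allocated_mbps = max(0.0, self.allocated_mbps - mbps)

server = AdmissionControl(capacity_mbps=10_000)  # a 10 Gb/s test server
assert server.try_admit(5_000)      # a 5 Gb/s client fits
assert server.try_admit(25)         # a 25 Mb/s client fits alongside it
assert not server.try_admit(6_000)  # this one would oversubscribe
```

A real server would also have to time out stale allocations; this sketch only tracks the running total.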
F
You
don't
have
to
allocate
five
gigabits
to
everybody,
big
big
savings.
There
we
have
an
optional
stop
for
start,
I'm
sorry
start
rate
in
the
load
adjustment
algorithm.
Now
the
fixed
rate
option
remains
so
that
basically
means
that
if
you're
trying
to
achieve
gigabit
rates,
you
might
start
at
500
megabits
and
test
your
search.
Your
way
up
from
there
also
we've
got
backward
compatibility.
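The start-rate behaviour just described, starting at 500 Mb/s and searching upward, can be caricatured as a trivial additive search. The draft's actual load adjustment algorithm is considerably more involved; every name and step size here is made up for illustration.

```python
def next_rate(current_mbps, loss_seen, step_mbps=100,
              floor_mbps=1, ceiling_mbps=1000):
    """One step of a toy load-adjustment loop: step the offered rate up
    while the path looks clean, step it down when loss is seen."""
    if loss_seen:
        return max(floor_mbps, current_mbps - step_mbps)
    return min(ceiling_mbps, current_mbps + step_mbps)

rate = 500  # the optional start rate, e.g. 500 Mb/s when aiming for gigabit
history = []
# Each feedback interval (~50 ms in the protocol) reports whether loss
# was seen and commands the next test rate.
for loss in [False, False, False, True, False]:
    rate = next_rate(rate, loss)
    history.append(rate)
print(history)
```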
F: One of the points was consistent terms for test activation and the request and response; that came from review feedback. Also, the clarifications about test stop. And there are even more error codes now, so we're keeping the field byte-sized; a bitmap would be much bigger, I think, if we tried to implement that. And in Section 4, I took out all the parameters that basically referred to, or were redundant with, RFC 9097, the metrics and methods. We've basically trimmed down our proposal to four security modes: the two that we've got implemented, unauthenticated and password, and we've got a secure setup exchange, which would just secure the first part of the setup, and then, you know, the classic secure-all-the-things; that's the last mode's purpose.
F: Okay, so the next steps: the authors welcome proposals and revisions to the security modes. With working group adoption, we can now ask for an early SECDIR review; that may help us solve that one in one go, and that would be really cool.
F: You know, as we've learned in past meetings, there's no silver bullet, but maybe if we make this simple, we can get a good, simple answer. Obviously, one of the things I talked about last time, back in November, was that this protocol can do more than measure capacity.
F: I mentioned the four modes; if you want more modes, say so. And protocol 9 allows for a new load adjustment algorithm with more robustness to all sorts of problems, because the feedback we got is, people said: look, you guys say you're measuring maximum capacity, so please always do that; even if you have to put more load in the channel, do it.
F: So that's what we heard, and that's what we're working on. I've got three minutes to let people ask questions. Thanks.
C: All right, Frank, were you in the queue? ("I'm trying to understand how to queue.") Yeah, you are able to unmute yourself; you can also click the hand, but it's fine, go ahead, please.
A: I have a very simple question for Al. So, Al, you mentioned that there is an open source implementation; could you drop us a pointer to that one? ("Yeah, of course, Frank.")
A: A reference, so I'd be... well, I might have somebody that wants to go and use it.
F: Oh, okay! All right. You know, I'll drop it in the meeting minutes, so it'll be there. But it's an Open Broadband project on GitHub.
C: All right, Will, were you trying to get in the queue, or no? Okay, cool. Thank you. All right, any other questions? Next.
C: Okay, I'm not seeing anyone else getting in the queue. Thank you, Al, for this. So it sounds like there are a couple of actions that the chairs have.
J: Okay, hello, everyone. I hope you can see my slides being presented.
J: Yes. Okay, so, welcome: this is Responsiveness under Working Conditions. I am going to present the changes that have been happening in the draft, and the main discussion points that came up during the working group adoption.
J: But before I go into the draft, I have two parts to the presentation, one of which is on the implementation experience. The first part is on the server side.
K: Hello, hi. Hopefully I can be heard; it looks like it. Okay, hi. At the network quality server repository, we have sample configurations for Apache Traffic Server, Apache httpd, and nginx.
K: The Apache Traffic Server configuration in that repo is the same implementation that the Apple CDN uses to serve traffic for the networkQuality tool from Apple. Next, I'd like to introduce Will Hawkins, of the University of Cincinnati. Particularly, Will:
L: Thanks, that was quick and succinct; I like that.
B: We're having a little trouble in the room here; the slides aren't showing on the screen for some reason, and we're trying to diagnose that.
C: You should be granted... you should be able to see a way to select which deck you want. Yeah, there you go.
L: Yeah, I'm sorry; I hope nobody screwed anything up there. But the implementation is in Go, and it has been tested as far back as Go 1.16. Presumably it works with earlier versions; I just haven't tested with them, so it's not that it won't work, but that's sort of our backwards version.
L: We have had successful interoperability with Apple's implementation, as Randall just talked about, the one that's not in Go. So we've had successful interoperation with that one, and also with Apple's public measurement endpoints that are out there, which are the same ones that are used by the macOS and iOS clients.
L: The to-do on the interoperability right now is to do calibration of the RPM measurement between the networkQuality client that's in macOS and iOS and our version, so we're currently working on that.
L: Just a few lessons learned, before I turn it back over to Christoph. One of the really important lessons that we learned was how to ensure that Go's HTTP API opens new HTTP connections.
L: One of the other big lessons that we learned, which took a long time to overcome, is that Go's HTTP API aggressively pools TCP connections, and that prevents us from saturating the bandwidth in the way that we really want in order to do saturation.
L: Finally, as usual with any implementation and interoperation work, the goal is to take advantage of this experience to clarify ambiguities in the protocol and get those rounded out, and that's one of the things that I think Christoph will talk about as he goes into the protocol itself and the changes that we've made. So I don't want to steal his thunder, but feel free to ask as we go forward, at the end of the talk.
J: Thank you, Will, that was great. So, over to the major changes in the IETF draft. Thank you very much for the working group adoption. During this presentation, I want to address mostly the two biggest discussion points, which were around what "working conditions" means and how we can interpret responsiveness results.
J: Additionally, in terms of the changes to the draft, there were a lot of minor tweaks in the wording, and clarifications, thanks to many contributors, particularly Will from his implementation experience, to make it more clear how the protocol actually works.
J: So, the first big discussion point was: what does "working conditions" actually mean? In that sense, I clarified the definition in the working conditions section, and I want to revisit this here and open it up for any discussion that may come up. So, the goal of the working conditions:
J: We are trying to explore how the network behaves when it is under traffic patterns that end users actually generate. And so we use HTTP/2, or HTTP/3 in the future, with standard congestion controls; that way, we create the realistic part of the network responsiveness working condition. And the way we push it to the worst-case scenario is by creating multiple bulk HTTP requests.
J: However, we want to create what we call a stable bufferbloat situation, so that we can actually measure it over a certain duration of time. We create this kind of stable bufferbloat situation by creating multiple HTTP requests, and so we really push the network into a worst-case scenario. But even by creating those multiple bulk HTTP requests,
J: we remain a realistic traffic pattern, because it's very similar to when you receive a message or send an email with multiple large attachments; it's the same scenario.
J: Now, that is true, definitely, and each network element between your client and the server has the potential to expose bufferbloat. However, bufferbloat can also happen in the end hosts: the entire networking stack, from IP all the way up to HTTP, can be subject to bufferbloat, and so each of these layers in the networking stack has the potential to create bufferbloat. And because our methodology is using HTTP, we may expose these kinds of bufferbloat as well. It can be argued whether that is intentional or not, and in our case it actually is intentional, because we want to measure responsiveness the way the end users are experiencing it.
J: So, one of the questions during the adoption call was: well, let's say I'm measuring responsiveness, and I create this load-generating connection between the client and the server, here in red, and it's filling the pipe and exposing bufferbloat. On the right side, with the blue boxes that I drew here, let's say the bufferbloat is happening in these sections: the HTTP, the TLS, and the TCP connection.
J: Now, as a measurement user, how would I be able to root-cause it and identify that the bufferbloat is happening there? Well, one way it is possible to do this is because we create responsiveness probes not only on the load-generating connection.
J: So, does anybody have any questions or suggestions around these replies to the comments during the adoption call?
C: Yeah, so please get in the Meetecho queue if you have any questions. Ignacio, I do see that you have some comments or questions in the chat; I'd be happy to hear those as well. And Stuart, who's in the queue.
J: So, the real traffic: we create HTTP bulk data transfers, so we do an HTTP GET for a very large file and, depending on the server implementation, we actually recommend this large file to be basically an infinite response. That is our way to create real traffic, because an HTTP GET for a large file is what happens when you are downloading a large attachment from an email. So this is, as far as we see, the closest we can get to a realistic traffic pattern.
G: Okay. My question is related to the fact that, normally, the real traffic inside of a network depends not just on one client; it depends on several clients, multiple clients, and this is the part which is really difficult to imitate.
J: Thank you for this question. So, I agree: imitating multiple different clients, and by clients I mean different devices, is unfortunately not possible with this kind of test. We would need to have some inter-device synchronization and communication to start the test and then have them all create these kinds of HTTP bulk data transfers at the same time, and doing that is not possible without a protocol to talk between the devices; from our perspective, it's currently out of scope for this draft.
J: You had a second question around TCP and how it reacts to packet loss and so on. That is the reason why we create multiple TCP connections: so that when one TCP connection experiences a packet loss, the others are still going and sending at full speed. All right, and we have...
F: Go ahead. Okay, thanks; thanks for your updated draft, Christoph, and to everyone for your work and implementation. A couple of questions on your summary, trying to close the discussion on working load and responsiveness.
F: Here, I read the draft again yesterday, quickly, and I noted sort of the use of "capacity", you know, the capacity of the link, alongside words like "saturation"; and then the real test is based on TCP maximum, or goodput. So there are still, obviously, some different terms and some different ways to interpret what the working load conditions could be.
F: For example, in the metrics and methods that the protocol I just talked about supports, that's a maximum IP-layer capacity. So there's still, I think, a little ambiguity in the terminology that you might be able to root out. Also, I didn't notice in the draft any discussion of the effects of congestion control algorithms: you know, sort of, the older ones are more likely to fill the buffers, and some of the newer ones have the goal of not doing that.
F: So you're going to get different levels of working conditions from those, and, as Ignacio says, with multiple clients you're going to get a mixture of those congestion control algorithms, potentially. So there are some things to talk about here, and, unfortunately, I don't think we can shut the door on all these discussions quite yet. But thanks for putting your work together; I appreciate it.
J: Thanks a lot for your feedback, Al. Yeah, we'll definitely try to clean up some parts of the capacity wording; I'll take your feedback and do another pass on the capacity wording, making sure that we use the right terminology there. And I actually like the suggestion around having a discussion about congestion controls; I'll add a section for that as well.
C: Well, thank you very much, and I believe next we have...
O: This slide provides more details on the discussion of point one. In a limited domain, as defined in RFC 8799, this document is not needed if both preconditions exist; the first one is that a control entity that has control over every IOAM device is deployed.
O: They can define and enable the namespace IDs, and, for each enabled namespace ID, define the prefix, which is IPv6.
C: All right, thank you. Do we have any comments, questions?
C: All right, I'm not seeing anyone come into the queue, so the chairs will discuss and then get back to you on that.
M: ...the draft. So, about the data integrity: we have clarified the scope of the document. Basically, now the integrity protection is on the IOAM data fields, not including headers, for the reasons we have shared on the mailing list. As a consequence, the algorithms were rewritten to be more generic, so they work for the currently defined IOAM option types and for future ones. And also, as a direct consequence:
M: The Direct Export option type is not included anymore, because it doesn't fit: if you look at the definition of that option type, it doesn't contain any IOAM data fields per se. So, if you really want to go that way and have protection, I think we could have a per-hop verification, but I'm not sure you want to go that way; we can just discuss that later on the mailing list. The update also includes some editorial changes.
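The clarified scope, protecting only the IOAM data fields and not the headers, can be illustrated with a generic MAC. HMAC-SHA256 here is a stand-in, not necessarily an algorithm the draft specifies, and all the field values are made up.

```python
import hashlib
import hmac

def protect_ioam_data(key: bytes, data_fields: bytes) -> bytes:
    """Compute an integrity tag over the IOAM data fields only; the option
    header is deliberately outside the protected scope."""
    return hmac.new(key, data_fields, hashlib.sha256).digest()

key = b"shared-ioam-key"           # hypothetical symmetric key
header = b"\x01\x02\x03\x04"       # IOAM option header: NOT covered
data_fields = b"node7|ingress|ts"  # IOAM data fields: covered

tag = protect_ioam_data(key, data_fields)

# A verifier recomputes the tag over the data fields alone: modifying
# the header leaves the tag valid, modifying the data does not.
assert hmac.compare_digest(tag, protect_ioam_data(key, data_fields))
assert tag != protect_ioam_data(key, b"node7|ingress|TAMPERED")
```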
C: I guess, just for myself, regarding further reviews: do we want to ask for any particular external reviews, SECDIR reviews, etc.?
A: Given that this document is really a document that was inspired by the working group originally, right, where people said: well, even though we're in a limited domain, we do want to have a dedicated way to ensure integrity, above and beyond what you can do with the underlying transport; if you're riding on top of, say, v6, you would be able to use Authentication Headers and the like. And so we would really appreciate hearing back from the people that were originally driving the questions,
A: whether the document in its current shape actually meets what they had in mind. I think that's more what we were asking this review to be, so that at some point we can call the document done and then progress it.
A: Okay. I think it's not so much a security review, because I think the methods that we are using are well known and state of the art; there's nothing really new or inventive here. It's more like applying a well-known answer to the problem. But let's see whether people consider it usable and useful in the context of the original problem that people had in mind.
A: My bad, right. So, the document has been pretty stable; we've not really heard anything back. And, sorry, gentlemen, I forgot to add the BIER reference. Or, no, I didn't forget the BIER reference; I forgot to publish the draft, the -01 version, before the cutoff, which is why I added the GitHub reference. I'll push that out today.
A: If we get more feedback during the week, maybe towards the end of the week or early next week, if there are more things that people want, or have experiences with from an IOAM deployment perspective, that they want to see represented in the draft, I want to keep it rolling for a while.
A: We could go for last call, but I do think that if we mature it over the course of another two or three IETF meetings, it might help everybody, given that it should aggregate all the deployment experience that we have with IOAM. And thanks to Justin for coming up with an even more mature implementation in the kernel; we also have an update on VPP.
A: So maybe we get more experience with even the open source implementations, and we can fold that in. But for now, I've not really heard anything back, so keep the feedback coming, to the list or to the implementers. Thanks.
I: ...so that the monitoring system can interpret the IOAM data. And, secondly, we aligned with the latest IOAM data draft. On port configuration, the user needs to augment this module for the configuration of a specific port type. And then some YANG module issues: we reduced all the prefixed YANG identifiers, added more descriptions to make the draft more readable, updated the security considerations, and cleaned up some nits. Next:
I: The IOAM drafts are stable, and this YANG module is already aligned, stable, and mature, so we would like to ask for a YANG doctor review and also a working group last call.
R: In this case, we have the measurement of all the packets from one side, and a percentage of the packets from the side with more packets, and the one-way measurements that are made in the best options. In our opinion, using the Q bit, the square bit, is common to the options we can have for the first part of the measurement, and there are two different bits for the second part of the measurement: the loss event bit and the reflection square bit.
R: Okay. The number of groups and companies working on this topic has grown since last time, because the Technion joined us in this kind of work; a colleague from Huawei managed the contact with that research institution. So we are three universities, three different research groups, exploring this kind of methodology for measurement in the network.
R: Last update: at this meeting we don't present draft updates, because we are waiting on the very extensive revision that Ike Kunze from Aachen University finished only last Thursday, and so we are awaiting his revision.
I thank Ike for his work and the work of his research group at the university, because he implemented all the algorithms described in this draft. So we have a new implementation, totally separate from the others; he thereby also tested the clarity of the description of the algorithms, and he suggested some updates.
R: The main two are about the R bit description, the round-trip packet loss: he asked to clarify better the token mechanism that keeps the throughput of the measurement equal in the two directions, and the discussion about the trade-off on the duration of the measurement. Because if the period of the measurement is longer, we have fewer measurements, and if it is shorter, we measure fewer packets for the packet loss; so there is a trade-off, and there was a bit of discussion that it is better to describe in the draft.
R: There are many options; it depends on the protocol. For example, some protocols have only two bits available, so only one bit can be used for packet loss. That is a bit of a problem, because with one bit the loss measurements are less precise. For the delay, one bit is sufficient, because both the spin bit and the delay bit, also in the hidden version, need only one bit.
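The one-bit loss signal under discussion, the Q ("square") bit, works by toggling a marking bit every fixed number of packets, so that an on-path observer can count packets per half-period and infer losses from short blocks. A toy sketch follows; the 64-packet period and all names are illustrative.

```python
def q_bit_stream(n_packets, period=64):
    """Sender side of the 'square' signal: the Q bit alternates between
    0 and 1 every `period` packets."""
    return [(i // period) % 2 for i in range(n_packets)]

def observed_loss(bits, period=64):
    """On-path observer: split the received bit sequence into blocks of
    identical bits; any complete block shorter than `period` indicates
    that many packets were lost. The trailing (possibly incomplete)
    block is ignored."""
    blocks, count, current = [], 0, bits[0]
    for b in bits:
        if b == current:
            count += 1
        else:
            blocks.append(count)
            count, current = 1, b
    blocks.append(count)
    return sum(period - n for n in blocks[:-1])

sent = q_bit_stream(256)            # four full marking periods
received = sent[:100] + sent[103:]  # drop three packets mid-stream
print(observed_loss(received))
```

Losses in the final, still-open block are invisible to this observer until the next toggle arrives, which is one reason the talk notes that one-bit loss measurements are less precise.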
R: So, the conclusion is that there is quite a lot of work on this topic. There are some sibling drafts in the IPPM working group, one in particular that is about putting the probe not in the network, as we thought at the beginning of the work, but in the end-user device, in order to have an end-to-end measurement in a very simple way, also for methodologies, like the Q bit, that are not end-to-end methodologies.
R: So it's very convenient: it's possible to measure end to end, and, combining this measurement with probes in the network, it is possible to split the measurement and locate the problem. Other sibling drafts have been presented in other working groups, in the CoRE working group in particular; the last one, in the past, was presented in the QUIC working group, and there is one draft also with a TCP proposal.
R: There is a summary; it is not a summary from all the authors of the working group, excuse me, of the draft, but only from our company.
R: There is, in our opinion, a little bit of difference about the packet loss, because for QUIC we prefer the Q bit and the L bit, and for TCP the Q bit and the R bit. The L bit is a simpler implementation, and there is less measurement delay; the R bit detects losses also for all acknowledged TCP packets, and it is protocol independent, so it does not depend on the implementation of the protocol. So there are strengths and weaknesses in both, but, in our opinion, for QUIC the L bit is better as the second loss bit.
C: Not seeing anything immediately. One question I did have, on the drafts in other working groups and the discussion of how to apply this:
C: I think the one in QUIC was a while ago, and the one in CoRE just happened. Is this something that looks like it will get adopted in any of these working groups? Are we going to see this progress further?
R: No, because we decided to start from IPPM, also with some material that agrees with this approach, and afterwards to put this idea inside the specific protocol working groups. For CoAP, the draft is active: we presented it in the last interim meeting, where there was quite good interest from the working group. For QUIC in particular, it was presented in the past.
C: Do we want to essentially publish this without any adopters, necessarily? Or, if we had some adopter lined up that we thought would come in relatively soon, then we could see if there's any feedback from that protocol group before we finalize and publish this document. But that's not necessary.
P: Let me see if I can do it. There we go.
P: Okay, perfect. Hi, everyone, my name is Rakesh Gandhi, and I'm presenting the STAMP extensions for SR networks, the recent updates to this draft, on behalf of the authors listed here.
P: So, in the recent revision we have updated the usage of the flags around the destination, whether a reply is required or not required.
P: So there it is either zero or one. We added a sub-TLV for SRv6, a structure sub-TLV, plus some small, minor editorial changes, and we have no open issues currently. The structure SRv6 segment sub-TLV basically identifies the structure of the 128-bit SRv6 SID: of the 128 bits, some bits can be for the node, some for the function length, and some for the... So it just identifies that; this is in line with other drafts and RFCs for SRv6.
P: So, there is some work going on as well in another working group: the IPPM has done a great job coming up with STAMP, and there are some extensions, and there's some work in SPRING.
P: There is also an enhanced SR PM draft in SPRING, as well as work for a pseudowire extension in the MPLS working group. So we would appreciate your review comments.
P: So there is some interest in doing interop testing for the extensions in this draft, so we are interested in early allocation of the code points, and we'd like to make a request for that. We welcome your comments and suggestions on the draft, and that's all I have. Any comments, questions?
F: Can you go to wi-fi?
C: All right, Rakesh: for the IANA assignment, can you send a note to the chairs, an email to the chairs, about that, or on the list? Yeah.
C: Okay, thank you. Yeah, just speak up. Do you want me to share the slides? Okay.
S: Thank you. Okay, so I'm talking about PDMv2; we are on the -02 draft. Next slide.
S: We presented it at IETF 112 in the IPPM working group and had a side meeting where we explained our Linux implementation and how it's working. Recently, we have been working on the lightweight registration protocol, which is a simple protocol that goes along with PDMv2, where we try to authenticate, authorize, and generate a shared context, well, with the primary secret.
S: Basically, that's what we have been working on. This morning, we had another side meeting where we showed the recent progress on the registration protocol, and yesterday we got a chance to present it at the hackathon as well.
S: We have a primary client, primary server, secondary client, secondary server architecture, and, yeah, the draft mentions the rationale for it; this is a summary of what is in our appendix. We give one possible way of doing the registration protocol, and we keep the option open for enterprises to either use this or have their own registration protocol. So there is a flow between the primary client and the primary server where, well...
S: Then there is the HPKE KEM, the key encapsulation mechanism, where we do the encapsulation and share the encapsulated key, and the primary server will basically do a decap and generate the same secret on its side.
S: Well, this secret is later shared with the secondary server and the secondary clients. We had received some comments in the side meetings about sharing the secret with the secondary clients, and we have taken care of that by generating client-specific keys. So what the primary client does is generate the client-specific...
S: (Keep the mic up to your mouth.) So, what the primary client does is: it generates the client-specific keys by doing a KDF with the info parameter set to the client IP, and so generates the specific secondary client keys. And for sharing all these keys, we are using TLS as the means. So, yeah.
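The per-client key derivation described here, a KDF keyed with the primary secret and taking the client IP as the "info" input, can be sketched with a minimal HKDF (RFC 5869). The field values are placeholders, and the draft's actual KDF parameters may differ.

```python
import hashlib
import hmac

def hkdf(key: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal HKDF (RFC 5869, SHA-256, zero salt): extract, then a single
    expand step, enough for one 32-byte output block."""
    prk = hmac.new(b"\x00" * 32, key, hashlib.sha256).digest()   # extract
    return hmac.new(prk, info + b"\x01", hashlib.sha256).digest()[:length]

primary_secret = b"secret-from-HPKE-decap"  # placeholder value

# Feeding each client's IP in as the KDF 'info' parameter yields a
# distinct key per secondary client from the one primary secret.
key_a = hkdf(primary_secret, info=b"2001:db8::a")
key_b = hkdf(primary_secret, info=b"2001:db8::b")
assert key_a != key_b
```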
S: This is the registration protocol which we have added in the draft as of now. Yesterday, at the hackathon, we were discussing how to add authentication and authorization, and we had a brief implementation of that as well; today we'll be presenting it at the hackdemo. So, yeah, next slide.
C: All right, thank you for the presentation. Thanks.
C: A question on my end, actually: since we're not there in person for the side meetings and the hackathon, how many people are you having at the side meetings and the hackathon projects? How many people are engaged on this right now?
T: You guys can all raise your hands. We have Tommaso, who's a cryptographer from the University of Florence, and then we have Mike Ackermann, who is from an enterprise and who's been involved with us all along. And so, I think, at the hackathon...
T
There
were
what
maybe
I
think
there
were
50
60
people
at
the
at
the
hackathon
itself
that
we
presented
to,
and
I
think
at
this
side
meeting
it
was
a
little
early
in
the
morning,
so
we
we
had
a
little
less
attendance
than
we
would
have
wanted,
but
the
other
side
meetings
I
think
in
the
first
one
we
had
at
least
oh,
I
will
say
30
40
people,
second
one,
maybe
I
think
20,
including
some
very
good
cryptographers,
because
that's
what
we
were
concerned
with
is
the
the
you
know.
T
This
is
sensitive
information
and
mike,
I
don't
know
if
you
want
to
come
up
and
talk
about
the
enterprise
use
of
this
oh
you're,
in
the
queue
already
using.
F
S
Q
S
Q
Q
I think that when we enterprises finally do get IPv6 networks, which we're dragging our heels on terribly right now, we will have other pieces of information in other extension headers, and I would not like to see a different solution for each one of those that we're going to deploy in the future. So those are my three points, and we're excited about this development. I hope it can continue.
T
Yeah, thank you so much, Mike. And one thing I'm going to just put a boost out for: Mike and I have been working with a lot of the federal government in the United States to help them do their IPv6 address planning, because IPv6 planning at enterprises has lagged a great deal. And we're hoping actually to get some of these federal agencies to talk next time, but we're working with at least three different agencies.
T
F
Tom. Thanks, Nalini, Mike, and your team for this. It seems to me that you guys were extremely lucky to get the original PDM through the IETF and approved without encryption.
F
Just good timing, I guess. And, you know, in today's environment it makes a lot more sense to have an encrypted version of it. So, just offering my support for this direction... draft a long time ago... you know, I appreciate that they've got a lot of work to do here, and it should be interesting to see it complete. Thanks.
B
Martin Duke, Google, no hats on. Yeah, encryption's good, so thank you. If we do adopt this, I think it'd be good to get really early sec area review of it. I don't know... I mean, you've mentioned some cryptographers a bunch; I don't know who those people are, but we should probably run this through the sec area sooner rather than later.
C
Great, that sounds good. I was thinking the same thing: that it would be something good to adopt and then get a very early sec area review of the use of HPKE.
C
B
U
Yes, thank you. So let's first start with the hybrid two-step collection and transport method. Next slide, please.
U
So what is the goal of this protocol?
U
It's a method to collect and transport on-path telemetry information. It can equally be used in point-to-point or point-to-multipoint cases, and the point-to-multipoint case is a tricky one because, as you understand, the replication of packets, if a packet includes telemetry information, inherently leads to replication of upstream-collected telemetry. In some environments that is not a big concern, but in some networks, especially for services that are premium services with guarantees, like reliable low-latency services, that becomes challenging.
U
By separating the moment of originating, or generating, telemetry information from its collection and transport, we can achieve more accurate measurements.
U
Also, with the hybrid two-step, we are removing the limits on the amount of information that can be collected for the monitored flow.
U
So we know that even if the network uses jumbo frames, that still puts some limits on the amount of information that can be carried embedded in the trigger packet.
U
And also, because this is a separate mechanism and the packets are not the data packets, integrity protection can be applied to this information without affecting the accuracy of the measurement. Because, again, if we take a measurement and then we apply integrity protection, then...
S
U
Another advantage, or what can be seen as an advantage, of this method is that the collection can be done out of band, which means that it follows the topological path but uses a different class of service, so as not to use the bandwidth allocated for the data flow that is monitored. And again, in many environments, in many cases, that would be advantageous because it would not use the premium bandwidth. Next slide, please.
U
So this is the outline of the follow-up packet.
U
We had discussions and we added some fields for ease of parsing, and it's expected that it's processed at the same nodes that are generating information. Each node, when it receives the trigger packet, originates information and holds it according to the local policy for the follow-up packet. One of the options this gives us is that we can export raw measurements for each trigger packet, or measurements can be statistically processed locally and then collected using the follow-up packet. Next slide, please.
U
Hybrid two-step can be used with IOAM as another trace option, and so we are proposing to allocate an appropriate flag.
U
So that's in the IOAM data profile field.
U
U
The transit ingress node originates the follow-up packet, and then each transit node adds more information. If the MTU is about to be exceeded, it uses encapsulation of the preceding follow-up packet and generates an additional packet, and then they arrive at the egress node. Next slide.
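The per-node behavior just described, append your telemetry record to the current follow-up packet and start a fresh packet when the MTU would be exceeded, could be sketched like this. The MTU, per-packet overhead, and record sizes are invented for illustration; the draft defines the actual formats:

```python
from dataclasses import dataclass, field
from typing import List

MTU = 1500            # illustrative path MTU, not from the draft
HEADER_OVERHEAD = 40  # illustrative per-packet header cost

@dataclass
class FollowUpPacket:
    records: List[bytes] = field(default_factory=list)

    def size(self) -> int:
        # Current on-wire size: header plus accumulated telemetry records.
        return HEADER_OVERHEAD + sum(len(r) for r in self.records)

def add_node_telemetry(packets: List[FollowUpPacket], record: bytes) -> None:
    """Append this node's telemetry record to the current follow-up
    packet, opening a fresh packet if the MTU would be exceeded."""
    if not packets or packets[-1].size() + len(record) > MTU:
        packets.append(FollowUpPacket())
    packets[-1].records.append(record)

# Each transit node adds a record as the follow-up packet traverses the path.
packets: List[FollowUpPacket] = []
for _node in range(40):
    add_node_telemetry(packets, b"\x00" * 64)  # 64-byte record per node
```

With these illustrative sizes, a long path simply spills into a second follow-up packet instead of being truncated, which is the limit-removal property the presentation points to.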
U
So that was the case for point-to-point, and this diagram reflects the theory of operation for point-to-multipoint. Because we identify the flow, we don't have to copy the packets, and the replication node only generates a new follow-up packet that collects telemetry information on the downstream of the multicast tree. Next slide, please.
U
And this is a very interesting mode that was suggested by Pascal, in which it works upstream: the follow-up packet is generated by the egress node for the flow and then sent on the return path, tracing the same path as the monitored flow back to the ingress. That might be useful for cases when the ingress node needs to be aware of the performance and can influence the network by selecting certain scenarios; one of the possible scenarios was wireless.
U
I think that's the next slide. So we welcome comments, and we will appreciate the working group's consideration of adoption.
U
Okay, so: precision availability metrics for SLO-governed end-to-end services, or services that have multiple SLOs. Next slide, please.
Please.
U
This
is
not
really
a
new
document,
as
you
see,
so
this
is
a
merge
of
two
documents
that
we've
discussed
in
the
course
of
several
meetings.
Next
slide,
please.
U
And so, what are the precision availability metrics?
U
U
U
PAM, or precision availability metrics, can be used to determine the degree of compliance with which a service is delivered versus the contract between the operator and the client.
U
It can also be used to provide the service according to the SLO, whether for accounting and, of course, billing, or to continuously monitor the quality with which the service is delivered. So, what do we propose to include in these metrics? Next slide, please.
U
The key element is the time unit, or PAM interval. We then differentiate the errored interval, an interval in which the metric exceeds the predefined optimal thresholds, from the error-free interval, in which the performance does not exceed the optimal thresholds and no defect is detected.
U
Within the errored intervals, we can identify the severely errored interval, which we propose to define as an interval in which the performance metric exceeded a previously defined critical level, or a defect was detected. You can notice that a severely errored interval is a subset of the errored intervals.
U
S
U
Based on counts, we can go to the timing: the time since the last errored interval, the mean time between them, the number of packets since, and the mean number of packets, and analogously for the severely errored interval. Next slide, please.
U
So a lengthy disruption can give us a state: if we have consecutive severely errored intervals, we can define this state as unavailability, and an unavailability state begins with the start of the first of ten consecutive severely errored intervals.
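As a rough sketch of the taxonomy just described (errored when the metric exceeds the optimal threshold, severely errored when it exceeds the critical threshold or a defect is detected, and unavailability starting at the first of ten consecutive severely errored intervals), consider the following. The thresholds are invented example values; only the structure reflects the presentation:

```python
from typing import List, Optional

OPTIMAL_THRESHOLD = 10.0   # illustrative optimal delay threshold (ms)
CRITICAL_THRESHOLD = 50.0  # illustrative critical delay threshold (ms)
CONSECUTIVE_SEI_FOR_UNAVAILABLE = 10

def classify_interval(metric: float, defect: bool = False) -> str:
    """Classify one PAM interval; severely errored intervals are a
    subset of errored intervals, so they are checked first."""
    if defect or metric > CRITICAL_THRESHOLD:
        return "severely-errored"
    if metric > OPTIMAL_THRESHOLD:
        return "errored"
    return "error-free"

def unavailability_start(intervals: List[str]) -> Optional[int]:
    """Index where an unavailability state begins: the first of ten
    consecutive severely errored intervals, or None if never."""
    run_start, run_len = 0, 0
    for i, kind in enumerate(intervals):
        if kind == "severely-errored":
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len >= CONSECUTIVE_SEI_FOR_UNAVAILABLE:
                return run_start
        else:
            run_len = 0
    return None
```

Counting and timing metrics (time since last errored interval, mean time between them, and so on) would then be simple aggregations over the classified interval sequence.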
U
U
U
So we identified items for further discussion and future work that will be outside the scope of this draft, and we welcome inputs, contributions, and collaboration. Next slide, please. So, we welcome comments and discussion, and we think that, as a merge of the work that's been discussed, we would like to ask the chairs to consider working group adoption.
C
All right, thank you, and thank you for your work on merging this; I think that's become more clear, certainly, as part of that. Do we have any quick questions from the group?
C
All right, thank you, Greg, for both of those presentations, and thank you again. If you have comments or thoughts, please bring those to the mailing list. Thank you. All right.
C
Then next up we have the OWAMP/TWAMP on LAG documents.
I
I
LAG provides the ability to bundle physical links into a single logical link. You should... sorry, in general...
I
C
China, do you want to try to send audio again? Or, if you're not able to, could you drop a note in the chat? Oh, here we go.
C
All right, maybe let's switch over.
S
C
Okay, it sounds like we're having issues. Frank, let's switch over to your documents, if that's okay.
A
A
A
The first one is on using an EtherType, a protocol identification, to carry IOAM data, and that is largely for protocols like GRE, or we can use it for Geneve as well. The second document is for raw export of IOAM data: take the IOAM data blob and export it using IPFIX, a very simple method, so that we at least have one standard means to go and get the data out. The two drafts are old and mature; both of them started in 2018 and they've been tagging along.
A
They expired at some point, and I refreshed them just recently. The first one is raw export. As I said, it just takes the IOAM blob, encapsulates it into IPFIX, and then ships it off. There are a few nuances being added, a few new data fields.
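The raw-export idea, treating the IOAM blob as opaque and shipping it inside IPFIX, could be sketched as below. The 16-byte message header and 4-byte set header follow the RFC 7011 layout, but the set ID is an arbitrary example and template handling is omitted, so this is an illustration of the encapsulation shape rather than the draft's actual encoding:

```python
import struct
import time

IPFIX_VERSION = 10  # IPFIX message header version per RFC 7011

def ipfix_message(set_id: int, record: bytes, seq: int, domain: int) -> bytes:
    """Wrap one opaque record (e.g. a raw IOAM data blob) in a minimal
    IPFIX message: message header, one set header, then the record."""
    # Set header: Set ID (2 bytes) and total set length (2 bytes).
    set_header = struct.pack("!HH", set_id, 4 + len(record))
    body = set_header + record
    # Message header: version, length, export time, sequence number,
    # observation domain ID (2+2+4+4+4 = 16 bytes).
    header = struct.pack("!HHIII", IPFIX_VERSION, 16 + len(body),
                         int(time.time()), seq, domain)
    return header + body
```

The point of the sketch is only that the exporter never parses the IOAM blob; it rides as an opaque payload behind standard IPFIX framing.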
A
We would not even need to do an RFC for that in order to get the additional code points, but I think it's better if we do, because that means people are aware of it and it's well documented how it's meant to be done. The second thing is a draft that Brian Weis wrote originally. A couple of protocols use an EtherType to identify a particular header; Geneve is
A
one, and GRE is another example, and that can be used to carry IOAM data fields. People felt this is a simple way, at least, to get v4 support for IOAM using a GRE header.
A
B
Martin Duke, Google. Can you explain the difference in the use case between raw export and direct export?
A
A
Direct export is IOAM information, or flags, that tell a node to extract the data and then eventually ship it. How it's to be shipped, direct export doesn't tell you, right? So direct export is about flags in IOAM on the wire that tell a node what to do. Raw export is the wrapper: some means in IPFIX to give us a, quote, standardized effort to get the data off the node.
A
I'll put the question one more time to the list, and we can decide accordingly. Like I said, I'm not the author of those two drafts, but I do see value; but if I'm the only one, that doesn't make sense.
C
Right. I guess as a follow-on to that, Frank: do you know if those authors are willing to carry on their work? Are they still willing to drive that? Yeah.
A
C
I think, at least...
A
From Brian, I know that he retired at some point, I'm not sure; and with Mickey, I pinged him, but he probably moved on to other topics, given that he switched companies and the like, right.
R
A
A
Off-list, from a large service provider in Europe, there was interest in at least the raw export work, so maybe we can have other people speak up there. I'll raise it on the list. Thank you. All right.
C
I
That's too quick. On this topic, we talk about performance measurement on a LAG, including two drafts, for the STAMP and the TWAMP extensions. LAG provides the ability to combine multiple physical links into a single logical link. Usually, when forwarding traffic over a LAG, a hash-based mechanism is used to load-balance the traffic across the member links. The link delay of each member link varies because of different transport paths. To provide low-latency service for time-sensitive traffic, we need to be able to steer the traffic across the LAG member links based on the link delay, loss, and so on. That requires a solution to measure the performance metrics of every member link of the LAG. Existing active performance measurement methods run a single test session over the aggregation, without knowledge of each member link. That makes it impossible to measure the performance of a given physical member link: the measured performance metrics can only reflect the performance of one member link, or an average over all the member links of the LAG. To solve this, we followed the similar idea of RFC 7130, BFD on LAG. Next, please.
I
To measure the performance metrics of every member link of a LAG, multiple sessions need to be established between the two endpoints that are connected by the LAG. These sessions are called micro-sessions. The micro-sessions need to be associated with the corresponding member links; for example, when the reflector receives a test packet, it needs to know from which member link the packet was received and correlate it with a micro-session.
I
I
This shows the OWAMP and the TWAMP extensions, including the control messages and the test packet. We add two new control messages, the Request OW Micro-Sessions and the Request TW Micro-Sessions, and in the test packet we add a sender micro-session ID and a reflector micro-session ID; both IDs are locally assigned. Next.
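A minimal sketch of the correlation step just described, where the reflector keeps one locally assigned micro-session per LAG member link and looks it up from the link a test packet arrived on, might look like this. The class, field, and link names are illustrative, not the drafts' wire formats:

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class MicroSession:
    member_link: str           # physical member link of the LAG
    reflector_session_id: int  # locally assigned by the reflector

class Reflector:
    def __init__(self) -> None:
        self._by_link: Dict[str, MicroSession] = {}
        self._next_id = 1

    def establish(self, member_link: str) -> MicroSession:
        # One micro-session is set up per LAG member link.
        session = MicroSession(member_link, self._next_id)
        self._next_id += 1
        self._by_link[member_link] = session
        return session

    def on_test_packet(self, rx_member_link: str, sender_session_id: int) -> dict:
        # Correlate the received packet with the micro-session of the
        # member link it arrived on, and echo back both locally
        # assigned IDs so each side can match results per link.
        session = self._by_link[rx_member_link]
        return {
            "sender_micro_session_id": sender_session_id,
            "reflector_micro_session_id": session.reflector_session_id,
        }
```

This is the property that a single aggregate session cannot give you: every reply is attributable to exactly one physical member link.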
I
I
C
Okay, thank you. Any questions, comments, support? Please get in the queue if you have thoughts on this.
C
Okay, it sounds like we can probably take it back to the list, but thank you for this update.
A
C
Thank you, all. I think we got through a lot; good progress on this, thanks to all the authors. Marcus, anything from your end? Comments?