From YouTube: IPPM WG Interim Meeting, 2020-04-01
B
If you have not yet, please do go to the Etherpad and add your name and affiliation there. If you want to make a comment at the mic, effectively, we are going to be using the WebEx chat (that's the chat in WebEx, not in the Jabber room), and if you would like to join the queue to make a comment, please type q+ and send that to everyone. If you'd like to leave the queue, type q-, and the chairs will be monitoring that.
B
All right, so for chairing: we do have a change in chairs, and so I'd like to give a big thank you to both Brian and Bill, who will be leaving us as IPPM chairs, for all of the time and effort that they've put into this group. They've both been chairing since 2012 and have done a great job. I was looking: we've had 18 RFCs published in that time. So I just want to thank them so much for that.
B
The main IOAM core document, the IOAM data draft, has gone through working group last call, and we have received an update based on that. We will be getting an updated presentation on that today, and I would specifically like to ask those who gave last-call feedback to respond, either during this meeting or on the list afterwards, to indicate whether the feedback that they had has been addressed or not, based on the current updates.
G
Okay, I'll do that. Let me also add a thanks to Mirja, who has worked for four years getting our drafts through as a key advisor. It's been a pleasure. And so this draft is on the metrics and methods for IP capacity measurement. We adopted it, as Tommy said; we've updated it also, as Tommy said; and Al, Rüdiger, and Len are the co-authors. Next slide, please. So, two slides of quick overview.
G
You can imagine a measurement that takes a certain amount of time, a fixed time interval of length I, and we're making measurements very frequently and using those measurements to operate a search algorithm. We're grouping the measurements in time intervals dt in length; we take the number of bits measured in those intervals and then look for the interval with the maximum number of bits. That's the measurement: IP-layer capacity. Of course, we've defined that at the IP layer. Next slide.
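As a rough sketch of the interval logic just described (hypothetical function and variable names, not taken from the draft), grouping measured bits into dt-length sub-intervals and taking the maximum might look like this:

```python
def max_ip_capacity(samples, dt):
    """samples: list of (timestamp_seconds, bits_received) measurements.
    Groups the bits into consecutive dt-length sub-intervals and returns
    the maximum observed rate in bits per second (0.0 if no samples)."""
    if not samples:
        return 0.0
    t0 = samples[0][0]
    bins = {}
    for t, bits in samples:
        n = int((t - t0) // dt)      # sub-interval index relative to the first sample
        bins[n] = bins.get(n, 0) + bits
    return max(bins.values()) / dt   # bits per second in the busiest sub-interval
```

A search algorithm would run this repeatedly while adjusting the sending rate, as described next in the talk.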
G
So here's some more detail about the metric and the equation. You know, there are lots of metrics that could be defined, but we're focusing on the maximum; that's the one that's sort of easiest to pin down. And just a couple of changes: we changed how we work the indexes a little bit here, so that n starts at one, and that's now consistent across all the different definitions in the different standards bodies. Next slide, please. So here's our status: we've had many, many comments and reviews.
G
I think we're getting very close to a complete draft, including new reviews from, you know, talking about this with ETSI STQ Mobile (that's a technical committee), and we've got new members of ITU-T Study Group 12: all the testing companies whose apps you've used on your mobile phone, for example. And we've had testing from various volunteers. All this has been good input and has fed back into the document. So the key updates, number one, are the measurement considerations and the reporting formats. Next slide, please.
G
So now we get to the meat of this, and here's where I expect to get some discussion going, and a hint. We have, in Section 8.3 on measurement considerations (it's not a new section, just new material), coverage of the cases where packet losses may occur independently from the sending rate. You know, we sort of expected that loss will occur, certainly when you're at the capacity limit.
G
But if there's some sort of background loss occurring, then these are the kinds of places where we might encounter that: congestion at an interconnection or backbone, where maybe the losses appear distributed over time; or random early detection, if that's being applied someplace. That could also be a cause.
G
What is it... all the buttons for WebEx are, in my way of reading it, in the way of the bottom of the slide there. But it's possible to mitigate these conditions. The first thing you should do is to try to locate the measurement points as close as possible to the nodes of the links that you're testing, and try that first. So, the next slide.
G
Okay, good, good. Thank you, Tommy. All right. And I should also mention, you know, I've had pretty good performance today, but occasionally I get cut off. So if that happens, it'll take about a minute; just wait for me. So, yeah, we have the search algorithm. It's described at a high level in Section 8.1, and we're now referencing the version that ITU-T Study Group 12 constructed as well. Basically, at a high level, here's how it works.
G
We start out with status feedback reports coming from the receiver every 50 milliseconds; that's what the prototype code uses. When we look at the measurements and we see that the sequence, loss, or reordering errors are zero and the delay variation is less than a threshold, we say yes to that. Then we generally increase the sending rate, and we have the ability to increase it at a fast or a slow rate, depending upon whether we're close to the congestion point or not.
G
When we've got some ambiguity about whether there's a possible indication of congestion, then we sort of put off the decision and we maintain the sending rate. So that's the orange path diagonally through the middle. And then, when we've got a definite indication of congestion, where we've seen errors, losses, or delay variation above the upper threshold, then we reduce the sending rate, and we have the ability to do that fast or slow.
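The three-way decision just described (clean report: increase; ambiguous: hold; definite congestion: back off) can be sketched as one step of the search loop. The parameter names here are illustrative assumptions, not the draft's:

```python
def next_rate(rate, loss, reorder, delay_var, low_thresh, high_thresh,
              near_congestion, fast_step, slow_step):
    """One decision of the rate-search loop, run per 50 ms feedback report.
    Returns the next sending rate (hypothetical parameterization)."""
    if loss == 0 and reorder == 0 and delay_var < low_thresh:
        # clean report: increase, slowly when near the congestion point
        return rate + (slow_step if near_congestion else fast_step)
    if loss > 0 or delay_var >= high_thresh:
        # definite indication of congestion: reduce the sending rate
        return rate - fast_step
    # ambiguous: put off the decision and maintain the sending rate
    return rate
```

Disabling the slow increase or the fast reduction, as mentioned next in the talk, amounts to adjusting `slow_step` and `fast_step`.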
G
Now, one of the ways that we can mitigate the case where packet losses occur independently from the sending rate is to adjust the parameters of the search algorithm, and I'm demonstrating that a little bit here. Just by adjusting the parameters, we can take away the slow increase (always make that fast), and we can take away the fast reduction of the sending rate, and I've marked those off with stars there. So that's, in general, how that would work, and I know this...
G
We would report the separate results for the repeatable modes, and you can sort of see that indicated, stylized, here: the first few seconds of a new flow experience a higher capacity, and then in the subsequent seconds the sustained capacity, sort of, would be at a lower level. There's a rationale for this that comes from, you know, loading web pages quickly, or buffering up, increasing the buffers in streams, as fast as possible. Those are all key metrics.
G
Okay, so we've identified some reporting format elements which we think ought to be included here: that the singleton IP capacity results should be accompanied by the context under which they were measured. It's good to get the timestamps for each of the sub-intervals, each of the dt, especially the one where the maximum takes place; obviously the source and destination IPs or other IDs; and the other...
G
What goes in are the input parameters, which are all the typical metric parameters (we've got a list of those in Section 4), and outer parameters, or measurement context, such as "performed in motion": this might be the kind of thing you write down when it's drive testing. Or other factors belonging to the context of the result validity; sometimes we know that the test terminated early because connectivity was lost.
G
So here's the high-level standards status. I said I was going to put a slide in on this, and I did. In Study Group 12 we've approved the updated version of Recommendation Y.1540, what we often just call 1540, and Annexes A and B contain the new metric and method of measurement; most importantly, the sort of best search algorithm is in Annex B.
G
Just to mention that the ETSI document on high-speed Internet KPIs makes a strong reference to Y.1540 for all the other material beyond the metric definition. In the Broadband Forum, the project to prepare this metric and method of measurement has been approved, and we've got a really good review going there. The project was approved back in September, and we expect sort of a first ballot in May this year, so we're making good progress with this version of it, called Working Text
G
471. And of course, we know what our status here is in IPPM: the Internet-Draft is adopted and we're trying to make it better. But we've got a good harmonization process going across the industry, which is pretty rare; but I think in this particular case, where there's lots of ad hoc testing going on in our world, this is valuable.
G
So, next steps in the post-working-group-adoption phase: we have, you know, the harmonization activity, trying to keep up with the parallel efforts and ensure IPPM's expertise is incorporated elsewhere. We're basically doing that through flesh-and-blood liaisons; Rüdiger and myself are working in as many places as we can to keep things synchronized and up to date, but we're not trying to do some formal synchronization
G
where there's, you know, a big bang and all four standards get delivered at once. We'll sort of do the best schedule we can everywhere and do the round-robin updates if we have to. We would like to reach consensus soon here in IPPM, so that we can start work on protocol support. I think that's one of the real roles of IPPM in this overall tapestry of metric development and harmonization.
J
Can you hear me? Okay, great. So, Greg Mirsky here. I'm presenting on behalf of the authors of this draft. As you know, as we heard, we have published the base specification, RFC 8762, and thanks to everyone who helped us to move and reach this goal. This is an extension to that work. So, next slide, please.
J
This is an update, and to capture what has changed between the discussions: we defined the STAMP Session Identifier, introduced a generic TLV for identity protection, and, based on the comments we received at the last meeting in Singapore, clarified the STAMP processing and the Location TLV. It was one octet for the destination port, so we amended it, with a header sized up to two octets for the Destination Port and Source Port fields, and a Follow-up TLV.
J
The STAMP Session Identifier comes with the extension to control the DSCP marking on the packets; the idea is that the DSCP marking within the STAMP session can change. So there is a question of what's a better way to identify the STAMP session. Well, it was not part of the thinking when we were working on the base specification, but now we've got it, and what we propose is to add, in the base packet,
J
a two-octet field, which is a non-zero unsigned integer. The value will be set by the Session-Sender and must not change during the course of what is perceived as the STAMP test session, and a Session-Reflector that supports this new specification must copy this value and put it into its field. So, just to save time and space, we didn't copy the format of the Session-Reflector packet, unauthenticated and authenticated.
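A minimal sketch of that field's handling, assuming only what was just said (two octets, non-zero, set by the sender, copied unchanged by the reflector); the function names are illustrative, not from the draft:

```python
import struct

def make_ssid(value):
    """Encode a STAMP Session Identifier: a two-octet, non-zero
    unsigned integer set by the Session-Sender."""
    if not 1 <= value <= 0xFFFF:
        raise ValueError("SSID must be a non-zero 16-bit unsigned integer")
    return struct.pack("!H", value)  # network byte order

def reflect_ssid(received):
    """A Session-Reflector supporting the extension copies the value
    unchanged into its own packet's SSID field."""
    (value,) = struct.unpack("!H", received)
    return make_ssid(value)
```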
J
HMAC TLV. So, as you know, STAMP has two modes, unauthenticated and authenticated. In authenticated mode we use HMAC for identity protection, and we follow the same approach with the extensions by introducing an HMAC TLV; effectively, this is the same size. So the proposal is that nodes supporting this specification will use HMAC-SHA-256, and the resulting digest will be truncated to 128 bits.
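The truncation just described is straightforward with the standard library; this is a generic sketch, not the exact field coverage the draft specifies:

```python
import hmac
import hashlib

def stamp_hmac(key, packet_bytes):
    """Compute HMAC-SHA-256 over the covered packet bytes and truncate
    the digest to 128 bits (16 octets), as proposed for the HMAC TLV."""
    digest = hmac.new(key, packet_bytes, hashlib.sha256).digest()
    return digest[:16]  # keep the leftmost 128 bits of the 256-bit digest
```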
K
J
If we could go to the slide... and thanks for the question. Yeah, one more. Yes, so, as you see, there is the format of the Session-Sender unauthenticated packet. We do have 28 octets here if we want to extend the base packet. And yes, in this presentation, just to save space, I didn't include the Session-Reflector unauthenticated packet, but it does have three octets that are not assigned; they must be zero. So, yes, I understand the space is scarce, but...
K
J
No, actually, I will point out that the three octets at the end of the packet are not used in the Session-Sender and not used in the Session-Reflector packet formats. So if we look for the space in the packet that is available in both packets, in the Session-Sender and Session-Reflector unauthenticated packets, then we have three octets that are unallocated, and they're in the same place.
D
Duke. Thanks, Greg. What's the use case for this SSID? Is it if there are NATs in the path, or something? Yeah.
J
So it will probably simplify the operational procedure, and simplify things for the reflector especially. In the STAMP base specification, there are notions of stateful and stateless STAMP Session-Reflectors defined. If it's stateful, it runs its own sequence number, to allow measurement of one-way packet loss, and the Session Identifier will simplify the implementation of a stateful STAMP Session-Reflector. Okay, thanks.
L
I can try to speak a little faster, but I'm not sure that's... no. Thanks, Tommy. So this is a presentation between myself and Tal, who's going to go on after me. I'm going to cover two drafts, the first ones: the data draft and the v6 options draft; and Tal is going to cover the flags draft and the export draft. So, Tommy, you already said that we went through (next slide) working group last call on the IOAM data draft, and that resulted in quite a few comments.
L
A bunch of editorial items were brought up, which we have resolved. One thing was: the document made reference that, well, if the remaining length drops to zero, you can't insert anything more. But there could be a situation where the remaining length is less than what you want to insert, and it's still not zero. So, well, that nuance (good catch) I think has been clarified. There was a bunch of sloppy language.
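The nuance just described, that "non-zero remaining length" is not the same as "enough remaining length", can be captured in one comparison. A toy sketch with assumed names, not the draft's normative text:

```python
def can_push(remaining_length_words, field_words):
    """A node may push IOAM node data only if the whole field fits in
    what is left of the pre-allocated space, not merely when the
    Remaining Length is non-zero. Lengths are in 4-octet words."""
    return remaining_length_words >= field_words
```

For example, with one word remaining and a two-word data field, the remaining length is non-zero yet nothing may be inserted.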
L
That's all thanks to Frank writing the initial document, so, I said... Certain things like "native IPv6": there's no such thing as native IPv6; it's just v6, right. We had references to drafts that are expired by now, like Petr's draft, for instance; we got rid of those in order to clean things up, because Petr stopped maintaining that one. And there are a couple of statements, and Mickey caught those, around trace types, where we never ever stated explicitly that,
L
well, we expect that the ordering of the data fields follows the order of the bits. Implicitly that was already there, and all the examples were that way, but we never ever stated it explicitly. We do that by now. We have statements that transit nodes must not modify the fixed header, and also that must-be-zero fields should then be not only set to zero but also ignored by transit
L
nodes. So I think all of these things are obvious, but it's very helpful to state the obvious. Greg brought up an issue about the term "in situ": IOAM evolved from something that was clearly only in situ, i.e., piggybacking information on top of the packet. Right now we have other options where there might be pure IOAM packets, like loopback, for instance, and, well, for those, "in situ" wouldn't apply. We obviously want to keep the term, so we said: well, it's an umbrella term
L
instead of, like, a focus on one specific functionality. So I think it's now the group name, rather than naming a specific function. Next slide. So let's keep rolling to issue 151, which is again one from Greg, and that's again, I think, probably to blame on me: early on, we didn't really properly use RFC 2119, and there have been statements with uppercase
L
that shouldn't have been uppercase; that's been clarified. If something SHOULD be possible, then, obviously, you need to understand what would happen if the possibility is not met. Well, that wasn't really the case. I think we wanted to outline that there is an option to go this way, but there is another option; so I think we wanted to outline options instead of steering people in one particular direction. Next slide.
L
Issue 152 is on existing definitions; again clarifications, you could call them editorial or clarifications. So, what happens if the underlying protocol (and again, Greg, thanks for all the reviews, really helpful), what if the underlying encapsulation protocol doesn't really carry anything like Hop Limit or TTL? Well, there is nothing; what would you put in? Well, we put 0xFF in now; we hadn't really had that specified.
L
We hadn't really specified, although the examples were there, what the field length is for trace type bit zero, which is four bytes. And we got rid of implicit definitions of nomenclature, like "IOAM-capable nodes": we never wanted to define that, and we are steering clear of that now. Let's flip to the next one and address issue 153, which is from Greg and Barak.
L
Should we have, finally, a conclusion that we want a unit type for these fields, given that it limits flexibility and also creates a problem with certain implementations, because not everybody necessarily uses the same unit? What we were reverting back to is a suggestion, or rather a forcing function, where we say: well, it's really beneficial if people moving forward would use bytes as a unit. We're not forcing this for now, because we know that implementations differ for now.
L
So we decided not to do anything about additions, functional additions, to the document. We are revising the document, making it crisper, but we're not adding functionality at this stage anymore. Then there have been security-related questions, and Greg and Tal addressed most of those in Section 8, with a bunch of additional security aspects; so I think the security section probably doubled in size by now, from -08 to -09.
L
And there's also been kind of a forward reference: if you have specific encapsulation-related questions with IOAM, those need to be covered in the related encapsulation drafts, because we can't really cover them in the data draft. Let's jump forward to cover issue 156; that's one from Mickey, who brought it up and also provided the patch. Thanks, Mickey.
L
So there was a question like: yeah, well, a node timestamps a packet on encap, but what is this timestamp representing? There are multiple options for what this timestamp could represent, and again, given that we didn't really want to constrain implementations, we said: well, what we really want to know is when the thing is going to timestamp the packet. Not everybody timestamps the packet the same way, necessarily. So the option that we finally opted for is to require that implementations document
L
how and when they are going to timestamp the packet, acknowledging that there are multiple options. So I think a documentation requirement is there, instead of mandating that everybody do it the same way, which would just be hard, because there is too much variety out there in the chipsets. Let's flip to the next slide.
L
Issue 158 is again from Mickey. Almost literally, well, it's another one: in Section 7.4 we forgot that, well, there are bits available for assignment, and that was missing and is something that we address by now. And then another editorial glitch that Mickey caught: there were references to direct export still in the draft. Direct export has its own draft by now, so we moved everything out, and the few traces and remainders that were still in there got caught by Mickey and are out. I
L
think that brings me to the end of this lengthy list. Considering next steps for that particular document: we had a lot of really good comments, we did a load of edits, and since then, well, after the working group last call finished, the whole thing went quiet. It could be an indication that there are no more comments; it could be an indication people were waiting for the -09 version to go and reread it.
L
So we want to give everybody, obviously, a period of time to go and re-review, and if nothing major is received, we might want to go and consider another working group last call. Tommy, do you want to go and take questions now, or should we go through the entire blob and then take everything in one go? Questions? Feedback?
B
On the changes that were made in the -09 update during working group last call: specifically, the people who provided reviews (I see, of course, Greg did a lot of those, as well as Mickey), if you want to speak up at all now with your thoughts, whether you think that's been addressed, and what you think is required going forward to make sure we have good enough review before we send this on, that'd be great. Otherwise...
J
Thanks. Yes, as Frank mentioned, we had a discussion and reviewed the presentation last night, and I agree with the proposed timeline. I need some time to review the updates, because, for one, these are quite significant changes. So targeting working group last call on the data document somewhere in May sounds reasonable; before that, I'll review it and share my conclusions with the list after I review the latest version.
L
Maybe at some point we even end up with interoperable implementations and no divergence at that level, so I think that's good news. So thanks, Tommy, for kicking this off. Well, the updates were minor, and so the real question is: what do we want to do from a document-progression perspective?
L
I
H
I
saw
some
comments
on
actually
a
massive
go.
We
just
submit
another
draft
which
actually
summarized
several
options
to
apply
the
IOM.
Actually,
six
environments
I
the
current.
We
because
we
identify
some
practical
issues
for
the
current
proposal,
then,
but
there
must
be
alternative
ways
to
avoid
that.
So
in
basically,
is
that
in
that
draft
we
gave
several
possible
options
on
leashes
II
suggest
everybody
could
read
that
draft
and
provide
your
comments
and
I
think
the
best
one
should
be
maybe
picked
from
that
list
of
options.
So
that's
our
suggestion.
E
Sure, we can do that. Next slide, please. Okay, so we had a few meetings of the design team on the flags draft, and we revised the draft. The current version has a few changes compared to the previous version, based on feedback we received from people in the working group and based on feedback from the last IETF meeting.
E
One thing that was discussed a couple of times, at that meeting and also in virtual meetings, is security aspects, and specifically amplification attacks. We added more text to emphasize these attacks, to discuss when they are relevant, and also to suggest measures that can be taken to mitigate them. One is to use rate limiting; the other is data minimization, which means you limit the number of data fields that are used when you use the loopback flag. That's basically one measure that can significantly reduce the scope and the scale.
E
We do not want IOAM data to be incorporated into the packet when it's on the reverse path. So, in order to avoid having IOAM transit nodes push data fields into a packet that has been looped back on the reverse path, we need some way of indicating to these nodes that the packet is on the reverse path. There are a few possible ways to do that. One way is by using a new flag, and we're not very eager to do that, to define a new flag.
E
Another way to go is to define a new IOAM option type, which would not consume another flag, so that's maybe more preferable. And finally, the third option is kind of a hack, but may still work: the node that loops back the packet also clears the Remaining Length field, and that way, again, transit nodes on the way back do not add any data fields. So this is an open issue, and again we're looking for feedback from people, to get a feeling for how people think this should be addressed.
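The third option just described can be sketched in a few lines; the dict fields here stand in for parsed IOAM headers and are assumptions for illustration, not the draft's encoding:

```python
def loop_back(packet):
    """Sketch of the 'hack' option: the node looping the packet back
    clears the Remaining Length field on the reversed copy."""
    return dict(packet, remaining_len=0)

def transit_push(packet, field):
    """A transit node only pushes node data when Remaining Length allows
    it, so a looped-back packet passes through unchanged."""
    if packet.get("remaining_len", 0) >= len(field):
        packet["data"] = packet.get("data", []) + [field]
    return packet
```

With Remaining Length cleared, every transit check on the reverse path fails, so no data fields are added without needing a new flag.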
C
I quickly had a look at this for the meeting as well, and this downgrades it from "I can see how this is an amplification attack; it smells like an amplification attack", right (so that's good, right), like, it's going from "ooh, scary" to "I have an uneasy feeling about this; I'd have to dig into it a bit more". Like, I'd have to see a more detailed design to actually be able to...
D
Yeah, so I also intend to take a look at this. I think, from the newbie perspective, this is likely to be the biggest issue of contention. As you review, the only thing I'll suggest at this point, without, I mean, having done the same analysis as Brian: a rule of thumb that's being used elsewhere in TSVWG is that a multiplier of three is about right. That is,
D
if the attacker has to send X bytes, then it should not be able to force the rest of the network to direct more than three X bytes; I mean, you could quibble in some other places. So that might be one way to look at this problem, and, seeing how, in my view, moving it to one option maybe solves that problem, that would be one sort of rule of thumb that we can apply to sort of evaluate this.
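That rule of thumb is a one-line check; this sketch only restates the heuristic just quoted, with assumed names:

```python
def within_amplification_budget(attacker_bytes, induced_bytes, multiplier=3):
    """Heuristic mentioned for TSVWG: the traffic an attacker can force
    the network to emit should not exceed about 3x what it sent."""
    return induced_bytes <= multiplier * attacker_bytes
```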
E
If we look at the explicit hop count: one advantage is that, basically, when you use the Hop Limit data field, you rely on copying the TTL or Hop Count from a lower layer. So if there is an explicit hop-count field, you don't rely on a lower layer, which, in some cases, doesn't necessarily include a TTL field.
E
Also, another advantage is that in some cases it may be more accurate than using the TTL that is copied from the lower layer. In some cases, you may have some IOAM-capable nodes that have been skipped for one reason or another, and you want to be able to detect that; you can use this to detect these cases. These are basically the pros and cons.
J
Hi, Tal. Thanks for the updates and the presentation, and for stating the open questions. I just want to mention another draft; it's hybrid two-step, and I think that there might be some interesting (I don't like this word, but) synergy between the approaches of direct export and hybrid two-step. I think that both are complementary, and I wanted to use this opportunity to mention and propose that we work on this and discuss it on the list, of course, and in the design team.
E
Well, I think, basically, we have heard this comment in the past, and I think the intention for these two flavours was a bit different. Loopback was intended to be returned to the origin, to the encapsulating node, in order to allow something like traceroute; and direct export, on the other hand, is intended to be collected by a collector or a central entity.
C
Thanks, Tal, for reiterating sort of the difference between these two things. I would phrase it a little bit more simply: in the direct export case, the target, like the place that the export is going to go to, is actually a matter of configuration, so it's the control plane for each device; whereas in the loopback case, the information about where it should go is taken out of the IP packet, right? It is the source address of the IP packet. Is that an accurate statement? (Correct, yep.)
C
So I think that makes this a little bit less scary in terms of, sort of, amplification-type things, in that, you know, which device you're going to throw all of the traffic at is actually a matter of configuration. It would be nice... let me actually, I have it up in front of me.
C
Yeah, okay. The security considerations of this document now, I think, are good for handling that, and I like the fact that these two documents now have parallel security considerations. I'm going to put it on my list of things to do, one of these days, to follow up on this on the list. But I do like the fact that these are at least converging a bit in warnings about how they can be misused.
F
Okay, yep. The basic idea behind that was: I've been working with segment routing and performance monitoring for a while, and the design aim of this draft, IPPM connectivity monitoring, was to use segment routing to have something like ping functionality, which means you scale like ping (one signal per interface to be monitored), but you take out a little more information, and I am especially interested in locating congestion. So it took me a while to figure out how you could do something that should be called network tomography, and set up
F
paths which you control in the setup (nothing is accidental here), and use not more than one measurement relation per monitored interface, and then have all the information: connectivity, locating congestion, and also being able to measure a round-trip time per interface or path, and do all that in a controlled way, right? This is the motivation. And then, if you flip to the next slide... yeah, that's the difficult stuff. With segment routing you can fix paths through the network. So here you see two things.
F
One is what a single measurement path looks like, and the second is how you'd overlay several such paths; in the end, you can measure all the metrics or parameters which I mentioned on the first slide. The top one shows that you start to measure from the path monitoring system: you pass the first router, then you make a kind of a U-turn, and then you do two other links, one downstream and the other upstream, and then back to the measurement system.
F
There is a single measurement path in the end, and that's described in the draft. You create an overlay, which will create the pattern shown below with the three colors, and it has the properties shown per monitored path or interface: that is, you have one U-turn, you have one measurement going upstream through the interface and one measurement going downstream. And now you see that, between the LSR A and the LER I, the question might be: what is missing on LER I?
F
If you have another measurement path, yellow, that would make a U-turn there. And if you then overlay all the measurement paths and evaluate them simultaneously (that's the network tomography part), then you can figure out that, if you have packet loss on all three paths shown here (green, red, and blue) simultaneously, you lost connectivity between LSR A and LER I. There's no other explanation for that.
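The tomography step just described, intersecting the impaired paths and ruling out links on healthy ones, can be sketched over toy link sets. The topology below is illustrative only, not the exact layout on the slide, and it assumes the single-event premise stated in the talk:

```python
def localize(paths, impaired):
    """Each measurement path is a set of directed links. A link present
    on every impaired path and on no healthy path is the suspected
    single fault (assumes exactly one event in the network)."""
    suspects = set.intersection(*(paths[p] for p in impaired))
    for name, links in paths.items():
        if name not in impaired:
            suspects -= links  # a healthy path clears its links
    return suspects

# Toy overlay: the monitored interface LSRa->LERi is crossed by red and
# blue; the reverse direction LERi->LSRa by green and blue.
paths = {
    "green": {("LERi", "LSRa"), ("LSRa", "R2")},
    "red":   {("LSRa", "LERi"), ("R2", "LSRa")},
    "blue":  {("LSRa", "LERi"), ("LERi", "LSRa")},
}
```

Growing delay on red and blue then points at LSRa to LERi, and delay on green and blue at the reverse direction, matching the reasoning in the talk.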
F
If you have growing delay on red and blue, that can only be caused by congestion on the interface LSR A to LER I; and if you have similarly added delay on green and blue, that can only be LER I to LSR A. The expectation here is that there is a single event in the network only; the design is done for that, and, well, I'm working with a carrier, so that's what usually happens: there is a single cause of error; there's no catastrophe. And that is what this draft is about:
F
creating a metric which allows you to set up measurement paths in a controlled way, as shown here. And, well, if you do that on a larger scale, you'd likely have to automate it, but the automation is not part of this draft; it's only the metric, that special setup of paths, and the math you have to do to seize all the information. And that is it. If you liked that, please comment on the list. I'd be interested, of course, in making this a working group document; not sure whether we're chartered for segment routing.
J
Thank you. I think it's very interesting, and the question I have is: how well synchronized do these reports on different spans of measurement need to be? Because, for correlating events, if they're persistent state, yes, that's probably easier; but if it's intermittent, you need to have a certain window for event synchronization. (Sure.)
F
B
H
A very quick one first; yeah, okay. So this postcard-based telemetry draft was first submitted almost two years ago, but this is the first time we've had the chance to present it in this working group. It has been through a lot of discussion on the email list, and it has also had some influence and impact on some ongoing works. Next slide, please.
H
Yeah, what's new: I believe we are already at the 05 version, and in this new version we changed the status of the draft to informational. The main reason for that is that we think we'll move all the normative specification to some other, directly related drafts, so the work in this draft will just be the description of the two possible solutions of PBT. We name them PBT-M and PBT-I, just as the two high-level approaches of PBT. So, for example, PBT-I, where the "I" stands for
H
instruction, which became an independent IOAM option, Direct Export. And we have another draft to show how it can be implemented in some other protocols, such as SRv6. Next slide, please. So, at the high level, in this draft we consider a group of techniques covered by on-path flow telemetry, and there are two basic modes for that. The first one, we call it the passport mode; the IOAM trace mode is one representation
H
of that mode. The other main branch is the postcard mode, and the postcard mode can be further partitioned into two subtypes. The first one is instruction-based, that's PBT-I, and its representative technique is the IOAM Direct Export option. The other one, we call it PBT-M, is the marking-based PBT subtype.
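The passport/postcard distinction described above can be made concrete with a toy model. This is a hypothetical, simplified sketch: the dict-based "packets" and field names stand in for real headers and are not the IOAM wire format.

```python
# Passport mode: metadata accumulates inside the packet itself, like
# stamps in a passport; only the last node exports the whole record.
def passport_hop(packet, node_id, data):
    packet.setdefault("trace", []).append((node_id, data))
    return packet

# Postcard mode: the user packet is untouched; each node sends an
# independent "postcard" packet to the collector instead.
def postcard_hop(packet, node_id, data, export):
    export({"flow": packet["flow"], "node": node_id, "data": data})
    return packet

collector = []
pkt = {"flow": 42}
for node, timestamp in [("n1", 100), ("n2", 104), ("n3", 109)]:
    passport_hop(pkt, node, timestamp)
    postcard_hop(pkt, node, timestamp, collector.append)
```

After the loop, the passport-style trace sits inside `pkt` (three entries, growing the packet at every hop), while the postcard-style data is three separate records in `collector`, which is exactly the trade-off the slides compare.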
H
So for this subtype, another draft, which we call SRv6 PBT, is one representation. And also, there's another draft to support multicast telemetry, which basically can use a combination of the postcard mode and the instruction-based passport mode, so we can consider that a hybrid mode. Next slide, please!
H
We'll go through the next three slides quickly. Basically, this draft also summarizes the pros and cons of each mode. For example, the passport-based telemetry is good for data self-description, it has lower export overhead, and it's easy to configure. But it has some other issues: like I said, since the header needs to carry both the instruction and the data, it has
H
some impact on performance, and it also inflates the packet header size, which causes some issues with encapsulation in some protocols. There are also some security concerns if you carry this measurement data along with the packet; and if the packet is dropped on the forwarding path, you will lose everything. So there are some drawbacks. Next slide: for the postcard, marking-based mode,
H
you don't need to encapsulate any new header in the packet, so basically the user packet remains the same, and you send the data using independent postcard packets to report the collected data. So it basically addresses several issues of the passport mode, but it introduces some new issues, like data correlation; it increases the export overhead; and it also needs configuration on all the nodes in the network, so the configuration overhead is also high. Next slide.
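The data-correlation issue mentioned for the postcard modes can be illustrated with a toy collector. The correlation keys used here, `flow_id`, `packet_id` and `hop`, are assumptions for the example; the actual keys depend on the specific PBT variant.

```python
from collections import defaultdict

def correlate(postcards):
    """Group independently exported postcards back into one record per
    user packet, then order each record by the reported hop index."""
    by_packet = defaultdict(list)
    for pc in postcards:
        by_packet[(pc["flow_id"], pc["packet_id"])].append(
            (pc["hop"], pc["node"], pc["data"]))
    return {key: [(node, data) for _, node, data in sorted(hops)]
            for key, hops in by_packet.items()}

# Postcards may arrive at the collector out of order.
cards = [
    {"flow_id": 1, "packet_id": 7, "hop": 1, "node": "n2", "data": 12},
    {"flow_id": 1, "packet_id": 7, "hop": 0, "node": "n1", "data": 5},
]
records = correlate(cards)
```

Here `records[(1, 7)]` recovers the per-packet path record in hop order, which is the extra work (and state) a passport-mode collector would not need.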
H
So this is the instruction-based postcard mode. It's kind of a mix, you know: it makes quite some trade-offs and addresses some of the issues,
H
but it also has some topics remaining: the export overhead is still kind of high, and it also reintroduces encapsulation issues; on the other hand, it's good on the other aspects. So, why do we have this document? This document describes the high-level approaches, makes a good classification of these high-level on-path telemetry techniques, and summarizes the pros and cons of each approach. We also detail the marking-based PBT mode, which is not covered anywhere else. So we can see, so far,
H
this work has motivated several other works, but the detailed implementation or standardization of each approach will be covered by separate documents. Next slide, yeah.
So, based on the current maturity of this draft, we request working group adoption. I think that's all. Thank you.
B
Regarding this document and the direct export: my impression, and, speaking for all the chairs, our impression, is that we really want to converge on documents, not have too many different approaches, and we're happy with how we see the convergence going towards Direct Export. So I don't personally see a great need to have an independent informational document that describes some of the background. We do have this; I mean, you will always have a draft available as a reference for that. But publishing this as an informational RFC that is separate from Direct Export or anything else,
B
I'm not convinced of the value of that. I would love to see whether there are bits of explanation or rationale that we want to add into Direct Export as background sections or appendices; I think that does make sense. And of course, if we have the marking mode, having that be a separate document also makes sense.
B
H
B
And so I think having a specific document, one that is not informational, that talks about how you do the marking as something more of a protocol-level thing, makes sense. I'm seeing several plus-ones in the Jabber as well as in the chat in WebEx. So I think, for now, I'm going to ask the authors of this to work on incorporating any informational bits that you have into the Direct Export effort, but I think we are not going to plan on adopting this. Correct?
B
H
Yeah, so please, let's try to find a way for things to go forward. I'm also, yeah, I'm also wondering if there's actually a need to summarize the high-level approaches. Is that valuable? You know, sometimes we do need to understand the pros and cons of each approach and the architectures, yes, and to know where to find it. Yeah, I
B
think in this case we can add that high-level description into the Direct Export document. All right, thank you for that. I think we are all out of time at this point. Just as a reminder, if you did not sign the blue sheet, please go into the Etherpad that is linked on the slides. Thank you all for doing this and having this virtual experience; sorry if you had any technical glitches, and we hope to see you, virtually or in person, next time, and on the lists. Thank you so much, everyone.