From YouTube: IETF106-IPPM-20191118-1000
Description
IPPM meeting session at IETF106
2019/11/18 1000
https://datatracker.ietf.org/meeting/106/proceedings/
A: So, since this is the first meeting of the week, here is the Note Well. Please do take the time to look through it and know what you're getting into by participating here. If you haven't looked at it before, please do. The blue sheets are going around as well, so please do sign your name on those.
A: The STAMP document is in the RFC editor queue, so that's great — good to get that shipped. The metric registry and initial registry documents are currently waiting on the AD write-up, and we did get some feedback there from IANA about the registry request; I assume that Al will be going into more detail on the status there. The other document we have that's out there is still sitting in MISSREF, and hopefully that gets cleared up as all these other documents push through, so that'll be great.
C: For delay, we made it 64 bits — I think we need to go back. Yes, for delay it's 64 bits, which gives us plenty of range. But for delay variation?

B: Yes, delay variation — okay. We actually discussed that, and we decided we might get by with 32 bits for delay variation; but again, because this is a working group document, we made this change and proposed it. So if there are any questions or concerns, please bring them up on the list.
B: There was a suggestion: okay, why not make it five decimal digits, because we do have the space. So now you can express five nines after the decimal point. If there are any questions or concerns, please bring them up and we'll talk about it; but that's, for now, what it is possible to express as a percentile.
B: Next, please! So, the UDP port. We had a discussion on the list about what could be used as the destination UDP port, and the agreement is that one option is 862, which is assigned to the TWAMP-Test protocol, and then there are the user range and the dynamic range. The user range does bring the potential for using STAMP packets as a denial-of-service attack.
B: But, on the other hand, we were told that operators would like to use this tool to simulate a certain application — basically, to have a higher level of guarantee, or chance, of following the same path if you share the destination port with your application. Usually that can be done and used when you're doing service activation testing, when there is no active traffic, so you're not piggybacking on your active application. Again, if there are any questions or concerns, please bring them up. Next slide, please.
B: So we still owe a lot of updates and improvements to the text, with the references and everything, to address all the comments received in the early YANG doctors review. We'll be working on it more actively now, once the base specification is out of our hands and being published. Comments are always welcome.
B: Okay, let's go to the next one: the follow-up telemetry TLV. I think that back in the OWAMP and TWAMP documents, and especially in this document, we discussed that for the reflector — and the same for the sender — getting an accurate timestamp is a challenge, because the timestamp is better recorded in the packet as close as possible to the physical transmission of the packet, so as to exclude queuing, which introduces delay variation.
B: That's effectively the purpose of this: to collect the follow-up telemetry from the reflector. And I can explain why the sender is not critical: especially in a virtual-machine environment, the sender may get the physical timestamp from the hardware once the packet has been transmitted, so it can associate that value even if the value in the packet was different. The sender does know the real transmission timestamp.
B: The problem is more on the reflector side, because the reflector puts the timestamp in, then time passes before the transmission, and by the time the hardware passes the real timestamp back to the reflector application, the packet is long gone. So that's the purpose of the follow-up telemetry TLV: it may be included in a consecutive STAMP packet, so that the real transmission timestamp of the preceding packet is returned to the sender. This way you can associate them, because the TLV carries the sequence number of that preceding packet.
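As a sketch of the association just described — packet N+1 carries, in its follow-up TLV, the accurate hardware transmit timestamp and the sequence number of packet N — the sender-side bookkeeping might look like this. This is illustrative only: the field names are invented, and this is not the STAMP wire format.

```python
# Illustrative model of follow-up telemetry association (NOT the STAMP wire
# format; field names are invented for this sketch).
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReflectedPacket:
    seq: int                             # reflector sequence number
    tx_ts: float                         # software transmit timestamp in the packet
    followup_seq: Optional[int] = None   # seq of the *preceding* packet
    followup_ts: Optional[float] = None  # accurate HW transmit time for it

def apply_followups(packets):
    """Return {seq: best known transmit time}, preferring the hardware
    timestamp reported in the next packet's follow-up TLV."""
    best = {p.seq: p.tx_ts for p in packets}
    for p in packets:
        if p.followup_seq is not None and p.followup_seq in best:
            best[p.followup_seq] = p.followup_ts
    return best

pkts = [
    ReflectedPacket(seq=1, tx_ts=10.1),
    ReflectedPacket(seq=2, tx_ts=11.1, followup_seq=1, followup_ts=10.4),
]
print(apply_followups(pkts))  # {1: 10.4, 2: 11.1} — seq 1 upgraded to HW time
```

Nothing is overwritten on the wire; the sender merely prefers the later, more accurate value when computing delay.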
B: Again, in a STAMP packet we are collecting three timestamps: the sender transmission time, the reflector receive time, and the reflector transmission time. The follow-up information is, in addition, a TLV that follows in a STAMP packet and collects specifically the information that the reflector may have from the hardware — if we believe it has access to a more accurate timestamp for the preceding packet. It doesn't override any of the timestamps that we normally collect in the STAMP packet.
B: Okay — the next packet from the sender will come with the follow-up telemetry TLV, so the reflector will put all three timestamps relevant to that second packet in the STAMP packet, and the timestamp it acquired for the preceding packet it will put in the follow-up telemetry TLV. So there is no override; no information is overwritten.
B: Again, the reflector doesn't necessarily know the actual transmission time; but there are situations where the hardware, or the system, provides a more accurate timestamp of the physical transmission of the packet, and this information is, at least on some systems, available to the application that requested the transmission.
G: Sorry — Rick Taylor. I understand what you're doing here. I have a couple of quick questions, and it might be my lack of knowledge of STAMP. How does the application that's actually consuming this telemetry know that follow-up TLVs are going to be used for this particular path, and perhaps not for that path? Where is the negotiation with the user application?

B: Okay, that's configured out of band.
G: Understood, but if I'm writing an application that is using STAMP, how do I, as the application in my control loop, know that I have to wait for the follow-up telemetry TLV to get the accurate timestamp — rather than just saying: hey, I've had my response, I've got my three timestamps, I can go and act?
H: I have a slightly more process-related question, and it's more about the previous TLV. Since this document was adopted as a working group document, you've just added two TLVs to it, right? So how much more do you want to add — and why does it have to be in this document?
H: You could also have follow-on documents that would be discussed separately by the working group — especially for the previous TLV. There's now work in 3GPP that is adopting STAMP for their measurements, and I'm not sure how likely that is, so maybe this TLV is not even needed at all. There's no decision made, so maybe it's premature to actually add it to the draft.
B: That TLV is basically a very special case for the 3GPP PMF requirements — what they want to do. And the decision was: yes, if we use STAMP as the performance measurement function, then why not use an existing mechanism, rather than introducing a new one, to do this state reporting. Okay.
G: So, I'm involved with doing a lot of performance measurement on non-3GPP radio links, so I'm really quite interested — sorry, I've got a cold — really quite interested in this sort of extension point for non-3GPP radios. So (a) I need to get involved, and (b) I'm not sure this is cooked yet; I think there's some more that could be done here to make it a bit more powerful.
A: Yeah, that makes sense. Alright, thank you for sharing this. Based on this discussion, a couple of follow-ups. Since we are adding TLVs here — which I think can certainly be appropriate — but since this is a working group document, we do want to get a good idea of what the working group thinks.
A: I think we can maybe just work on getting consensus on these TLVs — make sure that if there are any concerns about them, they can be raised, and we can see if these are appropriate to take on. For the access report TLV, I think we've heard feedback that having something generic would be good; and, to the point of the interaction with 3GPP, it may be easier if we allow it to be used for other cases, such that if 3GPP doesn't decide to use it, it's still fine. And I think we do want clarification.
A: I read through the text again for the follow-up telemetry, and at least my impression is that it's trying to help you measure the delay between software and hardware transmission at the reflector. Some text explaining why you'd need this would, I think, help get to the point — because I think it's useful, but we need to have that clarification. Sorry.
G: Very good question — again underlining my lack of knowledge of STAMP: is there any kind of in-band session or extension negotiation within STAMP at all? It's all out-of-band pre-configuration. So, from a process perspective, you could move ahead with the basic core STAMP documents and then produce later drafts which say "this is an extension", with an IANA registry or whatever is required.
F: Hi everybody, I'm Al Morton, and we've been working — big teams, basically, have been working — on a performance metrics registry and the initial contents of that registry since 2012. Can you believe it? Let's finish it up! And really, we're damn close now; you can see that right here: the IETF last call is complete, and we're happy about that. Mirja's review was very good; we made a lot of changes, and drafts 11 and 20 of the contents and the registry design are the result. Thank you, Mirja.
F: So now we have the Gen-ART and the security reviews, and they really were mostly about helping IANA do its job, which is good. But we also got an IANA review during last call, and they basically said what the last line says: their review is complete, but they'll work with the authors to establish and populate the registries — and that meeting happens this week.
F: I've seen Michelle Cotton — she's here — if anybody's interested in joining that meeting with Michelle and me and the rest of the IANA staff to sort out the last details. Thank you, Benoit; we'll do that. Roni Even, who commented for Gen-ART, may be willing to do that as well. The big aid for understanding this registry and what it is has been a mock-up that we created, and we're trying to get IANA to post that in a temporary place.
F: So we're going to work that out this week too, and here's why: the registry itself is fairly complex. It's got all these categories — summary, metric definition, method of measurement, output, administrative info, comments, blah blah blah. IANA is only going to deal with the summary column and have a link to a text file that displays the rest, and here's how that looks in practice.
F: So we start out here with the registration procedures. Now, incidentally, that's going to change. When I first met with Michelle about this, years ago, she said: "Please make this Expert Review. Please make this Expert Review. Please make this Expert Review" — just like saying "Beetlejuice" three times — and so I heard her, and we made it Expert Review. But it looks like it's better to make it Specification Required, because that includes expert review.
F: So now we're going to make that change. And who are the experts? Performance metrics experts, and so forth. This information should really go in the registry design draft as well, not just the mock-up. So it's a performance metrics group, we're going to have a performance metrics registry, and really there are two registries included below that — though I think we've actually designed more by now. The summary includes an ID, a name for the metric, and the URIs, which we've now reduced to just the URL of the text file.
F: That's HTML-ized, and it brings you all the good stuff about building a metric and method of measurement that's compliant. We have a nice description there — which nobody's going to be able to read today, my apologies — and the reference. The change control field is an open issue: Michelle told me, when we added it, that it should be IETF.
F: We know that the names — like RTDelay_Active_IP-UDP-Periodic, blah blah blah — are going to evolve over time as we get more metrics. So we want the metric name elements to be a registry that people choose from as well, an extensible registry just like the metrics, and we will be able to add to those for the different things that we measure.
F: Good catch — I have to fix that. Yes, please, Rick Taylor.

G: Very quick question: are you still reserving some experimental space? These things are hard to do, and we'll need a little bit of sandbox space to build experiments, to work out whether what we're doing makes any sense at all before bringing it to the IETF.
F: Here's what you can do. We already found out, when going through working group last call, that there was a — not experimental, but private — version of this registry being built in Brazil, and that was kind of cool; they helped us clarify a few things. So I would say: build your metric, test with it, and be able to tell us how widespread it is and whether the specification of it works.
D: So that ID number — that's not a code point that's going to show up on the wire. That is an unbounded-length uint, right? Right. So there is the practical limitation that you probably want your names to be less than four kilobytes, so the set of names is like the set of possible four-kilobyte UTF-8 strings, which I think is large enough. And yes, there's experimental space, because there's no scarce identifier space that we're allocating either. Okay.
D: If someone were to build a protocol that used this registry and said "I'm going to use the ID", we'd have to talk to them; but the registry here has a whole bunch of pointers to this working group. So as long as this working group is here — and it's been here since 83, so I think it's going to be around for a while — they would come to us, and we would say: don't make this a uint8_t.
D: I don't think that needs to be in this doc, though, because the level of metrics that we're dealing with in this registry, and the transports we're going to use for them, are not such that we really have a constrained space — it would not make sense for this ID to be a constrained integer space. Okay.
F: Thank you — cool, no requests for the floor. So I'll send a note to the list when I schedule something with Michelle Cotton and her staff from IANA; folks who are willing to join us, please do, and let's move this along. These are two big drafts that have been around for a long time, and it's helping get stuff off our milestones. Thank you.
F: Last time we had Frank Brockners' review, which helped us change the terminology: instead of the old RFC 2330 "host", we're using "node" now throughout the document, though we've retained "host" where it makes sense — that's a quick one there, so that's cool. We've differentiated some of the limitations that we were discussing in Section 3.6 about trace: basically, they're about traceroute-style methods and getting back the same IP address from the host.
F: That's not going to be true for hybrid methods, so we fixed up a lot of items in 3.6 to say that there are differences when you use the hybrid methods. Then, on the formatting of the results: we decided not to update RFC 5388, and to punt the future development of that to a YANG model, and the working group agreed. Further comments and questions were also about that, and resulted in adding another requirement for the future model.
F: — the original sender's timestamp. So we've got a list of requirements now in Section 3.6. Then, at IETF 105 back in Montreal, we said we'd like to see a working group last call at IETF 106 — started here, at least — and now, with the resolution of those comments and the new drafts, we think we're ready for that. That would help us move what we believe is a very mature draft on its way. So: working group last call, please.
A: Right, yeah, sounds good. So please do take a look at this, and I think we can get a last call going. Sure — we will send out an email; you want to send out an email? Yeah, give it three weeks. Please do read through this and raise any issues you see, but it would be great to get this moving along. Cool — thank you. Thanks.
L: Okay, good morning. This is an update on our multipoint alternate marking draft. Just a summary of the idea: we want to introduce intelligence into this approach. We started — you may recall — from RFC 8321, which allowed performance monitoring of unicast point-to-point flows, and we moved to this multipoint alternate marking approach, which allows monitoring of general multipoint and anycast flows without any constraints. So what is the motivation? The main motivation is that it is resource-consuming to monitor all the flows and all the paths continuously, so we should figure out a way to do this flexibly.
L: The solution is to put intelligence into the system — to give the intelligence to the controller, so that it is able to manage the performance measurement. For example, it can start without examining the network in depth, and then go deep if a problem has to be analyzed in some position. There are two ways to act, if we go to the implementations: the first way is, of course, to specify traffic filters to find more detail.
L: I remembered the comment from Jose Ignacio, and we added a reference to a new paper that gives the definition and mathematical formalization of the algorithm for cluster partition, so that it can be applied to every kind of graph. You can download this paper — and I became aware that it has been published, so you can also find it on IEEE Xplore yourself; anyway, it is also available on this website.
L: Just to look again at the algorithm — this is the iterative approach. Given a network graph, the first step is to group the links with the same starting node; for example, in the network graph in the figure we can identify five groups with the same starting node. Then, in the second step, we join the groups that have at least one ending node in common.
L: So in this case, for example, group number two and group number three will be joined to become cluster two; in general, for this network graph we can identify four clusters. This is just to give you a view — in this case it's very simple, but you can imagine that with a more complex network graph we can do this at different levels, and we can also combine the different clusters at different levels to activate more measurement points.
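The two-step procedure just described — group the directed links by starting node, then merge groups that share an ending node — can be sketched in a few lines. The example graph below is invented, not the one from the slides, but it mirrors the "five groups merge into four clusters" step.

```python
# Sketch of the cluster partition algorithm: (1) group directed links by
# starting node, (2) merge groups that share at least one ending node.

def cluster_partition(links):
    groups = {}                        # starting node -> its links
    for start, end in links:
        groups.setdefault(start, []).append((start, end))

    parent = {s: s for s in groups}    # union-find over the groups
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    by_end = {}                        # ending node -> first group that reaches it
    for s, ls in groups.items():
        for _, e in ls:
            if e in by_end:
                parent[find(s)] = find(by_end[e])   # shared ending node: merge
            else:
                by_end[e] = s

    clusters = {}
    for s, ls in groups.items():
        clusters.setdefault(find(s), []).extend(ls)
    return list(clusters.values())

# Five starting-node groups; B's and C's links both end at D, so those two
# groups merge, leaving four clusters.
links = [("A", "B"), ("B", "D"), ("C", "D"), ("C", "E"), ("D", "F"), ("E", "G")]
print(len(cluster_partition(links)))  # 4
```

Running the same partition at coarser or finer granularity is what lets the controller activate more or fewer measurement points, as described above.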
L: So, just a recap of what kind of measurement we can do. This is a complete performance measurement framework: we can measure the packet loss on a multipoint-path basis, or we can do that for single flows, or for the whole network. This gives you the flexibility to measure packet loss at different levels, as I said; and in addition, of course, there are the delay measurements, which in the same way can be done at different levels.
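As a toy illustration of the block-based loss measurement this framework builds on (in the spirit of RFC 8321 — the list-of-colors encoding here is invented for the example): packets carry a one-bit color that flips every block, each measurement point counts packets per (block, color), and comparing per-block counters between two points yields the loss.

```python
# Toy alternate-marking loss computation: a color flip starts a new block;
# per-(block, color) counters at two points are compared to find the loss.
from collections import Counter

def block_counts(colors):
    counts, block, prev = Counter(), 0, None
    for c in colors:
        if prev is not None and c != prev:
            block += 1                 # color flip => new block
        counts[(block, c)] += 1
        prev = c
    return counts

upstream   = [0, 0, 0, 1, 1, 1, 0, 0]
downstream = [0, 0, 0, 1, 1, 0, 0]    # one packet of the middle block lost

down = block_counts(downstream)
loss = {blk: n - down[blk] for blk, n in block_counts(upstream).items()}
print(loss)  # {(0, 0): 0, (1, 1): 1, (2, 0): 0}
```

The same counters can be kept per flow, per multipoint path, or for the whole network, which is exactly the flexibility described above.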
L: So we can measure the delay on a multipoint-path basis — meaning the delay is representative of the multipoint path — or we can do it in the traditional way, on a single-packet basis. In that case we need the help of another RFC, RFC 5475, on hash-based selection, because the double-marking methodology of RFC 8321 does not work for a multipoint path.
L: So we use the hashing approach to couple the samples; but in this case it is simplified, because we can use the clusters to couple the samples in terms of space, and we use, of course, the marking method to couple the samples in terms of time. So in this case the coupling of the samples is very simple. Okay — the next steps: the document is stable, and the authors suggest beginning the path to becoming an RFC. If you have additional inputs or comments, of course, they are welcome.
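The hash-based coupling mentioned here can be sketched as follows, in the spirit of RFC 5475: every measurement point applies the same hash to invariant packet fields, so all points independently select the very same subset of packets, which lets per-packet delay be matched across a multipoint path. The packet identifiers and selection ratio below are illustrative.

```python
# Hash-based consistent sampling sketch: identical, independent selection
# decisions at every measurement point.
import hashlib

def selected(packet_id: bytes, ratio: float = 0.1) -> bool:
    """Select a packet when its normalized hash falls below `ratio`."""
    h = int.from_bytes(hashlib.sha256(packet_id).digest()[:8], "big")
    return h / 2**64 < ratio

ids = [f"pkt-{i}".encode() for i in range(1000)]
point_a = {p for p in ids if selected(p)}   # decision at one point
point_b = {p for p in ids if selected(p)}   # same decision at another point
assert point_a == point_b                   # identical subset everywhere
print(len(point_a))                         # roughly 10% of the packets
```

Combined with the marking (which bounds the matching in time) and the clusters (which bound it in space), this makes the sample coupling straightforward.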
D: Yeah — do we want to go ahead and start a working group last call on this one? We can do that; we can pipeline it. Yeah, okay, so we will. Well — we've had it happen a couple of times where we say "we have a working group last call, let's start another working group last call after that first one", and then we forget about it. Because this working group last call just started, to end on December 9th...
C: All right, a quick update on the three working-group-adopted IOAM drafts — so not the whole enchilada. The oldest, and I think most discussed, document is the data draft, as we typically refer to it. The data draft has seen two updates since the last meeting, and I'm going through them sequentially. The first one, the -06 to -07 update, basically addresses two detailed reviews — one of them from Tom — with the additional feedback folded in from the hackathon that we did at IETF 105. There are two types of main updates that came from that. The first — and I think the real technical update — was a rearrangement of the flags field that we have for the various data fields in IOAM. The starting point of that rearrangement was a suggestion out of the hackathon saying: well, you have 24 bits for the various data fields.
C: What if we ever run out of them? So we should at least have one field — one bit — dedicated to future extensibility. So let's reserve one; and if you reserve one, make it the last bit in the field. Oh — the last bit was taken: bit 23 was already taken, and we wanted bit 23. From there, you kick off another range of additional changes that roll into that.
C: So we made room for that. The additional thing was: if you look at how the IOAM data fields are structured right now, we have two types of data fields. We started off the initial journey with always-fixed-format data fields, with a defined syntax — all the fields like node ID, interface identifier, timestamps and the like have a fixed format, which is good for a hardware implementation. But later on, people said: well, I want this, and this, and that — I want flexibility.
C: So it's much better to have the fixed-format part first and the flexibility after, so that from a parsing perspective things stay simple. That meant: okay, if you want the opaque state snapshot field, it goes after all the fixed fields. That means — "opaque snacks..." — "opaque state snapshot"; it takes practice, especially at 3:00 a.m.
C: It is now bit 22, so that everything fixed comes before it, then you have the opaque state snapshot — I'm getting better at it — and then anything future that we don't really know about yet. So bit 22 was taken, and then we moved the other assignments around so as not to end up with any holes in the current allocation. That's the bits rearrangement we've done as part of -07.
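The rearrangement can be pictured with a little bit arithmetic. The positions below follow the description above — opaque state snapshot at bit 22, the last bit (23) reserved for the future — but treat the concrete values as illustrative rather than as the normative IOAM registry.

```python
# Illustrative check of a 24-bit IOAM trace-type field, with bits numbered
# 0 (MSB) through 23 (LSB): fixed-format fields first, the variable-length
# opaque state snapshot at bit 22, and bit 23 kept free for extensibility.
OPAQUE_STATE_SNAPSHOT_BIT = 22
RESERVED_BIT = 23

def bit_set(trace_type: int, pos: int) -> bool:
    return bool((trace_type >> (23 - pos)) & 1)

tt = 0b1100_0000_0000_0000_0000_0010   # two fixed fields + opaque snapshot
assert bit_set(tt, 0)                          # a fixed-format field
assert bit_set(tt, OPAQUE_STATE_SNAPSHOT_BIT)  # snapshot present
assert not bit_set(tt, RESERVED_BIT)           # reserved bit stays clear
```

Keeping everything fixed-format before the variable-length snapshot is what lets a parser walk the node data without lookahead.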
C: It was referred to as the IOAM type; sometimes it was called the IOAM header; sometimes it was called the IOAM option — but we were all referring to the very same thing, and now that very same thing is called the IOAM Option-Type. Again, not a beautiful word, but it's a perfect compromise: everybody is equally unhappy about it. Why not "IOAM option type header", or "header option type"? Go create a pull request.
C: We're happy to fold it in, but make sure that you're catching all the occurrences — I tried really hard to catch all of them. So that cleanup happened, along with this: we had certain things that were given a dedicated section, like the nomenclature for the namespace section, but other things didn't really have a dedicated section or subsection in Section 4, and that also got cleaned up.
C: So I hope I've made the overall thing more digestible and cleaner, with the same level of nomenclature throughout the document, so that we are really ready for, ultimately, working group last call. Then I pushed that -07 out on the mailing list, and — as typically happens — there's something else that you forgot about. That was issue 135: one little thing we forgot to specify is what a node should do on decap.
C: We believe that it should get rid of all Option-Types, no matter what, for the particular namespace that it's responsible for — and that was not said in the draft, so again a little bit of additional cleanup is in there. That was sent to the mailing list; no comments received. But, well — usually there is one more comment.
C: We'd brought the question up earlier on, and at 8:30 Tal was friendly enough to chair a breakout meeting — a side meeting, as it's called here — and raise the question again: are there any comments on the data draft? And guess what — there was one. Greg had one; he posted the question, and I want to reflect it back to the room. It goes back to the earlier point about what data fields we have today.
C: We have data fields that are fixed-format, and then we have this kind of catch-all TLV style for everything else. Now, if we look at the fixed-format data fields, there is a little bit of flexibility built into them, because we have short and long formats for different types of fields — node ID, interface identifier, timestamp, and namespace-specific data — and these fields can even be combined: you can have short plus wide, making it super wide.
C: That was to give us fixed-format fields with a little bit of additional flexibility, for people who are really constrained on space — and I know several people are, because they say: well, I'm ready to record 60 bytes, I have a diameter of, say, five hops in my network, and how much information can I really fit in? So people believe there is a use. But the question that Greg raises — and I think for a really good reason — is: do we need this kind of interim flexibility?
C: Should we really have something that is small and fixed, and then, if you want to go a little wider, you end up in TLV land? Or do we want what we have right now, which is small-and-fixed, a-little-larger-and-fixed, and — well, if you want to go crazy, you go crazy with TLVs to your heart's content.
C: Finally, after 15 iterations of the document — seven as a personal draft and another eight as a working group document — we were able to progress. So: working group last call? Or how do you want to deal with the question that was brought up earlier in the breakout — sorry, side meeting?
D: From the chair: it looks like we've sort of accidentally adopted a model of pipelining working group last calls, which I actually kind of like. So then we're basically saying: okay, we're going to be doing a working group last call continually at this point, probably until the next meeting. Can we sign you up for the third slot, which would be a working group last call beginning on the 6th of January and ending on the 20th of January?
D: Yeah — does anyone here think that we will not be ready to start a working group last call in January on this one? You know, as somebody who's looked at the document and watched the progress: we're now seeing the scope of what we're fixing narrowing and narrowing and narrowing, we've done the editorial changes, and it's looking kind of done.
A: One question I do have is about stability — it definitely seems to be getting more stable. From the last hackathon at IETF 105 we got good input. Can you give an update on the status of the implementations that did hackathon work? Have they updated to the new stuff — do we have confirmation that the changes seem to be good? It would be good to do that, potentially, before we do last call.
C: Did you include the -07 changes in the IOAM implementation for the kernel? — Not yet. It's not yet there. All right.
C: Before we do this — could I get a sense of the room, maybe a voice or two, or maybe we can even do a hum, on whether people feel we want to stay with the current format of the fixed fields? So this is the wide-and-short question: do we want to keep it, or do we want to not have the wide format?
C: I think that was the thing that bubbled up in the earlier side meeting. Any comment either way would be greatly appreciated, because if we have some sense in the room that people say "let's go ahead with what we have", that's at least some indication; we'd then go and confirm it on the list, and I'm going to bring it up there. But — any opinions?
D: Yeah — hey, Rach, do you have an opinion? Let's get this right.
C: The suggestion is to go with — well, we had that as well, but we got rid of it in the side meeting already. Okay, so some people said: let's use the lowest common denominator, which would be 48 bits for the node ID and then 64 for interface IDs, and so on. So that means...
D: As someone without a dog in this fight, this seems like a perfectly reasonable compromise. So let's definitely confirm this on the list, but I think most of the people who have an opinion are in the room, or were in the room at the side meeting, so I would be surprised if this comes back up.
D: Sounds like an opinion to me — great. And I would say: yes, this would be surprising to me; please don't surprise your chair. The working group last call is scheduled where it is because we want to focus the working group's attention on one document at a time, to get these things out. But please do speak up as soon as possible, because that's the idea.
C: So that's actually a draft that Tal wrote for all of us, because it was text that was originally in the data draft and was seemingly contentious. Following the strategy we just discussed — create ourselves a stable base and move everything we're not entirely sure about into a dedicated document — that's one of the examples, and the direct export thing, later on, is another example where we moved things away from the core.
C: Since we discussed it last time, the working group adopted the flags draft — which is reasonable, because it was text from an already-adopted IOAM document, and it had even gone through additional revisions. We had three flags in there: one called Active, one called Loopback, and the third called Immediate Export. Given that Immediate Export had another raft of discussions and now has its own dedicated document, we removed even that one.
C
So that means by now it's only two flags in there, loopback and active. And we had a bit of discussion last time, like: well, what do these flags mean? Because they drive a set of behavior, and maybe you don't want that set of behavior for all the packets, because, well, it could go and cause harm if it's really for all the packets. So we clarified and specified that.
C
Well, absolutely you want to go and put constraints in, so that you only do this for packets that you really want. In the side meeting — and let's make sure, given that my brain is not entirely working: I'm able to read, I'm not really able to remember — so, the side meeting today revealed another set of things we definitely want to go and add to the draft on loopback, and that documentation is already in issue 138; Barak created 138 for that.
C
So the first thing is: with loopback, you were recording on the way out, and you're also recording on the way back. Nobody knows why we're recording on the way back, and we couldn't really go and find a good use case for that. So — I think the suggestion from the side meeting is: don't record on the way back.
C
The second thing is: loopback only makes sense if the sender is also the encapsulating node, 'cause otherwise it's a little funny — because loopback is a probing mechanism; it's kind of a fast-traceroute mechanism, that's what it's built for — and so the sender needs to be the encap node. We need to go and explicitly state that. And in addition, well, loopback only makes sense because you have a node that is supposed to go and send traffic back to the source — if there is indeed a source address in the packet.
C
So again, we want to go and specify that it only makes sense for encapsulations where the source is indeed part of the encapsulation — or the packet, of course. And in addition to that, well, we want to go and have a statement in the encapsulation drafts on how loopback would actually be used for a particular encapsulation. I think that again makes perfect sense. And in addition to that, we want to have a statement that, well — if an IOAM encapsulating node receives a loopback packet that it hasn't originated —
C
It
should
go
and
drop
it
again.
I
think
from
a
security
perspective
makes
perfect
sense.
It
should
not
happen
if
it
happens,
go
get
rid
of
it.
So
that
was
really
the
discussion
on
loopback
refining
what
we
already
have
in
the
draft,
which
is
why
I
believe
I
think
these
site
meetings
are
super
useful.
You
get
far
more
active
discussion
rather
in
the
bigger
room,
so
things
are
more
entertaining
even
if
it's
really
early
mornings.
So
that
is
something
that
we
want
to
go
and
do
refine
on
the
loot
bag
option
in
in
flax.
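For intuition, the three loopback refinements above can be sketched as packet-handling rules. This is a hypothetical illustration only — the class, field, and function names are invented for the example and are not from the IOAM flags draft.

```python
from dataclasses import dataclass, field

@dataclass
class LoopbackPacket:
    origin_id: int            # node that set the Loopback flag (the encap node)
    returning: bool           # True once a copy has been turned around
    has_source: bool          # the encapsulation carries a source address
    trace: list = field(default_factory=list)

def process(node_id: int, is_encap: bool, pkt: LoopbackPacket) -> str:
    if pkt.returning:
        # Refinement 1: no recording on the way back (no use case was found).
        # Refinement 3: a looped-back packet arriving at a node that did not
        # originate it should never happen -- drop it.
        return "deliver" if pkt.origin_id == node_id else "drop"
    # Refinement 2: looping back only makes sense when the encapsulation
    # names a source to send the copy back to.
    if is_encap and not pkt.has_source:
        return "drop"
    pkt.trace.append(node_id)   # record trace data on the way out only
    return "forward"
```

So in this sketch, a transit node records and forwards outbound copies, while a stray looped-back copy is discarded before it can be amplified further.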
C
The other thing that we have, as I said — the second flag — is active. Active didn't really have that much of a discussion, but there is one thing that, again, Greg Mirsky brought up as well, a bit of a clarification, and I think his question is worthwhile. He said: well, there are active OAM protocols — there's many of them.
C
You can add IOAM information to an active OAM protocol, right? So you could run BFD, and you can add IOAM to BFD — why not? What is the purpose, and why do we need the active flag in addition to adding the IOAM information? And, well — the active flag was put in to basically help implementers and help hardware, to go and tell them, like, with a single bit:
C
"You need to go and look at that deeper." And that particular behavior wasn't entirely clear from the text that we have, so we're gonna go and add a paragraph that says: well, the active thing is really to help implementers, and here is why. And so the likes of Parag are gonna go and help specify further why this is needed, in order to clarify the use of this. You could call it a helper bit for implementers.
C
H
D
H
And, like, the only countermeasure you have in the draft is that each device should limit the number of packets it can send — but it's like on a per-device basis. But then, if you have a path, you have a number of hops, and it's still not great. I think this is just, like, not acceptable; I think you need to do better than that. You need some kind of, you know, access control. You have to make sure that not everybody can just melt your network by just sending one packet.
H
D
We have come, in the transport area, to understand that this is a core requirement for networking at that layer. If you want to not kill yourself — I would suggest that you set up IOAM Flags loopback on a routing loop in a hackathon, and then — this is, like, instructive, right? Like — if you can create a routing loop with a loopback flag, you have now melted your network. And I think that's — like, set it up and watch it burn. It's educational.
H
I mean — whatever you do here, it's more heavyweight, right? So the other option would be to have some more access control, so that you actually can identify the device that has sent a loopback request, and only reply if you know that this is a device that is actually supposed to do that. So you have to add more information to your packet. So —
D
From a strategy standpoint, the question is: do we want to hold up flags in IPPM while we try to make it safe, or do we want to send the two documents up to the IESG, and IOAM data goes through and flags gets killed, right? Like — in the state that it's in right now, it's not gonna pass secdir. Yeah.
N
D
Wait — no, we know what one of the answers will be. We, in this room, have identified the amplification-and-reflection attack. There may be other attacks — wait, so it might be — well, yeah, but if you give it to secdir, they're gonna find the first hole and then they're gonna say no; they're not actually gonna dig on it. So we should do something about the amplification/reflection attack before we send it. — Good point, let's —
C
N
D
That is not an acceptable retort to —
D
Yeah, perfect. We have a tier: "we have enough foot guns, so foot guns are okay," or "we have enough foot guns, let's not make more" — and I'm leaning toward the second one. So yeah, I think the next step here is: between now and Vancouver, we go off and think about this for flags, and we basically have a session in Vancouver that is: let us come to a conclusion on the solution for this problem. Let's —
O
C
A
— packet replication. And when we talk about the process for this — because we'd essentially agreed, for the other one, that we could kick off working group last call — I think, you know, if you wanted this to be aligned, you would have to delay both of them. So do the authors prefer to do the data last call separately from flags? I —
C
I think — I have a personal preference; everybody else can go and speak up — but I do think we decoupled the thing for a reason. Yes, we did, and we decoupled the thing in order to have a stable base, and everything else can then progress at the speed that it can go and progress at. So, well, let's not couple it all back together.
P
I guess a couple things. So one is: I wonder why every node needs to send back. We do have node IDs in the trace information — it's not an IP address, but it's something. And then the other is: there has been some interest in having IOAM info go back to the source, for closed-loop control or other purposes — back to NICs — and that overlaps a little with what we're talking about with loopback, and I think it's worth discussing that afterwards.
N
I agree that we should further discuss your second comment, but the first comment has to do with the whole intention of it. So if you want to see whether the packet can actually traverse the whole network, then you would have to collect it — and if you then drop the packet, you will not see it. So it's the same thing that you have for traceroute, if it makes sense.
P
C
There's been nothing really new in this draft, other than the working group adoption thing and, well, a little bit of informational reference to the deployment draft and what have you. And we basically asked for this draft to be adopted, to go and follow — what's the RFC number, 7120-something — to go and ask for early allocation of the IPv6 option types, so that the VPP implementation and the kernel implementation can start to use IANA-allocated option types instead of their own ones — and, well, they're already diverging today.
C
D
C
To RFC 7120-something, right — I referred to that in that email. And yeah, it would be good to go and get additional reviews. At the end of the day, I think it's asking for two code points; all the considerations are in the deployment draft that goes hand in hand with the v6 draft now, so the draft itself is very simple.
Q
[name unclear] — I have a question on this draft. When I read this draft, it seems there's an alignment issue between this draft and IOAM data, because section 3 of this draft says: unless a particular interface is explicitly enabled for IOAM, routers must drop packets which contain the extension header carrying IOAM data fields. But IOAM data — it says —
Q
I
C
M
D
Thank you — thank you very much. So yeah, we will sit down and figure out the — go read 7120; we're gonna have a look at the process. I think the process is that we need to do a call for consensus about this on the working group list, and then we need to determine consensus, right — like, we could determine consensus by fiat, but that's not really how we do things here — and then we need to go ask Mirja, and then she'll say no — I'm [the wrong] area —
C
H
D
The reason that I would say we should do — two reasons why I would say we should try the early-allocation thing instead. Number one: it's a new shiny thing that I've never done in the IETF, and it would be fun to try. The actual reason is that it's a little bit weird to last call this document before we last call data. Agree? So let's do the early allocation. Yeah.
M
D
We have — we have, like, one minute, and I actually would like to take this time from the chair to start the consensus determination process. Is there anyone in this room who believes that an early allocation for these code points — so that we can stop squatting on weird bits of the option number space — is a bad idea? If there's dissent in this room about doing that, I'd like to hear it, because we're gonna start it; we're gonna do a quick call on the list.

D
Okay — that, that's the signal. So yeah, we'll go ahead and get that started. Cool, yeah.
D
A
R
My name is Tal Mizrahi, and this draft is about direct exporting. It's a new draft, but it's actually not a new idea — it's the product of a design team, which consists of the people listed here. A little bit of history: this concept was actually discussed in the working group a couple of times, and it's based on two different proposals.
R
We had quite a bit of discussion about these two proposals, and the working group chairs instructed us, the authors, to work on this together and to come up with a single solution — and this is what we did. So, first of all, I want to thank all the people who took part here and worked on this combined solution. It wasn't trivial, but I think we came up with something that makes sense. So last month, in October, we introduced the first version of this draft, which combines the two concepts.
R
So what is direct exporting? If we look at the figure here, from the left side we see a data packet that goes into the IOAM domain, and when it reaches the encapsulating node, the encapsulating node can incorporate a direct exporting (DEX) option into the data packet. Then, when the packet is forwarded along the path, the transit nodes just forward the data packet — they don't modify it.
R
So each of these nodes along the path can also export information to a collector, but the transit nodes don't need to change the DEX option. So that's the main difference here, compared to what we saw in conventional IOAM. So the question that comes up here is: why is this needed? The main motivation to use direct exporting is, first of all: since transit nodes don't need to modify the packets, it makes processing by transit nodes easier — simpler.
R
So that's one thing. And another major advantage is that, since we're adding a constant overhead — it's a relatively small overhead — compared to conventional IOAM, which adds per-hop overhead, it's significantly less overhead for each data packet. So there's a potential improvement in terms of data-plane bandwidth.
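As a rough illustration of that trade-off — the byte counts below are assumptions for the example, not values from the drafts — trace-option overhead grows with the number of hops, while the DEX option stays constant:

```python
FIELD_BYTES = 4    # assumed size of one collected data field
HEADER_BYTES = 8   # assumed fixed option-header size

def trace_overhead(hops: int, fields_per_hop: int) -> int:
    # pre-allocated/incremental trace: every hop appends its own data
    return HEADER_BYTES + hops * fields_per_hop * FIELD_BYTES

def dex_overhead() -> int:
    # direct export: constant-size option; the collected data leaves
    # the data plane via exported packets instead of riding along
    return HEADER_BYTES + 8   # e.g. optional flow ID + sequence number

print(trace_overhead(hops=10, fields_per_hop=3))  # 128 bytes carried in-packet
print(dex_overhead())                             # 16 bytes, regardless of hops
```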
R
So these are the main advantages of using the DEX option. This is the DEX option format; we can see the different fields here. Well, some of the fields are similar to some of the fields in the IOAM incremental and pre-allocated trace options. So, for example, we have the namespace ID, the same as we have in the trace options.
R
Let's see an example. In this example we see a DEX option which is encapsulated in an IPv6 extension header, which is based on the draft we saw a couple of minutes ago — the IOAM IPv6 options draft. So, as we can see here, the last four are exactly the same as we saw in the previous slide. We see the first word here, which is the option type and length, which is the beginning of the IPv6 extension header. One thing to notice here is on the right side.
R
The other thing to notice here is the option data length field, which indicates the length of this option. So, like we said, it's a constant-sized option, but this length indicates whether the option includes the flow ID and sequence number or not. Okay, so that's how we know what the length of the option is. Quick —
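A sketch of that length-based rule. The sizes are assumptions for illustration — say a 4-byte flow ID, a 4-byte sequence number, and an 8-byte fixed part — not normative values from the draft:

```python
FIXED_LEN = 8   # assumed size of the mandatory DEX fields

def optional_fields(option_data_length: int) -> dict:
    """Infer which optional fields are present from the option length."""
    extra = option_data_length - FIXED_LEN
    if extra == 0:
        return {"flow_id": False, "seq_num": False}
    if extra == 8:
        return {"flow_id": True, "seq_num": True}
    # With length alone, a 4-byte remainder cannot distinguish a lone
    # flow ID from a lone sequence number -- the ambiguity raised in
    # the question that follows.
    raise ValueError("ambiguous or malformed DEX option length")
```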
B
Yes, sorry to interrupt you, but I think it's convenient that we have this slide. So, I understand that what's included or not is implicitly defined by the length, but I think there are possible situations when — or at least, is it possible that only the flow ID or the sequence number is included? Because then it would create an ambiguous situation in understanding what this field is.
B
R
So, basically, this draft is based on the input we received from the working group. It's a combined solution, and we believe it addresses what we heard from the working group, and we believe the draft is currently pretty stable. So we would like to consider working group adoption, and we'll be happy to hear any comments.
B
J
C
D
I have a question — so, as an individual: I'll go back to your architecture diagram — this one. So this looks like a different kind of reflection-and-amplification attack than the loopback one. So I would say that when we bring this into working group change control, we should probably ensure that, you know, whatever we come up with to protect flags can also protect this, right?
D
Like — so we don't want two different sorts of amplification-protection strategies for the two different strategies in IOAM, to the extent that that's possible. Because these, you know — these look like differently sized cannons pointed in different directions, but they're both cannons, and if there's an approach that we can come up with that will protect us in all cases, that's worth working on, right? So —
D
Yeah — and if it comes out that, like, these are different enough that we need two completely different ways... But, like — so I'm thinking of something that basically involves, like, a little bit of cryptographic protection, so that if you don't actually pass that — if, like, this is ripping through the network and the transit node doesn't see the token, where the token can be something pretty lightweight — then it basically just says: no, I'm not gonna point this cannon over there. And I think that's, like, a token-based approach.
D
You need to apply the security at a low-enough layer that it needs to be pretty lightweight, right? So the constraints are the same in both environments, and that's going to constrain the space of solutions such that I suspect even the solution is going to be unified — I suspect; I haven't done the work yet. Okay.
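One possible shape of such a lightweight token, sketched purely as an illustration of the idea: a truncated HMAC over flow-identifying bytes with a domain-provisioned key. None of this comes from the IOAM drafts, and a real design would also need key management and replay protection.

```python
import hashlib
import hmac

DOMAIN_KEY = b"ioam-domain-shared-key"   # assumed provisioned per IOAM domain

def make_token(flow_bytes: bytes) -> bytes:
    # 32-bit truncated HMAC: cheap to carry, cheap to verify per packet
    return hmac.new(DOMAIN_KEY, flow_bytes, hashlib.sha256).digest()[:4]

def transit_should_act(flow_bytes: bytes, token: bytes) -> bool:
    # A transit node only triggers the heavyweight behavior (loop back,
    # direct export) when the token verifies; otherwise it just forwards.
    return hmac.compare_digest(make_token(flow_bytes), token)
```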
B
Greg Mirsky. I think that this comment brings another point for discussion: whether loopback and direct exporting — or whether direct exporting is a more general case of loopback, and whether we really need loopback, or we can get by with direct exporting, providing special encoding for the loopback scenario. Because, as I understand the encapsulation, there is no identity of the destination for the IOAM data to be exported in the data packet. So probably you envision —
B
R
B
D
So I'm gonna make an observation as the chair: we're already kind of having this discussion within the working group. So this is de facto — we should go ahead and do a call for adoption, so let's kick that off. Okay, let's —
I
R
I
I have five types of data in my node that I might report, and sometimes this may be important, and another time that may be important, and maybe you want to say: hey, you guys on this path, report this particular information. And I see that you don't seem to have any filter of any sort, to be able to indicate that sort of thing. What —
B
C
I
C
One more minute for this — real quick clarification, so — this is Frank. The trace type tells you what is collected, right? So the encapsulation node will put into the packet what is going to be collected. So the assumption is that you only collect what you want to go and see exported — it's that the encapsulator is going to go and decide what is going to be collected, and then the exporter would just export everything that is collected, right? As opposed to — so you have —
I
C
Well, I think otherwise — like, if the exporting node would go and decide what to export, right, then you can do that today. You don't need any IOAM for that; you don't need any signaling for that. The whole point of IOAM is that the encapsulating node can signal to the intermediate node: this is what you should go and grab and send off. Otherwise I can go and implement, like, something like — otherwise —
C
A
D
So, Tal, one quick question — just give me a thumbs up or thumbs down on this: after adoption, there's no need for the continuing —
R
F
— get to the end. Hi, I'm Al Morton, and Len Ciavattone and Rüdiger Geib, who have been active members of the group here, have been working on a draft with me called Metrics and Methods for IP Capacity. This is a draft where we're defining a metric for something that we can really hang our hats on — this maximum; it's kind of like a physical capacity. We should be able to measure this, and it's been kind of a holy grail of this group to measure capacity of links and so forth.
F
It's becoming extremely important in the part of the world where we're offering gigabit access to every user who can spend, I don't know, $300 a month. So it's an important thing. Now, it's at the IP layer — that's where we've defined it, with header plus payload — and we have a word definition and this equation definition in the draft, also a method of measurement. So we've got these time intervals — they are typically one second, where we're generally using 10-second measurements — and we try to find the maximum interval.
F
That's going to be the maximum capacity. And we also have a search algorithm in the method of measurement, and so the typical search kind of looks like this, where it converges — maybe overshoots a little bit at first, but it converges on one of these intervals where we can identify the maximum capacity. And we've also got these side measurements of loss, delay, delay variation and so forth.
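The shape of such a search can be sketched as follows — a toy version under assumed step rules, not the algorithm specified in the draft:

```python
def search_max_capacity(send_for_one_second, start_mbps=100.0, step=1.25, seconds=10):
    """Ramp the offered rate, back off on loss, report the max 1-s interval."""
    rate, delivered_samples = start_mbps, []
    for _ in range(seconds):
        delivered, lost = send_for_one_second(rate)   # one measurement interval
        delivered_samples.append(delivered)           # IP-layer Mb delivered in 1 s
        rate = rate * step if lost == 0 else rate / step   # overshoot, then converge
    return max(delivered_samples)                     # Maximum IP-Layer Capacity

# e.g. against a hypothetical path capped at 500 Mbps:
capped = lambda r: (min(r, 500.0), max(0.0, r - 500.0))
print(search_max_capacity(capped))   # converges on 500.0
```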
F
These are the kinds of things you can't get with TCP — but TCP is being used everywhere, and TCP falls short when you're trying to measure gigabit access, even half-a-gigabit access. There's always going to be some error there, and that's the problem with the current methods of measurement. Now, UDP is coming; QUIC is coming.
F
This is a simplified way to measure IP-layer capacity. So, we've had an enormous amount of discussion on the list, and I'm not going to go through all of this with you — in fact, sitting in the back there, I realized that this is barely readable from the back. So let me just pick up some points. Bimodal features are still being offered in Internet access, where you get a turbo boost in the first couple of seconds and then it drops back down.
F
We've got that covered now in the methods of measurement. Also, IP-layer capacity is different from goodput or throughput, so let's not mix those together — it's a different metric, okay. And, by the way, in the slides — which I hope you all take a look at — the action points are at the "@", and then the "+++" is what we did in the draft to deal with it. So we've got about 15 of those just in the September list points. So, feedback from Joachim Fabini, who's here — welcome back, Joachim.
F
He said: formulate this as a statistic based on the singleton — that's the way we've always done metrics here before. So we've got that done now. And we've also got a qualification measurement, that we talked about in the considerations and methods of measurement, and we're proposing that at a fixed 99% of whatever we come up with in the search algorithm. So that's something we can talk about. And, of course, we've got the — with the zero-zero draft —
F
— basically, we were looking at averaging times. The ones that we chose seem to work very well: one second, and ten seconds for the whole test. Matt Mathis was proposing something a little lower. We did some tests with 50 milliseconds, and it didn't seem to improve things, actually. So one of the things we did this time, which we hadn't done —
F
— in previous drafts, was to go with default values for the input parameters to the metric, and that's what the defaults are: the one second for the measurement intervals, the time for the singletons, and so forth. So that's new. And, let's say, the consideration for testing parallel flows — we've added that — and the sending-rate measurements: we've also added defaults and some additional information there.
F
So all of that went into draft zero-one, which is now the third version. So, next steps: it's good to recognize that there is still an active methods-of-measurement community in the working group, among all the protocol developers that are here, and I'm sure you guys are all interested in a little of both. So let's continue this.
F
That means I've got five minutes left, and that gives us a little time for discussion, so that's cool. So, the authors believe it's worthy of working group adoption. The key points there are harmonization: there are parallel efforts going on, which we've tried to coordinate with through the liaisons, but the best way to do this, we've found, is to get the spec going someplace. That way we get the feedback of review of the metric and the methods from the community, we get their unique perspectives, and we get to the point where they can do —
F
— the things that they want to do. It's a unique thing we can do in IPPM, after we've got the metric to point to — and our unique perspectives, as to — helping us out with protocol enhancements in whatever protocols we choose, like TWAMP, like STAMP, all of those. So I think that's the kind of second step: beyond adopting this and reaching consensus, then we can work the protocol aspects —
F
— if we do that. So I'd like to open up the microphone for comments, and — I guess, let me ask the question: who here in the room has read the draft? And there's two people; Ignacio has read it, on the jabber, and obviously there are the three authors and so forth out there. Dave — that was a time for you to raise your hand.
T
V
Almost a tee-up for my comment. So, Dave Sinicrope — I'm wearing the IETF liaison manager to the Broadband Forum hat. This work is of interest to that community: they've actually taken note of the draft, and there's some ongoing work that would utilize the draft and look at its application within that community. So — I'm not speaking on behalf of the forum, but I do know that there's interest from that community in having this draft progress. Thank you.
D
Yeah, I think we're just gonna go ahead and start a call for adoption now. Like, we're running the working group last calls like in a pipeline, but for calls for adoption — this seems separate enough that — yeah, I think everybody's already made up their minds, so we don't actually have to pipeline it. So we'll start one — I'll just basically copy and paste the last message sent to the list and change the draft names — and yeah, and then that'll end on the ninth of December as well. Cool.
S
A
O
B
Okay — yes, of course. Okay. So, as I mentioned in the comment earlier, what we want to bring up for discussion and consideration is the specific aspects of telemetry collection on-path in multicast networks. If we can go — next slide — okay. So: monitoring multicast traffic is as important as monitoring unicast. It helps to reconstruct and visualize the multicast tree, and with performance monitoring and troubleshooting. So the on-path telemetry collection technique is promising as a complement to the active OAM measurement methods.
B
So what's the problem? What we see as the problem is that the current on-path telemetry techniques may have an issue: when per-hop collection is used, the same data collected upstream will be replicated on all branches and carried on all branches, which will lead to unnecessary data being collected. Because, if you can imagine, from the upstream you're doing the replication in the branch nodes downstream, so all the information collected upstream will be replicated on each and every branch.
B
N
B
In the network — okay. There are several methods of collecting telemetry information. What we're trying to optimize — okay, what this points to — is that if data is collected in the packet itself, then, as the packet gets replicated down the branches, it leads to unnecessary information being collected in the network, or carried in the network.
N
B
I'm not saying what to do; I'm saying that there is a problem. If you agree with this problem, then — we are not sure if that's understandable, and so — okay, the problem is: if you collect the telemetry information in the packet itself, then, as you replicate the packet, the information collected upstream of the branch will be copied into all the replicated packets. Okay? Would you agree with that — that that's a problem? So —
B
Okay — so this is for a very specific method of doing telemetry collection, using either postcard — basically, it's a per-hop solution; it can be, let's say, the DEX solution or postcard telemetry — in which each node exports the data. And the observation of the draft was that the current information being proposed is not sufficient to do the association.
B
C
One quick question here: have you considered — so, we don't really specify the semantics of the node ID today in IOAM data, in the data draft, right? So what you can go and do is: you can encode the node ID plus your branch ID in the node ID field, right? So, yeah, I don't believe that you need an extra field for that. Okay — again —
B
We can discuss whether that is already sufficient or not. This proposal was, I believe, from the same authors that are part of the design team for DEX, so they looked at their earlier proposal with the postcard, and they suggested — whether there is already an informational element that can be included into the exported data.
S
Haoyu, from Futurewei. So, currently we haven't defined how you would actually implement this branch label — you can use any existing mechanism, in DEX or whatever you can use. But here we just point out what information you would actually need to identify the branch and help you reconstruct the entire multicast tree. Mm-hmm.
B
Okay — okay, next one, please. Okay, so another method of collecting information — a per-segment solution — is based on hybrid two-step. The hybrid two-step is not collecting data in the packet itself, but rather constructs a similarly encapsulated special packet that collects data, following the packet that is the trigger — which is similar to postcard and DEX. The only difference is that postcard and DEX operate per hop, and this collects all the information from end to end.
B
So you can see that, without any modification, hybrid two-step will lead to replication — unnecessary replication — of collected data, because the HTS packet that collects data uses the same encapsulation, so it will be replicated like a data packet, and thus will lead to unnecessary multiplicity of information. So instead, the optimization is basically very simple: the first packet is transported with all the information, and if the packet needs to be replicated — so if this transit node happens to be a branch — that node generates a new HTS packet.
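The branch-node behavior described above can be sketched like this — an illustrative model only; the names and structure are not from the draft:

```python
COLLECTED = []   # stand-in for the telemetry collector

def export_to_collector(records):
    COLLECTED.append(list(records))

def forward_hts(node_id, records_so_far, downstream):
    """Return (next_hop, records) pairs for the outgoing HTS packets."""
    records = records_so_far + [node_id]        # add this node's telemetry
    if not downstream:                          # leaf: export the final segment
        export_to_collector(records)
        return []
    if len(downstream) == 1:                    # plain transit node
        return [(downstream[0], records)]
    export_to_collector(records)                # branch: export upstream data once...
    return [(child, []) for child in downstream]   # ...and start a fresh HTS per branch

# Tree A -> B -> {C, D}: upstream data [A, B] is exported once at the
# branch, instead of riding down (and being duplicated on) both branches.
for hop, recs in forward_hts("B", ["A"], ["C", "D"]):
    forward_hts(hop, recs, [])
print(COLLECTED)   # [['A', 'B'], ['C'], ['D']]
```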
B
As a result, you can see here that only information on this branch will be collected. Then you will have only one end-to-end collection on one branch, and everything else is per branch, from the branch node to the leaf. And that's basically — so, in addition, postcard may have an enhancement for optimization. Yes — oh, sorry — okay. So that's the position: we ask for your consideration, comments on the list, and discussion — and, down the road, somewhere, adoption. Thank you. — Thank you for bringing that. Alright.