From YouTube: IETF112-GROW-20211109-1600
Description
GROW meeting session at IETF112
2021/11/09 1600
https://datatracker.ietf.org/meeting/112/proceedings/
A
You can catch the chair who copies and pastes the slides from the last meeting. That's me, no joke. I'm Chris; the other guy is Job, or "Yop" if you prefer. This is IETF 112, and we're somewhere in the EU time zone; I think we were supposed to be in Madrid. Here's the Note Well. Everyone knows where to find this. You should read this eye chart, but it's probably not eye-chart size on your computer, so you can read it. All of our resources are also available there; wherever it says 111, it should say 112. Ah boy, I'm going to hear about this later.

All the slides should be updated on the meeting materials page. Fortunately, Meetecho has a little quick link for that, which is super nice; thanks to the Meetecho folks. Blue sheets we don't have to do anymore; that's taken care of in the Meetecho thing. We do need a minute taker, so if someone wants to volunteer to take minutes, that would be terrific.
A
I can't tell if it makes it any better or not anyway. So until somebody speaks up and says "I'll do the minutes," we're going to wait.
A
I think Mr. Holliman is telling me Joel will do the minutes. Terrific, he's a great minute taker. Mike Holliman perhaps has a link that will tell us how to use the shared slides for next time. Agenda bashing, charter update... oh, there's no charter update this time. Sorry, here's our agenda.
A
We have some TLV stuff for BMP from Paolo, an update to his document, five or ten minutes' worth of chat time, and then Edward has an on-host load balancing for telemetry conversation, some research work I believe he and Thomas did in the hackathon. I think so; I could be mistaken about the hackathon part.
A
Please feel free to correct me, Edward, when you get up. I think at this point we're now finished; it's time for Paolo. So unless there are any questions we want to get out of the way first, I'll give people a couple of minutes while I find the slide deck. Or, Paolo, you can do the slide deck yourself if you prefer; just do the screen sharing thing and we'll...
B
Oh, I prefer if you can share the deck that I sent, just because I have trouble sharing my screen with Meetecho. Yeah, no problem. And I'm streaming video, so that you can look at me as well.
B
All right, okay, so I can start. Yeah, five minutes about TLVs for Route Monitoring and Peer Down messages. It's just a very quick update. If you can go to the next slide: a very short recap. What the draft is doing is addressing that not every BMP message supports TLVs, namely the Route Monitoring and the Peer Down messages.
B
Still as a recap (as you will see, it's two slides of recap and one slide of content): we have generic and indexed TLVs. The generic TLV is just the TLV you've known all your life; it doesn't need any explanation. The indexed TLV is the same TLV, but you also have that one index field after the length. Okay, so why is the indexed TLV needed? In the end, it is needed for Route Monitoring messages. We took kind of a scenic route in saying: what do we do if we have a TLV in a Route Monitoring message? Do we want to address the whole thing, all the NLRIs that we find encoded in the BGP message carried in the Route Monitoring message, or do we want to address just one specific NLRI? The index comes from that need. Next slide. Essentially, what we did, and that's the only change that is in revision number six, is that if the index is zero, then the TLV applies to the whole BGP message; otherwise, if it's a positive number, then it applies to a specific NLRI.
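For illustration only, here is a rough C sketch of how such an indexed TLV could be laid out and interpreted, following the description above (type and length, with the index field right after the length, and index 0 meaning the whole message). The field widths and names are assumptions for the sake of the example; the draft itself is authoritative.

#include <stdint.h>
#include <stdio.h>

/* Assumed layout of an indexed TLV: Type, Length, then the Index field
 * right after the Length, followed by the value bytes. */
struct bmp_indexed_tlv {
    uint16_t type;   /* TLV type */
    uint16_t length; /* length of the value portion in octets */
    uint16_t index;  /* 0 = whole BGP message, n = n-th NLRI */
    /* value bytes follow */
};

static void describe_tlv_scope(const struct bmp_indexed_tlv *tlv)
{
    if (tlv->index == 0)
        printf("TLV %u applies to the whole BGP UPDATE\n", tlv->type);
    else
        printf("TLV %u applies to NLRI #%u only\n", tlv->type, tlv->index);
}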
B
Very simple in concept, I would say. This is the only change that we did, and I think that was the only outstanding piece of work that really needed to be done. I don't know if anybody has some feedback; I mean, I anticipated this change already at the previous IETF meeting, so we had a couple of options, and this one kind of looked the most natural for us authors. Also, I received some feedback and there seemed to be kind of a consensus to go down this way.
B
I don't know if there is any feedback from anybody in the audience. Otherwise, if not, I would say to the chairs that this seems to be kind of done work, and maybe we can last call it or something like that and move forward.
A
Thank you. I think, if you are asking for working group last call, you should definitely make a comment on the mailing list so people can speak up; that's the standard thing chairs always say.
B
Fantastic, we'll do that right after this.
A
Other than that, thanks for the presentation, and thank you, thank you. No one else has put their hand up to say anything, so definitely put your message forward; please make a working group last call request for this now.
C
I think I'll try it on my own; we'll see how it goes. Okay, unfortunately the Meetecho test room was down, so let's see how this goes.
C
Okay, awesome. In that case, let me get started. For context, compared to the previous stuff this is a bit more academic work. The context was a master's thesis; in this case Thomas was so kind as to supervise the whole thing, and Pierre, also on this call, was providing tons of help. So thanks to them up front. Let me get started then.
C
The context for this whole thing is the notion of device telemetry, and what we want, to understand the networks, is to combine different perspectives, to basically combine information from various angles. Specifically, we might want to extract information about whether a certain flow belongs to a specific VPN by combining control plane and forwarding plane information. The way we obtain that information in the first place is via network telemetry protocols, and these can be various things.
C
It can be BMP for BGP-type information, IPFIX for flow information, or YANG Push for device-level stuff. To collect that information at all, we need to make sure that it gets routed to the right place from across the network.
C
Ideally, we just define a single, globally unique endpoint and route everything there. Luckily, that also allows us to use anycast; for the path part we can just ECMP balance across it. And finally, once the traffic actually hits our machine, VM, whatever...
C
So, in the context of this work, you have to imagine we have a network with routers pushing information to some sort of host.
C
The main interesting part here is that to combine the information, to combine the different perspectives, on that target host, we need to make sure that all these different perspectives from the same device arrive at the same endpoint, because otherwise we just don't have the data streams to combine them early. We could, of course, globally tag everything, throw everything into some data lake type setup and then just do the matching there, but first of all that is expensive.
C
The collector that we used here was pmacct, nfacctd; thanks to Paolo for creating these. Later on, you can imagine, this is then used for some sort of OLAP querying where you want to find out information about your network, but that's out of scope here. As a baseline, we can look at something really simple, where we combine that data by introducing a layer of indirection. We have N telemetry daemons that are collecting all that information, but to make sure that the right streams end up at the right daemons, we put some proxies in front of them, where all they do is basically forward the traffic. In this case we can just use something like HAProxy for the TCP part, and nfacctd itself can actually act as a UDP proxy, so we can use that for IPFIX. A relatively simple setup.
C
The main challenge in this type of setup is the mapping, because we need to make sure that stuff ends up at the right target, the file in the little box in the middle in this case. The challenges here are relatively straightforward, I think. There's the reliability of having additional components that may fail at any time; they suffer from update cycles, they suffer from process lifetime cycles, all of that stuff. There is overhead.
C
You have proxies that you don't strictly need in that sense. You have a latency impact, which you probably don't care about so much, but in general there are just limits to what we can do here. And the main administrative challenge is the configuration overhead, because we need to configure, it's basically these files, two different proxies with different configuration syntaxes to support all of these devices sending across multiple telemetry protocols (in my example only two, but it can be any of them) and to then correctly balance it across the different backends. This is what we want to solve. So what can we do? There is a socket option, just as a refresher, called SO_REUSEPORT, and that allows us to basically statelessly load balance among a bunch of different processes.
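As a refresher of what that looks like on the socket API side, here is a minimal C sketch (with a made-up helper name) of a collector daemon opening a shared UDP listener with SO_REUSEPORT; every daemon in the group performs the same bind and the kernel spreads incoming datagrams across them.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>

/* Hypothetical helper: open a UDP socket (e.g. for IPFIX export) that
 * shares its port with the other collector daemons via SO_REUSEPORT. */
int open_shared_listener(unsigned short port)
{
    int one = 1;
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0)
        return -1;

    /* Every daemon sets SO_REUSEPORT before bind(), so they can all
     * bind the identical address:port and let the kernel balance. */
    if (setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one)) < 0)
        return -1;

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    addr.sin_addr.s_addr = htonl(INADDR_ANY);

    return bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ? -1 : fd;
}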
C
That's super convenient, but unfortunately the problem with that approach is that by default, in its standard configuration, it takes the hash of the whole flow, which means that different telemetry protocols will most likely end up at different daemons, different telemetry collection daemons. So you're breaking the link, which is the whole purpose of doing this: you want to correlate these two different data streams. Luckily, this is something we can influence using BPF.
C
I think it's just called sk_reuseport, a program type that we can attach to help us influence the selection. As a refresher, BPF is the subsystem that basically allows us to attach logic to predefined hooks, and one of these hooks is specifically the socket selection in the SO_REUSEPORT mechanism.
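A minimal sketch of what such a program could look like, assuming IPv4 only, ten intended collector daemons, and a REUSEPORT_SOCKARRAY map named collectors; one such map and program would be attached per listening port and protocol, with daemon i registered at slot i in each, so the same exporter lands on the same daemon everywhere. This is an illustration of the idea, not the code from the thesis.

// Minimal sk_reuseport sketch: pick the collector socket from the source
// IP only, so all telemetry from one exporter lands on the same daemon.
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <stddef.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

#define NUM_DAEMONS 10  /* intended number of daemons (assumption) */

struct {
    __uint(type, BPF_MAP_TYPE_REUSEPORT_SOCKARRAY);
    __uint(max_entries, NUM_DAEMONS);
    __type(key, __u32);
    __type(value, __u64);
} collectors SEC(".maps");

SEC("sk_reuseport")
int select_by_source_ip(struct sk_reuseport_md *md)
{
    __u32 saddr;

    if (md->eth_protocol != bpf_htons(ETH_P_IP))
        return SK_DROP;                 /* IPv6 left out of this sketch */

    /* Read the IPv4 source address relative to the network header. */
    if (bpf_skb_load_bytes_relative(md, offsetof(struct iphdr, saddr),
                                    &saddr, sizeof(saddr),
                                    BPF_HDR_START_NET))
        return SK_DROP;

    /* Same exporter IP means same slot, regardless of transport protocol
     * or source port, which is what keeps BMP and IPFIX together. */
    __u32 key = saddr % NUM_DAEMONS;

    /* If the slot is empty (daemon restarting), drop instead of
     * spilling the stream onto a different daemon. */
    if (bpf_sk_select_reuseport(md, &collectors, &key, 0))
        return SK_DROP;

    return SK_PASS;
}

char _license[] SEC("license") = "GPL";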
C
The assumption being that all the telemetry is coming from a single interface that has only a single IP address. And the second step is that we no longer balance across all the listening daemons, but across the intended number, which also means that we don't need to dynamically reallocate, or try to solve the problem of reallocating, established TCP connections across multiple daemons when some of them are going through some sort of update cycle.
C
Imagine you have 10 of them listening, and then you have a rolling update where you restart one after the other with some new binary version; then you basically have chaos going back and forth. But if you just say that we always intend to have 10 of these, we can simply respond with a TCP reset or just drop the UDP datagram.
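On the userspace side this could look roughly like the following sketch, again with made-up names: the socket is bound with SO_REUSEPORT as before, one socket per group attaches the selection program, and every daemon pins its socket into its fixed slot of the shared map, so the slot simply stays empty while that daemon restarts.

#include <stdint.h>
#include <sys/socket.h>
#include <bpf/bpf.h>

#ifndef SO_ATTACH_REUSEPORT_EBPF
#define SO_ATTACH_REUSEPORT_EBPF 52
#endif

/* Hypothetical helper: listen_fd is a socket already bound with
 * SO_REUSEPORT, map_fd is the REUSEPORT_SOCKARRAY used by the eBPF
 * program, prog_fd is the loaded sk_reuseport program, and slot is this
 * daemon's fixed index (0 .. intended number of daemons - 1). */
int register_collector(int listen_fd, int map_fd, int prog_fd,
                       unsigned int slot, int attach_program)
{
    /* Only one socket per reuseport group needs to attach the program. */
    if (attach_program &&
        setsockopt(listen_fd, SOL_SOCKET, SO_ATTACH_REUSEPORT_EBPF,
                   &prog_fd, sizeof(prog_fd)) < 0)
        return -1;

    /* Pin this socket into its fixed slot. While the daemon is down the
     * slot is empty and its traffic is dropped or reset instead of being
     * rebalanced onto another daemon. */
    uint64_t value = listen_fd;
    return bpf_map_update_elem(map_fd, &slot, &value, BPF_ANY);
}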
C
Using this approach, we basically now have stateless balancing. We don't need to deal with config files, because they don't exist in that sense; well, I mean, we still of course need to know how many we intend to have, but that is basically it. It's based on device identity only, the link here being that a single IP address corresponds to a single device.
C
If that wasn't the case, we could also do more complicated stuff; you can just parse the skb from within that program, that is possible. It's stable across restarts, in the sense that we are not throwing connections back and forth. And you prevent a cascading failure scenario: imagine a packet-of-death scenario that exploits this collector.
C
If you simply load balance across everything that is listening, then basically every time the connection tries to get itself re-established, or the UDP packet gets resent in the IPFIX case, you would do a round robin of killing these daemons.
C
This way, you only have a single victim that basically stays down, and doing it this way requires no configuration; it's just this one piece of information about how many you're going to have. So, to summarize what we've seen here: for network telemetry aggregation, meaning combining different streams, we have additional requirements on load balancing that we wouldn't usually have, simply by having the need to combine the streams across different transport protocols and transport ports down to the same process, and having to balance it this way.
C
We can solve this problem relatively straightforwardly using eBPF, without additional daemons on the system side. This is a pretty nice property, because that means no vulnerabilities at that level, no upgrade cycles, no "I misread the configuration" or incompatible configuration updates. With the compile once, run everywhere functionality that libbpf provides, it's portable enough. In that sense, we can balance stuff across an arbitrary number of endpoints, because scaling up and down is relatively easy in this type of setup.
C
This basically allows us to do the correlation properly, by always having this data correctly correlated instead of split across multiple data collectors and then having to aggregate it at a later stage, and that's much easier to maintain. As a bonus, actually, if you try to benchmark it by running a BMP load generator, in terms of connections we're behaving much better; I think we're saving around 20% CPU time by doing it this way. And the ramp-up is actually feasible: if you run the load generator against HAProxy, I basically have to slow it down before things just crash, whereas in the kernel-level balancing case that is not a problem at all. And that's basically already it; the slot ended up being much larger than anticipated.
C
So I guess that means we have plenty of time for questions this time. If you want to read the same stuff in the form of a thesis, with a huge appendix and 70 pages and blah blah blah, feel free to visit the link. That's it for the scheduled items of my talk; I'm happy to answer any questions at this point.
D
Ah, Edward, thank you so much for this presentation. This is Edward's first IETF, so I think you did a fantastic job of sharing your insights with our community.
C
All right, well, given that there doesn't seem to be anything in the chat, nor is anyone raising hands to interrupt, in that case thanks to everyone who was also presenting, and a special thanks again to Thomas, Pierre and Paolo.
A
All right, I think that's the end of our presentations for the meeting, unless anybody else has other things to talk about.
D
Anything you want to cover? I want to go back to upgrading RPKI validators... okay.