From YouTube: IETF108 GROW 20200727 1410
Description
GROW meeting session at IETF108
2020/07/27 1410
A: Blue sheets are, as we just said, taken care of by Meetecho. We do need a person to take minutes, so if somebody could do that, that would be great.
A: Ideally, we have a presentation about BMP at the hackathon from Thomas.

A: Okay, let me find you.
B: You're ready? Okay. So I'm presenting for the BGP Monitoring Protocol (BMP) team from the IETF hackathon. Next slide, please.
B: Our main focus was primarily on correlating the BMP BGP Loc-RIB metrics with IPFIX, so that we can do a flow aggregation. We have a couple of BMP drafts which we were covering: namely the BMP Loc-RIB draft, then the TLV support for BMP route monitoring and peer down messages, as well as the enterprise-specific TLVs, the path marking, and the BGP policy and attribute trace.
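The Loc-RIB/IPFIX correlation described here amounts to a longest-prefix-match join between flow records and Loc-RIB entries. A minimal sketch of that idea follows; the field names and data shapes are illustrative assumptions, not the actual pmacct schema.

```python
# Hypothetical sketch: annotate an IPFIX flow record with the BGP attributes
# of the longest matching BMP Loc-RIB prefix. Field names are made up.
import ipaddress

# Loc-RIB entries as prefix -> attributes, as a collector might export them.
loc_rib = {
    ipaddress.ip_network("10.0.0.0/8"): {"as_path": "65001 65010", "next_hop": "192.0.2.1"},
    ipaddress.ip_network("10.1.0.0/16"): {"as_path": "65001 65020", "next_hop": "192.0.2.2"},
}

def annotate_flow(flow):
    """Attach the attributes of the longest matching Loc-RIB prefix to a flow."""
    dst = ipaddress.ip_address(flow["dst_ip"])
    matches = [n for n in loc_rib if dst in n]
    if not matches:
        return {**flow, "route": None}
    best = max(matches, key=lambda n: n.prefixlen)  # longest-prefix match wins
    return {**flow, "route": loc_rib[best]}

flow = {"dst_ip": "10.1.2.3", "bytes": 1500}
print(annotate_flow(flow)["route"]["as_path"])  # 65001 65020
```

In a real deployment the lookup table would be far larger and kept in a trie, but the join logic is the same.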
B: Besides that, from a functionality perspective, we were also looking at the performance impact: how BMP affects the CPU and memory resources of the route processor, and also how it impacts the BGP route propagation throughout the network.
B: For this, from the software perspective, we were using pmacct for the data collection. On the big data side, we had Kafka as a message broker, Druid as a time-series DB, and Pivot as a user front end. At the bottom there is a nice tutorial if you want to set it up on your own. We were also working on a BMP dissector update for Wireshark, covering the new BMP TLVs which I described at the beginning of the slide deck, and for the traffic and route generation we were using [unclear]. Next slide. So that was the network we created in our lab environment.
It's an inter-AS MPLS option C network with a couple of PE routers, CE routers, ASBRs, and route reflectors. Everything was BGP only. On all the routers we had BMP enabled for adj-RIB-in/out and Loc-RIB, pre- and post-policy, basically everything they were capable of. And I do not see the slide deck anymore.
B: We had the CPU and memory collection with YANG Push, and we identified that we have automated the configuration of the network, but where we need to improve is the test verification; we want to automate that as well. The next step is also that we want to visualize the propagation delay, basically using the BMP-collected timestamps. Next slide, please. On the pmacct side, we used nfacctd and pmbmpd.
B: This time we were able to correlate the BGP Loc-RIB with IPFIX, even on the PE routers, where prefixes have route distinguishers included in their updates. We were also working on decoding the TLVs in the BGP policy and attribute trace, and we made some progress there. By doing that work, we identified a possible improvement in the path marking TLV: since multiple paths can be present in the route monitoring message of the BGP PDUs, in the current implementation we need a path marking for each path, and it could theoretically happen that for some paths we don't have a path marking; currently that path marking would be unknown. By introducing indexing, we would only do the path marking for paths where we have information, thus reducing the amount of data we're sending to the collector. Next slide.
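The indexing improvement described above can be shown with a toy encoding. This is not the actual draft wire format, just an illustration of why an explicit path index lets unknown paths be omitted instead of carrying placeholder markings.

```python
# Toy illustration (not the real TLV encoding) of positional vs. indexed
# path marking for multiple paths in one route monitoring message.

UNKNOWN, BEST, ECMP, BACKUP = 0, 1, 2, 3

def positional_tlvs(statuses):
    # One TLV per path, in order; unknown paths still consume a TLV.
    return [(s,) for s in statuses]

def indexed_tlvs(statuses):
    # Each TLV carries (path_index, status); unknown paths are simply skipped.
    return [(i, s) for i, s in enumerate(statuses) if s != UNKNOWN]

statuses = [BEST, UNKNOWN, UNKNOWN, ECMP]
print(len(positional_tlvs(statuses)))  # 4 TLVs, two of them "unknown" filler
print(len(indexed_tlvs(statuses)))     # 2 TLVs, only the known paths
```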
B: And these are the drafts which we have currently implemented. We did some stress tests, as described earlier, for the CPU and memory usage, which I will detail in the next slide, and we were also working on the Wireshark BMP dissector. Yeah, so this graph here didn't make much sense to us; we need to investigate further. Basically, what it shows is that after we enabled BMP, we saw that the CPU usage increased.
B: We didn't expect that much without having the updates in the network, and we think it has to do with the way we measure the CPU usage. We want to clarify that before the next hackathon and find out which exact CPU resources we need to monitor there to make an accurate description of the CPU consumption. Next slide.
B: This is about the memory increase before and after BMP is enabled, when one hundred thousand, five hundred thousand, or one million routes were introduced, and we see a slight increase. This is actually expected, because when you have multiple peers and need to send the route monitoring messages towards the collector, there will be some queueing on the router side, and this will lead to a slight increase in the memory usage. Next slide, please.
B: The BMP dissector code at Wireshark is currently committed. It should be in the next build within the next couple of days and can then be downloaded from the Wireshark website. These are the next drafts which we'll be covering: the BMP TLV, the enterprise-specific TLVs, and the path marking. Next.
B: We also had a master student from ETH here in Zurich, Livio, with us. He was working on the visualization of the BMP data. What you see here is, basically, from the router's perspective, taking the BGP next hop for given prefixes into account and then drawing the network map. For that purpose the BGP Loc-RIB metrics from the route monitoring messages were being used, and he's currently working on new visualizations, where he's also including information from the policy tracing and also the path marking.
B: So we can actually see which BGP routes are actually being installed, which are being used for ECMP or as primary, and also which policy was actually in charge, showing, basically, which BGP policy introduced which BGP attribute changes into the network.
B: Next slide. Here is a short explanation of the data pipeline he was using. We had pmacct for the data collection, Apache Kafka as the message broker, Druid as the time-series DB, and Pivot, based on Node.js with Plywood, basically, which queries the database, and d3.js for visualizing the graphs. Next.
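The next-hop-based network map described above boils down to turning BMP route monitoring records into a node/edge list for the visualization layer. A small sketch of that transformation follows; the record fields are assumptions for illustration, not the student's actual schema.

```python
# Sketch: derive a network graph from BMP route monitoring data. Each
# (router, prefix, next_hop) record becomes an edge from the router to its
# BGP next hop; the resulting node/edge lists feed a renderer such as d3.js.
from collections import defaultdict

records = [
    {"router": "pe1", "prefix": "10.0.0.0/8",  "next_hop": "asbr1"},
    {"router": "pe1", "prefix": "10.1.0.0/16", "next_hop": "asbr2"},
    {"router": "pe2", "prefix": "10.0.0.0/8",  "next_hop": "asbr1"},
]

def build_graph(records):
    edges = defaultdict(set)                    # router -> set of next hops
    for r in records:
        edges[r["router"]].add(r["next_hop"])
    nodes = set(edges) | {nh for hops in edges.values() for nh in hops}
    return sorted(nodes), {k: sorted(v) for k, v in edges.items()}

nodes, edges = build_graph(records)
print(edges["pe1"])  # ['asbr1', 'asbr2']
```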
B: So what we learned: I think what was special in this hackathon is that we had three newcomers, and they brought some quite interesting questions and new ideas into it. One here, just as an example: the BGP peer establishment, the peer up and peer down, can depend on a BFD session.
B: Basically, an interesting idea could be to enhance the peer up and peer down messages so far that we can actually tie them to an actual BFD session, so that we know, when a peer goes down, which BFD session was responsible for it, or whether BFD caused that peer to go down. The YANG Push CPU/memory collection was actually very detailed, and that helped us to really correlate it to the BMP timestamps.
B: Questions? If not, thank you very much. I'm looking forward to your feedback on the mailing list, or if you have any interest or questions, it would be great to have more people at the next hackathon.
D: Thank you so much for your presentation. It is much appreciated that you and the team set out to have a hackathon, despite the IETF not being its usual IETF. So I very much appreciate you guys putting in this effort.
D: I think we'll have a presentation at the next IETF meeting, but in the meantime I would encourage people to take a look at the GROW mailing list and chime in on the zero-zero draft about AS path prepending.
E: Hey Job, hey chairs, can you hear me? (I can hear you.) Thank you, fantastic. So, dear chairs, I wanted to ask you one thing. We have this BMP Loc-RIB draft, which is pretty much, you know, finished; there have not been any comments or any updates, apart from the version bump, for one year now.
E: If I look at the IETF status, it's in last call. I mean, shall we give it the last kick and move forward or not? The reason I ask this is because of the two reasons I mentioned: there has been essentially no work for almost one year, but, as you see from the hackathon, we are already, you know, building software a couple of layers above that, right? So, I don't know, we have something that is working and there is no update. What shall we do?
D: So the chairs will take it up as an action item to analyze the current situation. Sorry, there's an incredible echo, so it's a little bit hard to talk.