From YouTube: IETF115-GROW-20221111-1200
Description
GROW meeting session at IETF115
2022/11/11 1200
https://datatracker.ietf.org/meeting/115/proceedings/
A: All right, welcome to the GROW session at IETF 115 in London. If you're not supposed to be in London, please consult an adult. Thank you so much for attending this session. The GROW working group concerns itself with operations of the global Internet routing system, and these days we also spend a lot of time on BMP, the BGP Monitoring Protocol.
A: Today's session will be mostly split between a focus on BMP and a novel proposal about describing BGP communities in a JSON format. Chris is my co-chair; I'm Job Snijders.
A: Please pay attention to the Note Well; the Note Well outlines expectations about behaviors and processes, but I think by now, at the end of the week, most of you have seen this slide multiple times. Next slide, please. Here's an overview of the resources related to the GROW working group. Our agenda is available online. Everybody submitted their slide deck for today's session, and you can find those online.
A: We need somebody to keep notes of what was said and processed in this proceeding. Is anybody willing to volunteer to be our minute taker today?
A: Well, if nobody is volunteering, I'm going to try to give the job to somebody. Jeff, would you be so kind as to be our minute taker for this session?
A: The jabber scribe's task is to keep an eye on the chat room, and if questions emerge, we try to make sure that they are handled. Oh, it's Zulip these days; I'll keep an eye on Zulip. Agenda bashing:
A: We have an update on BMP high availability data collection, updates about the BGP YANG model, and finally BMP path marking. Is there anything that should be added to this agenda? Oh wait, I see Martin telling us an item didn't make it into the slides. Our last item is Martin Pels on a mechanism to describe BGP communities in a JSON format. So I think with that we'll need roughly an hour. Next slide, please.
A: All right, let's kick it off. Hello, would you mind stepping forward to the stage and sharing with us your thoughts and insights on the BMP E-bit and TLVs? Okay.
D: Can you hear me? Yes, so actually I have three items to talk about. [inaudible aside about the microphone]
D: Wow, that close? Okay, fantastic, really close. So I have three items: the TLV draft, the E-bit, plus this brand new idea that all of you are going to love, or not-love. So next slide, please. We start with the TLV draft; next slide already, because this is just the problem statement. You remember it from the past: it's just to make sure that there are TLVs in every BMP message, and we were missing that for Route Monitoring and for Peer Down.
D: So what happened since version seven? First of all, we skyrocketed to version ten, and there have been a lot of minor fixes, but the two main things are these. First, we are extending the Route Monitoring message with the TLVs, and I think in a protocol it's important to be consistent with something else that happens in the protocol. So until the previous revision of the draft, we were being consistent with Peer Up, Peer Down, and, let's say, the Init and Termination messages.
D: But actually, you know, that was kind of sub-optimal. I think Jeff was the very first to notice that you have to parse the whole PDU in order to then skip to the TLVs, and the TLVs may convey some characteristic about the PDU, so it was not so nice and beautiful. But let's say overall we got through the revisions with comments like the one from Jeff saying: okay, it's not nice, but it's okay.
D: Finally, I took the decision that if we should be consistent with something, let's be consistent with the Route Mirroring message, which is entirely TLVs. So now we have a BGP Message TLV, and the PDU is part of the BGP Message TLV. So now you can do whatever you like with this BGP Message TLV: you can put it last or first, you can sandwich it between other TLVs, and things like that. I think it makes sense, and it's much more beautiful than before.
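To make that layout concrete, here is a minimal sketch (the type codes are invented for illustration; the real code points are defined in the draft) of a message body that is a flat sequence of TLVs, with the BGP Update PDU carried inside its own BGP Message TLV, so a parser can skip or reorder it like any other TLV:

```python
import struct

TLV_BGP_MESSAGE = 0  # hypothetical type code for the "BGP Message" TLV

def encode_tlv(t, value):
    # 2-byte type, 2-byte length, then the value (network byte order)
    return struct.pack("!HH", t, len(value)) + value

def decode_tlvs(buf):
    """Walk a flat sequence of TLVs; each one can be skipped without
    understanding its contents, including the enclosed BGP PDU."""
    tlvs, off = [], 0
    while off < len(buf):
        t, length = struct.unpack_from("!HH", buf, off)
        off += 4
        tlvs.append((t, buf[off:off + length]))
        off += length
    return tlvs

# A stand-in BGP Update PDU and one other (hypothetical) informational TLV.
pdu = b"\xff" * 16 + b"\x00\x17\x02"   # placeholder bytes, not a real Update
body = encode_tlv(7, b"example") + encode_tlv(TLV_BGP_MESSAGE, pdu)

# The BGP Message TLV can sit first, last, or sandwiched between others:
for t, v in decode_tlvs(body):
    print(t, len(v))
```

The point of the sketch is the flat structure: nothing about the BGP Message TLV has to be parsed to reach its neighbors.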
D: Based on Tim Evans' feedback on the list, I introduced the Group TLV. The Group TLV is there because Tim said that we can essentially model one-to-one, TLVs to NLRIs; we can do one-to-one because there is the index 0, meaning a TLV applies to everything, but we cannot do an M-to-N kind of relationship. So I said, let's do the Group TLV. It can get complicated, but it is one of the ways forward.
D: Right, and what is the status? I would say the status is simply that there is a little bit more feedback from Tim Evans that I have to process. Actually, I am waiting for some answers from him on the list, and this is the status; probably there will be more changes coming up in the next version of the draft. So this is it for the TLV draft.
D: I don't know if there are any comments, if you want to make them now. Nothing? Chair? Remote chair? All right, next slide. So this is about the E-bit; next slide.
D: And next slide again, because I'll just recap the problem statement for the E-bit, which you already know from previous versions. So what happened is that, first, the document was adopted by the working group, thank you so much, and, to be honest with you, there have been only really minor updates to the document.
D: Nothing really worth mentioning. Maybe I removed a little bit of text and wording because it was repeated from the TLV draft, and I said that instead of repeating all of this, let's just refer to the other draft.
D: I still have one open question; I don't know if anybody has any thoughts on it. The E-bit, so far, we apply to the informational TLVs and things like that. But of course we have a Stats message, a Statistics message, and that is in a TLV format too. So I was wondering: would the E-bit apply to the Stats message as well? If so, a little bit of text should be added, and I am really looking forward to feedback on this aspect, if there is any. I think it makes sense, but I'm open to your thoughts. And with this I'm finished on the E-bit as well; any thoughts, comments?
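As a sketch of what the E-bit means to a parser (the bit position and the PEN-first value layout are assumptions made for this illustration; the E-bit draft is authoritative): the enterprise bit rides in the TLV type field, and an enterprise-specific TLV prefixes its value with a Private Enterprise Number:

```python
import struct

E_BIT = 0x8000  # assumed: enterprise bit as the top bit of the 16-bit TLV type

def parse_tlv(buf, off=0):
    """Return (type, pen, value); pen is None for standard TLVs."""
    t, length = struct.unpack_from("!HH", buf, off)
    value = buf[off + 4:off + 4 + length]
    if t & E_BIT:
        # Enterprise-specific TLV: a 4-byte Private Enterprise Number
        # (PEN) prefixes the value (layout assumed for this sketch).
        pen = struct.unpack_from("!I", value)[0]
        return t & ~E_BIT, pen, value[4:]
    return t, None, value

# A vendor TLV: type 5 with the E-bit set, PEN 9 (hypothetical), payload b"x".
raw = struct.pack("!HH", 5 | E_BIT, 5) + struct.pack("!I", 9) + b"x"
print(parse_tlv(raw))  # (5, 9, b'x')
```

The same dispatch would apply to any TLV-formatted message body, which is exactly the open question about the Stats message above.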
D: Cool, next slide, please. Thank you. So this is an idea that we have been discussing a little bit with Camilo as well, which is logging of routing events in BMP.
D: So what is the idea? Next slide, please. What?
D: Okay, Chris. Sorry, Chris. So what is the basic idea here? The idea is that we have state synchronization with Route Monitoring, meaning that it is mandatory to have an initial flooding of data and then we get all the rest. We have debugging with Route Mirroring, and with Route Mirroring we should send a verbatim copy, very high fidelity, of what's going on. And then we have session data, and we have stats, and all of that.
D: But what we are really missing is a message type that is event-driven. So, say you have a policy that is blocking or denying a prefix; then we want to be notified, let's say. Or, for example, there is some sort of validation taking place on the router, RPKI validation or some other sort of validation, and we want to know that something didn't validate, and things like that.
D: So all of this, as I was saying, doesn't exist so far. Also, for the "what changed" analysis, you really have to do a differential analysis, for example between pre-policy and post-policy.
D: You really have to go and scan the whole pre-policy and the whole post-policy, and then you should derive what changed there. But you could also have some sort of notification, like, again: this specific prefix didn't make it to the post-policy, and things like that. So this is the intuition. It kind of makes sense to me; I hope it does for you guys as well. Super duper looking forward to your thoughts. Next slide, please. So that was the intuition; this is the execution.
D: So what we essentially did is, again, a message body that is consistent with Route Mirroring, so it's all TLVs. We have a BGP Message TLV that includes a PDU with the so-called event subjects; the event subjects would be the NLRIs. And then the indexed informational TLVs are the event attributes. So far the only TLV that I defined is the event reason: why we are reporting something. So essentially now you can say something about, for instance, the third NLRI of this BGP Update message.
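As a sketch of the indexing idea (the type codes, index width, and reason codes here are all invented for illustration; the draft defines the real ones): an indexed event-attribute TLV carries an NLRI index, where 0 would mean "applies to the whole PDU" and N points at the N-th NLRI of the enclosed Update:

```python
import struct

def encode_indexed_tlv(t, index, value):
    # assumed layout: 2-byte type, 2-byte length, 2-byte NLRI index, value
    return struct.pack("!HHH", t, 2 + len(value), index) + value

REASON_POLICY_DENIED = 1  # hypothetical event-reason code

# NLRIs carried in the (artifact) BGP Update inside the BGP Message TLV.
nlris = ["192.0.2.0/24", "198.51.100.0/24", "203.0.113.0/25"]

# Event reason attached to the third NLRI.
tlv = encode_indexed_tlv(0x0100, 3, bytes([REASON_POLICY_DENIED]))

# Receiver side: index 3 attributes the reason to the third NLRI.
t, length, index = struct.unpack_from("!HHH", tlv)
reason = tlv[6]
print(nlris[index - 1], "->", reason)  # 203.0.113.0/25 -> 1
```

This is the appeal of reusing a BGP PDU as the event subject: the NLRI encoding and decoding code already exists, and only the small indexed attribute TLVs are new.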
D: That PDU, of course, is an artifact; it doesn't exist, it didn't exist on the wire. It's just a way to convey the information that, for example, a route was policy-denied, or it was not valid, or something like that. Why did I choose to go for a BGP PDU to convey this information? Because I thought we already have code to encode and to decode this kind of information, so I really wanted to be a little bit low-touch.
D: You know, it is an artifact; I could have invented something totally new, but that kind of didn't make sense to me. But again, super duper looking forward to feedback. Lastly, a tiny detail: in the flags we have something that tells us whether it's pre- or post-policy. I just removed that flag because, still following the intuition, the event happens between pre- and post-policy.
D: So it's in between: something is happening and it's generating this kind of event. Super duper, again, looking forward to feedback. Next slide, please. So, status: I am super duper conscious that the draft still needs to be worked on. It's incomplete; probably there are also some errors in it, and things like that. It was really work done in a rush, but I wanted to make sure to present it here, so as to collect feedback in person rather than only on the mailing list.
D: Also, it's a very humbled-down proposal so far: it only applies to IPv4 and IPv6 prefixes. Can it apply to something else than just IP prefixes? And also, so far I'm just saying that the BGP Update is enclosed in the BGP Message TLV; but can we maybe report on different BGP messages?
D: So here I am: crucify me, throw tomatoes at me. I mean, I am looking for feedback. Thank you. First up is...
F: Jeff Haas. Shall I crucify you? Because then you would not be able to type so well. So, you have reached a magic point of popularity with a protocol: when you know that people are trying to throw everything into it. So this is a good time. The wisdom I would share with you is that when you reach that point, there's a lot of discussion about "should you do this?". You're at the borderline right now of having interesting things you wish to report.
F: You have a good mechanism that can carry it; you have some use cases that this may make sense for. You are heading down the road very quickly to a generalized streaming telemetry mechanism that other things may be trying to attach to, and you probably want, as GROW, to figure out where that line ends up being and try to keep the lines nice and crisp. The overlapping spaces that you have to worry about are the work out of NETCONF, or streaming YANG extensions.
F: You have gNMI coming out of OpenConfig, so these are all good things. Specific to BMP, the thing I would suggest to you is the same thing I'd give for streaming telemetry feedback: this is critical data that you're trying to use; the more things you put into, you know, the drinking straw that's carrying the fire hose...
C: I want to echo in part what Jeff said. If we're going to have different BGP messages and stuff be reported: I mean, BMP is largely just the BGP on the wire in a wrapper, and so there's the ability to pass through in BMP whatever other stuff is there, and then back to...
C: ...the specific monitoring piece. Is the vision that you would configure, in the routers or the BMP speaker, the things that you would want it to highlight to the collector, or is that something that would happen in the collector? It sounds like it would be in the router, exactly, that it would highlight it. Yeah, and then that raises a lot of questions.
C: You know, then: are you sticking that in an ephemeral database on the router, or does it live in the permanent configuration store, etc., etc.? You might want both, and then that gets back to the question...
C: Should this analysis just be done offline, after the fact? That's the way we've done it: we're just taking all the data and we're processing it after the fact, out of the routers, just getting basically a mirror copy of all the information. So how much do you want to build the alerting mechanism into that and have it be integrated in the routers? I think having the routers alert on that is a very dangerous and slippery slope.
D: Yeah, I see; to be honest with you, what I found out is that if you want to find a difference, like pre-policy to post-policy, you have to mine a huge amount of data at the collector, just because you are trying to find one tiny bit. But if the router is denying a prefix for some reason, the router, let me put it in quotes, "already knows". So it just seems a more, I don't know, efficient process.
D: Although I see that then maybe, let's say, you are overloading the router on another aspect. One more thing I wanted to say: the REL naming doesn't come out of nothing; it comes from NEL, NetFlow Event Logging. So I was kind of thinking that we are building a super similar mechanism, but tailored to BGP, something like that. I don't know if...
E: This is more related, from Swisscom, to the TLV draft. I noticed that you bumped the version from three to four for all messages, which means that even for the messages that are not touched by the draft, the parsers have to be updated and have to do something different. I'm wondering: why not just change the name a bit, to indicate this is actually a new version that's coming up, and maybe bundle some of the other changes with it as well, since we are probably changing the version anyway?
I: Okay, my name is Julia Lin, and I'm a master's student at Polytechnique in Paris, and I'm happy to introduce to you high availability in BMP data collection, which is a project that I have done in Switzerland for the past six months, from March. Next slide, please. The goal of this project is to achieve BMP data high availability and also to do possible load balancing for other network telemetry data.
I: Without bringing a lot of data duplication, that is. Nowadays, as networks grow larger, network monitoring is more and more important, so network telemetry is very important, and the network telemetry protocol we're using in this project is BMP, which is based on BGP. BMP can provide us access to different RIBs, as well as keeping us updated about the BGP events happening in the network.
I: And all this data is crucial for monitoring networks. BMP works as follows: a BGP router in the BGP network exports BMP data to an external BMP station. But as the network grows larger, it might not be enough to have only the single one-router, one-station architecture. So we want to introduce more collectors. Next slide, please.
I: For the prototype design, we have used two collectors in the system. Each router will export its BMP twice, to the two collectors, meaning that both collectors will maintain identical BMP sessions, to guarantee BMP high availability. With this architecture it's also possible to do load balancing for other network telemetry data, for example IPFIX.
I: But that's not further discussed here. Then, since we have the BMP data on both sides, if we dump both copies to the database, we might bring in a lot of duplication. So, for dumping the BMP data in the collectors, we have designed a shared internal logic across the collectors to guarantee that the BMP data is only forwarded once, but cached twice. Next slide, please. The design is the following: we make the collectors work in active and standby states.
I: Only the active collector will dump BMP data; the standby collector will not dump, but still keeps the BMP sessions up. The active or standby state is decided by a timestamp.
I: The timestamp is set at the establishment of the BMP sessions, and the timestamp will be sent to Redis as a key. In this way the collectors can exchange their work states via the timestamp keys in Redis. Signals are also used, for maintenance purposes, since we want to be able to manually configure the active and standby states during runtime. Next slide, please. Redis uses a key-value form to store records.
I: As introduced previously, we have two collectors, so in the cluster there will be two collectors. The cluster ID is used to identify which one is which, and the core process name is used to identify which session it is, since we might have different network telemetry sessions. These keys are set with a two-second timeout, so they need to be kept refreshed.
I: A key that is not refreshed will expire and be deleted after two seconds. Then, every second, the pmacct program running in the collector will list all the timestamps it gets from Redis and set the dump flag accordingly: if the dump flag is true, the collector will be active, and if it's false, it will be standby. There are the following four possible conditions.
I: If in Redis there is only collector A's timestamp, then A will be active, while the state of B will be unknown; because if Redis is unavailable for B, B will not be aware of the existence of other collectors, so it should set itself to active, to make sure that the collector won't lose any data.
I: If in Redis collector A has the smallest timestamp, then A should be active and B should be standby. And if A does not have the smallest timestamp, then A should be standby and B should be active. Next slide, please.
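The four conditions boil down to "the smallest visible timestamp wins, and a collector that cannot see Redis stays active rather than risk losing data". A minimal sketch of that decision (the function and names are mine, not pmacct's):

```python
def decide_state(my_id, timestamps):
    """Decide active/standby for collector `my_id`.

    timestamps: {collector_id: session_timestamp} as currently visible
    in Redis (expired keys are already gone)."""
    if my_id not in timestamps:
        # Our own key is not visible (e.g. Redis unreachable for us):
        # we cannot see the others, so stay active to avoid data loss.
        return "active"
    if len(timestamps) == 1:
        return "active"  # only our own key is present
    oldest = min(timestamps, key=timestamps.get)
    return "active" if oldest == my_id else "standby"

print(decide_state("A", {"A": 100.0}))              # active (only A visible)
print(decide_state("A", {"A": 100.0, "B": 105.0}))  # active (A is oldest)
print(decide_state("B", {"A": 100.0, "B": 105.0}))  # standby
```

Because both collectors run the same deterministic rule over the same Redis contents, they converge on one active collector without talking to each other directly.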
I: Since we're using pmacct in the collectors, it's easier for us to implement this shared internal logic. pmacct is a powerful set of network monitoring tools; with pmacct we can simply implement the logic in the dumping process and the dumping program for the BMP data, so we can implement the logic easily. Next slide, please.
I: There are two workflows in the project. For the regular workflow:
I: First, the system will write the collector's timestamp key to Redis with a two-second timeout, and then it will get all the timestamps from Redis, so that it can be aware of the work states of the other collectors in the cluster. Then it will make a comparison among the timestamps and set the dump flag accordingly, more specifically per the four conditions that I mentioned before. At the end it will sleep for one second and repeat the whole workflow. But if we want to manually configure the states, we can put a collector into maintenance mode by sending signal 34.
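A self-contained sketch of the keepalive and expiry behavior described above, using a tiny in-memory stand-in for Redis `SETEX` so no Redis server is needed (the prototype's two-second TTL is shortened so the demo runs quickly; the key naming is hypothetical):

```python
import time

class ExpiringStore:
    """Tiny in-memory stand-in for Redis SETEX plus key listing."""
    def __init__(self):
        self._data = {}
    def setex(self, key, ttl, value):
        # store the value together with its expiry deadline
        self._data[key] = (time.monotonic() + ttl, value)
    def live(self):
        # return only the keys whose TTL has not elapsed
        now = time.monotonic()
        return {k: v for k, (dies, v) in self._data.items() if dies > now}

store = ExpiringStore()
TTL = 0.2  # the prototype uses 2 seconds; shortened here for the demo

# Both collectors write their session timestamp under their own key.
store.setex("cluster1/A", TTL, 100.0)
store.setex("cluster1/B", TTL, 105.0)

# Collector A keeps refreshing; B stops (e.g. it was shut down) ...
time.sleep(0.15)
store.setex("cluster1/A", TTL, 100.0)
time.sleep(0.1)

# ... so after at most one TTL, B's key has expired and A notices.
print(sorted(store.live()))  # ['cluster1/A']
```

The one-second refresh against a two-second TTL is what bounds the failover detection delay to a maximum of two seconds, as described next.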
I: Since that collector then has the largest timestamp, it will definitely become standby immediately, but the other collectors will only become aware of this change by getting the timestamps from Redis. So there might be some delay, and this delay will be a maximum of two seconds. Next slide, please. We have mentioned before that...
I: Only the active collector will dump; the standby collector will not export any metrics to the database, but it keeps receiving BMP data and keeps the BMP events in a local cache for two seconds. This is to avoid losing data if a failover happens, for example if it needs to switch from standby to active: since there is a delay there, we do not want to lose any data.
I: So the newly received BMP events will be cached in the local buffer for two seconds, and once a failover happens, the data that's in the buffer will be dumped first. The failover mechanism works as follows. Assume that there are two collectors, A and B, with A active and B standby.
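The standby cache can be sketched as a time-bounded buffer: while standby, the collector appends incoming messages and evicts anything older than the two-second window; on failover, the remaining contents are flushed to the database first (the class and method names are mine, not pmacct's):

```python
import time
from collections import deque

class StandbyBuffer:
    """Keep the last `window` seconds of BMP messages while standby."""
    def __init__(self, window=2.0):
        self.window = window
        self._q = deque()  # (arrival_time, message)

    def add(self, msg, now=None):
        now = time.monotonic() if now is None else now
        self._q.append((now, msg))
        # evict anything older than the window
        while self._q and now - self._q[0][0] > self.window:
            self._q.popleft()

    def flush(self, now=None):
        """On failover: return buffered messages (oldest first) and clear."""
        now = time.monotonic() if now is None else now
        msgs = [m for t, m in self._q if now - t <= self.window]
        self._q.clear()
        return msgs

buf = StandbyBuffer(window=2.0)
buf.add("update-1", now=0.0)
buf.add("update-2", now=1.5)
buf.add("update-3", now=2.5)  # update-1 is now older than 2 s: evicted
print(buf.flush(now=2.5))     # ['update-2', 'update-3']
```

The window matches the maximum failover detection delay, so the messages a newly-active collector replays from the buffer cover the gap during which the old active collector had already stopped dumping.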
I: If collector A has crashed, for example if it has been shut down, then its key in Redis will time out in a maximum of two seconds. We don't know whether it was shut down by sending signals or by shutting down the collector; if we shut down the collector, then we need to wait until the key expires by itself, and only after the key in Redis has expired...
I: ...can the other collector be aware of the change. Let's say B is now the collector with the lowest timestamp, so it will naturally become active, and what it will do first is send the local cache to the database; then it will work in the normal workflow, sending all the BMP traffic it receives to the database. Next slide, please. For testing, we have designed a two-daemon setup and a three-daemon setup.
I: We don't have animation here, so we can see that there are two pictures: the first one shows the results of the two-daemon setup and the second one the three-daemon setup. For the two-daemon setup, if we look at the first picture, of BMP metrics, we can notice that even though there are two collectors maintaining BMP sessions, only one collector is dumping to the database, so there is only...
I: ...one line at any given time. On the right side there are two pictures of the IPFIX load received from the two daemons; IPFIX is always received from both sides and is not influenced. But since here we tried to simulate the failover by shutting down the collector...
I: ...the IPFIX flows received from these two sites differ slightly: the IPFIX received from A is slightly less than that from B, since we shut down A to make the active state fail over to B. For the three-daemon setup, we can also notice that, among the three daemons, there is only...
I: ...one daemon dumping the BMP data; however, IPFIX is always received from all three collectors. So it works for both two and three daemons, and even for more: if we want to add more daemons, it's just a matter of configuration, and it's easy to add them. Next slide, please.
I: To sum up, the most important goal of this project is to ensure the high availability of BMP data; on the other hand, we also want to do possible load balancing for other network telemetry data, by introducing more collectors to the system.
I: However, we don't want to introduce data duplication, since we have more collectors maintaining identical BMP data; so we have designed a system to make sure that BMP data is cached twice in the collectors but only forwarded once to the database, to help with scalability. As backup slides, if we go two slides forward, there is the state machine of the dumping process and the data queue. That's all for my introduction; thank you so much.
A: Thank you so much for sharing these insights and the outline of the design to establish high availability in data collection. I think we have two people that want to ask questions; let's go with Chris Morrow first, as he was the first one to indicate on Zulip that he has a question.
F: This is Jeff Haas. Thank you, Julia, for your presentation. I had first one question about your timestamps, then a comment about your methodology. What timestamp exactly are you collecting; what is the trigger event? Is this coming from the BMP protocol, or is this from the messages that the collector is receiving?
F: Okay, so this goes into my comment about methodology, I think. So one of the problems that you'll have with BMP is that, in your displayed slide here, with collector A and collector B receiving from the same router, there is no guarantee that they will receive the same messages.
I: Since the BMP session is established from one BGP router, it's just about configuration: we have configured the router to send identical BMP sessions to the collectors.
F: I understand your configuration. What I'm telling you is that in such a configuration there is no guarantee that the data is the same; the eventual converged state will be the same. To give an easy example: presume that collector A receives prefix 10/8 from peer A, and much time goes by; collector B eventually could receive the same state, or maybe 10/8 has changed at some point as well, and collector B is instead getting the most recent thing while collector A has yet to receive the recent update.
F: So eventual consistency shall happen, but incremental consistency is not there. This is a common problem with distributed-computing-type things. So my suggestion to you is, as part of your continuing work, maybe consider how you would want to address this in the database.
I: Yes, we discussed this at the beginning, but since this is just a prototype we have skipped it. We considered also syncing the BMP data that we have received from the router to Redis, so that we can make sure the collectors are working synchronously.
I: But since this is just the first stage, we have skipped that, and we by default consider that they are receiving identical BMP data; if we ignore this, we can just implement the active-standby feature. But you're right, this will be the next step: we need to sync the BMP data that each collector receives.
B: Yeah, so first off, I also would say this looks like pretty cool work; I appreciate you presenting it. It wasn't clear from the slide deck that you're still sort of in the discovery phase of how this all works, or how it's going to work, I should say. I think the point Jeff was making, and I also had the same question, is about what's sort of the primary key by which you decide you got the same message at both places. It seems like that's hidden in your custom Redis...
B: ...key comment. So I think if you go down the road later, you'll come to the same state where you say: oh wow, the BMP collection isn't necessarily guaranteed to be synchronized in time; that's sort of Jeff's comment. And you'll have to figure out which NLRI, which BGP message in BMP, is the same, to tell which collector's copy you want to use here. And it may be that you don't even care so much about primary and backup state.
J: Yeah, this is Severin, from the Eastern Switzerland University of Applied Sciences. I have a question: you talked a lot about Redis, so have you also thought about introducing an intermediate service, to not only save these timestamps into your Redis KV but also into another technology, like memcached or something like that?
J: Yeah, I think it can make a lot of sense if you have this intermediate service, to also use another technology, not only Redis. I mean, of course, that is maybe the major one, but maybe another company does not use Redis. So you would have a lot of advantages when you introduce this intermediate service.
I: The reason why I'm using Redis is that Swisscom is already using pmacct for the network telemetry in the collectors, and Redis is natively supported in pmacct. If I export the timestamp to Redis, since pmacct is already used across the collectors, then it's easy for them to exchange state. So I would say this design is highly based on pmacct, which is already used in the collectors.
I: But if we want to apply it for other companies, who are not using pmacct, then we will probably need to design another solution.
A: Severin, would you mind emailing the presenter a pointer to memcached? Because I heard...
K: So this is a short one. Compared to the latest version, which we presented at the last IETF, we had a few differences. First of all, Tom Petch went full YANG doctor on the model; I can only thank him enough. He made multiple comments; we tried to address them all. We still have yet to change the names of some containers and identities we had pending; we will do it. Jeff, of course, always checks the model, and he's very helpful.
K: I also, again, can only thank him for his observations. For the more meaty stuff: since the first email, about the first time we introduced the model, Tim Evans mentioned that he would like to have an initial delay and backoff timers. That doesn't hurt, so we added the initial delay; the backoff timer is actually part of the BMP RFC, so we try to model that in the YANG module.
K: If anybody can take a look: I personally checked whether there was an exponential backoff container somewhere in the IETF and I couldn't find it, so we did it from scratch. That's it for the more controversial part. Yeah, Jimin Chen, I think, mentioned this briefly at the latest IETF: he mentioned a use case in which he would like to send specific prefixes to the station, to the BMP station.
K: This goes very well with our thought, with what we have presented previously, in which we would like to have a very flexible but very accurate way of defining what to send to the station; because not all of us have ways of ingesting 50 million packets every minute, and so having a way of selecting exactly what you want could help and extend adoption of BMP. Jimin actually also mentioned that there was this routing policy model already available in the IETF, already standardized, so we're using that. The routing policy model, even though simple, is still powerful enough; maybe there can be some stuff in it that makes no sense for BMP, like if you match or filter on an attribute.
K: Maybe that makes no sense, but we'll see; I mean, we'll check out how it works, but we added it. Yep, next one, please. That's it. The call for adoption: I don't know if it got a bit orphaned, cheers; I don't know what happened there, but we'll continue working on this, because I think we need it.
K: So, for the next one: this is another draft that we introduced previously, even a couple of years ago; just an update on this one. Next one, please. So this is the Path Status TLV. Just as an overview:
What this TLV does is to convey the path status of a path, in an optional TLV.
K: What is a path status? Whether the path is installed, whether the path is a backup, whether the path has been rejected or filtered, whether policy has not passed, whether the path is used for forwarding; this sort of thing. And there's an optional reason field, if you want to go crazy and convey also why the status happened; like, if it's filtered because of local pref...
Oh, sorry, if it's installed because of local preference; I mean, if you can do that, then there's an optional reason field for that. It depends on the TLV draft, and that's already making progress, so it's getting mature, so we are more confident about this. Also about the E-bit: I don't have to go through the E-bit here, I will just introduce it. Thanks to the E-bit, basically every company could do their own status.
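As a rough illustration of the kind of decoding a BMP station would do for such a TLV, here is a small sketch. The wire layout, the flag values and the reason codes below are all invented for the example; the actual encoding is whatever the draft and IANA registries end up defining.

```python
import struct

# Illustrative status flags only; the real code points would come from
# the draft / IANA, these bit assignments are made up for the sketch.
STATUS_FLAGS = {
    0x0001: "installed",
    0x0002: "backup",
    0x0004: "rejected",
    0x0008: "filtered",
    0x0010: "best",
}

def parse_path_status_tlv(data: bytes) -> dict:
    """Parse one TLV: 2-byte type, 2-byte length, then the value.

    The value is assumed to start with a 2-byte status bitfield,
    optionally followed by a 2-byte reason code.
    """
    tlv_type, length = struct.unpack_from("!HH", data, 0)
    value = data[4:4 + length]
    status, = struct.unpack_from("!H", value, 0)
    reason = None
    if length >= 4:  # optional reason code is present
        reason, = struct.unpack_from("!H", value, 2)
    flags = [name for bit, name in STATUS_FLAGS.items() if status & bit]
    return {"type": tlv_type, "status": flags, "reason": reason}

# A made-up TLV: type 0x0023, length 4, status = installed|best, reason 1
example = struct.pack("!HHHH", 0x0023, 4, 0x0011, 0x0001)
print(parse_path_status_tlv(example))
```

The optional reason is simply detected by the TLV length here; a real decoder would follow whatever presence rule the draft specifies.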
If they have a proprietary kind of status, then they can use the E-bit and convey it. Next one, please. There's not much difference in this draft compared to the latest version: some small editorial changes, and we added a few more examples of path status. So here's the thing: in the draft we actually don't want to standardize any path status.
We want that to be done somewhere else. I always get lost on where this sort of thing gets standardized; I don't know if it's IANA. But, and this is just me, I really think that without the examples it's very hard to understand the draft, so maybe with them it is a bit more clear.
So I do not know whether to make a version without the examples of path status and just continue doing it like that, or keep them there for a while to make it clear for the people that read it, or, I don't know, reference a previous version, which I don't know if you can do. That's a general question; I do not know how to do that. Maybe we can discuss it later. Next one, please.
Okay, so again, why is this interesting? In our first use case it would, for instance, be interesting to know when a path is being filtered. Right now, yes, we can do that: we can take the pre-policy and post-policy Adj-RIB-In and compare them, but it's a lot of work. And this is interesting information if you can obtain it directly from the router, and if you can know the real reason or the status: like, maybe a path was filtered because of a policy.
Maybe a path was filtered because it has an invalid ROA. This sort of thing can be useful, because if we have other data, additional data from the network, or historical data, then we can know if maybe the routing is wrong before somebody calls and complains. That's like a low-hanging fruit. Again, we can do that already, but comparing pre- and post-policy Adj-RIB-In can be very painful.
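The comparison described above amounts to a set difference between the two Adj-RIB-In views. A minimal sketch, with made-up snapshot structures standing in for real per-peer BMP state:

```python
# Toy RIB snapshots keyed by (prefix, path_id); values are path
# attributes. Real BMP feeds would be keyed per peer and far larger.
pre_policy = {
    ("192.0.2.0/24", 1): {"as_path": [64496, 64511]},
    ("198.51.100.0/24", 1): {"as_path": [64496, 65551]},
    ("203.0.113.0/24", 1): {"as_path": [64496]},
}
post_policy = {
    ("192.0.2.0/24", 1): {"as_path": [64496, 64511]},
    ("203.0.113.0/24", 1): {"as_path": [64496]},
}

def filtered_paths(pre: dict, post: dict) -> list:
    """Paths present before policy but absent after it were filtered."""
    return sorted(set(pre) - set(post))

print(filtered_paths(pre_policy, post_policy))
```

A path-status TLV would deliver the same fact, plus a reason, in a single monitoring message instead of requiring two full snapshots to be kept and diffed.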
If you have a lot of messages or a big routing table, you know, in more complex cases. But it's still useful. At some point both co-authors of the draft were working on a system that basically would like to offer what-if scenarios for networks, and of course nobody wants to build a BGP simulator for that. But if the router somehow can send which paths are backup, then you can, let's say, cheaply do some what-if scenario analysis, like: okay...
If this fails, then I already know the backup. It might not be perfect, but it's better than nothing. So that's, for instance, one other example. And also, if you have a very diverse environment with a lot of paths that you're getting from the Loc-RIB, for instance, then you might be able to know which paths are being used for forwarding or not. I mean, you can have different use cases for that. Okay.
Next one, please. The status of this: from what I understand, but I need to verify, Huawei already supports it. We modeled this in Scapy and tested it against pmacct, and it's already showing it, so at least the wire format looks good enough. So there's a bit of implementation; we probably need to make that official to move this forward. But if anybody has any more questions or comments, please let us know, or ask here. That's it.
A
Next up we have Martin Pels, for a topic that is not yet an internet draft, but perhaps beautiful things will come out of this. Take it away, and make sure to point the microphone at your mouth.
H
So yeah, as Job mentioned, this is not a draft yet; maybe it could be. I just wanted to throw this at the group to see if there's interest for it. Next slide.
So yeah, it's not a draft, just an idea I wanted to throw at the group, and the idea is to define BGP communities, large communities and extended communities in a JSON structure. Why would you want to do that?
H
First
of
all
to
have
a
standardized
way
for
publication
right
now,
isps
published
their
bgp
communities
on
their
website
or
in
their
autumnum
objects
in
the
irr,
but
it's
all
plain
text
and
if
you
want
to
do
anything
with
it,
you're
going
to
have
to
figure
out
what
they
actually
want
and
how
to
parse
that
so
yeah,
that's
the
second
reason
if
I
wanted
to
do
this
is
if
you
have
tools
that
want
to
look
at
bgp
communities
such
as
looking
glasses,
then
it
becomes
much
easier
if
we
have
a
structured
way
to
describe
them
next
slide.
So an example tool that could use this is a looking glass. I'm part of a NOC, and here we run a looking glass that we recently modified to display descriptions for BGP communities. This is basically a self-compiled text file with communities and descriptions, and if you want to change anything in it you have to go in and modify the text file, which is not really a nice solution.
This describes a way to use BGP large communities and sets up a structure with an ASN, a function and a parameter, so it divides the community up into three fields. The RFC also gives a couple of examples, such as defining a community to instruct an ISP not to export a route through a certain AS. Next slide, please.
And then, based on this, we would have a large BGP community, in this case instructing AS64497 not to export to AS65551. There we indeed again take the fields: the function field we set to four, which in the previous slide was defined as no-export, and as the parameter we take the AS of the peer that we do not want to export to. And of course we also have the global administrator field that we filled: that's the ASN. Next slide, please.
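A consumer such as a looking glass could then resolve the community in this example against a machine-readable description file. Everything about the JSON layout below is an assumption for illustration; the proposal has not defined a schema yet:

```python
import json

# Hypothetical description file for AS64497's large communities.
# All field names are invented; nothing here is standardized.
DESCRIPTIONS = json.loads("""
{
  "asn": 64497,
  "large_communities": [
    {"function": 4,
     "description": "do not export to the AS given as the parameter",
     "parameter": "peer-asn"}
  ]
}
""")

def describe(community: str, doc: dict = DESCRIPTIONS) -> str:
    """Render a large community like '64497:4:65551' as readable text."""
    ga, function, parameter = (int(x) for x in community.split(":"))
    if ga != doc["asn"]:
        return f"{community}: unknown namespace"
    for entry in doc["large_communities"]:
        if entry["function"] == function:
            return (f"{community}: {entry['description']} "
                    f"({entry['parameter']}={parameter})")
    return f"{community}: unknown function"

print(describe("64497:4:65551"))
```

The point is only that a structured description lets the tool do the ASN/function/parameter split mechanically instead of scraping plain text.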
Another example is RFC 4384.
So here we do a similar thing: we have fields for the region. This RFC also has a satellite field, which is one or zero, and a country code, which is encoded in a bit string. Next slide, please.
And again, then, we would have an instance of this community, which would define routes originated in the Netherlands: it would have a particular region, EU (this is the bit string defined by this RFC for EU), the satellite field set to zero, and the country code for the Netherlands encoded according to this RFC. Next slide, please.
A
All right, Jared was first in the queue, so let's start there.
C
So I think this could be interesting. I think there are a couple of concerns I have, which mostly can be solved. So, for example, there is the IANA registry that has all of the well-known BGP communities in it, and I don't know if we would want to look to that for some of the formatting and such, because they do already have an XML format. I know JSON is the current sexy thing, until we all move to CBOR or something else, you know, or whatever is in the future.
But we may want to just look to that for some of this, because there is a lot of ranging here. The other thing is, having been involved in a different discussion earlier this week about regular expressions and how to define those: a lot of us have BGP communities that match regular expressions, where we would maybe not want to specify all of the things but be able to drop a regex in rather than expanding the entire thing, because it's going to be very large.
So we need to think about how we would encode either regexes or ranges, some of which is done there, I think. The other thing is: we also want to be a little cautious about this, because there have been efforts in the past to try and define a number of new softly-well-known BGP communities, like same-country or country codes and stuff like that, and to standardize that across the industry as well, so we should be conscious of that as we're doing this. And then the last piece is the same as for many other things, like geofeeds: how do we discover where to find the list of them?
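Jared's point about ranges and regular expressions could be accommodated by letting a description entry carry a pattern or a range instead of an enumerated value. A sketch, with an entirely invented entry format:

```python
import re

# Hypothetical registry entries: one exact community, one range on the
# parameter field, and one regex over the whole community string.
ENTRIES = [
    {"match": {"exact": "64497:4:65551"},
     "meaning": "do not export to AS65551"},
    {"match": {"function": 2, "parameter": [1, 100]},
     "meaning": "prepend N times, N taken from the parameter"},
    {"match": {"regex": r"^64497:9:\d+$"},
     "meaning": "informational tag"},
]

def lookup(community: str):
    """Return the meaning of a large community, or None if unknown."""
    ga, fn, param = (int(x) for x in community.split(":"))
    for entry in ENTRIES:
        m = entry["match"]
        if m.get("exact") == community:
            return entry["meaning"]
        if "function" in m and m["function"] == fn:
            lo, hi = m["parameter"]
            if lo <= param <= hi:
                return entry["meaning"]
        if "regex" in m and re.match(m["regex"], community):
            return entry["meaning"]
    return None

print(lookup("64497:2:3"))     # matched by the range entry
print(lookup("64497:9:1234"))  # matched by the regex entry
```

Range entries keep the file small where a parameter would otherwise have to be enumerated; regexes cover the remaining free-form cases, at the cost of being harder to validate.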
H
Thanks, that's very good feedback. On the last one: Job actually suggested maybe we could do something with the RPKI to publish them there. But good feedback, and thanks.
K
Hey, Camilo Cardona from NTT. I would just say, I mean, this is not anything against it, I'm just saying: consider also doing it in YANG, and making a JSON schema from it. I'm not going to say no to JSON Schema, that's also fine; I'm just saying that maybe, for the machinery of the IETF, at some point you will fight somebody that says "why not YANG?" Well, maybe the structure is simple enough; I think there would be no problem doing it there.
F
I think, you know, I would suggest taking a number of different directions, but also, just directly: I'm happy to collaborate on this one, because I have a large wish list and suggestions on how this can move forward, if you're interested. The easy ones are, you know, a YANG schema to model the thing, and at that point you have many formats into which you can transfer the data, including CBOR. The thing you're going to find is high-level problems. Jared hit the point: we don't want to try to be too normative about how people use communities. Mostly what we're looking for is a way to allow communities to be described, and to allow that to be consumed by routers for informational purposes or config purposes. That moves us into the problem of how you distribute the thing, you know.
F
Rpki
is
an
example
of
how
you
would
want
to
maybe
discover
these
sort
of
things,
or
at
least
where
to
pull
the
data
from
signing
the
data
you
know
from
the
provider
falls
into
the
same
place
and,
finally,
these
the
speed
at
which
you
can
consume
these
things.
You
know
Json,
is
not
a
it's,
not
the
worst
format
ever,
but
it's
pretty
darn
close
and
you
know
you're
going
to
find
yeah.
F
It
could
be
xdr
or
something
like
that,
but
being
able
to
consume
the
things
fast
on
the
routers
would
probably
push
us
through
something
more
long
lines
like
Seaboard
we're
starting
to
get
good
experience
across
multiple
vendors
with
that
is
a
nice
binary
format
for
fast
consumption
of
structured
data.
So
a
lot
of
comments,
good
idea,
thrilled
to
collaborate.
If
you
want
that
much
appreciated.
E
From Swisscom. Very interesting, for sure. Just a few things I would like to point out here; I'm not sure if you have thought about it before or not, but you know, right now we have collectors like pmacct, which collect a bit of traffic and then push it for further analysis onto some kind of message bus, and they look at the BGP messages and transform them in a non-standard way into another format, JSON or Avro or something like that. It would be nice to look at this use case in general.
E
It's
very
interesting
use
case
because
you
know
most
of
the
analytical
programs
on
top
of
these
things
will
not
consume
the
wire
bgp.
It's
not
only
about
the
community,
but
maybe
we
should
start
to
consider
like
what
other
type
of
applications
in
the
network,
automation
mode.
That
will
need
to
understand
the
B2B
messages
as
they
come
from
the
router,
but
they
don't
want
to
deal
with
parsing
visually
messages,
because
that
you
have
it's
kind
of
complicated
and
you
don't
want
to
have
it
everywhere.
G
Okay, maybe I... okay, I think I can be heard now. Kind of interesting stuff.
When I got around to working on large communities for my policy definition and configuration generator system, I invented something, and got it implemented, that I called a community registry. I think almost all the points that Jared was mentioning were covered.
It looks a little bit like, say, domain name notation or similar, where you can put symbols and parameters into a structured string, and you have an XML definition of how the community fields are defined. You can even generate the regular expressions used in vendor policy languages for matching simple communities, or sets of communities, or communities with some of the used parameters wildcarded. As far as I can tell, I think that is still working. About the only major thing that has been mentioned in the discussion...
G
Right
now
that
I
think
has
been
missing
in
our
work
was
that,
yes,
we
did
not
address
how
to
make
a
a
system
for
Global
access
to
definitions,
thou
in
fact
kind
of
that
would
probably
fairly
easy
easily
fit
into
the
XML.
That
Ariana
is
using
for
its
registry.
G
I'll.
Try
I'll,
try
to
pull
out
some
easy
stuff
from
the
old
documentation
and
share
that,
and
people
who
are
interested
probably
should
bother
me
in
private
email,
yeah,
well,
okay,
just
as
a
report
so
far
though
no
Jason
would
happen
or
did
happen
in
that
that
was
XML
based
and
yeah.
Well,
okay,
so
much
to
tell
thanks.
H
All right, thank you. We would definitely be interested to look at that, to see if we can learn from it and not reinvent the wheel, or make mistakes that maybe were made there. Thanks.
B
Sorry, it's Chris Morrow, Google. I have a question: have you looked at... I mean, the communities JSON thing is interesting. I think Ahmed's point about perhaps having the larger BGP message in some standardized form, other than binary BGP data, would be useful as well.
B
We
ingested
all
of
the
route
views
data
and
put
it
into
bigquery
and
in
that
process,
converted
it
to
Json
kind
of
on
our
own
I.
Don't
know
that
we
picked
a
particularly
terrific
format
for
that,
but
if
there
was
a
standard
format,
we
could
just
go
redo
it
to
make
it
useful
our
point
where
our
project
was
really
to
make
the
data
available
to
researchers
in
a
fashion
that
didn't
involve
them
having
to
download
it
all
parsed
into
something
and
then
do
something
else
with
it.
So
some
more
General
conversion
would
be
helpful.
F
Jeff
Jeff
is
hey
Chris,
just
as
a
follow-up
to
your
point,
I
I,
don't
no
I,
don't
think
it's
quite
in
the
scope
for
what
we're
looking
at
specifically
for
this
presentation.
So
what
you're
sort
of
talking
about
is
it'd
be
sort
of
nice
for
things
like
you
know,
like
the
prior
presentation,
redis
Costco
know
what
pick
whatever
your
bus
happens
to
be
for
throwing
bgp
State
at
something
that
needs
to
catch
it
of
having
consistent
naming
for
the
fields
it
would
make.
F
You
know,
like
even
MRT
parsing
program,
is
a
little
bit
nicer,
that
sort
of
thing
so
that
all
the
schemas
can
be
folded
together.
This
is
sort
of
an
interesting,
related
piece
there,
which
is
once
you
have
parsed
out
components
like
communities
or
as
numbers
or
whatever
having
a
consistent
mapping
component.
Where
something
says:
here's
what
this
means.
You
know
as
numbers,
you
can
point
to
like
the
who
is
Data
or
what
they
claim
their
network
name
means
and
I'm
sure
you
probably
fold
that
stuff
at
the
back
end
yourselves.
F
So
this
is
the
same
sort
of
thing
of
you
know.
What
would
you
do
with
a
operator
to
find
thing
like
a
community
extended
Community,
especially
for
like
VPN
context,
some
of
the
best
stuff
that
we're
seeing
you
know
in
terms
of
user-defined
data
having
a
more
generic
format
to
try
to
unfold
that
into
a
user
printable
component
I?
Think
if
that's
your
goal
as
well
and
this
impacts,
the
discussion
here
is
part
of
the
challenge-
will
be
internationalization
components
to
this.
F
So
if
you
say
here's
a
definition
file
for
communities,
part
of
the
requirements,
discussions
I've
had
with
other
people
and
other
contexts
is
how
do
you
provide
multiplicity
for
some
of
the
fields
like
a
description,
so
here's
the
English
version
of
this
string?
Maybe
you
want
this
also
in
a
native
language
now
for
the
operator
as
well.
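One way to handle the multiplicity Jeff describes is a per-language description object, for example keyed by language tags. The field names are again purely illustrative:

```python
# Hypothetical community entry with descriptions keyed by language tag.
entry = {
    "community": "64497:4:0",
    "description": {
        "en": "Do not export to the AS in the parameter",
        "nl": "Niet exporteren naar het AS in de parameter",
    },
}

def describe(entry: dict, lang: str = "en") -> str:
    """Pick the requested language, falling back to English."""
    descriptions = entry["description"]
    return descriptions.get(lang, descriptions["en"])

print(describe(entry, "nl"))
print(describe(entry, "de"))  # no German text, so this falls back to English
```

Making the description a map rather than a single string keeps the file backward-compatible with tools that only ever read one language.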
A
So it seems there is a good amount of interest to explore this space, and everybody has opinions on almost every aspect of this proposal, which to me means this is a good candidate to take on as work, with the goal of making it a working group document.
A
Sorry,
exactly
that
small
detail
I'll
make
the
logo
you
guys
do
the
internet
draft
no,
but
really
Martin
I
would
encourage
you
to
to
write
an
internet
draft
and
submit
it
to
the
working
group
for
consideration
and
I
I.
Think
the
time
is
ripe
to
to
try
to
standardize
mapping,
Community
Values
to
human,
readable
text,
cool.
A
And
with
that,
we
have
almost
reached
the
end
of
our
growth
session.
I
noticed
that
I
forgot
a
small
agenda
item.
I
was
going
to
go
over
drafts
that
are
in
flight,
but
then
I
didn't
actually
go
over
those.
A
We
have
one
ongoing
call
for
working
group.
Adoption
related
to
the
EGP,
well-known
Community
for
any
cast.
A
few
people
already
responded
on
the
mailing
list.
A
The
the
call
will
be
open
for
another
week
or
so
from
the
top
of
my
head.
So
please
take
a
look
at
that
internet
draft
and
reflects
on
the
mailing
list
whether
you
support
adoption
wish
to
contribute
with
or
not,
and
that's
it.
Thank
you
so
much.
The
next
growth
session
is
going
to
be
at
ietf116
in
Yokohama
in
Japan
and
I
look
forward
to
seeing
all
of
you.
There
have
a
good
day.