From YouTube: IETF112-SIDROPS-20211108-1200
SIDROPS meeting session at IETF112
2021/11/08 1200
https://datatracker.ietf.org/meeting/112/proceedings/
A
Okay, everybody, it's 7 a.m. somewhere. This is the SIDROPS meeting at IETF 112; ideally you're here for the fun. I'm Chris, and there's Keyur and Natalie as well. If there need to be updates, we can make those after the meeting's over, I suppose.

We have two things on the agenda, plus some updates about drafts. I think we have the jabber scribe covered with the chat thing. I think we need somebody to take notes. Somebody to volunteer?
Okay, to him or something. Anyway, this tool is very confusing, so for all the drafts that are currently in flight or waiting for review, I'm not going to go through every single one. A couple of highlights: there are three pages of these. The 6486bis is off to the IESG, and there's one other document I believe was also pushed forward to the IESG.

Just the LTA use cases, which I keep remembering is being briefly pushed anyway. The rest of these are kind of waiting. There's a couple of working group last calls which still need to be issued. The rpki-has-no-identity draft, I think, had an update since our last conversation, so we need to make sure we're okay with it before it goes off to working group last call.

Okay, cool. Then ROV timing has a working group last call pending, which I think we'll decide upon at the end of the meeting, and the same for max length, the RPKI max-length draft.

Okay, all right, so I think we're ready to do slides. I think the slide makers can present their screens.
D
Okay, so during the last meeting, on a couple of topics, the fact that I had this implementation for creating and signing and reading various RPKI objects came up a couple of times. So I thought it would be worthwhile talking about it a little bit, seeing as this call pretty much has everyone in the universe that's likely to care about it on it.

It started out life really as a personal itch that I had to scratch, around two problems.

The first one was that it's always kind of bothered me that, in order to read the objects that make up the RPKI, we either need to crack out a browser and look at one of the various online tools that people have built over the years, or you have to really be a master of the OpenSSL CLI tool. The name rpkimancer came out of the fact that I always felt like I was doing some sort of black magic whenever I was actually trying to read one of these objects using the openssl cms and asn1 command-line options, which I can never remember, so I need to spend an hour refreshing my memory before I can read anything. But that's not what initiated actually writing the thing.
What actually initiated writing the thing is that I was one of the authors on the signed checklist draft, and we were going through a few iterations of the ASN.1 module. Because I'd never really done anything hands-on with ASN.1 before, I wanted some sort of tool to add to a CI pipeline that would let me know at commit time whether I'd broken anything in the syntax, and whether the syntax I was writing resulted in an object that could actually be written out to disk.

Job had previously created a demo object from the first version of the ASN.1 module that was written for the draft, and my initial attempt was basically to script what he did. After a few minutes of him explaining what he did and how he did it to me, I gave up, because there was lots of manual twiddling of text files and lots and lots of different stages which failed in cryptic ways if you got it even slightly wrong. Most importantly, even with all of those steps, it couldn't deal with the untouched ASN.1 module, with its imports and so forth, that was going to appear in the draft, and what I wanted to do is make sure that the one that was actually in the draft was valid.
So the second thing I tried to do is I started trawling around some open source RP implementations, and discovered, to my slight surprise, that nobody actually uses the ASN.1 modules that are published in the RFCs to generate any code, or at least most don't do it at all.

I believe that FORT does do it a bit, but with some fairly heavily patched ASN.1 modules in order to get things to work. So then I tried to go out and find a relatively widely used ASN.1-to-C compiler called asn1c, and it just couldn't cope with any of the dependencies. It didn't like the X.509 modules, it didn't like the CMS modules, it didn't like really any of the modules that get used in the pyramid of dependencies on top of which RPKI signed objects are built. So that rather came to nothing, and I decided that I either needed to not have such a tool, or I needed to write it myself.
So the first thing that I needed to do, because I wasn't really in the market for writing an ASN.1 compiler from scratch, and because I didn't really need a particularly performant ASN.1 compiler: what I needed was something fairly vanilla flavoured, but which was going to work out of the box with some of the slightly more outlandish syntax constructions that you find in modules such as RFC 5912 and stuff like that.

So I did a lot of searching. There are quite a lot of different ASN.1 implementations out there, but most of them are not what I would call an ASN.1 compiler; they don't generate code. They expect you to hand-write data structures, and what they'll give you for free is things like a DER encoder and decoder.

But finally I came across this library called pycrate, which I'd never come across before. It has some fairly ugly warts, but it is very, very feature complete, and so I decided that, although I'd started this off not really wanting to write this thing in Python, pycrate is an ASN.1-to-Python compiler, and so it seemed to be the only way forward.
That's what gets used under the hood for the ASN.1 compilation part of this. With that in hand, I began scratching around and trying to build a library that would meet two objectives. I wanted to be in a position where, given only an ASN.1 module with a content-type instance definition and a very, very simple class implementation corresponding to a particular object, I would be able to instantiate any arbitrary new RPKI signed object with just those two things, and to try and keep the boilerplate to an absolute minimum. I think I've mostly succeeded in doing those.

Because Python is an interpreted language, trying to do code generation in Python is a bit weird. You don't have a discrete compile-time step like you have in, you know, C or C++ or Rust, or even Go, where you've got an opportunity to take some sort of external data and generate code from it before a compile step. So a lot of the work that I did was getting ASN.1 modules to be discoverable and compilable at runtime, which makes the whole thing much, much easier to use as a library user or as a CLI user.
The downside is that it makes the tool terrifically slow, so this is not something that should be used in performance-sensitive situations. The other thing that it does, which I believe is unique as far as I've been able to find out there (I'm sure someone will correct me), is that I wanted to be able to know at run time what can validly appear in a CMS data structure, and that also works. It requires some fairly obscure and fairly recent Python features, and so, unfortunately, rpkimancer only works on Python 3.8 and above as a result of that. But I think it's worth it, because that allows us to keep boilerplate to an absolute minimum, and it also places the onus of correctness much more strongly on the side of the ASN.1 modules.
So it includes implementations of TA, CA and end-entity resource certificates, and it can also write TAL files.

In the base package, the main rpkimancer package, there are implementations for manifests, ROAs and Ghostbusters records, and it ships with a fairly simple but quite extensive CLI tool, which has two sub-commands. rpkincant conjure creates a kind of local publication point on disk, and everything ships with defaults, so you can run literally just that command and get a whole, you know, get a TA with its publication point and manifest and CRL, and then a subordinate CA and a bunch of stuff under that. That makes for very quick and easy object generation if you're just trying to spin up a quick prototype to see if someone else can read it. Sorry, I've gone too far.

And then the second part of it is the perceive command, which is a decoder and data dumper that just dumps signed objects to standard out. It can dump them out in ASN.1 value syntax, it can dump them in JSON, and it's possible to provide methods for custom formats; for example, there's one that I've written for the ROA object to
allow you to actually look at the IP addresses as IP addresses, rather than the weird kind of DER bit strings that are used in the actual encoding. And it's got a plug-in architecture that borrows quite heavily from the Python setuptools way of declaring plugins, which allows plugins to declare the ASN.1 modules that they ship with, the signed objects that they can create, and also extensions to the rpkincant conjure subcommand.

There are existing plugins, both of which I've written, for the RSC object and for the ASPA object. There's also a working branch for the ASPA object for the most recent proposed change to the profile that's been discussed recently on the list, and which has, I believe, been implemented in the most recent version of Krill, so that should interop, barring a couple of OID changes.
Now, the things that I think it can be useful for at the moment. As I say, the first thing that I wanted to do is, while I was writing an internet draft, I wanted to validate the module that we'd ship. I didn't want to, you know, have someone come back to us after we've published a new version of the draft and say, actually, hang on a second, there's some sort of fundamental invalidity in the ASN.1 syntax. It's used for that today, and fairly successfully by the looks of it: so far Russ hasn't come back to us and told us that one of our ASN.1 modules doesn't compile.

Similarly, it can be used for object prototyping. During the development of a new object type, it's frequently useful to be able to quickly dump an example of that to send to someone to see if they can read it, and that's been used successfully a couple of times. For signed checklists, we've confirmed that the objects that get created by rpkimancer can be read by both rpki-client and another prototype implementation that Tom Harrison from APNIC built, and the same for the work-in-progress new version of the ASPA profile.
The other thing that it's useful for is recreating and finding bugs in RP and CA implementations, and it's actually been used successfully for that a couple of times as well, which I'll come to. And what I would like it to be a little bit more usable for, but it can still be used for this at the moment, is to do integration testing and kind of software acceptance testing for RP and CA implementations.

And, of course, the original itch that I had to scratch: ad hoc debugging of objects. Because an RP is, by necessity, fairly strict on what it ingests, it's quite difficult to extract useful information about what is wrong with an object from an RP, because it will kind of give up parsing as early as possible. So having a tool that doesn't do any validation beyond the ASN.1 syntax validation and just dumps
the contents to standard out is potentially quite valuable. So far we've found at least two actual bugs in the real world using this. The first was an issue that affected, I think, multiple RPs, but I only know of the actual issue number in FORT, where it caused a crash. This was caused by a CA's manifest that lists itself on its manifest, which causes a loop, and in this particular case it resulted in a double free.

For the second, I was sent a demo object which had a common name attribute in the subject name that was greater than the 64 characters that are actually allowed, and rpkimancer refused to eat it. That points out a pretty important aspect of the approach here, which is that, because the only validation that takes place is based on the ASN.1 modules, none of the crypto stuff is checked. But what is checked quite exhaustively, because of pycrate's support for constraints, is all of the obscure constraints way down the dependency stack that nobody kind of thinks to check when they're writing this stuff by hand. Before rpkimancer refused to eat that object, I didn't know that the common name had a maximum length of 64 characters, but it turns out it does, and that's been confirmed in a couple of places now, and there's an issue open in rpki-rs to address that as well.
I'd like to just quickly do a walkthrough, for the benefit of anyone who might want to use this at some stage, of what a plug-in for rpkimancer looks like when implementing a new signed object, in particular this one. All of this repo I've got up on GitHub, and there'll be a link at the end of the presentation so that you can find it. Inside the poem plugin I've implemented, just today, a very, very simple RPKI object which allows you to sign a poem with an AS number, and this is implemented for rpkimancer with really just three files. The first is the ASN.1 module itself, which lives in the plugin package and looks like this; most of this is boilerplate.

We have to import a bunch of things, but what is on screen there is the whole implementation, and all it's doing is telling it which OID to use for the eContentType, what syntax to use for the content, and what the file extension of the resultant object should be, and it needs properties for IP resources and AS resources.
D
That
is
because,
when
you
create
an
instance
of
this
poem
class
underneath
here,
it
will
automatically
it
will
automatically
generate
and
generate
an
identity
certificate
with
the
necessary
resources
in
it
and
wrap
the
whole
thing
up
in
a
valid
cms,
signed
data
structure,
and
so
really
only
all
the
the
only
kind
of
logic
that
you
need
to
add
is
a
mapping
from
a
bunch
of
arguments
which
will
vary
depending
on
the
intended
use
of
the
object
to
a
simple
python
dictionary,
which
kind
of
bears
a
direct
resemblance
to
what's
in
the
air.
D
Someone
and
armed.
Only
with
that,
you
can
immediately
generate
it
because
all
of
the
heavy
lifting
is
done
in
the
in
the
asm1
parser
and
an
encoding
logic.
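As a rough illustration of that division of labour (this is a standalone sketch, not the actual rpkimancer API; every class, attribute and OID below is made up for illustration), a new signed object type really only has to supply an eContentType OID, a content syntax name, a file extension, and a mapping from its arguments to an ASN.1-shaped dictionary:

```python
# Illustrative sketch only; this is NOT the rpkimancer API. It mimics the
# division of labour described above: a tiny base class stands in for the
# generic machinery, and the concrete object type only supplies metadata plus
# a mapping from arguments to a dict shaped like the ASN.1 eContent.
import json

class SignedObjectSketch:
    """Stand-in for the generic work (EE cert + CMS wrapping in the real tool)."""
    econtent_type_oid: str
    econtent_syntax: str
    file_ext: str

    def __init__(self, econtent: dict, as_resources=None, ip_resources=None):
        self.econtent = econtent
        self.as_resources = as_resources or []
        self.ip_resources = ip_resources or []

    def to_disk(self, path: str) -> None:
        # The real tool would DER-encode a CMS SignedData here; we just dump JSON.
        with open(f"{path}.{self.file_ext}", "w") as f:
            json.dump({"eContentType": self.econtent_type_oid,
                       "eContent": self.econtent}, f, indent=2)

class Poem(SignedObjectSketch):
    econtent_type_oid = "1.3.6.1.4.1.99999.1"   # made-up OID
    econtent_syntax = "RpkiSignedPoem"
    file_ext = "poem"

    def __init__(self, asid: int, poem: str):
        # The only per-object logic: map arguments onto the ASN.1-shaped dict.
        super().__init__(econtent={"version": 0, "asID": asid, "poem": poem},
                         as_resources=[asid])

Poem(65000, "roses are red...").to_disk("nigel")
```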
There's then a plugin for the CLI tool, which is similarly simple. Again, there's a bit of boilerplate up at the top, and a default value for the poem, which I stole from the RIPE whois. It declares the arguments that this plugin wants to receive, and then defines this run method, which actually holds the logic for interacting with the library to create an object. As you can see, this really is just garbage in and garbage out: two command-line arguments, one corresponding to the AS number and the other to the poem, and those get passed straight through to the poem object's constructor.
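Again purely as an illustration, building on the toy Poem class from the previous sketch (the hook names here are invented, not the real rpkincant plugin interface), the CLI side can be pictured as little more than argument declarations plus a run method that forwards them to the constructor:

```python
# Illustrative only: a made-up CLI plugin shape, not the real rpkincant interface.
import argparse

DEFAULT_POEM = "roses are red..."  # placeholder default text

class ConjurePoemPlugin:
    """Declares CLI arguments and forwards them, unmodified, to the object constructor."""

    def add_arguments(self, parser: argparse.ArgumentParser) -> None:
        parser.add_argument("--poem-asid", type=int, default=65000)
        parser.add_argument("--poem-text", default=DEFAULT_POEM)
        parser.add_argument("--out", default="poem-demo")

    def run(self, args: argparse.Namespace) -> None:
        # Garbage in, garbage out: arguments go straight to the constructor,
        # and only the ASN.1-level constraints decide whether the object is valid.
        Poem(asid=args.poem_asid, poem=args.poem_text).to_disk(args.out)
```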
In order to use it, first we need to set up a virtual environment, so that we don't, you know, start installing things globally. Fortunately, pycrate has no dependencies for the feature set that we need for this; it's all written in pure Python, which also makes it very slow. So, once we're set up and ready to go, we can issue rpkincant conjure, and you can use -v to turn up the debugging.

You'll see that it will always print some warnings. That is partly because of some slightly weird constructs that are used in the PKIX ASN.1 modules, which pycrate struggles to deal with sanely, and also because it complains about having to remove all of the default version numbers that are set to zero. So you'll see things like this repeated, but those are all fine. That exited successfully, and it has created a directory here which contains a very, very simple directory structure that you would expect to find on a publication point, or in the cache of your favourite RP.
So it creates a TAL for a trust anchor, called ta.tal, and it creates a repo for everything under that trust anchor: the TA's root certificate; its publication point, which contains its CRL, its manifest and a subordinate CA; and then that CA's publication point, which contains its CRL, its manifest, a ROA, a Ghostbusters record, and this funny poem object that we just created. So to look at, for example, a manifest, you can do rpkincant perceive with the target file.

And we'll get an equivalent thing back out. The JSON output doesn't decode the OIDs and stuff like that for you, which is inconvenient, but it's otherwise quite useful if you just want to extract a particular value using a tool like jq or something like that. And then, to have a look at our new object, we can just search for the poem one, because we know there's only one of them, and this time we can have a look at the whole encapsulated content structure.

You can see we've got our new content type: our eContentType is a ContentType instance of type poem, and we've got our asID, which just defaults to 65000, and Nigel's poem, which is complaining about the state of RPSL and RIPE-181, which I thought was appropriate for this.
As I mentioned, it uses setuptools for what are called entry points, which, if anyone has written a console script in Python before, they'll be familiar with. But essentially this plugin simply declares: it uses rpkimancer.asn1.modules to declare where to find its ASN.1 module, it uses rpkimancer.sigobj to say where to find any signed object types that it implements, and similarly a cli conjure entry point to declare any CLI plugins that it supports. So it's all, you know, runtime-discoverable, and whatever you have installed in your environment is whatever the various tools will have available to them.
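In setuptools terms, the declaration being described is roughly the following. This is a sketch: the entry-point group names are taken from the talk as transcribed and may be imperfect, and the module paths on the right are invented for illustration.

```python
# setup.py sketch for a hypothetical rpkimancer plugin distribution.
# Group names follow the talk; the object paths on the right are made up.
from setuptools import setup, find_packages

setup(
    name="rpkimancer-poem",
    packages=find_packages(),
    entry_points={
        # where rpkimancer should look for the plugin's ASN.1 module(s)
        "rpkimancer.asn1.modules": [
            "poem = rpkimancer_poem.asn1",
        ],
        # the signed object type(s) this plugin implements
        "rpkimancer.sigobj": [
            "poem = rpkimancer_poem.sigobj:Poem",
        ],
        # extensions to the `rpkincant conjure` sub-command
        "rpkimancer.cli.conjure": [
            "poem = rpkimancer_poem.cli:ConjurePoemPlugin",
        ],
    },
)
```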
So there are a few things that I still have left on my list to do, and if anyone wants to contribute to them, they're very welcome. I'd like to implement BGPsec router certificates; I think it's going to be pretty trivial to do that, it's just a question of finding the time.

At the moment, the directory structure that is generated by the CLI tool follows the directory structure that was used on disk by rpki-client from a couple of versions back. What I would like to do is have a similar plugin architecture that allows you to output a directory structure according to what is expected by whatever RP you happen to be trying to read this stuff with, and that's in order to try and improve the integration testing experience a little bit.
I'm in two minds whether this one's a good idea, but it should be fairly easily possible to generate the necessary XML files to synthesize an RRDP service locally. I'd also like to implement something like a diff tool for signed objects, because, you know, looking at them in hex is not particularly helpful, and looking at them in text form is not particularly helpful either. It would be quite nice to have a structure-aware diff tool for these, so that you can see what changed between two instances of the same CA's manifest, for example.

And then the other thing that I'd like to do is just make a template available to people who are implementing plugins, so that it's easy to get up and running, because, as I say, I've tried to minimize the amount of boilerplate, but it's certainly not boilerplate-free. And, as I mentioned, any help and suggestions and PRs and, you know, criticisms are welcome.
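The structure-aware diff idea can be pictured as decoding both objects into nested dictionaries and reporting leaf-level differences. A toy, tool-agnostic sketch of that comparison step (the decoding itself is out of scope here, and this is not an existing rpkimancer feature):

```python
# Toy sketch of the "structure-aware diff" idea: compare two decoded signed
# objects (already turned into nested dicts/lists) and report changed leaves.
def diff_decoded(old, new, path=""):
    if isinstance(old, dict) and isinstance(new, dict):
        for key in sorted(set(old) | set(new)):
            yield from diff_decoded(old.get(key), new.get(key), f"{path}.{key}")
    elif isinstance(old, list) and isinstance(new, list):
        for i in range(max(len(old), len(new))):
            o = old[i] if i < len(old) else None
            n = new[i] if i < len(new) else None
            yield from diff_decoded(o, n, f"{path}[{i}]")
    elif old != new:
        yield path, old, new

# e.g. comparing two decoded manifests of the same CA:
old_mft = {"manifestNumber": 41, "fileList": [{"file": "a.roa", "hash": "aa"}]}
new_mft = {"manifestNumber": 42, "fileList": [{"file": "a.roa", "hash": "bb"}]}
for where, before, after in diff_decoded(old_mft, new_mft):
    print(f"{where}: {before!r} -> {after!r}")
```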
Specifically, I think the areas where people can help, that I'm not in a great position to do myself: for anyone that's implementing a CA or an RP, I'd love to have some feedback about what a convenient way of serving the data that this tool generates to a locally running RP looks like. Is it to spin up a kind of dummy rsync server or RRDP server on localhost? Is it to output files directly to some cache directory in a particular structure? Is it all of them? Do people have significantly different code paths, such that it matters whether something is retrieved from the network or is just on disk when startup happens? That's feedback that I'd really like to have. And, if the latter, it would be great to have, in particular, RP implementations own the plugins for their own directory layout, for two reasons.

Firstly, it's difficult to guess, as a third party, when or why something might change; and also it prevents that layout having to become part of the public API with stability guarantees. If RP implementers are simply shipping a new version of the plugin for each new version of the RP, then nobody cares about the stability of that layout.

And, you know, what does a good test harness for this look like more generally, whether on the RP side or the rpkimancer side? What can be done to improve that integration testing experience? In particular, one suggestion that I have is that none of the RPs, from what I've seen, have particularly useful or machine-readable logs for working out why something's gone wrong, and I think that would be a helpful area to work on.
I've just shown a very, very simplified one, but the checklist and ASPA ones are maybe a dozen more lines of code than that dummy poem one that I just showed. In the case of checklists, the plugin that implements that signed object lives in the same git repo as the internet draft itself, and that has the advantage of being able to keep the version of the plugin in lockstep with the version of the draft. It also has the benefit of being able to unit test the module that gets shipped in the draft using the plugin, and so that tight coupling works quite well.

That's all I've got; sorry, I've been waffling on for so long. But if anybody has any questions, then please ask, either now or via the issues or the mailing list or wherever else you can find me. Thanks.
B
I think this tool has been incredibly helpful, and it has helped discover numerous bugs. So thank you very much for putting in the time and effort to create this.

D
You're welcome. So, being able to do a diff between two kind of known versions of a given object should be fairly easy to implement; I don't expect that to be hard. The difficulty with making that kind of a temporal view is how hard it is to store snapshots of what the RPKI looks like over time. For a while, I had a GitHub repo with a bunch of automation that would run an RP periodically and just store the cache, and I kept that for a while. So that's the missing piece of being able to do that easily and at the touch of a button, and I'm not sure that I have a good solution for that. And it's certainly not this tool.
A
Perhaps some thought process and mailing list conversation about how to keep historical track of the RPKI, like we do with BGP in a couple of different places, would be useful.

D
So I've got some ideas as to how one might achieve that. The issue is that you end up with a phenomenal amount of object duplication, because of manifests and CRLs rolling over all the time, and I think that being able to track it at a more granular level than the file itself, down to the entries in the data structure, would eliminate that duplication and probably give you a diff for free.
D
Also, yes, I think that's basically what Job has built with rpkiviews, which is, as I understand it, a series of archives. But it tells you about the snapshots; it doesn't tell you what happened in the middle, which is unfortunate, because usually it's the middle where something broke.

A
Look forward to some slideware and discussion, I think, unless there are any other questions.
D
So, if maybe I could just respond to these comments in the chat quickly: I agree in general. I think hooking a test RP up to this data is probably easiest to implement if you use one of the retrieval mechanisms, like RRDP or rsync, running locally. The problem is that, because you need to embed the URIs in the actual objects themselves, you end up needing to kind of spoof your own machine's DNS in order to do it, which can be done, totally, but it just feels like a little bit of an overreach, and it feels a bit clunky and prone to breakage on, you know, cloud CI systems. But that's exactly the feedback that I was wanting to get.
G
E
H
So
we,
the
the
project,
was
relatively
small
because
we
said
okay,
let's
we
want
to
have
a
project
that
you
can
start
on
monday
and
finish
on
friday
in
parallel
to
all
the
other
work,
what
we
had
to
do
and
so
what
what
we,
what
we,
what
we
did
was
or
so
yeah
we.
As
you
know,
we
have
the
nest,
bgp
srx
software
suite
what
we
back
then
developed
for
first
for
the
origin,
validation
and
then
later
on
also
for
the
digital
path,
validation
and
now,
where
we
are
talking
about
the
asba
verification.
H
H
H
So
if
you
look
at
the
data
flow,
so
basically
that
data
flow
take
it
with
a
grain
of
salt,
so
you
register
your
aspa
object
and
then
the
validation
cache
gets
all
this
stuff
validates
it
and
shoots
it
over
to
the
routers,
and
that
is
the
area
of
our
interest.
So
we
we
have
the
validation
cache
test
harness
that
does
not
do
the
509
validation
and
all
this
kind
of
stuff.
It
basically
takes
data.
So our first task was basically to create the ASPA data set; the cache test harness takes pretty much an ASCII version of the 8210bis ASPA PDU. As input data we used the CAIDA data. That's a very nice data set, where they go out and look into the internet topologies and try to infer peering relationships, and then we create our test input, which basically looks like an ASPA for a particular AS.

That was the collector that we used. So what did we do? We created a script that takes the CAIDA data and formats it a little bit in a different way, so that we can work with it, and we created around 72,000-plus ASPA PDUs, and they contain around, a little less than, 150,000 customer-provider relations.
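The CAIDA-to-ASPA conversion being described can be sketched roughly as follows. The CAIDA serial-1 AS-relationship format ("provider|customer|-1" for provider-to-customer, "|0" for peer-to-peer) is the real published format; the output record syntax below is only a stand-in, since the exact ASCII PDU format the cache test harness expects isn't shown in the talk.

```python
# Sketch: derive customer -> providers mappings from a CAIDA as-rel file and
# emit one ASPA-like record per customer AS. Output format is illustrative,
# not the actual BGP-SRx cache test harness syntax.
from collections import defaultdict

def read_caida_asrel(path):
    providers = defaultdict(set)          # customer ASN -> {provider ASNs}
    with open(path) as f:
        for line in f:
            if line.startswith("#") or not line.strip():
                continue
            a, b, rel = line.strip().split("|")[:3]
            if rel == "-1":               # "a" is a provider of "b"
                providers[int(b)].add(int(a))
    return providers

def write_aspa_records(providers, path):
    with open(path, "w") as out:
        for customer, provs in sorted(providers.items()):
            out.write(f"ASPA {customer} {' '.join(str(p) for p in sorted(provs))}\n")

# usage: write_aspa_records(read_caida_asrel("20201001.as-rel.txt"), "aspa-input.txt")
```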
H
Last
friday,
when
we,
when
we
presented
that
to
the
hackathon,
I
I
had
a
little
calculation
error.
I
said
we
did
around
180
000.
I
I
over
the
weekend.
I
went
one
more
time
over
the
stuff
and
it's
it's
more
like
150,
but
you
know.
H
Select
that
so
we
created
we
created
tools
where
we
can
down
select
the
aspa
data
depending
on
the
updates,
what
we,
what
we
found
or
what
we
will
play
into
the
router,
and
then
we
created
an
output.
What
we
believe
is
easy
to
to
be
used.
H
If
you
want
to
make
comparison
between
different
implementations,
they
should
basically
have
the
same
outcome.
So
we-
and
I
showed
that
later
so
we
created
a
couple
of
data
sets.
One
was
100
updates,
500
updates,
800,
1000,
10,
000,
20
thousand.
You
can
create
whatever
you
want
and
and
then
the
tools
go
out
through
the
raw
data
and
generate
you,
a
nice
set
of
the
asba
input
and
the
bgp
traffic
that
that
fits
to
that.
H
So
the
first
thing
what
we
did
was
we
we
looked
at
okay.
How
do
we
do
one?
Do
the
peering,
so
we,
as
I
said
before,
we
took
the
mrt
data
from
route
23
and
we
selected
the
table
data,
not
the
not
the
update
stream,
and
then
we
said:
okay,
we
for
our
bgp
secio
traffic
generator.
We
have
a
slight
different
format
than
the
data.
So
what
we?
What
we
did
is
you
see
here
on
the
right
is
basically
printout.
H
We
have
the
prefix,
then
our
our
b4
basically
means
generate
only
bgp4
updates
because
remember
it
was
this.
Traffic
generator
originally
was
built
to
create
bgb's
hack
updates,
and
we
are
not
interested
in
bg
sec
right
now
and
and
in
the
past,
to
create
bgb4
updates.
We
just
didn't,
have
any
keys
and
had
us
fall
back
bgb4,
but
this
is
just
resource
waste.
So
we
added
these
before
that.
H
So
normally
you
saw
701
all
the
time
in
front
of
it,
but
we
created
the
file
701.text
and
then
these
are
the
updates
and
then
bgp
sec
io
will
take
on
the
role
of
701
and
play
these
updates.
And,
of
course,
it
puts
its
own
as
in
there,
so
so
yeah.
So
that's
basically
what
we
did
on
this
side
and
then
on
the
other
side,
the
cada
data.
So
what
we
did?
H
We
said:
okay,
let's
go
through
through
the
bgp
update
and
we
we
only
generated
aspa
data,
kaida
data
or
no
sorry,
we
only
generated
the
aspa
input.
Data
for
containing
custom
is
what
we
saw
in
the
bgp
traffic.
Of
course
you
could
say
you
know
what
I
don't
care
about
this.
I
shoot
all
72
000
asp
data
to
the
router
and
everything
works.
Fine,
that's
perfectly
good
in
guessing,
but
sometimes
you
don't
want
to
have
everything
sent
directly.
You
wanna,
you
wanna.
H
You
wanna
reduce
a
little
bit
the
data
set,
what
you
work
with
it
might
be
for
debugging
purpose
or
or
other
things,
and
then
we
created
the
test
traffic.
So,
as
I
said
before
you,
you
saw
pretty
much
this
file,
but
it
had
real
real
prefixes
and
in
acep
we
are
not
really
interested
in
the
prefix.
We
are
more
interested
in
the
past,
so
what
we
did
was
we
went
through
the.
H
If
I
want
to
say,
I
want
to
have
100
000
or
I
want
to
have
a
thousand
or
ten
thousand
rounds,
we
are
interested
in
10
000,
unique
routes.
So
we
we
prune
the
the
egp
traffic
data
one
more
time
we
throw
away
all
the
prefixes
we
and
then
we
said.
Okay,
we
only
take
the
unique
as
path
and
then
eventually,
because
we
need
prefixes,
we
just
we.
H
The
why
are
we
doing
that?
Because
we
want
to
make
sure
that
that
we
get
every
path
in,
because
even
if
we
don't
do
that,
we
would,
for
example,
keep
the
the
original
prefix
is
what
we
have
in
there
and
we
would
say:
okay
select
the
first
as
path.
What
you
of
of
of
this
lookalike
the
chance
could
be
that
you
have
duplicates
if
you
don't
take
the
table
dump,
but
if
you
took
the
bgp
live
stream
and
because
we
don't
want
to
rely
on
the
that
you
use
the
table
down.
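A rough sketch of that pruning step, assuming the per-peer update file has one "prefix, B4, as-path" style line per route (both the exact file syntax and the way prefixes are re-attached are assumptions on my part, not the published tooling):

```python
# Sketch: keep only the first N unique AS paths from a dump of routes and
# pair each with a synthetic prefix. The input/output formats are assumed.
import ipaddress

def unique_paths(lines, limit):
    seen, kept = set(), []
    for line in lines:
        _prefix, _flag, as_path = [part.strip() for part in line.split(",")[:3]]
        if as_path not in seen:
            seen.add(as_path)
            kept.append(as_path)
            if len(kept) == limit:
                break
    return kept

def with_synthetic_prefixes(paths, base="10.0.0.0"):
    # Re-attach prefixes only because the update format needs one; the actual
    # prefix value is irrelevant for ASPA path verification.
    start = int(ipaddress.IPv4Address(base))
    return [f"{ipaddress.IPv4Address(start + i * 256)}/24, B4, {p}"
            for i, p in enumerate(paths)]
```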
And then, again, this whole thing is done automatically. I always like plug and play: you have one or two scripts and they do everything in the background for you, because you don't want to waste your time generating the data; you want to spend your time testing your software, or creating data for analysis.

So we have a tool that is called generate-data, and I give it my prefix, in this case 701, and I say give me the first 100 unique AS paths. Then it goes into its file-based database thing (it's not really a database, it's basically a directory where you have a file called 701.txt), it generates all the things out of this, and it creates the import file. We call it the extension .bio, which is for BGPsec-IO; that contains the updates. We have one file that contains all the unique ASNs; we could have deleted it again, but we kept it in there, because sometimes you want to know which ASes you used, not just the unique number of them. And then we created that out of the big 72,000.

Then we started our, oh yeah, and then, from the scripts, there are two ways you can start the experiment. You can start it from a console where you just remote login, or you can start it, preferably, from a windowing system on Linux.
H
If
you
use
the
terminal
only
then
what
it
basically
does.
It
just
starts
all
the
all
the
modules,
the
the
cache
test,
harness
the
srx
server
for
the
validation,
the
quagga
router,
the
bgb
sec
io
and
runs
everything
in
the
background.
It
redirects
all
the
output
to
standard
io
and
arrow
into
into
log
files,
and
and
it
works
fine,
but
the
problem
is
sometimes
you
run
into
issues
and
then
it's
really
a
pain
to
or
you
you
want
to
manipulate
a
little
bit.
H
Our
cash
test
harness
has
a
cleave
where
you
can
add
and
remove
data
on
the
fly,
and
you
cannot
do
that
when
you
run
it
there.
So
therefore
we
have
the
gnome
terminal.
That
is
a
preferred
one.
So
you
start
it
it
it
moves
or
it
starts
every
module
in
its
own
tab
and
you
can
easily
then
switch
between
one
tool
and
the
other
one.
H
So
that
looks
pretty
much
like
this,
so
I
start
the
stuff-
and
here
I
have
my
start
service
I
say,
use
my
terminal.
The
minus
w
basically
means
in
case
one
of
these
crashes.
Normally
the
the
tab
would
immediately
disappear.
The
minus
w
says
we
ask
for
a
key
input
before
the
windows
closes,
so
that
you
have
a
chance
to
look
into
this
window
and
see
what
what
happened.
H
That
is
especially
important
when
it
comes
to
the
traffic
generator,
because
sometimes
when
bg,
when
quagga
srx
is
not
ready
yet-
and
we
start
this
too
early
it,
it
tries
a
couple
of
times
to
connect.
But
if
something
goes
wrong
in
the
connection,
this
one
just
stops
and
then
you
want
to
know
what's
going
on
and
you
might
have
to
do
something
in
timing
or
query
for
open
ports
or
what
have
you
and
there
it's
nice
to
see.
Okay,
did
it
actually
run
or
did
it
crash?
If
something
is
not
coming
out?
H
The
way
you
want
again,
so
you
start
that
and
then
it
it
automatically
configures
the
router.
So
that's
a
nice
thing
the
we
have
we
have
templates.
If
I
say
I
want
to
run
it
with
701,
then
it
configures
bgb
secure
to
conduct
to
to
act
a
701,
it
configures
quagga
that
701
disappear
and
so
forth.
It
starts
everything
we
have
some
timing
in
between
and
at
the
very
end
it
asks
you
if
you,
if
you
you
press
r
for
the
resource,
then
what
it
does
it.
H
It
makes
it's
like
a
10
attack
where
we
go
into
quagga
and
we
make
a
show
ipv
gp
and
then,
with
some
regex
we
modify
the
output
so
that
this
one
is
basically
the
output,
the
result,
output,
what
you
want
to
have
the
validation
state
and
the
as
path,
and
you
can
also
start
the
service
pages
search
service
and
then
the
parameter.
H
I
think
it's
called
minus
view
table
or
minus
minus
view
table,
and
then
it
does
this
one
for
you
as
well.
You
run
a
little
bit
into
problems
there
that
when
the
data
set
is
too
big
that
we
lost
the
connectivity
between
quagga
and
the
tennet
session,
we
did
some
timing
there,
but
there's
something
else
going
on.
So
I
don't
know
if
this
one
is
the
the
best
way
of
doing
it,
but
for
smaller
data
sets
it
is
very
nice.
H
Maybe
it
was
also
just
my
system
that
cracked
up
there
a
little
bit
so
we
have.
We
have
to
look
into
this
one
going
forward,
but
for
the
time
being,
that's
that's
a
very
good
way
of
doing
that.
So
what
do
you
do?
We
basically
use
the
large-scale
isp
in
the
data
from
cada
from
october
1st
2020.
H
We
use
the
october
1st
2020
data,
because
scada
right
now
is
rewriting
all
the
algorithms
and
that's
the
latest
data
set.
They
they
provide.
We
created
a
subset
of
unique
routes
and
we
take
the
cata
data.
We
performed
the
asbi
validation
and
we
set
the
iot
as
a
private
is,
I
think,
the
65000,
what
we
use
and
then
we
run
it
against
it
that
versus
the
results.
But
again
you
have
to
take
these
results
with
a
grain
of
salt,
because
one
thing,
for
example,
we
we
run
it
where
the
isp
is
a
provider.
A
H
You
have
to
you
can
choose
other
other
isps,
but
we
just
want
to
show,
though,
so,
even
even
though
that
is
that
I
would
be
careful
with
analyzing
this
really
really
deep
right.
Now,
you
see
if
it's
a
provider
you
have
most
of
it,
is
valid
and
just
a
small
of
embedded.
So
it
just
gives
you
already
some
some
ideas,
and
then
this
was
a
relatively
small
data
set
depending
what
data
you.
It
would
be
there
nicer
to
have
or
or
to
to
play
them
the
whole.
H
I
think
the
8
000
prefixes
ended
up
to
be
a
hundred
thousand
unique
routes.
So
so,
but
don't
don't
take
my
word
for
that-
might
be
that
it's
even
a
little
bit
more.
So
it
makes
more
sense
to
to
then
really
look
into
what
kind
of
data
you
want
to
put
if
you
want
to.
If
you
want
to
make
serious
research
on
that
and
again
it
was
this
data
we
generated
in
the
middle
of
the
night
before
the
hackathon
presentation.
H
So
I
was
happy
to
have
at
least
something
so
the
code
itself
we
will
put
on
on
github.
We
don't
have
it
on
right
now.
I
don't
know
right
now.
We
will
put
it
on
the
hackathon
github
part
or
if
we
make
a
part
of
the
the
egbsrx
github
once
we,
but
it
doesn't
matter
at
this
point
because
I
still
want
to
clean
up
a
little
bit.
It's
a
little
crude.
H
So
I
want
to
have
it
in
a
way
that
that,
if
you're
interested
in
looking
into
that,
that
you
actually
don't
have
to
fight
for
two
three
hours
to
figure
out
how
the
stuff
works,
but
that
you
have
all
the
information
needed
that
you
can
do
it
relatively
quickly
and
then
we
will
send
an
email
out
to
the
list
or
even
have
it
on
our
at
least
on
this
bgb
srx
github
page,
we
will
have
a
reference
to
where
the
data
can
be
found
or
if
you
want
to
have
it
in
the
status
right
now,
just
drop
me
an
email
and
I
just
wrap
it
up
and
send
it
to
so
there's
this
incentive.
H
Now.
Why
did
we
do
this?
We
said
it
would
be
nice
again
to
to
maybe
start
using
hackathon
for
these
simple
small
scale
projects
and
maybe
finding
others
who
are
also
interested
in
in
doing
that
and
tackling
just
a
simple
problem
and
trying
to
get
the
standards
in
the
seven
days
or
the
five
days
of
the
of
the
hackathon
and
bring
it
out.
So
one
thing
is
what
what
I'm
personally
very
interested
in
is
taking
really
a
larger
scale
set
and
then.
H
Playing
with
gradual
deployment
so,
for
example,
our
cache
test
times
what
we
can
do,
we
can
say
if
I
have
let's
say
a
thousand
aspa
objects,
I
can
say,
play
the
first
100.
Then
wait
five
minutes
play
or
wait
until
I
press
a
key
and
then
press
the
next
100
and
so
forth,
and
with
this
one,
then
you
could
basically
run
and
always
see
what
is
the
validation
output
and
see?
How
would
it
look
like
if
we
have
gradual
deployment?
H
That
is
one
thing
and,
of
course,
if
this
one
is
automated
even
better,
because
I
always
prefer
as
little
user
input
as
possible
necessary,
but
I
also
want
to
be
have
the
chance
to
give
my
user
input.
If
I
want
to
so,
then
the
other
thing
is
currently
we
made
it
with
one
peering
session,
maybe
extending
the
the
the
scripts
and
this
all
shall
all
linux
telescopes
extending
them
to
allow
having
multiple
peering
sessions.
H
If
you
want
to
start
looking
into
performance
testing
of
the
router,
then
you
have
multiple
peers
and
testing
scaling
scaling
scaling.
We
run
in
some
issues
in
our
implementation
with
scaling.
That's
why
we
we
couldn't
run
the
full
scale.
H
We
found
one
segmentation
fault,
unfortunately,
what
we
will
now
start
looking
into,
and
hopefully
we
have
it
fixed
within
the
next
week
or
two,
and
then
we
maybe
can
look
into
really
at
the
full
table.
You
know
we
were
loading.
The
72
000
asp
objects
into
our
cache
test.
Harness
that
worked
was
very
quick
was
very
nice.
We
fed
them
into
the
srx
server
that
went
very
nice
as
well.
We
just
have
to
see
that
we
manage
our
memory
correctly
in
every
little
thing.
H
H
Then
maybe
it
would
be
really
nice
to
test
this
also
with
other
implementations.
So
you
can
take
our
shell
scripts
and
instead
of
starting
the
srx
server
and
quarkxrx,
you
can
start
your
own
implementation
and
run
it
against
that,
and
it
would
be
maybe
nice
to
have
other
other
reference
implementation
implementers
to
partake,
maybe
next
next
hackathon
to
to
work
a
little
bit
on
that.
Another interesting part is looking at the other side of it: taking the CAIDA data and not only generating the data for the 8210bis PDUs, but creating the input data for the validation caches, so that you can take your validation cache and test it. Then you don't need our test harness, and we could run it against the real validator. That would be something really nice too. I don't know if I can do it right now, because I didn't work enough on this side of the project, but maybe others can join. That would be something really cool, and I would really look forward to doing stuff like that. So that's it for now. If you have any questions, speak up now, or down there in the chat, or send me an email.

I
Okay, yeah, but that kind of precludes empty AS provider sets, which I hope are still in ASPA. Oh.
F
So, first of all, thank you for the report; it's good to see more testing and evaluation around ASPA. But I have two questions. The first one is about your source of the data, since you were using CAIDA. As far as I understand, and as far as I remember, the CAIDA peering relations are based on some kind of expert set of Tier 1 providers, and the thing is that these Tier 1 providers are not in the customer-to-provider set, so you should have explicitly created what is now called AS0 ASPA, or empty ASPA.

H
Again, that's why I say: take the data with a grain of salt. The only thing we did with the CAIDA data was look basically into the peering relationships they gave; we didn't really spend time on adding other things or analyzing the data correctly, because for us the main issue, for the testing, was that we wanted to do scale testing and have some data set that would make some kind of sense, rather than just randomly generating something. And again, this data currently is not meant for going out and starting to make real in-depth analysis of why it is now like that or like this; you also have to see what kind of AS you choose.
So, again, the main part is this: even if the data, whatever its quality, the main part is that, if I have two implementations that take this data as input, the output should be the same. So with this you can at least assure that the validation itself, the validation algorithm in the implementation, is correct.

F
Okay, I understand, but I will just highlight my point: you should check how you are processing data from CAIDA, because, once again, what are called Tier 1 providers will not be in this data, because they are in the source of the algorithm, but the data shows only autonomous systems that do have providers. And the second comment is about the CAIDA data itself, because everybody knows that it is kind of noisy. You may have both kinds of false positives: you may have systems that are named to be providers when they are not, and, the more important one, there may be missing providers, and such a situation may turn into an invalid outcome.

My suggestion is to not use the big set of autonomous system relations, but to use a top-down approach: to start with simple ASPA records for Tier 1 providers. That's what we did in our detection system, and it just works; there is very rare noise in this kind of system. And combining this kind of approach with what you just did, I think it can be very interesting research. But anyway, thank you, thank you for your work.
H
I mean, yeah, agreed. But for us the main part is that we wanted to have a very quick way to generate large-scale data sets. We have handcrafted ASPA experimentation that comes with the software suite; we have a complete experimentation framework in there, and that's all fine, but, I mean, okay, this is what is around right now. I know it's not the perfect set, but it always depends on what you want to do. If you have other sources, though, that we can use, that maybe give more or a better data set, even for looking into and working with, I'm more than willing to use them as well. Again, it's...

E
I think, though, we do have small data also, which is synthetic, with experimental AS numbers, so we can also make that available, in which case you can just do a quick test and see an output that is only 12 ASPAs or 12 updates, or something like that, with a few ASes, and you can do a quick, simple test. So that's available, and I think Oliver can make, or is planning to make... no, no.
E
We realized it. We realized that for Tier 1 ASes, or any ASes that don't have providers, we should have an AS0 ASPA. So that's going to be fixed; it's very simple to fix. We identify the Tier 1 ASes that don't have providers, and we would add ASPAs that have AS0 in them, and that would fix that problem. And, Alexander, you will see that, once we do that, the unknowns will not be so many.
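Continuing the earlier conversion sketch, the fix being described could look roughly like this: any AS that appears in the relationship data but has no providers of its own gets an ASPA whose provider set is just AS0. Again, the record representation is illustrative only.

```python
# Sketch: add AS0 ASPAs for provider-free ASes (e.g. Tier 1s), building on the
# customer -> providers mapping derived from the CAIDA as-rel data earlier.
def add_as0_aspas(providers):
    # every AS seen anywhere in the relationship data
    all_asns = set(providers) | {p for provs in providers.values() for p in provs}
    for asn in all_asns:
        if asn not in providers:   # never appears as a customer, so no providers
            providers[asn] = {0}   # provider set {AS0}: "I have no providers"
    return providers
```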
The AS701, which is a Tier 1: in the second set of experiments, the IUT considers 701 as a customer, which is unusual. So, once we fix this, if it's considered as a customer, many of the routes will actually become invalid, and so there will be a shift from unknown to invalid once we fix the ASPAs to include the Tier 1 ASes with AS0 in them. And the third point, picking up on Rudiger's comment: Alexander, correct me if I'm wrong, but from the draft there is no such thing as an empty ASPA. There is a zero ASPA, which we would create for Tier 1s, or ASes that don't have providers, but there is no such thing as an empty ASPA, right?

F
About your last point, yes, you're correct. So at the moment the draft says that the empty provider set should be represented with AS0, likewise as it is done with ROAs. On the mailing list there is still ongoing discussion of the subject; we'll see how it ends up, but the syntax is correct in the slides. And thank you for responding to my comment.

H
Yeah, maybe one thing I also want to say: the syntax is basically based on the 8210bis draft, and the PDU that gets sent over to the router, I think, has a flags field in there. But again, I will verify that, and if I'm mistaken, then we will of course make the modification, or maybe that's right and then we will deal with it.
A
I think, lacking any other people showing up at the mic line, we're at the end of today's agenda, and I would thank Oliver and Ben both for some actually pretty cool talks.