From YouTube: IETF115-SIDROPS-20221110-1300
Description: SIDROPS meeting session at IETF 115, 2022/11/10 13:00
https://datatracker.ietf.org/meeting/115/proceedings/
A: Super. Hi, everyone. I'm Keyur Patel, and we have Natalie here with us, who's our SIDROPS secretary, and I have Chris Morrow, who's my co-chair, joining us remotely. Chris, do you want to say a quick hi? Hello. Super.

A: On the agenda we have five presentations. We have Tim talking about challenges and lessons learned in deploying a couple of RFCs: 6492, 8181 and 8183. Then we have both Igor and Sriram talking about an update on SAV using BGP, ASPA and ROA. Then we have the signed TAL, which Tom's going to talk about. Then again an update, on ASPA verification, from Sriram, and finally Job is going to talk about an update on RFC 6482bis.
B: Yeah, hi, everyone. I've set 20 minutes on the agenda, but I'll try to be shorter; maybe we'll have some discussion, though. Let's see. I wanted to talk today about the experience that, well, I've had firsthand in the last couple of years implementing these RFCs. Next slide. Before we start, I want to say a big thank you for creating these RFCs; I think by and large they actually work very well.

B: We see a lot of deployment, different implementations, different instances, so I'm not here to say we have an immediate problem. Well, I hope it comes across: although all of this stuff essentially works, I think there are also things that we can improve. Next slide, please.
B: If the numbers don't mean much to you, and I can imagine they don't: there are three RFCs I named. 8183 is about the exchange of identity certificates, essentially, between different parties in the RPKI. So a child CA needs to talk to its parent to get certificates signed, and needs to talk to its publication server to get its signed content published; there's an initial setup for that, and that's 8183. And then there's the provisioning protocol, 6492, and the publication protocol, 8181. Next slide, please.

B: Anything missing? Well, yeah, I think that, based on experience, some things might be missing. Other things can just be improved, and the urgency of these things, well, I guess that's to a degree a matter of opinion, and it also varies a bit from topic to topic. But I'd like to go over the, well, fairly long list of things that I wanted to say. Next slide.
B: So, starting with the general protocol: what I have found is that the definition of the CMS messages and the identity certificates used in any communication is fairly, well, loosely specified, and this can lead to some interop issues. Because, you know, you don't want to just accept everything that's possible in CMS or certificates, so you kind of narrow it down, and it makes sense to do that in a similar way to what we're doing for the resource certificates.

B: But then you find that some implementations use additional things, and then you need to go back into your code and make fixes on an ad hoc basis. So this can be a bit annoying. Replay protection can be improved, I think, because currently there's text that says that the signing time of the CMS cannot regress, essentially, which is okay, but might not be enough. For example, if I ask my parent for my resource entitlements and I get a message from before, but it's not regressed.
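As a rough illustration of the gap (a hypothetical sketch with invented names, not taken from any implementation): a check that only enforces "signing-time must not regress" still accepts a replay of the newest captured message, because nothing binds a response to the request it answers.

```python
# Hypothetical sketch of replay protection based only on CMS signing-time,
# per the "cannot regress" rule discussed above. All names are made up.

class Channel:
    """Tracks the last CMS signing-time seen on one parent/child channel."""

    def __init__(self):
        self.last_signing_time = 0  # seconds since epoch; 0 = nothing seen

    def accept(self, signing_time: int) -> bool:
        """Apply the rule: reject only if the signing-time regresses."""
        if signing_time < self.last_signing_time:
            return False
        self.last_signing_time = signing_time
        return True

chan = Channel()
assert chan.accept(1_000)      # first response accepted
assert not chan.accept(900)    # an older, regressed response is rejected
# But replaying the newest captured response passes the check, e.g. an
# attacker serving yesterday's entitlements answer to today's query:
assert chan.accept(1_000)
```

The replayed message is stale but not "regressed", which is the scenario described above; some request/response binding, such as a nonce, would be needed to close that gap.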
B
Identity,
key
role
is
a
thing
at
scale,
so
one
implementation
that
I
are
in
I
should
say
one
deployment
that
I
work
with
have
well
over
a
thousand
delegated
chart
CA
so
they're
on
their
own
systems,
and
they
need
to
do
this
exchange
initially.
But
what
if
you
want
to
change
your
identity?
Key,
for
example,
because
you
wanted
to
start
using
an
HSM
and
you
weren't
before?
Do
you
go
out
to
all
of
these
Cas
and
ask
them
to
do
another?
Another
exchange?
That's
it's!
B
It's
difficult
from
a
scanning
perspective,
signing
algorithm
for
the
messages
it's
all
RSA,
2048
and
sha-256
and
I.
Don't
think
we
have
a
plan
for
changing
that.
We
may
want
to
think
about
it.
Next,
then,
more
on
the
control
of
the
messages
error
messages.
There
are
quite
a
few
error
messages
and
some
of
them
are
really
useful,
but
I
think
there's
also
room
for
improvement.
There
rate
limiting
is
something
we
may
want
to
think
about,
and
then
I
don't
know
there
may
be
other
things.
B
A
lot
of
people
have
run
into
that
I'm
not
aware
of
next
slide,
please
now
more
specifically
on
the
publication
protocol.
So
there
has
been
from
time
to
time.
People
suggest
that
maybe
publication
server
should
be
more
proactive
in
what
it
accepts
from
Cas.
Relying
parties
cannot
trust
the
repository
inherently.
They
need
to
do
their
own
validation,
but
still
if
a
server
would
you
know,
apply
some
hygiene,
there
is
some
attraction
in
that.
So
things
you
could
think
about
is
like.
Should
a
server
protect
against
certain
object
types
syntax?
B
Should
it
be
insist
on
the
consistency,
there's
actually
an
error
message
for
that
already.
So,
but
then,
if
we
go
down
that
road,
we
also
need
to
consider
the
risk
of
server
errors.
So
what
AP
publication
server
has
a
has.
A
bug
was
just
you
know,
not
up
to
speed
with
the
latest
development,
and
it
starts
rejecting
things.
Then
how
does
the
ca
deal
with
this?
So
it's
not
as
trivial
as
we
might
think.
B
It's
hard
to
say
what
quote:
I
should
be
probably
it's
something
that
is
on
a
per
publisher
basis,
but
then
again,
yeah
we'd
like
to
protect
against
Cas,
just
publishing
a
million
objects
and
causing
relying
party
software
all
over
the
world
to
download
all
that
stuff
and
I
think
it
would
be
good
if
you
have
something
in
the
protocol
for
that
server,
notifications
or
resync.
B
These
question
marks
are
about
a
specific
outage
that
I
witnessed
where
essentially,
there
was
a
configuration
mistake
applied
to
the
publication
server
and
it
had
to
be
restored
to
a
previous
version
from
backup,
and
when
you
get
done,
is
that
Cas
are
out
of
sync
with
the
publication
server,
but
they
don't
know
they
think
I
published
new,
manifest
URL.
Everything
is
good.
I'll
come
I'll
come
back
tomorrow,
but
the
rest
of
the
world
sees
the
whole
manifest
URL
and
they
expire.
B: Next slide. Similarly, in the provisioning protocol: when a resource is safe to use is also a concern, specifically, when will my certificate with new resources be published? Now, this is written in the CPS, but I can't really parse the CPS; I don't know where it is, and I don't know how to parse PDFs. So, but yeah.

B: Similarly to before: how frequently do we poll the parent for what the entitlements are, and whether they have changed? Or should maybe the parent have a notification mechanism, like we have, for example, in RRDP, so that it can say "you may want to talk to me"? Algorithm agility: there's a document describing what we would have to do if we wanted to use other algorithms in the RPKI, and essentially it's based on having separate trees for a while, and a new document that defines flag days for going from one to the other.
B
But
the
point
is
more
as
a
child
CA
I
have
no
way
of
knowing
that
this
is
going
on.
What
I'm
supposed
to
do.
So,
probably,
if
we're
going
to
do
this,
we
would
need
something
that
okay
A
little
bit
of
detail.
The
response
that
the
parent
gives
me
is
essentially,
these
are
your
resource
classes
and
your
entitlements,
your
resources
in
each
resource
class,
so
most
likely
you'll
need
something
that
says
here.
You
have
resource
Class
A
and
you
can
do
RSA
there.
B
You
have
resource
Class
B
and
that's
where
you
can
do
elliptic
curve
or
you
know
whatever
it
might
be,
but
something
like
that
we
will
probably
need
at
some
point,
because
currently
we
have
a
document
that
describes
how
we
could
do
algorithm
roles,
but
I,
don't
think
that
in
practice
we
can
make
this
work
at
least
not
for
the
world
where
you
have
delegated
cas,
and
there
may
be
other
things
of
course,
which
other
people
have
seen
next
slide.
B: please. Now, what would the requirements be if we think about changing all of this? I think it's really important that we basically don't leave anybody behind, at least definitely not from the start. So we would need some kind of graceful negotiation of protocol, of capability or capabilities, or something.

B: What might help is that we stay as close to the current protocol as possible, because it's less work, but of course provided that we can do it in a safe way. And then other questions come to mind, like: okay, if we look at all of this, and at other potential things that people might think of, will we go for a new version that tries to fix all of the issues, which might be hard, or, you know, is there a way to do things incrementally and say...
B: What I propose, and this is the last slide, actually: to start with the bottom... well, I'll start with the top; I'll get to that.

B: What I propose now, also after talking to some people, is that even though, you know, I would love to move fast and design a thing and have something great, I think the proper thing to do is to make a document that really defines the problem statement in these different areas and that lists requirements for how we might move forward. That can also be used to discuss what the priorities are that people, you know, feel: do we have consensus that something is an issue?

B: Maybe we don't have consensus on certain things, and, you know, then it's hard to work on those. That's probably a good starting point. Having said that, the identity key roll is an operational issue that I'm facing, so I need to do something a little bit more proactive there.
B: Now, if we can have a discussion, and it goes fast, then perhaps we can have something there that is really within standards. But if not, and I still need to do this, then what I would propose is that I make an informational document that describes how this works, and potentially get external auditing on it as well. Preferably I would get review from the working group, though, and I would feed it back into an IETF standard, hopefully. Plus, I will also commit to this:

B: if we do have that, I'm more than willing to change things that I have done as a temporary measure, to follow the standard-to-be. Now, the final thing I wanted to say is: some of you might have seen the document that I submitted with just myself as author, where I documented some ideas, some of them also resulting from discussion with other people.

B: That document is just a document that I wrote because I wanted to have something tangible, so I could have discussions about, you know, what possible mechanisms we might want to think about there. And it has helped and served that purpose. But to be clear, that is not the document that I'm proposing to write in the first bullet point. And that's it, really. So I guess my question to the group would be: do you agree that going for a problem statement and requirements document in this case is a valuable exercise, and would you be willing to contribute to it? Yeah, and that's it.
B: I think... thank you.

G: Hey, everybody, I'm Igor Lubashev, and at the last IETF, in Philly, we presented a new proposal for a source address validation algorithm that's using BGP as well as RPKI data. We received quite a bit of good feedback at the mic and after, and so today I'm going to talk about what's happening next with this work. Can you do the next slide? Actually, the next two slides. Next. So I'll start with just a quick recap of the BAR-SAV algorithm; I'm not going to go into much detail, so look at the Philly presentation for that. Next.
G: All right. So we have a pretty nice, long pedigree of the last 22 years, since 2000, when BCP 38 was published, which says: thou shalt do source address validation. RFC 3704 invented the feasible-path uRPF method, RFC 8704 then enhanced feasible path, and, well, here we are today with BAR-SAV. And the main advantage, the main innovation, of BAR-SAV is augmenting the BGP-based methods, like 8704's, with RPKI data.

G: It starts with: let's find the customer cone for the interface in question, for the customer or peer interface. It's using AS_PATH data from BGP in a more advanced way, but fundamentally AS_PATHs from BGP, while also looking at ASPA data as available. Once the customer cone has been built, it's going to find all the prefixes that belong to the ASes in the customer cone, using prefixes from BGP announcements and also ROA data.
G
So,
what's
the
advantage
of
the
biggest
Advantage
is
that
the
data
available
to
barsev
is
more
than
just
the
data
that
was
previously
available
to
algorithms
that
only
use
bgp,
and
that
means
that
for
networks
that
use
some
sort
of
traffic
engineerings
and
their
prefixes
or
as
numbers
don't
show
up
in
bgp,
we
have
a
way
to
augment
that
data
and
build
a
better
self-filter
list,
looting
it
rpki
in
some
future.
G
If
customer
code
has
a
perfect
adoption,
for
example
of
aspn
and
raw
burstev
can
then
build
a
perfect
filter
chat
from
rpki
data,
but
it
doesn't
have
to
be.
We
don't
have
to
wait
for
that
future
because
it's
perfectly
happy
to
augment
rpki
data
from
BJP
next.
G: Next slide. Thank you. So, again quickly going through BAR-SAV operation. Step one: build the customer cone. Start with just the single AS number that's on the other side of your interface, the peer interface or customer interface. Look up that AS number in ASPAs, for customer-to-provider relationships, and look up that AS number in all the AS_PATHs that the router has received, looking for what the previous AS number in the AS_PATH is. It should be stated that we're looking at every single BGP update message...

G: ...that's available, not just from this interface: anything that's received from any of the customers, peers, even your provider, if you get a full table from them. All right. So you discover some AS numbers; you iteratively repeat the process, and when you can't discover anything new, you're done: you have your customer cone. And then, for the customer cone AS numbers, you look them up in ROAs and find the prefixes.
G: You look them up in BGP update messages too: for BGP update messages where the originating AS is in your customer cone, you find the prefixes. At this point it's kind of implied, but a good idea to state explicitly, that the inputs, the BGP data that you are looking at, should be pre-validated, using RPKI ROV or any other validation method; so don't use invalid BGP data for BAR-SAV. All right. So you combine the two sets of prefixes, those found from ROAs and those from BGP, and that's your SAV list.
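The two steps just described can be sketched in a few lines of Python (toy data structures and names invented for illustration; a real implementation works from validated ASPA, ROA and BGP feeds):

```python
# Rough sketch of the two BAR-SAV steps described above. Toy data
# structures and names are invented for illustration only.

def customer_cone(neighbor_as, aspas, as_paths):
    """aspas: {customer_as: set of provider ASes} from validated ASPAs.
    as_paths: iterable of AS_PATHs, each ordered [neighbor, ..., origin]."""
    cone = {neighbor_as}
    changed = True
    while changed:  # iterate until no new AS number is discovered
        changed = False
        # ASes whose ASPA names a cone member as one of their providers.
        for customer, providers in aspas.items():
            if customer not in cone and providers & cone:
                cone.add(customer)
                changed = True
        # In any received AS_PATH, the AS that follows a cone member
        # (one hop closer to the origin) joins the cone.
        for path in as_paths:
            for left, right in zip(path, path[1:]):
                if left in cone and right not in cone:
                    cone.add(right)
                    changed = True
    return cone

def sav_prefixes(cone, roas, announcements):
    """Union of ROA prefixes and BGP-announced prefixes originated by
    cone members. roas: iterable of (origin_as, prefix) pairs;
    announcements: iterable of (as_path, prefix), pre-validated."""
    prefixes = {p for asn, p in roas if asn in cone}
    prefixes |= {p for path, p in announcements if path[-1] in cone}
    return prefixes

# Toy example: neighbor AS 64500; ASPAs say 64501 -> 64500 and
# 64502 -> 64501; one AS_PATH [64500, 64510] was also observed.
cone = customer_cone(64500, {64501: {64500}, 64502: {64501}},
                     [[64500, 64510]])
assert cone == {64500, 64501, 64502, 64510}
assert sav_prefixes(cone,
                    [(64502, "192.0.2.0/24"), (64999, "198.51.100.0/24")],
                    [([64500, 64510], "203.0.113.0/24")]) == {
    "192.0.2.0/24", "203.0.113.0/24"}
```

Note the fixed-point loop: both the ASPA lookups and the AS_PATH scan keep running until no new cone member appears, which mirrors the "iteratively repeat the process" step above.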
G: Next. As you can see, you don't need a widespread deployment of ROA and ASPA for this to be useful; BAR-SAV can happily take data from BGP. The only place where RPKI data is essential is for networks that do some sort of fancy traffic engineering, such that their prefixes and AS numbers do not show up in a BGP feed. Most likely, the networks that do this kind of stuff are the more sophisticated networks, and that most likely translates to them being more likely to actually publish information into the RPKI. Next. All right, so what's new? Next.

G: Most of the feedback we received revolved around things that we published in the new chapter, section 6, on operations and management considerations. Next.
G: So, first of all, it's very true that we're using ROA and ASPA information that was not really designed for this; and, honestly, BGP was also not designed for SAV. But we looked at it, and it seems to us that it's sufficient for the purpose. Still, there was a suggestion: what if we actually try to introduce SAV-specific objects, ROA-like and ASPA-like? They'd be very much like ROA and ASPA, but designed for SAV. And there is clearly merit to the idea of "let's use information specifically designed for the purpose", but there is also clearly a cost to that.

G: So, one, you double the number of objects, and two, it's always a pain to keep the two synchronized, for probably 99.9 percent of the cases. So we tried hard to figure out, I mean, to find examples where asking the operator to publish RPKI data that's specifically for SAV, and that they wouldn't otherwise publish, would be harmful, and we couldn't come up with good examples. We asked the mailing list, and we still haven't received any good example. So we welcome further discussion, further ideas. Next.
G: The other feedback we received was that we need to give many more implementation guidelines to the implementers. This stuff, I mean the RPKI, is not guaranteed to be 100% available, or even consistent, so SAV must fail open. That's already the case for traditional RPKI ROV: if something fails, BGP still works. And for the SAV case: if something fails, data forwarding should still work.
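One way to read that fail-open guidance in code (a deliberately simplified sketch with invented names, not taken from the draft): a failed fetch keeps the last-known-good filter instead of shrinking it, so forwarding keeps working.

```python
# Illustrative fail-open handling for RPKI-derived SAV inputs: a fetch
# failure must never shrink the filter and start dropping legitimate
# traffic. Names here are invented for the sketch.

class SavCache:
    def __init__(self):
        self.prefixes = set()  # last-known-good SAV prefix set

    def refresh(self, fetch):
        """fetch() returns the new prefix set, or raises on failure."""
        try:
            self.prefixes = set(fetch())
        except Exception:
            pass  # fail open: keep serving the last-known-good data
        return self.prefixes

cache = SavCache()

def flaky_fetch():
    raise TimeoutError("repository unreachable")

cache.refresh(lambda: {"192.0.2.0/24", "198.51.100.0/24"})
cache.refresh(flaky_fetch)  # failure: the previous set is retained
assert cache.prefixes == {"192.0.2.0/24", "198.51.100.0/24"}
```

The design choice is the same as in the ROV case described above: an RPKI outage degrades the filter's freshness rather than its reach.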
G: The idea is: maybe there is some sort of temporary inconsistency in the repository, in the file system, or some sort of synchronization problem. And, honestly, if an object is not expired, it should be put on the CRL if you really don't want to have it. But more ideas are definitely welcome. Next.

G: Just a quick plug for ASPA adoption: it really works well. It's good for its purpose of detecting BGP route leaks, but it's also very good for SAV, and there is an updated draft. Next. And so we're asking for more discussion, more feedback. The new version of the draft has addressed a bunch of the feedback we received; let us know, is it good? Do we need to talk more with you? And since we did get some engagement from the community, we would like working group adoption.
E: So, I mean, we have internet interconnect customers that might get incidental transit from us out to the public internet, if one of our clusters becomes disconnected from our global backbone, based on how the aggregate IP space is announced. And so it's possible that in those cases we might direct a customer to a location that is off-net, either because we've taken it offline for maintenance or for some other activity, and so the source address may be our customer's IP address, which we're not really intending to provide transit for. So that's something here that would make it challenging for us to go to our external network providers and ask them to implement this on our ports.

G: Let me get a better idea about it: if we're purely talking about reluctance to publish ASPA data, well...
E: They don't want us to announce their IP space, right? So there's that, and then... well, we can talk offline about that. And then, on the next slide: you know, the concept of using stuff that's still valid but kind of ignoring the expiration, there's precedent for that in the DNS and the use of stale data. Definitely, you know. And so I think that makes sense; I just wanted to add that in.
J: Thank you. Hi, Anthony here. I have a question. You know, we are currently implementing customer cones on the basis of BGP, per 8704. There is a warning here for people, for customers, that have quite a large customer cone, with the number of prefixes reaching more than 5,000, and, I mean, our own customer cone is like 1.3 million prefixes.
G: You need to... I mean, at that point, you know, you might even need to think about the size of the memory on your...

G: Well, it almost doesn't matter where you calculate it from: if you really have a very, very large CIDR list, then you have a very, very large CIDR list to consider, exactly, right? So...

G: The expectation is that it doesn't really have to be computed on the router; it could be computed on a server next to the router, using the feeds, and just fed to the router through some other means, through BGP, I don't care.

G: So if the question is basically "this is a huge CIDR list and I don't know how to implement it": this doesn't tell you immediately how to implement it. Maybe there are ideas, but this is trying to come up with a more accurate SAV list, if you can install it. Okay.
A: Hi, Nan, please go ahead.

L: Hello, this is Nan from Huawei Technologies, the one who shared our presentation. What I want to say is about what happened during the evolution of source address validation mechanisms: what I found is that we are considering more and more information to generate accurate SAV rules in particular scenarios.
L: At the beginning, we could manually configure ACL rules to filter particular source prefixes, and we need to update these rules in time when the prefixes change. Then we have strict uRPF: we can generate these rules by considering the local FIB, and these rules can be generated automatically, but under asymmetric routing that is not accurate enough. So we have enhanced uRPF, and in enhanced uRPF...

L: ...we consider local RIB information. So we are taking more and more information into consideration when we generate ACL rules, and now we are considering more extra information, besides the local RIB, to generate more accurate rules. In other words, if we want to generate accurate SAV rules, we need to import extra information, and, of course, extra cost will be incurred. So there is a trade-off between the potential benefits and the extra cost. Thanks.
G: Sure.

K: Job Snijders here. Honestly, section 6.5.1 suggests that you should refresh daily, but there's existing work that recommends refreshing at least once an hour, preferably once every 10 minutes, and I don't see a justification to deviate from what is already the established best practice in that regard. Thank you.

K: The document already describes a sort of fail-open mode, where it's suggested to fall back to enhanced uRPF, or enhanced feasible-path uRPF, because that downgrade is better than suspending SAV entirely, and I think it is very good to consider a path towards a fail-open of sorts. But ignoring objects either being delisted from a manifest, which would cause them to not appear on a CRL but does mean that the CA revoked the ROA, or ignoring the expiration: that is unhelpful, I think.
G
Thank
you.
So
the
recommendations
were
not
for
processing
the
traditional
rpki
ROV,
but
only
for
theft
purposes.
The
concern
is
that,
if
you
start
ignoring
pretending
objects
don't
exist.
For
example,
you
fail
to
refresh
your
cache
and
the
object
is
expired
in
the
meanwhile
that
falling
back
to
your
to
enhance
the
RPF.
K
But
what
is
the
garbage
collection
mechanism,
because
I
do
understand
your
concern
that
it
is
operationally
potentially
a
little
bit
nicer
to
to
be
more
permissive
than
strictly
needed,
but
somewhere
in
the
decision
path
there
there
is
going
to
be
an
event
horizon
where
you
go
left
or
right
right.
So,
for
instance,
if
I
see
a
traffic
stream
coming
from
you
towards
me
and
I
I
own,
the
IP
space
and
I
want
to
block
it
and
I.
Remove
the
robot
authorizing
you
to
send
traffic
for
the
source,
and
others
continue
to
use
that
Rover.
K
G
K
But
then
you
are
changing
some
fundamental
parts
of
how
rpki
was
designed
to
work.
The
crls
are
shrunken
both
based
on
on
what
has
now
properly
expired,
but
also,
if
it's
not
listed
on
a
manifest,
you
don't
need
to
add
it
to
the
crl.
So
crls
in
the
rpki
are
fairly
small,
like
it's
an
average
of,
say,
seven
to.
K: This is possible because manifests are strictly interpreted. So now you're changing a few implications of the RPKI in a way that I think is...

G: I think it's a great discussion. I mean, we can definitely see what the guarantees of consistency are that you could expect from the repo versus what's... yeah. So, if we can believe that the repo has good consistency, so that we will not accidentally drop objects, then we don't need this recommendation.
K
Manifests
Were
Meant
to
provide
very
strong
guarantees
about
the
Integrity
of
the
repository
okay,
but
then
you
still
have
your
your
concern
about
hey.
Maybe
I
want
to
operate
things
on
different
timers
and
to
overcome
that
concern.
K
You
might
want
to
consider
defining
a
new
science
object
where
the
expiration
date
is
what
you
want
it
to
be
if,
for
some
reason,
the
expiration
date
of
roast
is
unsuitable
for
this
protocol's
purpose,
a
new
object
could
be
defined,
that
that
has
slightly
different
rules,
but
and
and
if
anything,
it
would
be
good
to
put
in
the
internet
draft
just
the
concern
of
garbage
collection
being
stricter
than
is
applicable
to
this
particular
case,
but
then
also
find
other
ways
to
to
do
to
shrink
the
the
set
filters
as
time
progresses.
A: You want to wait? There is a queue. Sorry about that, Ming.
H: Okay, very good; it's a pretty good improvement, of course, but I have a question. On page 10, you said that you use the RPKI as the source to find hidden prefixes, right? So, in this case, if the RPKI fails, I think that will make this prefix not be included in the SAV table.

G: The audio was a little garbled, but I think I understood the question, right: if we're augmenting the table from RPKI data, then an RPKI failure will put that prefix at risk, and that's why we have these implementation guidelines. Some of them look pretty clear, and the others we need to have more discussion on, but that basically points to the need to make sure we're very careful at managing our cache, so that any failure doesn't result in failing closed for any prefix.
F: I just wanted to make the point that, you know, as far as RPKI object staleness goes, I think we have to follow what the basic RPKI validation algorithm is doing and what route origin validation is doing, if for no other reason than that, you know, subtle issues of RPKI object staleness are something that's typically in the purview of the validator, and this algorithm operates on BGP RIB data, and those two data sets are typically on different platforms. So I don't... I mean, it sort of aligns with Job's...

G: Yep. So, definitely, just like I said before, that's good feedback, and we'll definitely come up with something that makes sense.
I: Okay, thanks. Next slide. Okay, so, to recap briefly on this: this document defines a new type of signed object, called a Trust Anchor Key object, or TAK object, and that can be used by trust anchors to communicate TA certificate URL changes and TA key changes to relying parties. So the aim here is to simplify the key rollover process, get support of some sort into relying parties, and that, in turn, will help with HSM vendor lock-in. Next slide, please.
I: So, this was presented at the last meeting, at version 10. There were three main changes between versions 10 and 11. The first was to note that some relying parties that can't support automatic transition can still get most of the benefit of the model here by doing a sort of semi-automatic type of thing.

I: So, for example, rpki-client is a relying party that can't do automatic transition, because by design it's not permitted to update the key material that's being used. But it can still fetch the TAK object, validate it, alert the user to a new key, check the acceptance timer period, and then, when that expires, it can request that the user update the key material manually.
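The semi-automatic flow described above might look roughly like this (purely illustrative Python; not rpki-client's actual code, and the field names are invented):

```python
import time

def check_tak(current_key, tak, now=None, notify=print):
    """Semi-automatic TAK handling sketch: the relying party never swaps
    key material itself; it only validates the TAK object and asks the
    operator to act once the acceptance timer has expired.

    tak: validated TAK content as a dict, e.g.
         {"current_key": "...",
          "successor": {"key": "...", "acceptance_after": <unix time>}}
    Returns the key the operator should install manually, or None."""
    now = time.time() if now is None else now
    succ = tak.get("successor")
    if succ is None or succ["key"] == current_key:
        return None  # no transition in progress
    if now < succ["acceptance_after"]:
        notify("new TA key published; acceptance timer still running")
        return None
    notify("acceptance timer expired; install the new TA key manually")
    return succ["key"]

quiet = lambda msg: None
tak = {"current_key": "k1",
       "successor": {"key": "k2", "acceptance_after": 1_700_000_000}}
assert check_tak("k1", {"current_key": "k1"}, notify=quiet) is None
assert check_tak("k1", tak, now=1_699_999_999, notify=quiet) is None
assert check_tak("k1", tak, now=1_700_000_001, notify=quiet) == "k2"
```

The point of the sketch is the division of labor: validation and timing are automatic, but the final key swap stays a manual, operator-driven step.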
I: The second change was to note that a TAK object distributed out of band is not somehow more secure or more reliable on account of it being signed; it's pretty much just a TAL in a different format. The reason it doesn't matter that it's signed is because, if the relying party trusted the signing trust anchor, it would be getting it in-band.

I: And the third change was to add some text to the security considerations around what we're calling, for lack of a better term, "temporary TA compromise". So this is where a trust anchor is using a device like an HSM that permits key signing without actually having access to the raw key, and an attacker somehow gets access to, or rather control of, that device, but then the trust anchor is able to regain control over that device. So a trust anchor in this situation might think "great, I've got control back, everything is fine", but, at least in the presence of TAK objects, there are some scenarios where that temporary access can cause long-term problems. So, by documenting that, trust anchors can consider what to do if something like this happens and update their processes accordingly. Next slide. For 11 to 12, it's a bit simpler.
I
There's
been
some
implementation
work,
since
the
last
meeting,
Apex
code
has
been
updated
for
version
12
of
the
draft
job
did
some
tech
object,
validation,
work
in
rpqr
client,
which
is
now
in
openvsd,
proper
and
Tim.
Also
did
some
tech
encoding
work
in
a
branch
in
group
and
jobs
and
Tim's
work
was
very
helpful
in
finding
problems
in
the
opening
implementation
work
next
slide,
please
some
things
to
discuss.
I
Ross
Housley
had
a
suggestion
on
the
list
about
adding
some
clarifying
texts
to
the
signed
object
registry
at
Ayana,
that's
or
rather
on
the
author's
side.
We
think
that's
a
good
idea
in
principle,
but
because
it's
not
strictly
related
to
what's
happening
in
this
document,
we
think
it
might
be
better
off
as
a
separate
thing.
I
Job
had
some
job
had
a
suggestion
about
removing
the
TA
compromise
section,
which
was
added
in
version
11.,
because
it's
kind
of
hard
to
talk
about
this.
Clearly
it
might
just
confuse
people
and
it's
not
strictly
necessary
to
to
the
document
as
a
whole.
On
the
author
side,
we're
fine
with
that,
but
teas
did
indicate
on
the
list
that
he
thinks
it
will
be
worth.
I
Keeping
that
text
and
then
the
third
suggestion
was
to
it-
was
also
from
Joe
adding
texts
about
certified
destruction
of
key
pair
material
again,
on
the
author's
side,
we're
fine
with
that.
Teas
did
indicate
that
the
term
certified
invites
questions
about
what
certifications
and
so
on,
and
it
might
be
better
off
to
avoid
that.
If
we
can
thanks
a
lot
please,
apart
from
resolving
those
issues,
there
are
some
other
suggestions
from
Joe
that
are
uncontentious.
I
So
we
need
to
update
the
document
for
that
it
needs
some
editorial
work,
particularly
on
the
server
side
of
things.
The
process
for
Server
size
for
server-side
implementations
to
follow
is
a
little
bit
strewn
around
the
document,
so
that
needs
to
be
Consolidated
and
there
needs
to
be
some
text
around
the
purpose
of
the
acceptance
timer
just
to
make
it
clear
what
what
that's
about
and
more
implementation
work
would
be
good
too,
so,
particularly
on
the
server
side,
and
that's
it.
Thank
you.
D: So, you're absolutely right about my suggestion being, like, really small, but what I did is I looked for the next document that's updating that registry, because the previous one was already in AUTH48, and Warren said I won't...
N: What I was wondering is if the compromise text could maybe just be stuck in an appendix, sort of like... then it doesn't need to be as clear, and it's just a sort of "here is some additional information" where it can stay. Rather than removing the TA compromise section, maybe it could just be an appendix instead, if that works. For... nope, doesn't work. Okay.
M
I'll
just
clarify
my
main
concern
about
key
destruction
here.
The
limitations
that
I've
seen
with
the
various
vendors
that
we
have
looked
at
for
hsms
is
mostly
that
you
cannot
be
certain
that
there's
no
copy
of
a
key
somewhere
somewhere
else,
so
we
can
have
a
certified
process
that
shows
that
if
that
really
is
the
only
box,
we
have
deleted
the
copy.
M
That's
in
that
exact
box,
which
you
include
in
the
process,
but
you
kind
of
really
have
guarantees
about
the
thing
meaning
in
really
gone,
and
it
gets
really
Murray
if
you,
if
you
want
that.
So
that's
why
I
was
opposed
to
that.
Okay,
thanks.
C: So, good morning, good afternoon, everyone. This is Sriram from NIST. I'm going to talk about the updated ASPA path verification draft today. Next slide, please. So I'll quickly, we'll quickly, look at the changes in version 11, which was published a few weeks ago, and compare it to version 09. We skipped version 10 because, after it was submitted, we had a few more changes to make.

C: So we submitted version 11 soon after. We have received some good comments on version 11 already, on the working group list, in the last couple of weeks; we'll take a look at those comments and then at the next steps. Next slide, please. So, just to recap: we are doing ASPA-based path verification because it has the benefits of detecting and mitigating BGP route leaks, and also does the same for forged-origin hijacks.
So,
essentially
it's
a
basic
form
of
path.
C
Verification
does
not
do
a
complete
path,
verification,
but
establishes
that
that
it
is
a
feasible
path
and
it
is
free
of
any
route
leaks
and
also
catches,
forged
origin
route.
Hijacks
next
slide,
please
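To illustrate the idea just recapped (a rough sketch only, not the draft's actual algorithm — the hop function, outcome names, and data shapes here are assumptions of mine), the upstream case could look like:

```python
def hop(aspa, customer, neighbor):
    """Hop authorization: is `neighbor` an attested provider of `customer`?
    `aspa` maps an ASN to the set of providers it has registered."""
    if customer not in aspa:
        return "no_attestation"
    return "provider+" if neighbor in aspa[customer] else "not_provider+"

def verify_upstream(aspa, as_path):
    """Greatly simplified upstream check. `as_path` is ordered as in BGP:
    leftmost ASN is the most recently added, rightmost is the origin.
    Walking from the origin toward the receiver, every AS must have the
    next AS as a provider; a contradicted hop means a leak (Invalid),
    and a hop with no attestation can only dilute the result to Unknown."""
    hops = [hop(aspa, as_path[i], as_path[i - 1])
            for i in range(len(as_path) - 1, 0, -1)]
    if "not_provider+" in hops:
        return "invalid"
    return "unknown" if "no_attestation" in hops else "valid"
```

The real algorithm in the draft additionally handles the downstream case (paths that go up and then down), which this sketch omits.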
So, the changes in version 11 compared to version 09: the algorithm needed some corrections. We realized that — I made a presentation about a year and a half back, at IETF 110; that is the [Sriram1] reference — and that pointed to some enhancements that were necessary in order to get rid of some mix-up between the Invalid outcome and the Unknown outcome. We took care of that in version 09, but additional refinements were also necessary, and they are now in version 11.
C
These additional refinements take the form of AS_SET handling; the route server AS and how to treat it; some other refinements related to clarifying the applicable AFI/SAFI; and a statement about AS confederations. In addition to these refinements, we devoted a good amount of effort to getting pretty good text clarity throughout the document, though a little more still needs to be done. Next slide, please.
C
So, on the AS_SET handling: we had a pretty good discussion and feedback in the working group a few months back; the pointer to that discussion is provided at the bottom of this slide. Based on the working group discussion and general consensus, the presence of an AS_SET anywhere in the BGP AS path makes the path Invalid per the ASPA verification algorithm. That's in the draft, in version 11. Next slide, please.
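That consensus rule is mechanically simple; sketched (the function name and segment representation are my own, not from the draft):

```python
def screen_as_sets(path_segments):
    """WG consensus as described above: an AS_SET anywhere in the AS_PATH
    makes the route Invalid under ASPA verification, before any per-hop
    checks are even attempted. `path_segments` is a list of
    (segment_type, asn_list) tuples."""
    if any(seg_type == "AS_SET" for seg_type, _ in path_segments):
        return "invalid"
    return "proceed_to_hop_checks"
```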
For the route server AS, we also had a good discussion in a thread on the working group list; the link to the thread is provided at the bottom of this slide as well. Based on that discussion, it emerged that we had two choices. Choice A is to add the RS ASN to the AS path in the case of a transparent RS; in this case we can apply the algorithm for downstream paths.
C
If we go with Choice B, we remove the RS ASN from the AS path in the case of a non-transparent RS and apply the algorithm for upstream paths. In version 11 we included Choice B. The two are equivalent: they give you the same path-validation result under any scenario. We chose Choice B.
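A minimal sketch of Choice B as just described (names are mine, not the draft's):

```python
def strip_route_server(as_path, rs_asns):
    """Choice B: for a non-transparent route server, remove the route
    server's ASN from the AS_PATH, then run the upstream verification
    algorithm on the result as if the RS were not there. (Choice A —
    inserting the RS ASN for a transparent RS and running the downstream
    algorithm — was said to give identical outcomes.)"""
    return [asn for asn in as_path if asn not in rs_asns]
```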
C
In addition to that, we should also mention that the draft now makes clear that an RS client must include the RS's ASN in its ASPA, and also that an RS AS must register an AS0 ASPA. That facilitates unambiguous path verification and makes it work right. Next slide, please.
C
We also have a clarification about the applicable AFI/SAFI; we had some feedback on this, and we have since included this statement.
C
We also have a statement about AS confederations, which is new. It simply says that the ASes on the boundary of an AS confederation must register ASPAs using the confederation's global ASN (its global ID), and that the procedures for ASPA-based path validation in this document are not recommended for use on eBGP links internal to the confederation. I think that's clear enough. Next slide, please.
C
So, like I said, since we published version 11 a few weeks back we have received comments from a couple of people, including Claudio and also Rich Compton from Charter — a good set of comments from both of them, and I have responded to them on the list. Those are comments that we can incorporate into the next version, version 12. In particular, Claudio read the draft very carefully and offered a very good, extensive set of comments. His main difficulty with the draft — or main suggestion — is to improve the readability. He found that the draft by itself, especially in the algorithm description, gets a little intricate, and at that point he made use of [Sriram1], my IETF 110 presentation: it has a nice set of figures explaining the algorithm, it has good notation to begin with, and it builds the description of the algorithm on that. So he found it essential to make use of that in order to be able to understand, specifically, the algorithm description in the draft — and that's where he said it can be improved a whole lot.
C
So what I'm proposing as part of the next steps is that we follow the notation and style of the [Sriram1] reference to better describe the algorithm. That's quite doable, and it will make the description shorter, more concise, and also clearer. We will publish a version 12 in the next few weeks, and at that point we will invite some more feedback from the working group, and hopefully then it will be in good shape. At that point — well, Job has suggested that we should wait a few months and solicit implementation experience reports, and then we can proceed to working group last call. Then I have a couple of backup slides; if we can just quickly move to the next one — the one after that.
C
So, in case you are interested and want to understand how the RS AS is treated and how the two alternatives — Choice A and Choice B, which I mentioned earlier — compare, or if you just want some clarity on that: I won't go through this slide, but you may look at it to figure that out, and please ask me questions if you have any, on the list or one-on-one. So, thank you. We can move on to the questions.
K
Job Snijders, OpenBSD. It is our intention to implement ASPA verification in the next three to six months, which is fairly soon. That's where my recommendation comes from to wait a little bit with working group last call until we have finished that implementation — and I hope some others also start working on this in the next few months.
C
Thank you. I should add that at NIST we have already implemented ASPA path verification, pretty much matching this latest version 11, and we also have a number of test cases that we run against it to verify the implementation. We have all of that available on GitHub, and we welcome anyone interested to pick it up and make use of the tests, or to make use of our implementation if you're running some experiments. You're welcome to do that.
C
In addition to that, I must also mention that Claudio Jeker said, in his comments on version 11 on the SIDROPS list, that he is implementing it. So we know that there is another effort as well — and Job mentioned another one too. It's good to see that there are multiple implementation efforts already available or under way. Thank you.
K
Job Snijders. I think, in terms of the profile: the profile, from my perspective, is now stable. There are multiple implementations; specifically, your CA implementation helped me develop my validator implementation.
K
The current profile draft contains, I think, references to seven or eight implementations that have, to some degree, tested interoperability with each other. So, as far as I'm concerned, unless there is a grave mistake in the profile that needs addressing, we should stop touching the profile. The contention between the current version of the profile and the one before it is of a somewhat cosmetic nature, and to me that does not count as an urgent necessity to change. And then, from the CA side —
M
On the ASPA profile anyway: we now have an implementation of the current draft's profile in the CA environment, available on our testbed, and we will publish documentation soon on how to create objects. It appears to be interoperable and does not cause problems in validators; however, one implementation will complain about unknown objects.
K
Hello, everyone — loud and clear? Yep. I wanted to present an update on a recent effort to do a bis version of the document that specifies the Route Origin Authorizations profile, RFC 6482. Next slide, please. The bis effort started because I noticed an oversight in the original specification with regard to the mandatory presence or absence of the AS identifier extension, and I thought to myself: well, I'll just file an erratum. Next slide.
K
Unfortunately, it was shut down. To me this seems a little bit of an arbitrary decision by the powers that be. I feel that, given what's currently deployed in the wild — in terms of objects in the repositories and how validators react to them — and given other errata similar to this one that have been approved, it should have been verified; but it wasn't. So, next slide, please. Buckle up: here we are, a bis document. Next slide.
K
The bis document is available for your consideration. I started with a verbatim copy of the original RFC, to really try and make the changes from revision to revision as minimal as possible, so that the whole crowd can follow the story and see that, even though some of the changes might seem intrusive, all in all these are very nuanced changes to tie up any and all loose ends the original RFC presented. Next slide, please. The goals of the bis document, as far as I'm concerned, are: to clarify that AS identifiers should not be present; to strengthen the ASN.1 notation; to pull in the verified errata that have appeared so far; to extend the document a little bit by providing an example that people implementing ROAs may find useful; and, above all, to maintain full compatibility with the ecosystem as we currently understand it.
K
So: the ROA payload contains essentially two elements. One is the origin AS, and the other element is a list of IP prefixes. As we validate ROAs in an RPKI cache validator, all the IP prefixes in the payload must be contained in the RFC 3779 extensions of the EE certificate, and of the parent certificate of that EE certificate, and of that parent's parent certificate, etc.
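The containment rule can be sketched with Python's `ipaddress` module (a simplification: real RFC 3779 extensions also allow address ranges, which this ignores):

```python
import ipaddress

def covered(prefix, resources):
    """True if `prefix` equals or is more specific than some prefix in
    `resources` (one certificate's RFC 3779 IP address extension)."""
    p = ipaddress.ip_network(prefix)
    return any(p.subnet_of(ipaddress.ip_network(r))
               for r in resources
               if ipaddress.ip_network(r).version == p.version)

def roa_prefixes_valid(roa_prefixes, chain_resources):
    """Every ROA prefix must be covered by the EE certificate's resources
    and by every certificate up the chain. Note what is *not* here: the
    asID is never compared against anything in the chain."""
    return all(covered(p, res)
               for res in chain_resources
               for p in roa_prefixes)
```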
K
In other words, validators do not check whether the asID in the payload is contained in the 3779 extensions in the certificate chain — and I've seen multiple implementers make a mistake in this regard, because the whole notion of AS identifiers is not described in the original specification, and since there's an AS in the payload, and potentially something AS-related in the EE certificate, it is intuitive for people to assume that maybe they have some kind of connection. By explicitly documenting that the AS identifiers extension must not be present, I think it becomes easier for developers to understand that the asID is not part of something that is verified to be contained in the chain of authorities.
K
I looked at various open-source CA implementations, and none of them set the AS identifier extension on ROA EE certificates. On the validator side, most validators will ignore the extension if it is present, and one validator will consider the ROA invalid if the extension is present.
K
All right, next topic: strengthening the ASN.1 notation. The original ROA ASN.1 notation was, I think, written at a time when there was less understanding of all the powers and features that ASN.1 can offer us, and of the benefits of being very concise with constraints.
K
So, for instance, the asID is an ASN.1 INTEGER, and ASN.1 INTEGERs can hold very, very large values — think larger than 64 bits — and also negative values.
K
On the slide you're looking at, the red-colorized text enclosed in square brackets is what is removed, and the green-colored text in curly brackets is what is added as its replacement. So let's go over these changes one by one.
K
In the container that has the optional version attribute, the asID, and the ipAddrBlocks, a change is made so that the address-family blocks inside ipAddrBlocks cannot appear an unlimited number of times: they can appear only once or twice.
K
The reason for this constraint is that the address family is contained within ipAddrBlocks, and all ROA producers currently add at maximum two such structures: one for v4, which can contain multiple v4 prefixes, and, if v6 prefixes also appear in the ROA payload, another block which specifies IPv6 and then a list of one or more IPv6 prefixes.
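A sketch of that structural expectation against a decoded payload (the representation is assumed; the ASN.1 `SIZE` bound itself only limits the count, while the one-block-per-family expectation is the stated rationale):

```python
def addr_blocks_well_formed(addr_blocks):
    """`addr_blocks` models ipAddrBlocks as a list of per-family dicts,
    e.g. [{"afi": "ipv4", "prefixes": [...]}, {"afi": "ipv6", ...}].
    Enforce: one or two blocks, known families only, no family repeated."""
    families = [block["afi"] for block in addr_blocks]
    return (1 <= len(families) <= 2
            and len(set(families)) == len(families)
            and all(f in ("ipv4", "ipv6") for f in families))
```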
K
In retrospect, I think it shows that the data structure of ROAs should have been inverted — with, for instance, the AFI on the outside of the container — but that type of change would prompt us to do a ROA profile version bump, and then we would not meet our goal of compatibility.
K
According to the normative natural-language text in the original RFC, the only possible outcome is two octets, so it seems a bit silly to me to permit three octets on the wire and then error out because someone specified a third octet. Again, this is a change that is perfectly compatible with what's deployed out there: you're not allowed to specify a SAFI, so we remove the room that would have allowed you to express one. Onwards to maxLength inside the ROAIPAddress sequence.
K
maxLength, again, was an unconstrained INTEGER, so it could be negative or really large. But the reality of the situation is that maxLength cannot be smaller than zero — it cannot be a negative number; that would not make sense — and it cannot be larger than the maximum prefix length of an IPv6 address, which is 128.
K
There are additional constraints: for instance, in the case of an IPv4 ROAIPAddress block, the maxLength value can at maximum be 32.
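Taken together, the numeric bounds discussed here amount to checks like these (a sketch; the draft expresses them as ASN.1 constraints, and maxLength is optional in the actual structure):

```python
def numeric_bounds_ok(afi, prefix_len, max_length, as_id):
    """asID must fit an unsigned 32-bit value; maxLength must be no longer
    than the family's maximum prefix length (32 for IPv4, 128 for IPv6)
    and no shorter than the prefix itself."""
    family_max = {"ipv4": 32, "ipv6": 128}[afi]
    return (0 <= as_id <= 2**32 - 1
            and prefix_len <= max_length <= family_max)
```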
K
Unfortunately, it's super complicated to express this in ASN.1, and while we do have a draft profile that introduces AFI-context-dependent constraints, my personal take is that that ASN.1 is utterly unreadable to both humans and most open-source compilers. So I'm a bit hesitant to go down that path.
K
If it's IPv4, it's going to be at most 32; it definitely is not unlimited, and never larger than 128. These constraints are expressed in the natural-language text of the original RFC to some degree, but I think it is helpful to also repeat them in the ASN.1 module itself, so that the next person who takes the ASN.1 module and compiles it into source code gets some of the benefit of these constraints.
K
Jeff, you jumped into the queue — a question specific to this, or at the end? All right, next slide, please. All these changes are 100% compatible with all ROAs deployed out there. There are no ROAs that carry more than two address-family blocks; there are no ROAs that have negative, or larger than 2^32, asIDs; and the maxLengths and IP addresses are also within these constraints. So I think there's a good justification to accept these changes to the ASN.1 module, because it does not break anything we're currently using on the wider Internet. Next slide, please.
K
In terms of incorporating verified errata, these were super easy: I just copy-pasted a sentence saying that the "inherit" element in the 3779 IP address extension is not allowed. The ASN.1 has been verified to compile and to be complete in the latest version — thanks, Russ, for your help on that. And since the document in its native form is now an xml2rfc version-three document, the table of contents is automatically generated, and I got to tick off that erratum without actually doing much. Next slide, please. I included an example as an appendix in the draft. This is just the payload of a ROA, and I provided standard Unix utility invocations to demonstrate how the DER encoding transposes to output that is a little bit more human-readable. My hope is that if somebody writes a validator or a CA implementation, an example like this will help them. Then again, there are of course hundreds of thousands of examples in the repositories, so it's more of an illustration.
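The sort of invocation meant is presumably something like `openssl asn1parse -inform DER`. To see the same DER-to-readable transposition in miniature, here is a hand-rolled encoder for a single DER INTEGER (an illustration of mine, non-negative values only):

```python
def der_integer(value):
    """Minimal DER encoding of a non-negative INTEGER (tag 0x02) — enough
    to see how, say, a ROA's asID field looks on the wire."""
    assert value >= 0
    # (bit_length() + 8) // 8 yields one extra 0x00 byte whenever the top
    # bit of the natural encoding is set, keeping the two's-complement
    # interpretation non-negative (and one byte for value == 0).
    body = value.to_bytes((value.bit_length() + 8) // 8, "big")
    return bytes([0x02, len(body)]) + body

print(der_integer(64511).hex())  # an asID of 64511 -> '020300fbff'
```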
K
And with that, I would like to open up the microphone for comments and feedback. Also, please email the mailing list; or, if you want to have a private discussion, email the authors alias directly; or, if you like using GitHub, use that. Jeff?
O
Jeff Haas — mostly nitpicky-type stuff. The main reason, historically at least, that constraints were not thrown into a lot of ASN.1 stuff is the tool set. Anybody who has tried to actually use any tools that do ASN.1 parsing usually ends up tearing their hair out, swearing at the authors, or trying to dig through necromancy manuals to figure out what the APIs are supposed to be.
O
Two minor points on some of the items you had in there. For the AS numbers, you're trying to restrict them to positive integers as big as an AS number can be — that's great. One of the weird side effects of leaving it just INTEGER is what happens if you want to do strange things, like we've done in other protocols where we stick in weird AS numbers — like zero, or where a negative number is a signal — which allows you to signal stuff later. Very similarly, by not being prescriptive about the length of the addresses, you're allowing for something that's not v4 or v6 to eventually become specified.
O
My last comment is really directed toward the original spec: encoding an AFI as an OCTET STRING was always stupid. It's an integer, and it's bounded to two bytes.
K
If I respond to that — the constraints as used... can we go back to the ASN.1 slide, please? One more; one more. The only two constraint types used here are a size limitation and a value-range limitation, and those are supported by all the ASN.1 compilers I'm aware of.
K
Yeah — and this is much later; I mean, the original ROA specification is, I think, a little more than 12 years old, so times have changed to some degree. Where we had more difficulty was in instantiating new classes for the AFI-context-dependent size limits; there we found that compilers like asn1c have trouble understanding what is happening. But I'm very confident that these constraints are supported by the vast majority of the ecosystem.
D
Russ Housley. When we were working on 3779, snacc was the most common open-source ASN.1 compiler, and it only did the 1988 version of the ASN.1 syntax — hence all of that. So when you look at this, I think: if you're using a tool that understands it, you will get a benefit; if you're not, you're going to have to put code around it to make these same checks anyway.
D
So I see no harm in putting it here. Either you get a decode error, or you'll get a consistency-checking error; but either way you're going to have to perform these very same checks. And, as you have said, I already compiled the module — with the addition of a single semicolon that was inadvertently dropped — and it works.
K
It was a judgment call, and I agree that taking this opportunity to do a bis document is a better outcome. I just wanted to — you know, you were sitting right there.
E
No new questions — I'd just like to go back to slide 10, the one that had the ASN.1 notation.
E
The one problem I'm kind of having with this is — I mean, I certainly agree that we're not going to have negative AS numbers, but in looking at this I'm trying to think of the future as well. If we're talking about ASN.1 compilers and what you can encode — well, that all dates back to the '80s.
E
In some of these cases, I'm also trying to think about the future. If we look at an equivalent time window into the future, could we have IP addresses or ASN values that exceed this range?
E
My worry is that by doing this we are constraining ourselves from doing something creative when it may become operationally necessary. That is one of my concerns in trying to make a change like this: if, for some reason, we need to go and change all the ASNs — move them all to 64-bit or some other future size, even though today we may not imagine 32 bits to be, you know, so far out of range —
E
— I have reason to believe that we would still be using the same BGP4 protocol, plus-plus, in that time frame. And since the universe of people using this is even smaller than the universe of people using BGP4-plus-plus, it's hard for me to believe that we would want to implement a constraint saying: hey, this should only be 32-bit. Especially because this is an outside encoding — this isn't a wire encoding. It is not a wire encoding for the purposes of this data storage, for actually signaling in the protocol, because this is something that gets passed through a, you know, 1983-or-later decoding engine for ASN.1.
E
If we want to use something other than ASN.1 — like, you know, the TLVs that we use in the BGP protocol — maybe we should be doing that. But if we're going to be relying upon ASN.1, which is what, for better or for worse, X.509 relies upon, then I'm concerned about constraining the numbers. Even if today it seems rational, because it's the valid range, by going and doing that we're going to foreclose things and make it much harder for people in the future.
K
May I interrupt a beautiful monologue? If AS numbers were specified to be, say, 64 bits or 128, the path to extend this profile is fairly straightforward. On line two you see a version, and if the semantics of the data-structure elements were to change — such as "hey, ASNs can be larger nowadays" — you would use a new version number that carries the new semantics.
E
But that list of things — I'm just wondering: do we want to be adding to that list of things in the future? That would, you know, be future debt, when it's not necessary now. And I haven't heard a compelling reason to make this change other than "well, people implemented it this way" — for a thing that, you know, you claim isn't being used either. So I'm also wondering why we're spending a lot of time mucking with it.
P
George Michaelson, APNIC. I actually have two points; they are unrelated. I think there is significant benefit in being narrowly specific, and arguably prescriptive, in binary structures like these, because the primary risk here is bad actors, not good actors. The negative-number wrap-around, the unexpected behavior, the "we didn't expect you to use it, but we didn't define it" — that concerns me from a risk perspective, and I know that's a hand-wavy, unknown-risk sort of statement, but I do see this as a bad-actor threat.
P
We've got people with implementations, and they don't have defined constraints in the ASN.1. If it turned out that writing a negative number caused an out-of-memory event, and it was in routers, that would be very unpleasant. I'd rather rewrite specs in a way that narrows that opportunity — that's my personal belief. And if what you're doing has significant on-the-wire binary compatibility for good actors, it does have a certain zero-cost quality, and that again is good. I would absolutely wish to applaud and welcome the work: using a well-understood validator to check behavior is, I think, a huge net benefit to the community. So thank you for doing that.

Now I want to make another comment, and it is a personal comment. I wish you to understand that this is not a reflection of APNIC, and it actually is not directed at you; it's a comment to the chairs and to the AD. It's something I said to Warren informally over breakfast that I think should be said publicly.
P
This process, in the IETF at large, is insufficiently understood and, I feel, insufficiently documented, and irrespective of the merits of this proposal, I believe the risks of contention and dispute around how this happens are a real problem that should be addressed. It is a working group chair matter and an AD matter; it's not a matter about authors. It's about what the IETF should say about how these things are done. It is a personal comment, and it does not relate to the specific work. Thank you.
K
Yeah. A final sentence to reiterate: my goal with this bis document is to tie up any and all loose ends. For instance, there's a part of the specification that talks about two out of three permutations of a certain data structure; I intend to add the third one for completeness. At some point, my hope is that we will have understood all the loose ends and documented everything that must be absent or present, and then we go for working group last call.
K
So if you want to contribute to that effort, please help review the document as it is, and send comments, concerns, or paragraphs to the authors. Thank you all.