From YouTube: IETF110-SIDROPS-20210310-1200
Description
SIDROPS meeting session at IETF110
2021/03/10 1200
https://datatracker.ietf.org/meeting/110/proceedings/
A: Okay, I think we have a not totally packed agenda, so we should probably get started close to time, which is now. I'm Chris; Keyur and Natalie are also on, and apparently we have 34 other people on the meeting, so we'll have to select a notes taker; it would be greatly appreciated if somebody could chime in in the chat. I think I can put the link in here.
A: Moving forward, this is the SIDROPS meeting at virtual IETF 110, supposedly in Prague; I'm missing the Indian food. George says he's also going to help out with the notes. Thank you, George. Whoops, wrong window; here's the Note Well. Hopefully the chair slides did just barely make it in under the buzzer. So, if you want to follow along yourself: on slide two, you can download the slides from the meeting materials page, which I think I can't actually get to right now. Anyway, everyone's read this, I'm sure; if you haven't, please take time to read it after the meeting is over. We have an agenda; we have seven presentations. Six presentations: it's early morning and I can't count. Do we need to add anything to the agenda other than these six items?
F: Yeah, there's something very awkward about having to say "next slide" every couple of seconds. Okay, I'm Ben Maddison from Workonline Communications, and this is an update on the RPKI maxLength document, which has been around for quite a while but hasn't been spoken about at one of our meetings for a little while. These are my co-authors. A quick recap of what it's about.
F: If you haven't read it for a while, or you haven't read it at all: it's a fairly short document, targeted at BCP status, and it describes a hijack attack type that is made easier by the presence of a ROA that authorizes a prefix to exist in the routing table that is longer than that which is ordinarily announced in BGP.
F
A
hijack,
will
always
be
fighting
fighting
for
best
parts
because
of
its
longer
as
path
length,
whereas
a
more
specific
hijack
doesn't
suffer
from
that.
So
if
it
can
be,
if
it's
authorized
by
the
same
rower,
then
that
that
attack
type
becomes
a
hell
of
a
lot
easier
and
there's
a
simple
recommendation
contained
in
the
draft,
which
is
that
you
shouldn't
do
that
and
more
specifically,
you
shouldn't
use
the
max
length
attribute
unless
you
thought
carefully
about
this
attack
vector.
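The attack Ben is describing can be sketched as a toy route origin validation check. This is a hypothetical illustration only: the function, prefixes, and AS numbers are made up, and real validators implement RFC 6811, not this simplification.

```python
from ipaddress import ip_network

def rov_state(ann_prefix, origin_as, roa_prefix, roa_max_length, roa_as):
    """Toy origin validation of one announcement against one ROA."""
    ann = ip_network(ann_prefix)
    roa = ip_network(roa_prefix)
    if not ann.subnet_of(roa):
        return "not-found"  # the ROA does not cover this announcement
    if ann.prefixlen <= roa_max_length and origin_as == roa_as:
        return "valid"
    return "invalid"        # covered, but too long or wrong origin AS

# AS 64496 ordinarily announces only the aggregate 10.0.0.0/16.
# Loose ROA (maxLength 24): a forged-origin /24 sub-prefix hijack
# passes origin validation and wins on longest match.
print(rov_state("10.0.64.0/24", 64496, "10.0.0.0/16", 24, 64496))  # valid

# Strict ROA (maxLength equal to the announced length): the same /24
# is invalid and is dropped by validating networks.
print(rov_state("10.0.64.0/24", 64496, "10.0.0.0/16", 16, 64496))  # invalid
```

The hijacker spoofs the victim's origin AS, so the origin check alone cannot catch it; only the strict maxLength keeps the more-specific out of the table.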
F: The content hasn't really changed much, but it has been edited for readability, to help make the example a little easier to follow. It now describes the type of attack that we're discussing and how that's made easier by the presence of what we term a "loose" ROA: that is, a ROA that has a maxLength attribute that is longer than the ordinary BGP announcement.
F
Why?
The?
What
a
strict
rower
looks
like
in
the
context
of
of
that
example,
why
the
strict
rower
mitigates
the
attack
and
why
the?
Why
that
it's
possible
to
still
launch
a
similar
attack,
but
it's
substantially
less
likely
to
be
effective
in
that
it's
less
likely
to
attract
any
actual
traffic.
If
the
rower
max
length
matches
the
the
bgp
announcement,
there's
a
measurement
section
which
describes
some
measurements
that
were
taken
a
while
back
that
observes
that
this
kind
of
vulnerability
is
very
prevalent
in
currently
issued.
F
Rowers
that
have
the
the
max
length
attribute,
that's
been
revised
to
make
it
a
little
bit
longer,
but
hopefully
a
little
bit
clearer,
because
the
first
few
times
I
read
it,
I
found
myself
going
back
to
the
beginning
of
the
sentence.
Over
and
over
again,
there
is
a
question
that
I
have
for
the
working
group,
which
is
that
the
measurements
in
the
draft
are
from
2017..
F
I
would
like
to
know
if
anyone
is
aware
of
any
similar
studies
that
have
been
taken
that
have
taken
place
more
recently
and
if
so,
whether
we
should
incorporate
that
into
the
into
the
references
or
failing
that
if
anyone
has
any
reason
to
believe
that
the
numbers
that
are
discussed
have
changed
significantly
from
my
own
observations,
I
don't
believe
that
they
they
have.
F
But
my
observations
are:
you
know
just
that
they
are
things
that
I've
seen
in
the
operational
side
of
things
that
certainly
wouldn't
qualify
as
an
academic
study
in
section
five,
the
the
background
to
this
draft.
This
came
about
because
of
a
study
done
by
a
couple
of
my
co-authors,
where
they
observed
this
phenomenon
in
the
wild,
and
they
that
research
was
centered
explicitly
around
the
the
use
of
of
max
length
in
the
rower,
which
is
how
this
this
draft
got
its
name.
F
What
we're
trying
to
describe
is
how
to
minimize
the
attack
surface,
that's
available
for
this
type
of
of
sub-prefix
attack,
and
so
it's
that
section
has
been
reworded
to
say:
don't
create,
or
at
least
try
not
to
create,
if
you
possibly
can
avoid
it.
Non-Minimal
rowers
when
you're
creating
rowers
think
carefully
about
the
use
of
max
length
so
that
it
doesn't
result
in
non-minimal,
rowers
and
we've
tried
to
clarify
the
fact
that
this
creates
a
bit
of
a
tension
where
there's
an
operational
requirement
to
be
able
to
de-aggregate
fast.
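The "minimal ROA" recommendation can be sketched as a tiny helper; this is a hypothetical illustration, not code from the draft or from any RPKI toolset. It issues one ROA per announced prefix, with maxLength equal to the announced length.

```python
from ipaddress import ip_network

def minimal_roas(announced_prefixes, origin_as):
    """One (prefix, maxLength, origin AS) tuple per announced prefix,
    with maxLength equal to the announced prefix length, so the ROA
    set authorizes exactly what is announced and nothing longer."""
    roas = []
    for prefix in sorted(set(announced_prefixes)):
        net = ip_network(prefix)
        roas.append((str(net), net.prefixlen, origin_as))
    return roas

print(minimal_roas(["10.0.0.0/16", "10.0.128.0/20"], 64496))
```

The tension Ben mentions is visible here: to de-aggregate quickly in the future, the operator would have to pre-issue additional minimal ROAs for the planned more-specifics, rather than relying on one loose maxLength that covers them all in advance.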
F: To be honest, none of them are ideal, and they require kind of educated trade-offs to be made by the operator, but those are enumerated, hopefully fairly systematically. I think I'd just like to call to the attention of the working group that this is a gap that we have at the moment, and we should probably think about potential solutions to it; but this draft is not the place to document that.
F: It specifically says that a solution to that fundamental problem is out of scope and punts it down the road, and the only recommendation in this area that it explicitly makes is that making use of an RTBH signaling mechanism, or making such a signaling mechanism available to your customers or peers, is not a good enough reason to either create or require the creation of non-minimal ROAs. So that hopefully fixes the most fundamental problem that existed in the previous version of the draft.
F: As a result, I think the authors are all on the same page in terms of believing that this is now pretty much a finished piece of work. I'm happy to take questions and respond to comments, and, you know, if there are other changes that people think are necessary, then we can look at incorporating them; but, failing any of that, I think we're ready for the chairs to start a last call, if they would be so kind.
F: So, Alex, I was struggling to hear you a little bit, but I think your question is what type of hijack attack vector this is aiming to solve. Right, yes, correct. So the specific attack vector is where you have a ROA issued covering lengths up to, say, a /24 for IPv4, but, you know, in the ordinary course of operations all that's announced in BGP is some aggregate that's shorter than a /24.
F: And the attack vector that we discussed is that that makes it fairly trivially easy for a hijacker to spoof the AS path, announce that upstream to their peers or transits, and become best path for that prefix, because they are the only longest match for anything that exists in the routing table. And that's made possible by the fact that the longer prefix is authorized for origination, even though it's not ordinarily originated.
F: Does that make sense?
E: You are speaking about intentional hijacks, not a mistake?
F: Yes.
F: Sorry, so I think it's mostly about intentional hijacks, because making a mistake of this kind would be pretty hard to do using most router implementations; but, you know, if someone managed to make this particular type of mistake, it would protect against that as well.
E: I do agree that this kind of solution can be helpful only in the case of intentional hijacks, but imagine that there are two islands: one good island, where there are ISPs issuing ROAs and where origin validation is happening, and a bad island. There will be a more-specific prefix which will spread from this island to other places and so on. What you are suggesting is that your good island will be protected against this activity, but all the space between these islands will have a high chance of using the more-specific prefix; and, in response to this hijack, the victim will not be able to advertise its own more specific, because...
E: ...the ROA will not permit it. So, in the case of partial adoption of route origin validation, I believe this kind of recommendation, "do not use maxLength as it's used today", can result in problems when your address space is hijacked and you have nothing in response at that moment. And if you're, for example, applying even egress route origin validation on your own router, then you will not be able to pass it even from your own ISP. Of course you can remove it, but anyway.
F
Yes,
that's
probably
a
valid
criticism.
I
think
that
it
is
a
pretty
unlikely
scenario
because
of
where
origin
validation
is
happening
in
the
wild.
Today,
it
is
not
the
case
that
there
are
kind
of
readily
distinguishable
islands
in
the
internet,
topology
where
origin
validation
is
happening
and
where
origin
validation
isn't
happening.
F
It's
increasingly
the
case
that
the
the
large
transit
networks
in
the
kind
of
topological
center
of
the
internet
are
doing
origin
validation,
and
so
I
think
that
there
is
I
I
I
think
that
we
are
better
off
optimizing
for
the
case
that
we're
better
off
optimizing,
the
protection
that
the
existence
of
validating
parties
creates,
rather
than
optimizing
for
the
case
where
you
need
to
propagate
a
defensive
de-aggregation
into
a
non-validating
network.
I
hope
that
makes
sense.
E: Great. So my suggestion is to have this kind of scenario also cited in the document, so that the implementers, the network operators, will be aware that by issuing this kind of ROA they're somehow limiting their response.
F
Yeah,
what
would
be
really
helpful
is
if
you
could
send
an
email
setting
out
the
kind
of
the
the
kind
of
problem
that
you
envision,
so
that
we
can
make
sure
that
we're
on
the
same
page.
And
then
we
can
look
at
how
we
go
about
incorporating
that,
because
I
think
that
it's
a
valid
point.
I
just
I
struggle
to
see
exactly
how
we
articulate
it.
A
I
think
the
next
was
rudiger
and
then
sriram
and
then
we
got
to
skip
to
the
next
presentation.
H: [Audio issues.] Unfortunately, I haven't been reading the text for a long while. What I wonder is: do we tell, and do we want to tell, that... the attack vector, well, okay, kind of seems not really to be the most important thing. So I'll switch off my mic, and I will not hear you for a couple of seconds.
F: Goodbye. Hopefully you can hear me now. Chris, have I got time to respond to that?
A: Yes, please.
F: So that point is pretty much exactly what I was driving at on the slide that I've just gone back to. The previous wording seemed to suggest that maxLength in and of itself was the problem, and we've clarified the recommendation section so that it's clear that the problem exists whenever you have a ROA in the RPKI that covers a prefix that is not usually announced, in particular a longer-than-usually-announced prefix, and that the use of the maxLength attribute is just a very convenient shortcut for people to create those kinds of problematic ROAs.
B: Sriram. Yeah, I just want to make a quick observation, which Ben may have already mentioned, but I want to emphasize it in response to Alexander's comment: even when the regular prefix is announced by the owner of that prefix, I mean not hijacked and not necessarily maliciously, and it is hijacked somewhere in the network away from you, that hijack will be dropped in the part of the internet where the ROAs are implemented and ROV validation is implemented.
I: I hope... All right, I see lines moving on my screen, so I guess you can hear me. Yeah, don't worry. Okay, right, okay; maybe next time I'll do my own slides, or not. This time, let me try to be quick. So there's an update to the deprecate-rsync document: indeed, it's been re-christened to "prefer RRDP", and I think that's, well... the change here is really one of emphasis.
I
The
document
that
was
there
always
said
that
we
had
to
go
to
a
place
where
we
prefer
rdp
first
before
we
can
consider
deprecating
arsenic
altogether,
but
we're
trying
to
make
this
more
explicit
right
now,
so
the
first
objective
would
be
to
promote
rdp
to
a
mandatory
to
implement
a
protocol
and
make
it
preferred
so
that
the
operational
dependence
on
our
assing
infrastructure
is
reduced,
and
I
think
with
that
we
will
already
achieve
a
great
deal
of
the
issues
that
at
least
I
see
with
with
arsenal
next
slide.
Please.
I: So there were already a couple of phases in the document. What was not clear to all implementers before is that there was a strong desire that, if RRDP is unavailable, relying party software should, or indeed must, fall back to rsync; because, if you look at the current text in the RRDP RFC, it actually says it could fall back, in small print, actually. So, in the current document, we suggest that, since there is a feeling in the room that if information is available it should be used, this wording needs to be much stronger.
I: So perhaps it should say: MUST use alternative access mechanisms if available. The other change in this document, or this revision I should say, is that RFC 8182 already says that publication servers that do RRDP must make sure that all files are available, but it does not explicitly say that things need to be highly available, and RFC 6481 only talks about rsync.
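The fetch behaviour being proposed, prefer RRDP and fall back to rsync only when RRDP fails, might be sketched like this. The repository dictionary and the fetcher callables are hypothetical stand-ins, not any relying party implementation's real API.

```python
def fetch_repository(repo, rrdp_fetch, rsync_fetch):
    """Prefer RRDP when the repository advertises a notification URI;
    fall back to rsync only if the RRDP fetch fails."""
    if repo.get("rrdp_uri"):
        try:
            return ("rrdp", rrdp_fetch(repo["rrdp_uri"]))
        except Exception:
            # RRDP unavailable: the proposal is that the RP MUST use
            # the alternative access mechanism if one is available.
            pass
    return ("rsync", rsync_fetch(repo["rsync_uri"]))
```

A real implementation would also need the back-off behaviour discussed next, so that one failing RRDP server does not cause every relying party in the world to start hammering the rsync servers at once.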
I: I think that, if we do all of this, then the practical operational dependency on rsync will be much reduced, and at least my immediate concern with it could be addressed. That doesn't mean we have to stop here, though. But first let me, I guess, read out the questions that were raised. So, with regard to falling back to rsync: currently, the suggestion just says "must use alternative methods"; it doesn't say anything about the strategy that should be used here. But there were concerns raised that, if an RRDP server were to be unavailable, all relying parties in the world would immediately start hammering rsync servers, and you would see immediate problems there. So we may need to take some feedback on that and include some words on it.
I: Another observation is that, even in phase zero, like today: well, if RRDP is available, then I believe it would already be good if relying parties who choose to support it today already prefer it and stop hammering rsync servers. So we may want to have some words on that as well. Next slide, please. So, yeah, again, the longer-term objective: I think it's still good if we would remove the operational dependency on rsync altogether, because it would allow us to simplify the code in relying party software.
I
It
will
also
allow
us
to
simplify
repository
operations
because
you
don't
have
to
run
these
servers
anymore,
but
of
course
I
understand
that
before
we
can
get
there,
we
need
to
do
a
couple
of
things.
We
need
to
get
operational
experience
and
and
be
confident
that
it's
actually
safe
to
do
and
there's
another
question
that
was
raised
because
we
have
rsync
uris
in
the
rpi
all
over
the
place
and
if
we
need
to
get
rid
of
them,
then
that
actually
has
is
not
so
trivial.
Next
slide.
Please.
I
Yeah,
because
yeah
we
use
these
names
as
identifiers,
it's
very
useful
for
debugging
for
talking.
You
know
for
reporting,
but
also
they're
oftentimes
used
by
relying
party
software
to
build
up
a
key
of
things
to
validate,
so
you
can
actually
build
up
a
hierarchy
in
a
different
way,
but
you
know
it
would
require
some
serious
thought.
I
That
would
need
to
be
addressed
as
well,
because
there's
restrictions
there
in
the
uris
that
are
to
be
used
like,
for
example,
if
you
set
yourself
up
as
a
publisher
on
a
remote
publication
server,
you
get
a
response
that
says
this:
is
your
rsync
jail
where
you're
allowed
to
publish,
so
those
things
would
need
to
be
changed
and,
of
course,
the
certificate
sign
requests
would
include
different
kinds
of
identifiers.
That
would
also
have
to
be
honored.
I: We can already start this initiative if people are interested, but, yeah, I think it's a lot of work, and I would not like it to block progress on preferring RRDP; so that would be my proposal to the group. And I have a final slide; I forgot what I put on it, but I'll remember now. Yes, yeah: so the document currently includes a status overview, which is kind of informational only, really, but it's to give an idea of where we are. So RRDP is already supported by most repositories.
I
One
rar
is
planning
to
deploy
very
soon.
It's
also
supported
by
the
allocated
rpki
software
and
there's
one
repository
of
that.
That's
not
doing
it
yet,
but
there
is
software
available
that
can
do
it,
so
I
think
there
is
an
option
to
go
there
in
future.
I
Furthermore,
rdp's
is
also
supported
by
six
out
of
seven
validation,
implementations
and
the
seventh
is
developing
it
right
now,
of
course,
I
don't
expect
a
a
yes
or
no
answer
here
to
this
question,
but
my
addition
would
be
to
to
aim
for
wrapping
up
the
preferred
rdp
work
sometime
this
year,
preferably
towards
the
end
of
it,
and
where
currently,
we
have
three
different
phases
defined.
I
I'm
not
so
sure
that
we
really
need
to
have
like
documents
for
each
phase,
because
in
practical
terms,
I
think
we're
pretty
much
close
to
it
being
done
by
everybody
anyway.
I: So we may then have this plan, which is good, but, at the end of it, produce one document that does the updates that we think are needed to the current RFCs in one go, and not have, like, three different published documents for that. With that, I would like to hand over to the queue, I suppose, because that was all I had to say for now. And I see Job first and then Randy, so go ahead.
J: Job Snijders, Fastly. I'm not entirely sure what "publication for phase 2, end of 2021" means to the RPKI community, but I do want to point out that a relying party software implementation I contribute to supports previous versions up to one year; and, looking at my RRDP logs, I noticed that this industry appears to need almost up to one and a half years for 95 percent of RPs to upgrade. So I think that phase two, whatever "publication for phase two" means, should take into account that there is essentially a one-and-a-half-year lead time.
I: Right, can I respond to that? Yes, I appreciate that, but I think publication for phase two would essentially mean that there has been a requirement for current implementations to support RRDP. You can still publish a document that says that you really ought to do it, in my mind, because you still don't say to relying parties: you don't have to do rsync anymore. So I hope that clarifies my thinking on it. My objective definitely is to not break anything by publishing anything.
I: Okay, yeah, I would be quite happy to take out the implementation status report and handle that separately. I would also be happy to take out the words where I talk about the phases beyond, let's say, really deprecating rsync, and perhaps launch a separate effort for that; and then I think the document will be focused on just the first, well, phases zero, one and two. Would that be in line with what you...?
C: If I hear you correctly: because this is the plan, it also confuses being a plan with trying to update actual documents, and that's the confusion that I have with it, if you see what I mean.
I
So
what
I'm
trying
trying
to
suggest
is
that
currently
we
talk
about
a
plan
which
is
fine,
but
at
some
point
you
may
want
to
execute
the
phase
of
the
plan,
and
at
that
point
maybe
what
you
need
is
not
this
document
but
a
separate
document
that
does
that
so
a
separate
document
that
does
the
actual
update
of
whatever
updates.
We
want
to
do.
C: That seems reasonable, and you could either put it in the January implementation report or a separate, you know, RFC 42 bis. Yes, but I want to seriously separate the URI problem, because that's messy.
I: Yes. There was somebody in the queue earlier, but I missed them now. Right, so you're suggesting that we should talk about making IPv6 mandatory for the RPKI repositories; did I understand that correctly?
A: Okay, as a participant, I think it would be helpful. Randy had a lot of comments about implementation reports and maybe splitting this draft into a couple of pieces; I think that conversation would be good to have on the mailing list, if possible. So I would call for Randy to please send an email with the concise four points he had, or five points.
A: Oh, excellent. I don't have my email up, but okay. Yeah, so, Tim, was there anything that you wanted to get out of this, aside from that?
G: No, I think we're good.
I: And I'll follow up on the list, and with Randy, on our next steps.
J: I believe there is a need in the internet network operator community for an industry-wide understood, RPKI-based attestation mechanism to facilitate a number of use cases: such as for the peeringdb.com organization to associate onboarding user accounts with internet number resources; to make bring-your-own-IP-space workflows a little bit easier, such as Amazon supports in their EC2 product; or, for example, out-of-band distribution of Ghostbusters-style information.
J: An issue I perceive there is that there could be key identifier collisions along the trust anchor trees. Whether this fear is founded or not is, I guess, subjective, but this is a roadblock I perceive.
J: Then, I think, from a user experience perspective, but again this is a subjective interpretation of possible future scenarios: I think the RTA model encourages signing things with all your resources, whether that's relevant to the business transaction at hand or not, whereas the RSC model invites the operator to only list the resources applicable to the business transaction at hand. But these are minute implementation differences.
J: Then, a second signer implementation, which is open source, was provided by APNIC and can be downloaded from their GitHub account. In terms of validators, there are also two efforts under way: there's a work-in-progress development branch within the rpki-client project, pending IANA early allocation, and APNIC also provided a validator implementation that can be accessed via their GitHub account.
J
The
purpose
of
early
allocation
in
this
specific
context
is
that
it
would
be
undesirable
for
my
personal
private
enterprise
number
to
be
deployed
in
the
world
in
some
capacity
and
to
further
interoperability
with
future
implementations.
It
is
good
to
obtain
a
oid
from
the
appropriate
ayana
registry.
J
So,
for
instance,
in
the
case
of
fast
lease
deployment
once
an
hour,
the
fetching
operation
occurs,
but
any
subsequent
validation
of
rta
or
rsc
objects
can
leverage
the
cache
that
the
validated
cache
that
exists
within
the
fastly
administrative
domain
without
having
the
need
to
reconnect
to
the
internet
and
then,
lastly,
certificate
authorities
are
not
mandated
to
produce
either
rtas
or
rscs.
So
this
is
a
technology
that,
if
you
do
not
wish
to
use
it
by
all
means,
don't
use
it.
J
This
is
what
makes
that
I
believe
it
is
possible
to
use
these
types
of
objects
in
message,
digest,
authentication
procedures
and
how
I
think
this
industry
can
make
productive
use
of
this
concept,
even
though
you
don't
know
exactly
who
signed
it.
You
do
know
that
somebody
under
the
trust
anchor
signed
it
and
that
you
can
validate
up
to
the
trust
anchor.
J: I see Tim Bruijnzeels's hand is raised. Tim, go ahead; I will mute myself.
I: All right, yeah. So, first off, I think RSC in and by itself has a place for the more simple use case, which is what you want to address, compared to RTA.
I
Just
the
point
that
rta
was
not
only
about
having
multiple
parties
signing
the
other
thing
it
contained
was
the
the
idea
that
you
could
actually
include
the
cryptic
graphic
material
needed
for
validation
inside
the
cms,
which
is
not
allowed.
If
you
follow
the
rpi
signed,
object
draft,
sorry
rfc,
so
I
guess
what
I'm
trying
to
say
is
that
there
are
at
least
two
other
use
cases
that
exist
in
the
world.
I: That being said, we need to think about where we're headed with the RTA specification, but one option might be that we actually look at the RSC specification and see if we can wrap that in a way; because, essentially, you could just include these objects. You could have multiple objects, if that makes sense for your use case, so you can present them in one go; and, if it's useful for your use case to have all the CA certificates and CRLs readily shipped with it, so you can do a validation quickly, like "is this thing valid right now?", then that's also something you could look at in an enclosing structure, let's say. Sorry, those were a lot of words, but what I'm trying to say is: I can see the use case for the simple case, and I think we need to think about whether RTA keeps its current specification or tries to leverage this in some way.
J: One allows multiple signers to attest a single SHA-256 hash, whereas the other has a single signer allowing attestation of multiple SHA-256 hashes, and this makes the ideas fundamentally different. But, from a getting-things-done perspective, I think this industry has been waiting for a significant amount of time for the RPKI community to deliver some kind of technology that fits into the workflows we both agree exist. So my take on it is that the RSC effort should proceed, so that the simple case is covered, and, separately...
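The structural difference Job describes can be sketched with two toy record shapes. The field names here are illustrative only and do not match the ASN.1 in either draft.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class RTA:
    """Resource Tagged Attestation (toy shape): one digest,
    potentially co-signed by multiple parties."""
    digest: str
    signers: List[int]

@dataclass
class RSC:
    """RPKI Signed Checklist (toy shape): one signer attesting
    a checklist of digests."""
    signer: int
    digests: List[str]

# Many signers, one hash, versus one signer, many hashes.
rta = RTA(digest="ab" * 32, signers=[64496, 64497])
rsc = RSC(signer=64496, digests=["ab" * 32, "cd" * 32])
```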
J: ...discussion can continue on what RTA means and whether it's feasible. But, as an implementer, I had trouble implementing RTA in its current form, which is more a reflection of my limited capabilities than, of course, of the RTA specification in and of itself.
I: Yeah, okay, no, I don't have a lot to add, except we do have working implementations of RTA; but, of course, I mean, it's a draft, it's a work in progress. But, yeah, if you want a data point that it can be implemented and can be interoperable: we have done an implementation at NLnet Labs, and APNIC have done an implementation as well, and they work well together.
J: I'm not confident that that is a robust validation strategy, as it requires multiple instantiations of the openssl command. I could be wrong; I mean, arguably, there is some code. I'm just not 100% sure that it is a perfect, robust fit for what is described in RTA, but this can be attributed to my limited abilities; and my hope is that in the RSC proposal these complications do not exist and that working group consensus arrives.
J: I pressed the share-screen request; I think you need to ack it, Chris.
J
Quick
recap:
what
is
the
problem?
We're
solving
and
have
been
harping
on
for
multiple
years,
almost
a
decade
now,
when
an
intermediate
ca
shrinks?
Any
subordinate
ca
also
need
to
shrink
as
soon
as
possible,
but
there
is
no
signaling
mechanism
from
parent
to
child
to
allow
for
a
shrink
ahead
of
time
concept
and
following
the
validation
algorithm
described
in
rfc
6487,
which
is
the
algorithm
that
all
rps
use
on
all
existing
objects.
J
In
practice,
this
leads
to
ip
transfers
for
a
period
of
time
causing
unrelated
rpk
objects
to
become
invalid,
and
I
believe
this
is
an
undesirable
characteristic
of
the
system,
especially
if
one
considers
that
validation
state
impacts,
the
state
of
the
global
bhp
routing
system,
especially
if
people
carry
validation
state
in
bhp
communities.
So
this
to
me
appears
as
needless
brittleness
of
the
rpki
technology
stack.
J
A
real-life
report
on
the
problem
is
available
at
this
ripen
cc,
hosted
routing
working
group,
mailing
list
message.
The
message
and
the
mail
thread
may
be
somewhat
confusing,
because
two
problems
are
being
discussed
in
the
same
mill
threat.
On
the
one
hand,
there's
a
problem
description
of
a
validator
rejecting
objects,
because
a
single
object
listed
on
a
manifest
somehow
did
not
pass
simple
object.
Validation
in
this
instance
expiration,
sorry,
overclaiming.
J
No,
I'm
confused
exploration,
I
think,
was
the
issue
here
and
then
separately,
two
roas
related
to
the
ca
that
sold
off,
or
at
least
of
or
whatever
happened.
Some
ip
space
became
invalid.
So
there's
a
ton
of
objects
that
became
invalid,
but
that
has
been
handled
in
updates
to
the
rpki
validator
in
question.
J: I think a challenge here is that the algorithm described in 6487 and the algorithm described in 8360 are both valid algorithms, in the sense that both, given an input, produce a deterministic output; and which of the two algorithms is better is, to some degree, a subjective matter.
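The two algorithms can be contrasted with a toy model over prefix lists. This is a deliberate simplification (real certificates carry resource sets, and RFC 8360 intersects resource sets rather than dropping whole prefixes), but it shows why both are deterministic while behaving differently when a parent shrinks.

```python
from ipaddress import ip_network

def strict_validate(child_prefixes, parent_prefixes):
    """RFC 6487 style: any overclaim invalidates the whole child
    certificate (and everything issued below it)."""
    parents = [ip_network(p) for p in parent_prefixes]
    for prefix in child_prefixes:
        if not any(ip_network(prefix).subnet_of(p) for p in parents):
            return None
    return list(child_prefixes)

def reconsidered_validate(child_prefixes, parent_prefixes):
    """RFC 8360 style: overclaimed resources are trimmed away;
    the rest of the certificate stays usable."""
    parents = [ip_network(p) for p in parent_prefixes]
    return [prefix for prefix in child_prefixes
            if any(ip_network(prefix).subnet_of(p) for p in parents)]

# The parent shrank to 10.0.0.0/16 while the child still lists 172.16.0.0/24:
parent = ["10.0.0.0/16"]
child = ["10.0.1.0/24", "172.16.0.0/24"]
print(strict_validate(child, parent))        # None: whole cert rejected
print(reconsidered_validate(child, parent))  # ['10.0.1.0/24']: trimmed
```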
J: Both arguably provide a means to validate, but, one compared to the other, they have different behavioral characteristics in operations; and Ben and I prefer a little bit less operational brittleness, as long as it does not come at the expense of security.
J: A new document can update the 6487 RFC and replace the algorithm with a different algorithm, repurposing existing code points. Then, furthermore, I would suggest that the profile agility section, which caused us to end up in an RFC 8360 undeployable scenario, is removed, as it is not clear that this procedure is correct to begin with. I think future work will need to re-evaluate whether the agility procedure is appropriate or not; and, as a consequence of updating 6487, there is no longer a need for RFC 8360.
J: So, what about existing work? I do think that, as an RPKI community, we have wasted an incredible amount of time, especially considering that some already knew, before RFC 6487 was published, that there would be operational issues down the road. So the X.509 policy extension is not wasted; it can be used in the future. What is wasted is the policy OID associated with 8360, but the good news is that this comes from an infinite code point space, and there is, as far as I know, no existing deployment of the code point, so deprecating it is not a problem.
J: This is deployed in production, and this means that any changes, such as a validation algorithm change, need to be absolutely incrementally deployable, without the lockstep dance that the 8360 RFC was forced to follow. So we're changing the tires on a live system, and I do think it is possible, but it is a complicated dance.
A: Seeing no other questions: your request to start the code point early allocation, we can do that shortly.
J: Yes.
E: There were major and minor changes that happened to the document since the last meeting. I'm not going to describe all the changes; I hope you have read the document and agree that its readability has improved.
E: If you haven't read the latest document, you still have a chance: the next update will be in a month. The next three bullets I'm going to discuss in more detail.
E
First,
we
reach
a
synchronization
point
with
rtr
protocol
specification
and
now
rtr
pdu
is
in
sync
with
asp
profile,
though
I'm
not
author
of
8
2
0
1
0.
E: With this, I'd like to highlight the key requirements for ASPA processing: the cache must create a union of providers from all available ASPA records in advance, before it sends the data; it must send the union of providers in a single PDU; and the router must support the atomicity of these updates.
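Those requirements might be sketched as follows; the record format is a hypothetical stand-in for the ASPA data a cache holds, not the actual RTR wire format.

```python
def provider_pdu(aspa_records, customer_asn):
    """Build what the cache sends for one customer ASN: the union of
    providers across all of that customer's ASPA records, carried in
    a single PDU so the router can apply the update atomically."""
    providers = set()
    for record in aspa_records:
        if record["customer"] == customer_asn:
            providers.update(record["providers"])
    return {"customer": customer_asn, "providers": sorted(providers)}
```

Computing the union before sending means the router never observes a partial provider set for a customer ASN, which is the point of the single-PDU and atomicity requirements.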
E: If one circle is on top of another, it means that the circle on the top is a provider of the one at the bottom. If there are circles on the same level, they are peering with each other. And the arrows, most importantly, show the direction of the ASPA verification procedure; note the direction of the advertisement.
E
So,
let's
move
on
here
are
two
scenarios
on
the
left.
You
can
see
a
scenario
with
transparent
eyes
and
on
the
right
you
can
see
ikes
in
the
path.
The
validation
of
the
left
is
simple.
You
just
need
to
check
pair
one
two
and
it
is
the
correct
pair
and
everything
is
valid
on
the
right
side.
The
situation
is
a
bit
harder
because
two
haven't
authorized
autonomous
system
that
belongs
to
the
internet,
exchange
point
to
advertise
it
to
its
upstream
providers
or
peers
and
so
on.
E: So it's a very specific situation where we have another system in the middle which acts like a provider but is not mentioned in the list of providers in the ASPA. But we can imagine that 2 is a peer of the IX AS and 3 is its customer. In this case we can apply the downstream verification procedure and it will be fine, so we will have the pair (1, 2) as valid.
E: The pair (2, IX AS) is invalid, but if we apply the downstream procedure it's okay, because it just highlights the end of the upstream segment and everything works fine. In previous versions of the document we suggested making it a general policy to just apply the downstream verification procedure to all routes received from a route server.
E: Unfortunately, there was a shortcoming, and here it is: the problem is that it was limiting the opportunity of IXP members to detect route leaks. The problem occurs on the left side, where the transparent IX is present. In this case, if we apply the downstream verification procedure, we will have the pair (1, 2) as invalid, but it's okay. Well, it's not okay: on the right side everything is fine, but on the left side 3 is unable to detect a leak that happens on the other side of the transparent IX.
E: So we decided to change the procedure and to use the presence of the IX in the path as a token.
E: So, if an autonomous system that belongs to an IX is present in the path, we will apply the downstream verification procedure. If the route is received without the IX AS in the path, we will apply the upstream verification procedure.
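The selection rule just stated can be sketched directly. The AS numbers and the set of IX ASNs below are hypothetical; the only point is the branch on whether an IX AS appears in the path.

```python
# Illustrative sketch (AS numbers and the ix_asns set are made up): if an
# AS known to belong to an IX appears in the received AS path, apply the
# downstream verification procedure; otherwise apply the upstream one.

def choose_procedure(as_path, ix_asns):
    """as_path: list of ASNs; ix_asns: set of ASNs belonging to IXes."""
    if any(asn in ix_asns for asn in as_path):
        return "downstream"
    return "upstream"

ix_asns = {65000}  # hypothetical route-server AS of the IX
print(choose_procedure([64512, 65000, 64496], ix_asns))  # downstream
print(choose_procedure([64512, 64496], ix_asns))         # upstream
```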
E: In all of these scenarios, 3 is receiving prefixes from its provider with AS number 2, and it turns out that the peering relation between 1 and 2 doesn't really matter: it may be customer, it may be peering, it may be provider.
E: So, with any kind of peering relation between 1 and 2, this scenario ends up as valid. And the observation that was made is that if the relation doesn't really matter, then even if 1 hasn't created an ASPA, so the pair (1, 2) is unknown, we can still treat this kind of path as valid. So, to make it clear:

E: We discussed it in a small group off-list, but the discussion about the semantics that represent this changed logic looks even more important at the moment.
E: So here is the plan for today: we need to finish the discussion about valid states in the case of downstream paths, which we just discussed. It may result in important wording changes inside the document.

E: And I hope the next update to the document will be the last one before working group last call.
B: Yeah, and...

B: Sure, thank you. You can see my PDF?
B: So this work is done jointly with Jakob, and... yeah, it's not.
B: Yes, thank you. So this is Sriram; this is joint work with Jakob. We've been sharing these slides for the last three weeks or so on the SIDROPS list. Several people have taken interest and looked over the slides, and we got some reviews and comments back. In particular Alexander and Ben Maddison, as well as Ties from RIPE, have taken a close look at these slides, and in general there is agreement that, yes, we have an opportunity to improve the ASPA path validation.
B: Yeah, that works better. So the key takeaways from this talk are the following. In the downstream AS path algorithm there's an oversight in the draft, as Alexander has already mentioned, and there is an opportunity to improve on that; that's being done through some private conversations between Jakob, myself, Alexander and Ben Maddison.
B: So that's going to be coming forth soon. In this draft we try to show that a correct algorithm exists, with formal proof, and that it classifies valid, invalid and unknown AS paths correctly.
B: So this is an account of what would be a candidate way of implementing this algorithm, and as I mentioned, Alexander and Ben are actively working on versions of implementations of this. To get started with this whole idea of checking the correctness of the algorithm, it is very helpful to have an ASPA hop-check function.
B: A similar thing is also mentioned in the current draft, but it doesn't give it a name. Here we are giving it a name: g. So g(AS(i), AS(j)) is P if AS(i) attests AS(j) as a provider; it is nP if AS(i) attests AS(j) as not a provider; and nA if AS(i) does not have any ASPA, no attestation. And if you have a hop from i to j in the AS path, then depending on whether the hop-check function g(i, j) is P, nP or nA, you have different possibilities for that hop.
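The hop-check function g just described can be sketched directly. The shape of the ASPA database (a dict from customer ASN to its set of attested providers, with absence meaning no attestation) is an illustrative assumption.

```python
# Sketch of the hop-check function g(AS(i), AS(j)) from the slides.
# The aspa_db layout is an illustrative assumption: customer ASN -> set of
# attested provider ASNs; an AS absent from the db has made no attestation.

P, NP, NA = "P", "nP", "nA"  # provider / not provider / no attestation

def g(asi, asj, aspa_db):
    providers = aspa_db.get(asi)
    if providers is None:
        return NA            # AS(i) has no ASPA at all
    return P if asj in providers else NP

aspa_db = {64512: {64496, 64497}}
print(g(64512, 64496, aspa_db))  # P: attested provider
print(g(64512, 64499, aspa_db))  # nP: AS64512 attested, 64499 not listed
print(g(64500, 64496, aspa_db))  # nA: AS64500 made no attestation
```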
B: So this is the example that Jakob originally mentioned on the SIDROPS list: 1 and 2 have ASPAs; 3 doesn't have an ASPA.
B: Whatever that link is, whether it is up, down or lateral, in all three cases the AS path is valid from the point of view of route leaks, from the point of view of valley-free. So it's valid, but the current algorithm calls it unknown. That's something that we all agreed on. The next two slides...
B: ...are just further examples. In one, 3 can have an ASPA and still the same thing happens. And in the third example you can insert a number of sibling relationships in the middle, like 3 and 4: they are mutual transit, they each have an ASPA pointing to the other as a provider, so as a result they are siblings. You can insert any number of these, and the same thing happens as in the previous two examples.
B: The ASPA algorithm determines the path to be unknown, but it is indeed valid. In terms of fixing that, we make a few simple design-principle observations. Again, we are focusing on the downstream AS path only. So here, the first bullet: the path is valid if there is an up-ramp of customer-to-provider hops on the left and a down-ramp of provider-to-customer hops on the right, the up-ramp and down-ramp are completely verified based on ASPA, and in the middle...
B: ...you either have no hops at all, so the up-ramp and down-ramp form a perfect inverted V, or in the middle you have a single hop that is either nP or nA. In that case the path is valid. The second bullet makes the same observation in slightly different wording.
B: The next question is: when is the path either invalid or unknown? The answer is that if you have two or more hops in the middle, between the up-ramp and the down-ramp, then the following can be said, and it helps to separate the invalid from the unknown: if there are opposing values, that is, an nP hop from left to right and a subsequent nP hop from right to left...
B: So, one nP hop from left to right and another, subsequent, nP hop from right to left: then, no matter what is in between, there must be at least one valley in the AS path, and hence it is invalid, as the draft algorithm also recognizes. And if this is not the case, then the path is unknown.
B: So the key thing is this separation: the representation of the path in terms of a valid up-ramp on the left and a valid sequence of down hops, a down-ramp, on the right. At the top you either have no hops at all (3 and 4 are the same), or you have a single hop, no more than one hop, at the top, which is either "no attestation" or "not provider".

B: In either of those two cases the path is perfectly valid: you have either a perfect inverted V, or an inverted V with one hop at the top. In both of those cases the path is valid, given that the up-ramp and down-ramp have been verified to be correct based on ASPA. So, moving to the next one: this representation helps in formulating the design problem in a systematic way.
B: This is a general representation of the downstream AS path: AS(1) to AS(N) is the whole AS path. On the left you have a series of up, customer-to-provider, hops, all of them valid according to ASPA. On the right you have down hops, from L to N-1, and again all of those are valid according to ASPA.
B: So once you have identified this valid portion on the left and the valid portion on the right, you can have peace of mind with regard to those two ends. That allows you to focus on the middle, from K to L. And the first thing we know is that the very first hop, K to K+1, is not a provider hop.
B: It is either nA or nP. Likewise, on the right side, L to L-1 is again not a provider hop: it is either nA or nP. We know that for sure. In the middle you may have all three possibilities, P, nA and nP; it doesn't matter. So this is the kind of representation we have to start with, and an example of that representation...
B: ...is this one, where K is 3 and L is 6. In it we show the ASPAs in the middle: 3 to 4 is either nP or nA, and that is what breaks, or terminates, the up-ramp; 6 to 5 is nA or nP, and that is what terminates the down-ramp. So that is how we define K and L: K is 3, L is 6. In this example, naturally, this is not going to be valid.
B: We cannot say that it is definitely valid. It will either be invalid, meaning we will be able to show that 3 to 6 has a valley-free violation, or we will be able to show that it is a mix: there are some possible trajectories between 3 and 6 that are valid and other possible trajectories between 3 and 6 that are invalid, in which case this middle portion will be unknown and, as a result, the whole path will be unknown.
B: So, one more observation before we go to the theorems: L less than K is possible and perfectly fine.
B: That means that the up-ramp and the down-ramp overlap, and that happens when you have sibling hops in the middle, so no issue with that, and the algorithm would still be fine. So, just to take a quick look at the theorems: based on what we have discussed so far, the theorems start to emerge as pretty much obvious, but we have proofs in the appendix; the proofs are given in the backup slides.
B: So the first theorem says that if L - K is less than or equal to 1, then the path is valid for sure, and that is the if-and-only-if condition under which the path is valid. And if L - K is greater than or equal to 2, then the path cannot be valid; it is either unknown or invalid.
B: Here it says that if you look at this middle path, K to L, or for that matter the whole AS path, 1 to N, it doesn't matter: if you have a hop on the left, U to U+1, that is not a provider hop, and you have a hop to the right of that, V+1 to V, that is also not a provider hop...

B: ...then this middle portion of the path is invalid, and the whole AS path is invalid. Otherwise the partial path is unknown and the whole path is unknown. So those are the theorems. Now, just a quick glimpse of the proof, or the validity, of that theorem: we look at the case where L - K equals two.
B: If you have two nPs facing each other, one from left to right and one from right to left, then the path is invalid, because from this you can construct the four different trajectories between 3 and 4, and you will see that, out of those trajectories...
B: ...all of them result in a valley-free violation, and therefore there is no trajectory which is valid; all of them are invalid. So this 3-to-4 segment is definitely invalid if you have two nPs facing each other. For all the other combinations, (nA, nP), (nA, nA) and (nP, nA), you can construct trajectories between 3 and 4 where some would be valid and some would be invalid; as a result, you call it unknown.
B: So, with the help of those theorems, it's now pretty simple and straightforward to lay out a crisp description, which you pretty much already know at this point. If the AS path length is less than or equal to two, then the path is trivially valid, and we don't need to apply ASPA at all in this case.
B: Then, if the path length is greater than two, we can have the (K, L) formulation, and if L - K is less than or equal to one, as we said before, the path is again definitely valid. So now we consider L - K greater than or equal to two.
B: We have already said that if you have the two opposing not-provider hops, one on the left and one on the right, facing each other like in the previous slide where the nPs face each other, it's going to be invalid, and otherwise the path is unknown.
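The classification just summarized (trivially valid for length at most two; valid when L - K is at most one; invalid on opposing nP hops; unknown otherwise) can be sketched as below. This is a sketch under assumptions, not the draft's normative algorithm: the hop-check g and the database layout follow the hop-check function described earlier in the talk, and the strict-ordering test for opposing nP hops is one plausible reading of "a subsequent nP hop".

```python
# Sketch of the downstream classification summarized above. Assumptions:
# aspa_db maps a customer ASN to its set of attested providers (as in the
# hop-check slide); path is the AS path [AS1, ..., ASN] with the up-ramp
# on the left. Illustration of the (K, L) formulation, not normative text.

P, NP, NA = "P", "nP", "nA"

def g(asi, asj, aspa_db):
    providers = aspa_db.get(asi)
    return NA if providers is None else (P if asj in providers else NP)

def validate_downstream(path, aspa_db):
    n = len(path)
    if n <= 2:
        return "valid"  # trivially valid; no ASPA check needed
    # K: how far the ASPA-verified up-ramp extends from the left.
    k = 0
    while k < n - 1 and g(path[k], path[k + 1], aspa_db) == P:
        k += 1
    # L: how far the ASPA-verified down-ramp extends from the right.
    l = n - 1
    while l > 0 and g(path[l], path[l - 1], aspa_db) == P:
        l -= 1
    if l - k <= 1:
        return "valid"  # at most one unverified hop between the ramps
    # Invalid iff an nP hop left-to-right is later opposed by an nP hop
    # right-to-left: that forces at least one valley in the path.
    ltr = [u for u in range(k, l) if g(path[u], path[u + 1], aspa_db) == NP]
    rtl = [v for v in range(k, l) if g(path[v + 1], path[v], aspa_db) == NP]
    if ltr and rtl and min(ltr) < max(rtl):
        return "invalid"
    return "unknown"

db = {10: {20}, 20: {99}, 40: {98}, 50: {40}}
print(validate_downstream([10, 20, 30, 40, 50], db))  # invalid
print(validate_downstream([10, 20, 30], {10: {20}}))  # valid
```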
B: So I'll not go through this, my last slide; it is basically laying out an implementation procedure.
B: We tried to squeeze the most efficiency into it by minimizing the number of ASPA hop checks, so people who are interested in the implementation can take a closer look at this and see if it is something they would like to use. Otherwise, the implementation can vary depending on the implementer's taste and choice, but the basic principles remain the same.
B: So with that I conclude. You can look through the backup slides at your leisure for the proofs of the theorems and the foundation for this type of algorithm. And with that, I can open it up for questions.
B: Yeah, sure, I don't mind. I mean, I looked at what you were trying to do and the discussions we had, and I still have to dig deeper into Ben's implementation.
F: Ben here; just to echo that, I think we now have three algorithmic, or in-code, representations of fundamentally the same set of ideas, which are more or less equivalent to each other: there's the version in draft -07, there's a version that I wrote based on the email that Jakob sent to the mailing list a couple of weeks back, and there's the version presented in your slides. And I think they pretty much all arrive at the same conclusions, in a relatively similar way under the hood.
F: But in terms of what we select for the final document, the two overriding considerations for me are, first, that we should value readability and understandability over efficiency (we don't need an optimized version of this algorithm in the standard; that can be for implementers to do), and secondly, and I think this is the biggest shortcoming of the draft as it's currently written, I think we need a much clearer...
F: ...line of reasoning to go from what we understand as a route leak of the type that we're trying to detect, to what the eventual algorithm ends up looking like. I think setting out those logical steps is really important, to allow people who haven't been reading iterations of this draft for the last however-many years to actually know what to expect in practice if this winds up running on their routers.
B: Sorry about that, Chris; thank you for reminding me, though a little bit too late for Alex and Ben. I just want to add to what Ben just said (yeah, that's good) one other thing that I would like to mention: the g function that is defined in the slides is quite helpful, so it's just a matter of, in section four of the draft...
B: ...giving it a name like g, and maybe using the P / nP / nA notation. Because if, for each hop check, you say it is valid, invalid or unknown, you are kind of saying "my hop is invalid but the path is valid", and that happens in many cases, so that maybe confuses the reader. So, just to make it unambiguous.
B: I would also like to recommend, in addition to using a function like g (just to make it clear, g is a hop-check function; I just gave it a name), also using the P / nP / nA type of notation to keep it unambiguous. Don't mix it up with the overall path validity, invalidity, unknown, etc.
E: Just a small response: the function already has its name. It's called the "pair verification procedure" in the document, and it is in a separate section from all the other procedures that we have in the document. So I do think that the naming of this function is not a problem, but that's my personal opinion.
B: Yeah, Alexander, we can chat about that a little bit more between you, Ben, Jakob and myself, but I'm fine either way. I thought I would just make that suggestion for better clarity for the reader. It's just a matter of giving it a name: you give it some kind of name, and a symbol like g might help, but I'm not adamant about that. Thank you.
A: Okay, thank you very much for your presentation, Sriram, and thanks to the other presenters also: a bunch of good conversation and some action items to move forward with. The last little thing I wanted to mention, because I said I would mention it last night when I emailed, and I got badgered by at least one participant to say something: there was a relatively long conversation, six or eight emails, about getting some running code before we push forward drafts that have implementation changes.
A: That seems totally reasonable to me, and I outlined this in an email reply last night. I think at the very least it gives us an opportunity to see how the changes are going to actually work, how they're going to change the operations of the system. I don't think anybody has necessarily disagreed with any of that in the previous bit of thread. So I think the chairs need to do some work on a little charter update and push that forward through our ADs and the IESG.
A: So I'll give Rüdiger a second to get back to the listening side of his conversation. Okay, so, Rüdiger: I suppose in my head implementation, interoperability and running code are kind of the same thing. That's a bit naive on my part, I totally admit, but the conversations earlier today about, you know, getting some running code and getting some implementation and interoperability reports: I think that's all kind of the same thing to me, and that's where my headspace was, at least. So, yes. And Warren was in the middle of saying something about...
K: Yeah, I mean, you can do a charter update if you like; that sounds like a lot of faff. The other option is just, you know, updating the working group wiki or something and having a note that you're not going to progress documents until you have X, whatever you'd like. If you want to do a charter update, I'm fine to push it through the process; it just requires many buttons and, you know, paperwork and faffy stuff.
D: Yeah, I was going to jump in and say real quick, this is Sue Hares, for Marcus: having had the experience at IDR, where two implementations are strongly required (whether they are deployed or not is an added plus), showing interoperability between multiple implementations is what IDR prefers.
A: Yeah, I don't know that a document is particularly required, but it certainly seems like it would be helpful to at least review, in a meeting or on the mailing list, some kind of formal "hey, we three implementers got together, we ran the bits and pieces, nothing bad happened, and the world didn't explode", right? Or, "hey, this is actually pretty crappy", either way.
J: Proprietary software solutions are also able to write implementation reports and to perform interoperability testing, so yeah, we should be copying how IDR handles this. I think it would positively benefit this group.
A: Okay, we have less than two minutes before we get kicked out, so last people, be quick: Doug, Warren and Randy.
L: I was just making the point in the chat that for interoperability in things like the ASPA algorithms: interoperability is only a question where systems interface, right? So that's the interface to the RPKI and processing the data. I think what we're interested in is that different implementations behave consistently, which I guess is a broader definition of interop.
K: Thank you, Warren. The more I think about it, the more I think it would be a good idea to just have a strong suggestion that you have interoperability or code requirements or something; that way you have flexibility in the future. So I suggest just having it somewhere, not necessarily in the charter, but we can chat more later.
A: I think, with nothing else, and we're about to get kicked out: thanks, everybody, for participating. If there are open questions, please put them on the mailing list. Here we go.