From YouTube: IETF113-SIDROPS-20220325-1130
Description
SIDROPS meeting session at IETF113
2022/03/25 1130
https://datatracker.ietf.org/meeting/113/proceedings/
B
Believe Warren is running the presentation.
D
Meetecho says they had to lower the front mic because of feedback.

Oh good, okay. I think we don't really have any agenda bashing to do. We do have a short chat from Allison Mankin, the IETF ombudsperson, who wants to come and talk to us about the code of conduct, and how we should be paying attention to what the code of conduct says, instead of not paying attention, which we have a tendency to do. Allison?
G
Yeah, yo from the ombuds team; that was the yo from Beowulf. We've had a bad season this season with SIDROPS. I'm authorized to say this; we're normally extremely careful about confidentiality, but I wanted to say that we've agreed to let you know that we've had quite a few cases of people who feel concerned and uncomfortable with the kind of discussions that go on in SIDROPS. The ticket to doing the right thing here is to just come back to the code of conduct, over and over again, which says: be neutral, be equal, be respectful to each other. If you are saying something about somebody using the word "you", you're probably doing it wrong. And understand that the problems you're solving are very difficult.
G
So we wanted to make it clear that, without paying enough attention to the code of conduct, it's probably very difficult to solve the problems that you have, which are really difficult problems. It's also probably difficult to get new people involved, who may have solutions that you haven't thought of. So this is my pitch. We were invited by the AD, by Warren, to come and speak to you, and I've spoken several times now with the chairs about this too: go back and reread the code of conduct.
G
If
somebody
objects
to
something
you
say,
please
think
about
your
code
of
conduct
and
pull
back
from
the
way.
You've
said
it.
Please
try
to
lessen
our
caseload
from
cidrops,
so
I
don't.
I
can't
really
answer
any
questions
because
of
the
confidentiality,
but
that's
the
message
and
I
think
warren
or
the
chairs
may
want
to
add
something.
But
if
not
that's
the
pitch
warren.
D
Anything you want to add? Yep, I'd like to add something. Yeah, I mean, I must admit that... well, let me turn my volume up here. Yeah, I've never had code of conduct discussions with the ombuds team in any of my other working groups, so, you know, I would like to continue not having code of conduct discussions with the ombuds team.
D
I'm really glad that people are passionate about this stuff, and we have some strong personalities; it's just that sometimes the passion kind of overflows from "let's make this thing better" into "let's beat up the author." One of the things which I think is happening is that, in many cases, a lot of the people participating know the other people and think that they understand how their message will be heard. And even if that's true, people who aren't in that discussion see the sort of sniping and get concerned that they might also be attacked if they start participating.
D
So
you
know,
while
chris
and
I
can
call
each
other-
you
know
I
can
call
chris's
mommy
fat
and
he
can
say
I'm
ugly
and
we're
both
okay
with
that,
especially
if
you've
seen
his
mama.
It
drives
other
people
away,
and
so
you
know
we
need
to
keep
that
sort
of
thing
in
mind.
My
god,
I'm
glad
that
I'm
not
interested
near
where
chris
is
because
when
I
get
back
there's
going
to
be
a
beat
down,
I'm
sure,
but
yeah
I
mean
really.
We
need
to
be
a
lot
more
careful
with
this.
G
So, Warren: slightly less so. Let's try not to pull our punches too much; no, don't be aggressive. Be only as aggressive as necessary, only as strong as necessary, not aggressive. And I know that some people will disagree with that, but the judgment about how aggressive to be in the problem solving has been over the top for a while now. So I'd like us to let the strong personalities hear: you've been aggressive in ways that are, you know, too much "your mom is fat," and now we have to have none of that.
J
Of hours. Yep: go back, have a cup of coffee, read it again, and you may find that you may want to change a few words here and there and change the entire tone of the message. That has helped me every time. Every time I've sort of broken that rule, I've had to, you know, back up and apologize, and it's miserable; and every time I follow the rule, I've been very glad afterwards. So at least make the attempt, yeah.
D
Yeah. Also, you know, if the main discussions are between two or three people, and there are more than two or three emails in a day, you might want to sort of stop, take a step back, and ask: is this really the discussion we should be having now? If it's "you're missing a comma" or "it might be better if you had a dash here," that's great. If it's "your idea is the worst one I've ever seen,"
D
that's not okay. I think we've all got this; we all understand what's going on. And also, thank you very much, Allison and the ombuds team, for, you know, coming along and doing this, and also apologies to everyone that this was needed.
G
Okay, I'm gonna go, but I wish everybody luck, and we'll be continuing to monitor the cases. So please do us a favor: keep reading the code of conduct and thinking about how to be good to each other. Thank you.
D
And also, if anybody is feeling stressed or, you know, attacked or something (I will admit, I don't always keep up with all the mail), feel free to mention it to the chairs, and feel free to mention it to me. You know, if stuff's not going well, just speak up and let's get this under control. And I believe that the first slot is Mr. Job, with a long title which Natalie will read out, because I'm trying to find the slide.
D
L
Why not try sharing the pre-uploaded slides? What was wrong with those?
F
They are; so they are in the meeting materials, they all are, but they don't show up in the chair tool thingy.
D
L
RPKI Signed Checklists. This is work done together with Ben Maddison and Tom Harrison, and there are many other contributors who helped us develop the current specification. Next slide, please.
L
The current running-code state is that there are multiple implementations. There are two signers, one created by Ben Maddison, one created by Tom Harrison, and on the validation side there are also multiple implementations that demonstrate how these objects are to be decoded and validated. Next slide, please.
L
The last update on this was at IETF 111. Through the IANA early-allocation procedure, we received a code point to foster interoperability testing. By now that code point has been renewed, and the code point has been added to OpenSSL 3.0, which was released a few months ago, and starting in LibreSSL 3.4.0
L
The
code
point
is
also
available,
but
these
code
points
in
in
the
cryptographic.
Libraries
are
nice,
but
all
rpe
implementations
know
that
they
also
have
to
declare
the
code
points
in
their
own
rp
software,
because
you
cannot
rely
on
the
library
being
new
enough
to
support
code
points
like
this.
Unfortunately,
next
slide,
please.
L
There
is
example,
files
available
on
github
that
you
can
use
for
for
testing
your
own
encoder
or
decoder.
This
should
be
fairly
straightforward,
as
rsc
files
are
sort
of
a
mixture
between
the
rfc3779
extension
as
it
exists
in
the
ca
certificates,
and
you
can
take
some
inspiration
from
manifest
handling.
L
So,
if
you
glue
those
two
together,
you
you
easily
end
up
with
an
rc
capable
parser
next
slide.
Please.
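As a rough illustration of what a consumer of a decoded checklist ultimately does (a hedged sketch, not the draft's normative procedure; the file names and contents here are invented), the core check is comparing SHA-256 digests of local files against the signed filename-and-hash list:

```python
import hashlib

def verify_checklist(checklist, file_contents):
    """Compare signed (filename, hex SHA-256 digest) pairs against local data.

    checklist: list of (filename, expected_hex_digest) from the decoded object.
    file_contents: dict mapping filename -> bytes of the local copy.
    Returns a list of (filename, ok) results.
    """
    results = []
    for name, expected in checklist:
        data = file_contents.get(name)
        if data is None:
            results.append((name, False))  # file missing locally
            continue
        actual = hashlib.sha256(data).hexdigest()
        results.append((name, actual == expected))
    return results

# Hypothetical example: one matching file, one tampered file.
good = b"hello\n"
checklist = [
    ("a.txt", hashlib.sha256(good).hexdigest()),
    ("b.txt", hashlib.sha256(b"original\n").hexdigest()),
]
print(verify_checklist(checklist, {"a.txt": good, "b.txt": b"tampered\n"}))
# → [('a.txt', True), ('b.txt', False)]
```

The signature and certificate-chain validation that wraps this list is the part the draft actually specifies; the digest comparison above is only the final step.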
L
The expectation for the implementation report is that, per normative term in the draft, people indicate whether they implemented it or not, and if not, why not. The "why not" could be that you wrote a signer implementation which doesn't concern itself with actual validation; so it's all very context-dependent which of these fields you'd implement, and what values you'd fill in in this table.
L
The spec has been stable for quite some time; between version 5 and version 6 we essentially only bumped the version. I think there's sufficient running code to see that it works and is implementable.
E
Yo, a quick operational question, since I haven't been following this. I think this is a wonderful idea. I don't operate one of the caches that collects all this stuff and does all the nasty crypto to verify it. Is there any long-term concern about adding a large number of objects to the RPKI system, and impacts on the various applications that use it?
L
It
is
very
important
to
note
that
rsc
files
are
distributed
outside
the
global
rpk
repository
system
to
illustrate
what
that
means
exactly
roas
route,
origin,
authorizations
or
crls
or
manifest
files
are
distributed
inside
the
global
repository
system.
So
if
you
use
rsync
or
rdp,
those
are
the
files
you
you,
you
pull
into
the
system,
but
rsc
files
are
not
distributed
through.
That
means
they
are
distributed
in
a
one-to-one
fashion,
so
I
could
generate
an
rsc
file.
L
D
Thank you, and I'm next in the queue. What would be helpful is, at some point, somebody explaining to me the relationship between this and sidrops rpki-has-no-identity. It feels like they're closely related, and I'm going to have to explain it when it goes to the IESG. We'll do your question, and Rüdiger's, but then we'll cut the queue right after, yeah.
L
In short, the relationship has been noted in the RSC draft itself. The RSC draft references the no-identity draft, and the RSC draft explains that RSC files cannot be used to confirm identity.
L
All
it
does
is
it
confirms
that
somebody
has
possession
of
the
private
keys
and
the
resources
with
which
they
signed
are
subordinate
to
the
the
certificate
authority.
Okay,
so
from
my
perspective,
there
is
no
conflict
whatsoever.
I
think
everybody
is
on
the
same
page
and
and
it's
it's
explained
in
the
draft
itself,.
N
L
Yes: the only load on the global system is that, if you revoke an RSC, the serial is appended to the CRL of that CA. So per RSC that you're revoking, you're adding a few bytes to a CRL.
L
But then again, RSC files could be short-lived, where you don't want to revoke at all. This is something we'll have to figure out in the wild.
D
K
The file name for this one is Discard Origin Authorization.
D
O
So, a very brief bit of background, in case anyone is unfamiliar with the practice out in the wild of the Internet, in order to try and mitigate the effect of distributed denial-of-service attacks.
O
It's fairly common practice for operators to need to ask someone closer to the source of the attack to discard traffic on their behalf, because otherwise, by the time it's on-net for them, it has already overwhelmed links or devices on the path. Generally this is done by adding a special-purpose BGP community to a BGP announcement, and that community signals a request for the recipient of that route to drop the matching traffic on the floor rather than forward it towards the announcer of the prefix. This mechanism is the thing that we're trying to make a little bit more robust and secure through the use of DOAs.
O
Okay. So today, as I say, there's this fairly common practice that's been around for a while, which is inter-domain RTBH signaling, and over the last maybe three, four years or so it's become increasingly common for operators to run policies which involve dropping anything that has an ROV validation status of Invalid on the floor at every ingress to their BGP topology; and these don't play nicely together. Excellent.
O
The fundamental problem is that, by announcing an RTBH route for a victim of a DDoS attack, you're essentially completing the attack for the attacker: you are killing off the victim in order to mitigate collateral damage. As a result, you want to keep as much granularity in that approach as possible, and so usually these RTBH routes take the form of host-length prefixes.
O
Origin validation, on the other hand, has this concept of protecting against two types of mis-origination: the first and most obvious being the wrong AS number doing the origination; the second, and often more useful, being preventing prefixes longer than some upper bound from being announced on the open Internet. And there's a conflict here, because in order to have host-length or very long prefixes propagate throughout the Internet, one would need to create ROAs that permit very long prefixes to receive a Valid status.
O
In so doing, you effectively remove that second type of protection that ROV gives you, because you're essentially opening the door for any sub-prefix hijack to occur, for any address space for which you want to be able to use this mechanism. There are two common workarounds for this that exist today. Some people force users of this kind of a service to create ROAs with those very long maxLength values, which essentially turns off that second type of check and is not a good idea.
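To see why a very long maxLength defeats the sub-prefix protection, here is a hedged, simplified sketch of the ROV prefix and maxLength check (IPv4 only, origin-AS matching omitted, illustrative addresses; not a complete ROV implementation):

```python
import ipaddress

def length_ok(roa_prefix, roa_maxlen, announced):
    """Simplified ROV coverage check: the announcement must fall inside
    the ROA prefix and be no longer than maxLength.
    (The origin-AS comparison is omitted for brevity.)"""
    roa = ipaddress.ip_network(roa_prefix)
    ann = ipaddress.ip_network(announced)
    return ann.subnet_of(roa) and ann.prefixlen <= roa_maxlen

# ROA for 203.0.113.0/24 with maxLength 24: a /26 sub-prefix hijack
# fails the length check, so it cannot become Valid.
print(length_ok("203.0.113.0/24", 24, "203.0.113.64/26"))  # False

# Raising maxLength to 32 (to let RTBH /32s validate) makes ANY
# sub-prefix pass the length check: the protection is gone.
print(length_ok("203.0.113.0/24", 32, "203.0.113.64/26"))  # True
```

This is exactly the trade-off described above: the maxLength needed for host-length black-hole routes also whitelists every intermediate prefix length.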
O
Secondly, and this is what probably the majority of people do: they have a kind of a carve-out at the beginning of their routing policy which says, if this is carrying the RTBH signal, then I'm going to exempt it from the origin-validation-related policy up front.
O
The second is, as I say, the lesser of two evils today, but it's really quite problematic. Firstly, the fallout from abuse of an RTBH signaling service can be quite severe, especially where I sit in the topology as a transit provider: my customers are very often competitors of one another, and there is a very, very straightforward path to one customer black-holing the prefixes of another customer using this kind of mechanism, which is almost impossible for me to defend against today.
O
Without a mechanism like this, that is. Next, please. And there are other kinds of side issues as well. Whereas a ROA has meaning in a global, default-free-zone-wide scope, RTBH signals are typically used very, very locally: you're not expecting to advertise one of these host-length black-hole routes and have everyone on the Internet start to drop traffic. The intention is that, at most, the signal propagates one or two AS hops
O
away from you; and ROAs simply don't mean that, and have no way of communicating that kind of scoping.
O
The other problem is that we have no implemented, or even proposed, mechanism to provide secure attribution of who has added a community to a route. So in a situation where the origin is more than one hop away from you, it's entirely unclear, as a receiver, whether it was the origin, or their transit, or their transit's transit,
O
that decided this traffic should be falling on the floor, and that has substantial contractual implications, as you can imagine. And that first point is made a little harder, actually, because there's a well-known community defined in RFC 7999, which is a value that anybody can use to indicate these semantics without having to carve out a special community from their own namespace.
O
But what the old practice, before this came along, gave you was at least some indication of who the origin is trying to talk to: if they're using a community that I've allocated from my namespace, then I can interpret that as it being a request to me. Whereas if there's this general-purpose signal, then, if it's propagating beyond the neighborhood of the origin, it's difficult to know who should listen to it.

(Just a note: you've only got around two minutes.)
O
Okay. So the idea is essentially to follow most of the logic that exists today in origin validation, but to add some heuristics to the object that we create that allow receivers to know whether or not they should act on it. Let me skip forward maybe one slide, in the interest of time.
O
It's an RFC 6488-style signed object, very similar in structure to a ROA. It follows the ROA procedure of the prefix holder signing the signed data in the CMS, and the distribution mechanism is very similar as well. It will require an extension to RTR; it will be validated and processed on an RP, and that RP will be responsible for sending data to the router for use in routing policies. Next: this is what the content looks like. Next.
O
There's a version field. Next. The IP address blocks field is subtly different from what you find in a ROA: instead of having prefix and maxLength, you have prefix and a length range. The reason being, as I say... sorry, let me have a minute. If your prefix is a /16, you may want to accept RTBH routes for a /31 and a /32, but probably not everything in the middle. So you have a range.
O
So only if you have received the actual BGP announcement from one of the AS numbers listed in peer ASIDs are you expected to act on it. Next. And finally, there's a set of communities that the receiver of a BGP announcement can cross-reference against, and that tells the receiver whether what they think is an RTBH signal community is in fact intended that way by the originator of the route.
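Putting those fields together, a receiver-side check might look roughly like this. This is a hedged sketch with invented field names (prefix, length range, peer-AS set, community set); it is not the draft's normative algorithm:

```python
import ipaddress

def doa_matches(doa, ann_prefix, neighbor_as, ann_communities):
    """Heuristic DOA check: the announced prefix must fall inside the DOA
    prefix with a length inside [min_len, max_len], the BGP neighbor must
    be one of the authorized peer ASes, and the announcement must carry
    at least one of the listed RTBH communities."""
    net = ipaddress.ip_network(doa["prefix"])
    ann = ipaddress.ip_network(ann_prefix)
    return (
        ann.subnet_of(net)
        and doa["min_len"] <= ann.prefixlen <= doa["max_len"]
        and neighbor_as in doa["peer_asns"]
        and bool(doa["communities"] & ann_communities)
    )

doa = {
    "prefix": "203.0.113.0/24",
    "min_len": 31,
    "max_len": 32,
    "peer_asns": {64500},
    "communities": {(65535, 666)},  # RFC 7999 BLACKHOLE, for illustration
}
# A /32 RTBH route from the authorized peer, carrying BLACKHOLE: matches.
print(doa_matches(doa, "203.0.113.7/32", 64500, {(65535, 666)}))  # True
# A /28 inside the same space does not: length outside the 31-32 range.
print(doa_matches(doa, "203.0.113.0/28", 64500, {(65535, 666)}))  # False
```

The length range is what lets the /31 and /32 through while still refusing everything in the middle, which a single maxLength cannot express.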
O
Now, I use the term "heuristics", rather than kind of specific criteria, because none of this gives you absolute certainty that this path you're seeing in BGP absolutely is a black hole; we don't have the mechanics to do that today, either in BGP or in the RPKI. But it gives you a very much stronger hint than today's mechanisms do, and at its very worst it is at least as good as ROA coverage for the equivalent unicast prefixes.
O
Yep, I'm gonna skip over the BGP processing, because otherwise that's going to take a while, but the general idea is that you have a parallel status, which mirrors in some way the origin validation status, but the two are orthogonal to each other.
O
So this is fairly early stages still. I think the idea is mostly fairly well formed, and I don't think there's likely to be much change to the underlying idea, but the document itself needs a fair amount of work. One of the questions we had for the working group is whether this should stay as one document, or whether it should be split into parts, kind of like the origin validation series was, where we defined the object itself in one document, the validation process in another, and the RTR extensions in another. Still, I'm not sure I have a strong opinion on that, so I'm hoping for some feedback there. I'd also like to know from the working group whether we think this is a good candidate for adoption at this stage, or whether the working group would like to see it mature more as an individual draft and then talk about it sometime down the line.
E
O
So that's the purpose of the peer ASIDs field. The default behavior is that this will not allow transit for RTBH routes, but if you add one of your transit providers to that list of peer ASIDs, that's a signal to the receiver that you have authorized that transit and that it should be matched and accepted. So that is supported.
D
P
Hello, Ignas Bagdonas. Tonight, let's look a little bit into some practical experimentation with BGPsec: if I operate an exchange, I would prefer to see what happens if the current global routing system ran BGPsec. Next slide, please.
P
So this is an experiment: a simulated playground based on real-world data.
P
Based on the real-world data, for gathering the absolute numbers and the relative numbers of all sorts of identifiers, path lengths, distribution of prefixes, and so on. This was done as an instrumented implementation, mostly for measuring the performance and trying to get some data on why things work the way they work, and why we are getting such performance measurements.
P
This is a limited-domain, isolated environment which does not necessarily use what the BGPsec spec specifies right now in the RFC.
P
It was implemented to be friendly to the environment, in the sense of using vendor-specific code points, so there are no hijacks or other negative impacts. The main goal of this is to try to find out how this would work if we moved to BGPsec, and why it doesn't work as it is expected to. Next slide, please.
P
So, with plain BGP, there's a topology of 400 neighbors: some having full views, some having less than full views, with the distribution taken from the publicly available route collectors. So, 400 neighbors feeding into the route server, and then no policy, just best-path selection; everything is fed back. BGP does that in a minute and a half. BGPsec, for exactly the same topology, takes over half an hour.
P
That's not necessarily the nicest result; let's look at why. Next slide, please. This doesn't run in a vacuum: it runs on specific hardware platforms, and those put certain limitations on how things operate. It's one thing to write abstract code for illustrative purposes; the other thing is to write code which runs on real hardware, and your contemporary compute platform has plenty of raw compute capacity as such.
P
It
might
have
plenty
of
memory
capacity,
but
not
necessary
memory.
Bandwidth
and
memory.
Latency
is
certainly
an
aspect
to
keep
an
eye
on
vectorization
and
smd.
Wide
operations
are
a
general
trend
and
the
increase
in
scalar
platform
capacity
is
single.
Low
single
digit
percent,
whereas
the
increase
in
width
of
the
computation
is
in
orders,
sometimes
orders
of
magnitude.
P
If we look at the innards of BGPsec, there are two steps: receive things, hash the incoming parts to get the message to be signed, then sign it (in this case; for receive, it's verify), and then do the rest of the processing.
P
SHA-2 is hardware-friendly and, in general, a computationally light operation: it operates on fixed blocks, in four-byte words, and does rather light arithmetic, shifting bits back and forth and so on. The problem is that it touches memory, and touching memory is expensive; if you can avoid touching memory without a real need, you'd better do that. Signature generation involves much heavier computational operations, large-integer arithmetic and multiplication; it is computationally more expensive, but it doesn't need to touch the memory.
P
Therefore, overall, the longer the signed path is, the more time you spend on calculating the hash, and even before that, fighting with the memory layout; only then do you do the signing, or the verification in this case. Next slide, please.
P
What is this vectorization thing about? It's the simple idea that you have one set of instructions but operate on multiple streams of independent data at the same time. This is a perfect fit for hashing multiple Secure_Path segments while calculating and verifying multiple signatures at the same time: the operations for each SHA-2 block are the same,
P
just the data differs. So take the full received Secure_Path elements, feed them sequentially into different lanes for processing, and run the SHA-2 transform on them in parallel. It definitely works fine, provided that you can get your data into a layout that is friendly for this. Then, once you have calculated the hashes, feed them into your elliptic-curve stuff, and on the output you get the answer: valid or not valid.
P
Overall, the latency is marginally higher for this: you need to do some additional work, and some instructions are not directly one-to-one mapped as in the scalar world. But from the overall throughput perspective, you get a performance increase proportional to the width of the vector lanes. Next slide, please.
P
All right. So the problem, not with the font but with the more important things, is that the format of the Secure_Path message on the wire is completely not the one the hashing function is expected to operate on. There are two components in the secured path: the path part and the signature part. The total length of path plus signature for one hop is 100 bytes, but that's 6 plus 94,
P
and neither of those two is divisible by four. If you try to use the capabilities of your underlying platform, which is able to fetch only at four- or eight-byte granularity, you cannot. You can force it, but the end result is that you will lose far more than the potential gain out of all of this.
P
Therefore, the receive-side problem for BGPsec is that it has data on the wire which directly contradicts being able to have a performant implementation. Let's move to the next slide and see what we get there. Oh, the real one, for the transmit side.
P
BGPsec also signs the target AS number; that's a good thing. The not-so-good thing is how exactly that is done. If I advertise the same path and the same prefix to a multitude of neighbors, the stable part is the path itself and the prefix; what changes is the target AS number, the initial first four bytes, which are at the beginning of the block to be hashed.
P
That means that, starting from the first round of SHA-2 processing, the result will be different, and basically we'll end up needing to redo all the calculation, just for nothing. If, instead, that target AS number went at the back, the stable part could be precomputed and the intermediate state cached; and the longer the path length is, the higher the savings in this case would be. The other aspects are exactly the same: signature generation is computationally expensive, but it doesn't touch memory; therefore it's not the problem. Next slide, please.
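The precomputation argument can be illustrated with an ordinary streaming-hash API. This is a hedged toy sketch using SHA-256 over made-up byte strings, not the real BGPsec message layout: when the varying field comes last, the hash state after the stable part can be cloned once per neighbor instead of being recomputed from scratch.

```python
import hashlib

stable_part = b"prefix-and-signed-path" * 100    # shared across all neighbors
neighbors = [b"AS64500", b"AS64501", b"AS64502"]  # varying target-AS field

# Varying field FIRST (as on the BGPsec wire): a full re-hash per neighbor,
# because the very first block already differs.
digests_first = [hashlib.sha256(asn + stable_part).hexdigest()
                 for asn in neighbors]

# Varying field LAST: hash the stable part once, then clone the midstate.
base = hashlib.sha256(stable_part)
digests_last = []
for asn in neighbors:
    h = base.copy()          # cheap clone of the cached intermediate state
    h.update(asn)
    digests_last.append(h.hexdigest())

# The cached-midstate results equal recomputing with the field appended last.
assert digests_last == [hashlib.sha256(stable_part + asn).hexdigest()
                        for asn in neighbors]
```

The hash work over the stable part is done once in the second variant and N times in the first, which is exactly the saving described above, growing with path length.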
P
Back to the experiments. So, do some fixing here and there, rearrange some fields, do some other protocol-level changes, and call it magic: this can move into the order of five minutes for the same test environment overall. Next slide.
P
So does this mean that BGPsec is fundamentally broken? No, it's not fundamentally broken. Everything is fine with the security aspects; it's just that the current approaches to the wire format and some protocol mechanics do not correlate well with implementing those things in a performant way, and it's also a little bit disconnected from the realities of the current compute platforms. Next slide, please. All right, that's actually strong encryption, even for me; I don't remember what was written there.
P
So I think those were the questions: the questions which you wanted to ask but didn't manage to run to the mic for. Is this purely an implementation aspect; can a smart compiler fix all of this? For me, we are talking about data dependencies, not control-flow dependencies. A smart compiler is good at doing rather trivial things; for example, auto-vectorizing the round of SHA-2 is mostly feasible.
P
You just need to provide a little bit of help to the compiler. But certainly the compiler cannot do anything about on-the-wire data layout formats and, overall, the data architecture of your application.
P
Thank you so much. What can be done, then? I will return to that; thank you, Jared. If I rely only on the language aspect: no, you can make things worse, but not necessarily better. So what can be done to try to fix BGPsec here?
P
Right now we have, mostly, global deployment of zero instances of BGPsec version zero. If we define a new wire format which changes a few of the things: first, how a message is laid out on the wire, so that it is one consecutive block and not intermixed fragments; second, the algorithm identifiers. Right now they identify only the actual algorithms; they could possibly also identify the format of the message to be hashed, and possibly processed in other ways.
P
Making that in a more or less forward-compatible way, as much as we would need; those questions were discussed previously. Next slide, please, and let's see what we get in discussion. So that's the feedback from trying to experiment with what it would take for BGPsec to be deployed and used.
D
A
Oh, okay, a question for Ignas. We did some studies with caching the signatures that have already been verified.
A
So, during the signature verification on the update, you can cache segments of the AS path, the signatures that you have verified, and next time, when the same update or another update has a common AS-path segment with the previous one, you can make use of the cache. So that's another way of improving the performance. Perhaps you have thought of it.
P
Yes, I do. This is actually contrary to the recommended practices of using elliptic-curve signatures: you can do this only if your random number is stable, and that leaks your key. That's not the right thing to do at all.
P
Caching is certainly possible, and by rearranging a few things here and there you can cache; that's the whole point.
P
However, signature signing and verification for AS paths longer than, in this particular instance, four or five hops becomes less computationally expensive than calculating the hash, and that is the problem. So you are not limited by the performance of the elliptic curve as such; you are limited by the overall performance of the memory system.
L
Joe
snyder's
fastly,
you
ask
do
we
care,
I
can
indicate
just
like
an
iepg.
I
do
care
and
I
do
think
that
now
is
a
good
time
to
start
work
on
this.
I
think
version
zero
will
give
us
valuable
operational
feedback
on
how
it
works
in
the
world,
provided
that
bgp
sec,
router
key
publication
becomes
easily
accessible
to
operators
and
from
there
migrating
to
a
performance.
Enhanced
version
seems
a
very
logical
and
organic
way
to
further
the
development
of
this
protocol.
D
A
Thank you, Warren. So this talk is about ASPA verification procedures; we have considered some enhancements, and also the route server. There has been a very productive, creative discussion on the mailing list. I'm very thankful to Nick Hilliard for offering several very constructive comments.
A
I've had discussions about this in the past with Alexander, and also thanks to Ben, Jakob, Chunwon, and Jeff, and others, for participating in the discussions on the list. Next slide, please.
A
So first we'll summarize the working group discussions on the list, and based on that, we will look at a solution for the route server issue.
A
There has been prior work where we identified a shortcoming in the ASPA downstream procedure; that was presented a year ago at IETF 110, and based on that, there was working group consensus to update the algorithms to overcome that shortcoming.
A
So today, later on, I will describe the updated algorithms as they stand today. That includes the above fix from IETF 110; it includes the route server being properly accommodated; it takes care of some necessary special and corner cases; and the algorithm description is ready for updating the ASPA draft. Next slide. I'm showing here a couple of URLs for the working group discussion threads. Next slide. So, just a few basics about the route server: RS is a route server in this picture.
A
AS2, an IXP RS; the RS clients are AS1 and AS3. In the control plane, the non-transparent RS inserts its ASN in the path; the transparent one doesn't, and that is the common case. The non-transparent RS is a rarity, abnormal, or even an abnormality.
A
As you will see, we are not focusing the solution on the non-transparent RS; instead, we are focusing the solution on the transparent RS. In the data plane, the route server passes the next-hop attribute unmodified to its RS clients, so the data-plane connection between RS clients is a direct connection.
A
The RS-client-to-RS relationship is essentially like a customer-to-provider relationship, and the relationship between the RS clients, AS1 and AS3 in this example, is effectively a lateral peering relationship. Next slide, please. So, for ASPA-based route leak detection involving route servers, we solved the problem for the transparent RS, and it just so happens that the solution for the non-transparent RS comes with it, at no extra effort. Next slide, please.
A
So this is a preview of the general solution, with an example. Essentially, we recommend that the RS client should include the RS ASN in its ASPA.
A
The ASPAs take care of that. For AS4 to validate the update and determine whether it is a route leak or not, the ASPAs that should be in place are the three that I show in the middle: AS1 attests AS2 as a provider, and AS3 attests AS2 as a provider, so both RS clients essentially attest the RS as a provider. In addition to that, the RS itself registers an AS0 ASPA, and this recommendation is already in the draft.
A
With this set of ASPAs, whether AS4 receives the update with the RS ASN included in the path or not (in other words, whether it is a transparent RS or a non-transparent RS), in both cases AS4 is able to validate correctly and tell that, in this case, it's a leak. So by having this set of ASPAs in place, the correct detection and the correct operation of the validation procedure is possible. Next slide, please.
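The example just walked through can be condensed into a few lines. This is a toy illustration of the idea only, not the algorithm text from the ASPA verification draft; the ASN values, the data shapes, and the leak predicate are simplified assumptions of mine.

```python
# Toy sketch of the slide's topology, NOT the draft verification algorithm:
# AS1 and AS3 are RS clients, AS2 is the route server, AS4 validates.
aspas = {
    1: {2},      # AS1 attests AS2 (the RS) as its provider
    3: {2},      # AS3 attests AS2 as its provider
    2: set(),    # the RS registers an AS0 ASPA: "I have no providers"
}

def hop(customer, provider):
    """'unknown' when the customer published no ASPA, otherwise whether
    the customer attested this neighbor as a provider."""
    if customer not in aspas:
        return "unknown"
    return "attested" if provider in aspas[customer] else "not-attested"

def provably_leaked(as_path):
    """as_path is origin-first; flag the update when some hop in the
    claimed customer-to-provider chain is provably not attested."""
    return any(hop(c, p) == "not-attested"
               for c, p in zip(as_path, as_path[1:]))

# Transparent RS: AS2 is absent from the path AS4 receives from AS3.
print(provably_leaked([1, 3]))     # True: AS1 never attested AS3
# Non-transparent RS: AS2 appears in the path.
print(provably_leaked([1, 2, 3]))  # True: AS2's AS0 ASPA rules out AS3
```

In both cases the verdict is the same, which is the point of the slide: with these three ASPAs registered, the transparent and non-transparent cases validate identically at AS4.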
A
So the solution description is that each RS client registers an ASPA including the RS ASN in the SPAS; SPAS stands for Set of Provider ASes. In theory, it is sufficient that each RS client has an ASPA including just the ASNs of its providers other than the RS. However, some RS clients may not have any provider, so in that case it's good to have a general recommendation that RS clients should include the RS ASN in the ASPA. Additionally, including the RS ASN in the SPAS has diagnostic value for troubleshooting, etc.
A
However, Nick Hilliard offered a good suggestion based on which we don't have to do that; that complication can be avoided. The way it is done is that the validating AS, if it has the RS-client role, determines whether the most recently added ASN in the AS path equals the sender's AS number. In this case, the sender is the route server.
A
If that is not so, that is a confirmation that the RS is transparent. In that case, the RS client can simply add the RS ASN to the AS path, for ASPA verification purposes only, and then the downstream verification procedure can be applied. So there is no need to make a choice at the RS client between the upstream procedure and the downstream procedure. With this modification, or simplification, section 5.3 in the draft can potentially be deleted. Next slide, please.
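The simplification just described can be sketched as follows. This is a minimal sketch under assumed data shapes (a neighbor-first path list and plain-integer ASNs), not the wording that would go into the draft:

```python
def path_for_aspa_checks(as_path, sender_asn, receiver_is_rs_client, rs_asn):
    """Sketch of the transparent-RS heuristic described above.
    Assumed shapes: as_path is neighbor-first, i.e. the most recently
    added ASN comes first; ASNs are plain integers.  If an RS client
    sees that the sender (the route server) did not prepend itself,
    the RS must be transparent, so insert the RS ASN locally, for ASPA
    verification purposes only, then run the downstream procedure."""
    if receiver_is_rs_client and as_path and as_path[0] != sender_asn:
        return [rs_asn] + as_path
    return as_path

# AS3 receives, from the RS (AS2), a route that AS1 originated.
print(path_for_aspa_checks([1], sender_asn=2,
                           receiver_is_rs_client=True, rs_asn=2))     # [2, 1]
print(path_for_aspa_checks([2, 1], sender_asn=2,
                           receiver_is_rs_client=True, rs_asn=2))     # [2, 1]
```

Both the transparent and the non-transparent case converge on the same path for verification, which is why no upstream-versus-downstream choice is needed at the RS client.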
A
So now, quickly, the refined and enhanced ASPA upstream and downstream verification procedures, including the fix from IETF 110 (next slide, please) as well as some special cases pertaining to the presence of an AS_SET, which is already in the draft, plus some special cases that have to do with the length of the update, etc.
A
In terms of the description, this is complete, and it includes a number of corner cases that are necessary. It includes the treatment of the route server, as we just discussed a few slides ago. Next slide, please.
A
The downstream procedure is described in these two slides, and it takes care of the various corner cases, the special cases, and the route server, as I said. We have confidence that this works: a year ago several people reviewed it, and we have confidence that this is a good way to update the algorithm. Next slide, please. Similarly, the upstream procedure has also been enhanced, again including the special cases.
A
I will obviously not go through this stepwise, but the description is complete. We can take one more look at it, and it's potentially a good candidate, at least in this form; the wording may end up different, but essentially all the necessary ingredients for the procedures are described here.
A
So it should work pretty well for updating the draft. Next slide, please. At NIST we have been working on an implementation of everything (BGP route origin validation, BGPsec), and we have those implementations available in BGP-SRx. Included in that are also the ASPA procedures described here, except for the route-server-related parts, which are new. So this is the reference to our implementation.
A
There are also data sets available for testing. So that concludes my presentation. Thank you; happy to take questions.
D
O
Hi Sriram, I'm a little confused by the route server handling. Warren, would you mind going back to the example topology? That's the one. This is the case we thought was broken prior to this update. There is no difference from the perspective of AS4: if the RS is a transparent one, it can be ignored altogether, and if it's a non-transparent one, then it is indistinguishable from a transit provider. All that needs to happen in order for AS4 to correctly detect this as a leak is that AS1 needs to have created some ASPA, with any contents, as long as it doesn't have AS3 in it, and that's what the previous version of the algorithm said. I find the additional corner case a complication rather than a simplification, and personally I don't see the logic.
A
Thank you for the question. If you focus here, we are looking at it from AS4's point of view, and what you said is correct: AS4 doesn't need to know about the presence of the RS, whether it is transparent or not. All that it needs are the ASPAs that the two RS clients should have, with whatever providers they may show in the ASPA.
A
But if you look at it from the point of view of AS3, when AS3, an RS client, is evaluating an update, then it helps for it to see that AS1 has registered an ASPA including the RS ASN in it; that is one reason. So it is not necessary to include the RS ASN in the ASPA, like you said, because we are already assuming that the non-transparent RS is a rarity.
A
O
I understand the explanation. I think it's important to realize that AS3 knows that it's speaking to a route server. I don't think that having corner cases in the protocol helps anyone here; I think it's more complication, and the validation that AS3 applies can take its local knowledge into account. I think this makes the validation procedure harder to understand rather than easier.
C
So one hop away from the non-transparent internet exchange point, we cannot distinguish whether it is a route leak or a non-transparent internet exchange point; that's...
A
Sorry for interrupting; no problem. Thank you, Alexander. I just want to add to what you already said. I think you said it correctly: the non-transparent case may be extremely rare, but just in case the route server inserts its AS number and, for example, AS4 is the customer, as you said, then in order to be able to detect route leaks it is not a corner case; it becomes useful for that RS client to include the RS ASN in the ASPA.
A
I'm sorry, Warren, yeah. I would request Ben to put a message on the list, so we can understand whether we misunderstood him. Thank you.
D
L
If we look at the RPKI, although the standards are ten-plus years old, it's only in recent years that we've really grown through some teething pain. This image is a bit of a joke, but we've come a long way in the last two years. I know that almost all RIRs are now testing with multiple validator implementations before pushing code to production, and some RIRs have even moved to a 24/7 support model.
L
I think RP implementers have taken a much closer look at the original intentions of the design behind the RPKI data structures, and also, for instance on the BGP router side, a few terrible bugs were uncovered and subsequently fixed.
L
This slide shows some differences between the RPKI and the web PKI. In the web PKI, if you want to be a root CA, synonymous with the trust anchor operators in the RPKI, there are a number of things that happen in that ecosystem that we sort of skipped over, or are now slowly catching up to, in the RPKI ecosystem.
L
You
might
be
operating
your
route
for
multiple
years
without
any
subscribers,
so
the
analogy
to
the
rpki
would
be
without
any
subordinate
inr
holders
before
you
are
included
in
commonly
accepted
trust
stores
and-
and
that's
that's
kind
of
the
the
paperwork
side
of
things
I
would
say,
because
an
audit
report,
and
then
a
compliance
report
or
or
running
for
years
in
a
sort
of
dry
mode
is,
is
an
administrative
action
but
to
help
justify
the
trust
in
an
in
a
roots
in
in
the
web.
L
Pki
there's
an
additional
technology
that
is
used.
That
is
certificate
transparency,
so
other
reports
are
for
the
the
encounter
so
to
speak.
Certificate
transparency
is
there
for
computer
geeks
to
verify.
If
what
is
happening
under
that
root
actually
happens
next
slide,
please.
L
This slide contains a pointer to the last RIPE Routing Working Group session, where Martin Hutchinson from Google gave a very high-level introduction to what certificate transparency is, what the benefits are, and some pointers to how not just the web PKI but also other PKI infrastructures are reaching for certificate transparency. Reviewing that 20-minute video is worth your time. Next slide, please.
L
So, in order to kick off a certificate transparency project in the context of the RPKI, we need to map certain concepts from the web PKI to the RPKI, and use that to develop our own procedures and to make extensions to our own software.
L
In the certificate transparency world there is a role called the believer, and the believer is an entity that takes an attestation made by a claimant and verifies it according to the cryptographic procedures described in the specifications. In the web PKI world, the believer is the web browser; in the RPKI world, the believer is the relying party cache implementation, so the likes of rpki-client, FORT, or Routinator. Verifiers are entities that have a stake in the ecosystem. In the web PKI, this could be the owner of a domain name, say fastly.com; in the RPKI, this is the resource holder, so the holder of an AS number or a set of IP addresses. The verifiers want to know what certificates were issued covering "my resource" or "my domain name", which means they can verify which CAs exist that hold those resources as subordinates, and whether those are all under the control of the verifier, but also for security.
L
In the RPKI you have the trust anchor itself, which is usually an offline HSM, and a few intermediate certificates that hold the power for 0/0 in both v4 and v6, and then the chain of trust jumps towards the CAs that are heavily constrained by the RFC 3779 extensions. So the claimant, for the purpose of this presentation, is kind of the top of the trust chain.
L
Now, a very natural question is: why not use existing publication mechanisms like rsync or RRDP? We have to realize that both rsync and RRDP have been optimized for a very specific purpose, which is to bring the current set of objects as fast as possible to a verifier, for instance to the RP implementations.
L
An example of why RRDP might not be complete: although everybody knows that manifest numbers monotonically increase, between RRDP deltas you can in some situations observe gaps in that numbering scheme. If a user adds a ROA and deletes a ROA within a matter of seconds, the subsequent event overtakes the earlier event, and it's not worth sending out the earlier manifest because it has been overtaken by events.
L
The purpose of certificate transparency in this regard is to provide full insight into all CA certs that have been issued, which makes it a bit of a heavier machine. Therefore I think it's very good to have separate mechanisms: one targeted at getting files or objects very fast to the verifiers (to the believers, apologies), and then a secondary system that is designed to inform verifiers about all actions that transpired. Next slide, please.
L
The benefit to the community is that if we set up some kind of global RPKI certificate transparency system, we get auditable logs of all actions that the RIRs took. This means that resource holders such as myself can inspect exactly which cryptographic entities, at what point in time, had received what entitlements. So take the case of, for instance, RSC.
L
It's really nice to understand at what point in time anyone other than myself and my parent could issue RSCs signed with my resources, or, if there was some kind of administrative issue where for one reason or another my RPKI entitlements were revoked...
L
...when exactly was my ability to sign with those resources reinstated? I think that having this type of maximum granularity, of full detail, like the highest-resolution image that we can get of the issuance process of the RIRs, will positively raise the bar in the ecosystem.
L
Aspiring to implement certificate transparency forces people to take a careful look at their issuance process, to document and understand how exactly their procedures work, and through that process we may see some organizations optimize their processes. So I think it's good to have something on the horizon that, yes, is difficult, is not cheap, and will involve many people-hours, but the end result is, I think, a healthier ecosystem that is worthy of the trust of the believers.
L
I think it's not sufficient to say: "Hey, we are an RIR, you can trust our brand; we are trustworthy because we engage with the community, we listen to you." All of this is true, but in addition to that, I want to be able to audit those claims and see what the health status of a trust anchor operator truly is, because that provides the grounds to engage in conversation, to make retrospectives on incidents, and to talk about...
L
...why did this happen, and what can we do to prevent this next time? In the same way, I've seen that CT in the web PKI has brought numerous improvements to the operation of the CAs in the web PKI, to the point that CT and the web PKI are now so intertwined that the web PKI is fairly functional these days. So next slide, please.
D
L
I think if we can get to the point where CA certificates are carefully tracked, that would already be a big step forward. Now, who does this concern? As I mentioned, I think the RIRs are the prime candidates to provide a high level of transparency. Then CT log operators: some entities will need to set up services that can absorb information coming from the RIRs and publish it in immutable logs. And then, of course, verifiers, which could be anybody with a stake or interest.
L
Somewhat out of scope is delegated RPKI, or even RP implementations, because RP implementations are just believers. The second part of my slide is a call to look for interested people: step one would be to offer an internet draft that kind of maps out a plan for what CT is and how it applies to the RPKI.
D
Q
Okay, yeah. I like the idea of CT, and I've seen a lot of value in applying certificate transparency when you apply it to EE certificates, like the RPKI Signed Checklist that you mentioned, because it's very hard to observe all the objects that were actually published unless you have certificate transparency on identity certificates.
L
If I may respond to that: it would be very cool if EE certs can be part of the CT log infrastructure, and I'm not excluding that path, but to reduce the scope and get somewhere, I think it's great to start with CAs, and if we can get that working, we can maybe add more to it.
Q
Okay. I don't see much benefit in the split where, for a CA certificate, you submit it to the log and incorporate the SCT, and for an EE certificate you don't, but we'd probably actually have to prototype this to see how it works out. If you want the RPs to check certificate transparency, they will need to check the attestations that are in the CA certificates.
Q
This means that when you want to create a CA certificate, you need to get enough responses from qualified logs, at least in the web context, and that implies that log availability puts an upper bound on CA availability. More brittleness in the RPKI scares me a lot, being an actual CA operator of a real-world instance where that has a lot of impact. So how do you think about this risk?
L
There are lots of clarifications that need to take place. I don't consider myself a CT expert either, so we'll need to educate each other, and yeah, that means lots of talking.
H
In the web PKI, the end game of certificate transparency, as I understand it, is that if a CA really misbehaves, we just pull it from our trust store and we no longer trust the CA. What do you see as the end game for the RPKI? Because I believe there is currently no real alternative to the RIRs.
H
Okay, and is it instead your goal with certificate transparency to see whether an RIR misbehaves and then to pull them from your trust store?
L
The first goal is to engage with the RIR and confirm with them: hey, I saw this incident, can you provide me with an RFO? And then an RFO appears, and hopefully everybody learns. But if the same type of mistake repeats over and over, or if there are systematic issues, it could motivate some operators to perhaps temporarily, or perhaps permanently...
L
...no longer use a specific trust anchor. So the goal of transparency is, in part, to be able to hold people or organizations accountable, but distrusting a root is the end of the process; that is the death of the universe. The goal of CT is to avoid getting to that state, because we can learn from what appears in the logs.
R
Hello, Russ here. I have real problems with this work. When we started working on this, the IAB suggested that IANA run a 0/0 root and the RIRs be subordinate, and then, to accommodate easier transfers among the RIRs, each of them became equal roots for 0/0.
L
Thank you for your comment, Russ. I think I should clarify, and probably repeat this at many subsequent presentations: in the RPKI ecosystem there are 22,000 CAs. The ones to which I think certificate transparency applies are the RIRs that have the 0/0 certificates and their intermediate operational certificates. The moment the chain of trust bounces towards, say, an LIR, who indeed is heavily constrained by the RFC 3779 extensions...
L
...they can only shoot themselves in the foot. I want to know which other LIRs are able to sign with my resources, but beyond that I'm less interested. So I think in practice, in the RPKI ecosystem, CT will apply to maybe 15 to 25 CAs, but the CAs that I myself run as a resource holder are not of interest.
D
N
Nevertheless, the most valid thing to do is the activity of establishing tracking mechanisms and monitoring for what is in the RPKI, and figuring out what information and support should be generated.
N
Like your question: I am the holder of resources, and for a long time I have been thinking we need to establish some monitoring and things like that, including making it easy for resource holders to get independent signaling of what the global view of their resources is.
N
For many of the details, though, I think the web PKI is directing you onto bad tracks. The idea that we want to monitor this, and to figure out in detail what is necessary and useful there, that is valid work. So I don't want to dismiss this effort overall, but I don't think it is headed in the right direction right now.
O
A couple of things, and I'm not sure either of them is really a question. The first is: I think we need to do better as a group about distinguishing when we're talking about publication events versus signing events, because obviously they happen very close together in time, and usually by related parties.
O
For me, this is about seeing signing events which are not visible through any theoretical version of the publication system, not just the one we have now. I think we probably want something like this for the publication system as well, but those are separate problems. For example, the problem that you were pointing to is an important thing that I want to be able to see, but that's not a signing event that I'm looking for; that's a publication event that I'm looking for.
O
The second thing I think it's important to point out is: I have a very good working relationship with my local RIR. I know the individuals involved, and I trust that they are doing things with the right intent and are not trying to act with any malfeasance. But that's not necessarily where the chain of trust needs to stop. I also need to be able to demonstrate to some third party, when the RPKI eventually causes some substantial outage for one of my customers...
O
...and it will, it's just a matter of how and when and what that looks like, I need to be able to stand up in front of those people and demonstrate why it was reasonable for me to trust this system in the first place. Having things like some version of CT, and having the ability to demonstrate that the publication system was sound, all of that sort of thing allows me to make that argument better and allows me to use this in a more robust fashion.
L
So email me if you're interested in this type of work. This is at super early stages; no direction has yet been set, other than that I want more transparency in this ecosystem. Thank you.
H
Thank you. I think we are a bit short on time, so I will try to keep it short. My name is Koen, and I want to talk a bit about the RPKI off the beaten happy path. We know how the RPKI works when everything works, because it currently does. However, there are a couple of edge cases that can occur under the current standards which would probably cause some issues. I want to discuss five of them and basically ask for your input about what you think should happen in these cases.
H
Next slide, please. The first one is partial RPKI data. This is a fictional example; however, you can find similar examples in real life. We have CA0 at the top, and it has three children: CA1, which has a ROA for a /16; CA2, which has a ROA for the /8 and another one for a /16; and CA3, with CA4 below it, holding a /24.
H
The problem is that if, for some reason, CA3 becomes unavailable, then you have a ROA for the /8 and you don't have the one for the /16 that is a subset of the /8. Now a route that was valid becomes invalid, which is contradictory to the fail-open situation that we normally have with the RPKI.
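The failure mode can be reproduced with a toy origin-validation routine. The prefixes and ASNs below are invented stand-ins for the fictional example on the slide, and the function is a simplified RFC 6811-style check, not a complete implementation:

```python
import ipaddress

def rov_state(route_prefix, origin_asn, vrps):
    """Simplified RFC 6811-style route origin validation.
    vrps is a list of (prefix, max_length, asn) tuples."""
    route = ipaddress.ip_network(route_prefix)
    covered = False
    for vrp_prefix, max_length, asn in vrps:
        vrp = ipaddress.ip_network(vrp_prefix)
        if route.subnet_of(vrp):
            covered = True
            if asn == origin_asn and route.prefixlen <= max_length:
                return "valid"
    return "invalid" if covered else "unknown"

# Invented stand-ins: CA2 holds the /8 ROA, CA3 holds the /16 ROA.
complete = [("10.0.0.0/8", 8, 64500), ("10.1.0.0/16", 16, 64501)]
partial  = [("10.0.0.0/8", 8, 64500)]   # CA3 became unreachable

print(rov_state("10.1.0.0/16", 64501, complete))  # valid
print(rov_state("10.1.0.0/16", 64501, partial))   # invalid
```

With only the covering /8 ROA left, the /16 announcement flips from valid to invalid rather than to unknown, which is exactly the fail-closed behavior being described.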
H
And the question then, of course, is: what do you do? Do you then include the data from CA2? Do you use the cached version from CA3, if you have it? What do you do with data from CA4, if that's the child of CA3? Or do you just say, okay, if you're doing this, then you're so stupid that it's your own fault if anything bad happens? And then of course you also get the question of when, exactly.
H
When should the relying party actually report "ready" over RTR, that is, that it has tried to get all the data and perhaps failed for some of it? Next slide, please. Now, there are a couple of ways you could get into situations where you don't get all the data. Look, I can delegate my resources, and I can do that however I want. So if I am E in this case, I can create nine children that are all different publication points in different locations, and they can also delegate their resources.
H
They can likewise delegate to nine children again, and we can do this for eight layers. We have some limits to the depth of the chain, but this is mainly based on implementations; it is not in the specification. Already with the limitations that we currently have, I can create in the end about five million publication points, just by creating something that is nine wide and eight deep. The question that I then have is: what should RP software and operators do in this case? Should the CA prevent that from happening, and what should the CA actually do? And if you think you notice something like this, but it's actually just an honest structure that merely looks strange, how do we then deal with false positives? Next slide, please.
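The arithmetic behind the publication-point count can be checked quickly. This is my own back-of-the-envelope calculation, not taken from the slides, and the "about five million" quoted in the talk matches one of these counts depending on how the layers are counted:

```python
# A delegation tree that is nine children wide: at depth d below the
# starting CA there are 9**d certification authorities, each of which
# can be its own publication point.
per_depth = [9 ** d for d in range(1, 9)]
total = sum(per_depth)       # every CA below the starting one

print(per_depth[6])          # 9**7 = 4_782_969, roughly "five million"
print(total)                 # 48_427_560 CAs across all eight layers
```

Either way, a single resource holder can, within today's rules, fan out to a number of publication points far beyond what relying parties are provisioned to fetch.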
There's also something about file systems. This all runs on machines that just have actual disks, especially with rsync: we first get the data, and then we do something with the data. Now, we have some ways to exclude files we don't like, and those are being used, but we can create many folders, and the thing is that we can even do this.
H
I've shown an example structure that you could use, and that means that if you were to apply it to the RIPE NCC publication point from two months ago, you would get about 18 billion folders, which is probably more folders than your file system is able to manage. In the current implementations, what we see is that there is no way to restrict this in any way, shape, or form, which then means that your file system says...
H
..."okay, I cannot handle this anymore," and whatever happens then is left to what your RP implementation and the operating system do, but it really doesn't result in you having data that you can actually use. Next slide, please. And this one is also very simple: we have ROAs, and we can create ROAs. So I was just calculating, okay, how many VRPs can I create from my ROAs? Let's say that I have a /48, which is not out of the realm of possibilities.
H
Then I can create, give or take, 2 to the power 81 prefixes from that, because I can split everything up into everything, which is not useful in any way, but I can. And then I can authorize those for any of the 2 to the power 32 ASNs, which means that you get a very large number of possible VRPs that your router is probably not going to handle.
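The quoted figures can be sanity-checked with a little arithmetic; this is my own calculation, not something from the slides:

```python
# Distinct more-specific prefixes of a single IPv6 /48: one /48,
# two /49s, four /50s, and so on down to /128.
subprefixes = sum(2 ** (length - 48) for length in range(48, 129))
print(subprefixes == 2 ** 81 - 1)   # True: "2 to the power 81, give or take"

# ASNs are 32-bit, so each prefix could in principle be authorized
# for any of 2**32 origins:
possible_vrps = subprefixes * 2 ** 32
print(possible_vrps > 10 ** 33)     # True: far beyond any router's capacity
```

The point is not that anyone would emit all of these, but that nothing in the object formats themselves caps how many VRPs a single /48 holder can legitimately generate.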
H
However, RPs accept this in principle, and RTR accepts this in principle. So at which stage should you actually say: okay, we probably don't need all this data, we should probably leave something out? Should the router do that, or RTR, or your relying party, or the publication point, or the CA? Next slide, please. And then the last thing is: what if something like this happens?
H
If I do this just because I dislike you very, very much, then I can do this and target only you. Okay, if we do get certificate transparency, this becomes a lot more difficult for everything that requires signed objects, but the rsync one can also be done as a man in the middle, and you would stay out of the certificate transparency part. So my question is, okay...
H
...how can I effectively stop this? And if I cannot stop it, how can I report it to someone who can stop it, and how can I prove to this person or organization that this is actually happening and I'm not just making it up because I want to get someone else taken down? This is actually the one that I dislike most. I mean, there are malicious parties out in the world; I would rather there weren't, but they just are.
H
And so the question for that is: how and when can I prove who the perpetrator was? Is it even possible? And how can a CA know that what they are doing is actually viewed by me as something malicious? Next slide, please.
H
So those are the five points that I wanted to discuss, and my question to you is: should these problems, if they are problems, be dealt with? If so, how, and who should solve what? And if they are going to be solved, should that be done in a proactive manner or in a reactive manner? I would like to open the floor and the microphones for your comments. Thank you. Job first.
L
I don't want to go slide by slide to offer potential solutions or answers, but, for instance, I think that publish-in-parent really is a technique that helps the whole ecosystem, because one of the fears is that a sibling CA of yours can do something that somehow knocks you out, and, for instance, in the partial RPKI data example...
L
...a lot of those scenarios are alleviated if publish-in-parent is used. For this reason and others, we as an ecosystem should really strive to, well, not get rid of the delegated aspect, but encourage everybody so that the default setting is publishing in your parent, because it makes life easier. And if you publish in the parent, the parent can, out of band, on the layer where the child is sending it to the parent...
L
Api
apply
some
restrictions
like
with
email
in
the
smtp
protocol.
It's
not
encoded
that
I
can
only
send
you
up
to
10
megabytes,
but
if
I
try
to
send
you
a
10
megabyte
email,
your
email
server
will
say:
oh
it's
too
large.
I
I'm
not
eating
this.
Another
metal
surfer
might
happily
accept
it,
and
this
is
local
policy,
so
each
parent
repository
can
apply
local
policy,
as
is
appropriate
for
for
that
context,
that
environment
without
signaling
such
limits
through
the
rpki.
L
So the parents, I think, can not only help RPs against partial RPKI data, but also keep their children in check in a way that is lightweight, dynamically adaptable, and does not encumber the broader ecosystem.
H
Yes, thank you, Job. I think you make a good point; however, it implies that you get a first-class citizen, namely the child publishing in its parent, and a second-class delegated one. It is a solution, but I think that is a consequence of that solution.
S
Right, yeah, Tim here; my name was on the first page of the slide deck. One thing I wanted to say first is: I think this is an example of a number of issues that may occur, and a general question about how we should deal with them.
S
Secondly, to be a bit more specific: my feeling is that there are things to be discussed with regard to the suggestions that Job just made. I think there may be work there, but the current reality is that parent CAs can only be reactive; they can stop a delegated CA when problems occur. I think we should look into more proactive measures, and if we want to do things with repositories, that implies that we need to look at the publication protocol.
S
That also implies that we may want to think about what trusted repositories are and what not. So I think those are all very interesting things to think about, but currently, just to repeat, I think the reality is that we can only be reactive.
N
For the volumetric attacks that you were going through, well, okay, I think they will blow up much earlier than when they hit the routers. But okay, the thing that you really should be checking is your first slide, where you only told us the ROAs that certain CAs are supposed to publish, and you did not show what resource sets the CAs were holding. The trouble that you constructed depended on the unusual idea that the delegation of resources was not hierarchic, with a kind of overlapping between siblings. And yes, the monitoring and tracking system quite certainly should show that. And yes, the policies that this should not happen, that this should not be done when running your registry, have not been that obviously and formally phrased, but they are actually, I think, very well understood.
H
Thank you, Ruediger. Yes, I want to point out that this was based on a real-life example, which currently occurs primarily in the APNIC and IDNIC relationship. It doesn't happen a lot yet, but it happens in some places. So that's why.
N
That sounds like a callback to Russ's remarks about having a single root or multiple roots, and not having very clear and explicit formal policies about how the resources managed under the overlapping roots actually are defined and restricted.
O
I'll try and keep it as short as possible. I think your first example may well have occurred in the wild, but I think it's an example of, you know, people will be able to shoot themselves in the head using whatever technology we give them, and that example is an accident waiting to happen. For the rest of this:
O
I think that it's an important problem and we do need to be clear about what action we take. I'm not convinced that any of the actions we take against the kind of potential DoS vectors that exist should be changes to a protocol.
O
What I do think we need to do is be much clearer on, in particular, how RPs are dealing with placing limits on their willingness to traverse trees with lots of directories and lots of objects, and so on and so forth. It's not necessarily the case that they need to implement the same protections or make them as configurable as one another. But I think that there would be value, both for people potentially doing new implementations and for information sharing between implementations, or users of those implementations.
O
If there was some collaboration between RP implementers to document what recognized attack vectors there are and how they each deal with them, and the strategies that have been taken, the trade-offs, you know. I think that wouldn't be anything more than an informational document, but I think it would be a useful reference point. But I don't think that we should be constructing a system that is impossible to abuse, or that is capable of, you know, not breaking under any circumstances.
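The traversal limits being discussed can be sketched as follows. This is a generic illustration of the idea, not the behavior of any particular validator: the tree representation, the limit names, and the default values are all assumptions made for the example.

```python
# Illustrative sketch of a relying party refusing to walk a publication tree
# past a configurable depth or object budget. Real RP implementations differ
# in how (and whether) they expose such limits.

from dataclasses import dataclass, field

@dataclass
class Limits:
    max_depth: int = 32         # how deep into nested directories/CAs to descend
    max_objects: int = 100_000  # total objects we are willing to process

@dataclass
class Node:
    objects: int
    children: list["Node"] = field(default_factory=list)

def traverse(node: Node, limits: Limits, depth: int = 0, seen: int = 0) -> tuple[int, bool]:
    """Count objects, bailing out (truncated=True) once a limit is hit."""
    if depth > limits.max_depth:
        return seen, True
    seen += node.objects
    if seen > limits.max_objects:
        return seen, True
    for child in node.children:
        seen, truncated = traverse(child, limits, depth + 1, seen)
        if truncated:
            return seen, True
    return seen, False

# A small, well-behaved tree is processed fully; an endlessly nested or
# object-heavy one is cut off once the budget runs out.
small = Node(10, [Node(5), Node(5)])
count, truncated = traverse(small, Limits())
```

As the speaker notes, implementations need not pick the same numbers or make them equally configurable; the value is in documenting that such protections exist and what trade-offs each one makes.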
M
Jared Mauch. One of the things I observed in listening to this is how much it reminds me of the early days of Usenet news, which was a case where you would have these files and they would get transmitted over this protocol, and you would write a whole bunch of directories and files out to the file system. And one of the companies decided they wanted to build commercial software to run a Usenet news server.
M
They actually figured out that leveraging the underlying operating system was actually inefficient. And while I know the original implementations of much of this used things like rsync as the method to go and transfer the data, the data is still just data and the files are just files, and that doesn't mean that an implementation shouldn't perhaps look at: okay, do we abstract this out and store it in our own internal data store? That's, you know, what one of the companies did.
M
Even though it was commercial, and there were other, you know, open-source softwares competing with it as a result. And so I think here that that is very likely the case: that we should be looking beyond the individual file systems and looking at how that data store is actually held, you know, internally within the software, versus that. So I'm not sure what your thoughts are on that.
H
Well, I think... I agree, and for RRDP a lot of implementations already do that, but the rsync protocol makes it more difficult to achieve the same. So I believe that there are implementations that do that for RRDP but do not do that for rsync at the moment. And because rsync is still a requirement, you can always downgrade to rsync and then execute the same attack.
Q
So, in my opinion, we need some work on that in this working group, so that at least relying party instances can detect it when the administrative domain changes while traversing the tree. Because for some entities it may be logical that they have an extremely large repository, while for a non-RIR it's probably likely that their logical entity has an order of magnitude fewer objects in there. And yeah, I think we should continue investigating this issue.
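One way to picture this suggestion in code is below. It is a hedged sketch only: equating "administrative domain" with the publication hostname, and the particular budget numbers, are assumptions made for this example rather than anything the working group has specified.

```python
# Sketch of giving a relying party different object budgets per
# administrative domain, and noticing when a certificate tree crosses from
# one publication host to another. Hostnames and budgets are illustrative.

from urllib.parse import urlparse

DEFAULT_BUDGET = 10_000                        # e.g. a typical delegated CA
BUDGETS = {"rpki.example-rir.net": 1_000_000}  # e.g. an RIR-scale repository

def domain_of(uri: str) -> str:
    return urlparse(uri).hostname or ""

def check_crossing(parent_uri: str, child_uri: str, child_objects: int) -> str:
    """Classify a parent-to-child step in the traversal."""
    parent_dom, child_dom = domain_of(parent_uri), domain_of(child_uri)
    if parent_dom != child_dom:
        budget = BUDGETS.get(child_dom, DEFAULT_BUDGET)
        if child_objects > budget:
            return "over-budget"   # flag, rather than fetch, a suspiciously large child
        return "crossed-domain"
    return "same-domain"
```

The asymmetry in the budgets is the point: an RIR repository being huge is expected, while a small entity suddenly publishing RIR-scale object counts is exactly the signal an RP might want to react to.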
D
So, thank you everyone, and also thank you to the speakers and, you know, questioners for keeping things short. We did lose some time in the beginning, but we're going along nicely. And also thanks, everyone, for, you know, we're going to try to keep things civil, code of conduct, etc.
D
Again,
you
know
passion
is
great
poking
each
other
is
not
so
thank
you,
everyone
and
button.
Oh,
we
can
still
be
means.
Chris
yeah
he's
an
area.
Actually
I
mean
yeah.
We
should
yeah,
we
can
joke
it.
Chris,
okay,
so.