From YouTube: IETF114 SIDROPS 20220727 1400
A
Away like this, people can't hear you. If you do this and you're like... people can't hear you. Keep the mic in front of your mouth, please, and closer; like I was doing over there, this doesn't really help. It's got to be up, like... like this. You can just do, like, you know... all right, it's working!
C
And the video continues to work, for a little bit; actually, hopefully it continues to work for the whole thing. So hi everyone, I'm Warren Kumari. I'm the Ops AD and have been AD for this working group
C
for what feels like a very long time. I am still enjoying being Ops AD, but my term is up in March. I will probably run again, but what I would really, really like is for a bunch of other people to run. So if anybody's interested in knowing what the Ops AD role is like, what the time investment is, what the fun parts are, what the less fun parts are: please come along and talk to me, and I'm happy to provide background chat about what it's actually like, etc.
C
As I say, I've been doing it for a while, and I've been doing it for a while because it's actually kind of fun, some parts more so than others. But yep, as I say, come hunt me down; let's have a chat. I'm happy to provide any background, etc. And Natalie does... it still seems like the audio is working, yay.
A
Okay, I think the next person up is Igor. Who was here? Oh, still here, excellent. I think you don't have a phone with you, so I can do the slide clicking... it clicks.
E
Well, I'll introduce myself first while the slides are coming up. So, I'm Igor Lubashev. This is something that Doug and myself have been working on; Sriram is in India, but he is virtually with us.
E
So the problem is source address validation on the internet, or, basically: how do we stop bad people from spoofing IP addresses and doing bad things? The problem is not new; it's been around, well... been around forever. In 2000 we even had the draft that says "that's bad, we should fix it"; in 2022 we're still working on this problem.
E
The best we've got, the state of the art, is: look at BGP messages and infer some information from them. The algorithms that we have don't work other than in theory, because of real networks. I mean, the algorithms require no route filtering, no real traffic engineering... that can happen. Well, that's not the real world. So feasible-path uRPF has been with us from 2004, yeah.
E
So, if you find on the interface some route advertised by that origin AS: look at all your BGP announcements, and any route advertising... any prefix advertisement
E
that that origin AS originates should probably also be okay. And there is also one small paragraph in the draft, really an afterthought (and, talking to the author of it, it really is an afterthought), that says you could also look at ROA information and maybe augment your data set with that info. But anyway: it's much better, but it still has a bunch of problems, and here we are today. Next.
E
So this is a simple, very simple example of why 8704 is not working. You have AS1, which is a customer of AS2. AS2 is multi-homed, and it propagates the customer route to both providers, but its own route it only wants to propagate to AS3, for reasons. If AS4 is doing source address validation, AS4 will not see any prefix advertisement with AS2's prefix 2 on its customer interface, and therefore it will not accept packets from prefix 2.
E
The next idea is: well, we have other sources of authoritative information. So, for example, ROAs, and we're talking about ASPA, and somebody also suggested earlier this week: what about signed IRR data? And actually, why not? So again, these are all signals that were not designed for source address validation, so all sorts of hacks; but that's what we do on the internet. We try to do hacks that work.
E
Just first of all, I apologize for this slide. It's got a lot more graphics than are used at the IETF; I just pulled it from an internal presentation. But it's basically a real problem that I'm very familiar with, because it's a CDN that's trying to serve traffic on an anycast IP address, and the anycast...
E
Well, of course, the anycast home PoP will look at the incoming request and decide there is an edge that's better suited to serve the traffic from, so it will tunnel (some sort of IP-in-IP thing) the packets to the edge, and the edge wants to reply directly; that's where the bulk of the transfer is happening. So the only thing that's taking the detour through the anycast address is the headers and the ACKs.
E
Prefix four has a customer who is contacting a service at prefix three, which is advertised... which happens to land in AS1, which tunnels the packet to AS2, the edge. And there... AS2 wants to complete the connection, but it needs to source packets from prefix three, so that the end user sees a normal connection happening. And the question is: will AS9, this provider, allow such packets? And DSR is obviously a case... well, it's an important case: the CDN I just described, and so is mobile roaming and some gaming and security products.
E
So here is the crux of our proposal. We call it BAR SAV. So, BAR for BGP: we're still using BGP messages, we need to get information from there, but we also augment it with ASPA and ROA.
E
It is strictly an improvement on 8704, even in the way it just processes BGP messages. 8704 just looked at the origin AS number; BAR SAV is looking at the entire AS path, and we'll see how it does it, but it basically gets more signal from existing messages. But it also augments this information with ASPA and ROA.
E
The good thing, which we think is actually valuable, is that it requires no new protocol and no changes to existing protocols. (Oh, that's nice! Can we get the slides back at some point? ... Thank you.) And the fact that it requires none of that means that it's actually good for adoption, because the very, very first network that deploys something like this will immediately see benefits. So that's... what you're going to see is that early adopters actually get value. Next.
E
I mean, of course, it's way, way in the future, but basically the way it works is that you have two phases. Phase number one: use ASPA information to discover the customer cone, so all the customers of all of your customers, and their customers, transitively. And once you have the customer cone, which is the list of AS numbers, look at ROA information and find the prefixes that they've advertised... that they own. Those are the prefixes that are allowed on the interface. Next.
E
Thank you. The real world is very, very similar. It's still exactly those two phases: find the customer cone, and once you did, find the prefixes that those customers own. To find the customer cone, what you do is look at ASPAs, when available, and also look at BGP AS paths. And those can be received from anywhere: this customer interface, other customer interfaces, transits, even providers. And look at the AS path: consider every single AS number in there, and the previous one is the customer of the next one.
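E
As a non-normative illustration of the two phases just described, the cone-then-prefixes procedure could be sketched as below. The data structures are toy assumptions, not anything from the draft: `aspa_customers_of` inverts ASPA records (provider ASN to the customers that named it), `as_paths` are written neighbor-first with the origin last, and `roas` maps an ASN to the prefixes its ROAs authorize.

```python
# Sketch of the two-phase BAR-SAV idea described above (illustrative only).

def infer_edges_from_as_paths(as_paths):
    """AS_PATHs here are neighbor-first, origin last, so each AS is treated
    as a provider of the AS that follows it; collect (customer, provider)."""
    edges = set()
    for path in as_paths:
        for provider, customer in zip(path, path[1:]):
            edges.add((customer, provider))
    return edges

def customer_cone(first_as, aspa_customers_of, as_paths):
    """Phase 1: transitively discover customers, starting from the AS on
    the far side of the interface being filtered."""
    edges = infer_edges_from_as_paths(as_paths)
    cone, frontier = {first_as}, [first_as]
    while frontier:
        asn = frontier.pop()
        discovered = set(aspa_customers_of.get(asn, ()))
        discovered |= {c for (c, p) in edges if p == asn}
        for c in discovered - cone:
            cone.add(c)
            frontier.append(c)
    return cone

def allowed_source_prefixes(first_as, aspa_customers_of, as_paths, roas):
    """Phase 2: allow the prefixes that cone members hold ROAs for."""
    cone = customer_cone(first_as, aspa_customers_of, as_paths)
    return {prefix for asn in cone for prefix in roas.get(asn, ())}
```

With the slide's topology, a path `[3, 2, 1]` seen anywhere is enough to pull AS2 and AS1 into AS3's cone, and their ROA prefixes become acceptable sources on that interface.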
E
No-exports happening, things like that. Next. And the point is just to illustrate quickly how BAR SAV works with something like this. So, next slide.
E
So first, you start with the only AS number we know, which is the AS number on the other side of the interface that we are looking at. So that's AS3. And, well, there is nothing in the ASPAs that shows AS3 as a provider; fine. But there is a bunch of BGP prefixes we have whose paths contain AS3, so collect everybody before it, and those are the new AS numbers you've discovered. Next... on the next iteration, repeat.
E
In fact, it discovers that AS2 is part of the customer cone in a trivial way. It doesn't have to be directly connected; there could be another network between AS2 and AS4, it doesn't matter, it will discover it. And then it will see that there is a route received from the peer that shows AS2 as the origin AS, and prefix 2 will be accepted. Wow, we have it good. All right. So imagine my DSR slide from before... now, how can BAR SAV help with DSR?
E
Well, it's actually pretty simple. All you need is that the CDN owns both the edge AS number and the anycast home AS number, and the CDN owns all the prefixes... (next, two more times... yeah, good... one more... good. Thank you.) The CDN owns prefix one, prefix two and prefix three, so all it needs to do is publish a ROA that says AS2 is authorized to advertise prefix 2 and prefix 3. Now, AS2...
E
AS2 will never want to actually advertise prefix 3, but that's okay; it doesn't matter, it owns it. So it publishes the ROA, and therefore AS9, when it does BAR SAV, will find from the ROA that AS2 owns prefix 3, and it will allow it. Next. ASPA, obviously, can also help with route leaks; I mean, that's what it was designed for.
E
Now, the route should really be rejected for forwarding, because ASPA says that it's leaked, but it made it to AS4 somehow. BAR SAV, before it tries to infer information from the BGP AS path, would check that: hey, I'm thinking of adding AS8 as a customer of AS2, because it comes after AS2; but AS8 actually has an ASPA entry, and AS2 is not its transit, so it will reject adding this AS8.
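E
That ASPA cross-check on AS_PATH inference can be sketched with a hypothetical helper (not the draft's algorithm): `aspa_providers_of` maps an ASN to the provider set taken from its ASPA, and an AS with no ASPA on file cannot contradict the inferred edge.

```python
def may_infer_customer(customer_asn, provider_asn, aspa_providers_of):
    """Before treating `customer_asn` as a customer of `provider_asn`
    (because it follows it in an AS_PATH), consult the putative customer's
    ASPA: if one exists and does not list the putative provider, the edge
    is a leak artifact and the inference is rejected, as in the AS8/AS2
    example above. Illustrative sketch only."""
    providers = aspa_providers_of.get(customer_asn)
    if providers is None:   # no ASPA registered: nothing contradicts the edge
        return True
    return provider_asn in providers

# AS8's ASPA lists only AS64500 (a made-up transit), so AS2 is rejected:
aspas = {8: {64500}}
```

So a leaked path through AS2 never drags AS8 (or AS8's customers) into AS2's inferred cone.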
E
There's no reason to do that, and in fact it will never actually be needed in this case. But AS5 is a customer of AS8.
E
So basically, ASPA validation also helps BAR SAV to construct the correct list. Next.
E
In 2015 we looked, at Akamai, at several thousands of our PoPs, where we are, to see how many of them do any sort of source address validation, and we found that about 15% did. And just before coming here I pulled the stats, and now, seven years later, about 15% of the networks do source address validation. So I don't think it's purely economics.
E
It's mostly that... it actually is not working too well, the algorithms that we have. And we really expect that improved algorithms, with data produced by SIDROPS, will actually move the needle. Thank you.
B
Speaking as a working group member, I think it's a great presentation. Two points I will make. It'll be great to see how this solution solves the problem for iBGP cases: while you have depicted eBGP cases, most of the problems will probably also require you to have a control within eBGP, and it's quite possible, in those scenarios, that across iBGP your solution may not be honored or may not be enforced. What is critical is if we can sort of zoom into iBGP cases; that would be phenomenal, yeah. Thank you.
E
Thank you. Yes, I think it's very valuable to also look at iBGP. One of the things that we want to do on the internet is that, okay, if one network will not enforce it, maybe the next network should have a chance to still do it. That said, obviously, all this SAV filtering is best done as close as possible. But that's a good point.
F
Okay, since the chair seems to be distracted... Jeff Haas. I worked with Sriram and Doug on 8704, and I'm going to offer two observations for you that feed on the stuff from 8704.
F
The core enhancement from 8704 is, one part, that you can add stuff to your source address validation from additional BGP data that's not being used immediately for forwarding; so being able to see it from other sources is great. You know, the presentation you're giving is an excellent example of that; we talked about ROAs as one example in 8704, and I'm glad to see this going forward. But the second thing ties into the slide you're displaying here about the economics. I was like: why haven't we seen more of this stuff?
F
It's one part that the tooling for adding 8704 is not out there, but the bigger one is: source address validation in hardware is predicated on burning FIB resources to do the extra lookup, to see if you know that you can actually do this sort of validation. It's cheaper than doing firewalling, so from that perspective it's a wonderful thing, but it's still an additional cost in your FIB. In cases where you can have SAV covered by what you're using for forwarding, you basically get it for free, just at the cost of additional forwarding lookups.
F
Yes, and everything else that we're looking at here: as you start expanding these use cases, you're looking at effectively, you know, doubling or tripling the size of your FIB to be able to implement this functionality. So part of what you're fighting against is the economic cost of something that's there for security, that isn't actually selling... moving bits around; it's actually there to help you stop bits.
E
So, thank you. That's definitely... I mean, that's part of the point I'm making here: economics is definitely a driver, probably more for some networks than others. Some networks would actually benefit, and I mean, we do see that 15% of the networks chose to implement something, so there is some value there. But for anything we do, it is important that, yes, it's as economical as possible, especially for the smaller networks, because that's where source address validation is done best, and especially for the first movers.
D
Hi, Ben Maddison, Workonline. So, three separate things. Firstly, just to kind of continue from the point that Jeff was making: certainly all of that's true, but one caveat, at least speaking for the network that I operate, is that we typically run out of faceplate interfaces long before we run out of packets per second. So that interaction with the hardware is not necessarily a deal breaker, even for reasonably large networks that are kind of a similar sort of shape.
D
The second point I wanted to make is mostly just to reiterate what I mentioned in SAVNET, which is that using RPKI objects in this way kind of breaks the fail-open semantics that they have in their current use case, and I think we need to think quite hard about that. I think there are other use cases where we want something that's kind of like a sticky RPKI object.
D
I think that's going to be required if we ever want to replace things like the IRR, so I think there's other useful work that would need the same sort of thing. And then the third thing is your example of pruning
D
the customer cone using the ASPA, which I think is problematic semantically, because I can imagine a scenario where a customer wants to use a transit link purely in the outbound direction, never intends to advertise any inbound reachability over it, and therefore doesn't include that adjacency in their ASPA, but expects the return traffic for the outbound traffic to continue working. Using it to prune the source address validation filter in that way breaks that assumption, and I think that's going to be... it would be, if we wanted to do that,
D
I think we'd have to be very careful about how we document it, so that it doesn't end up being a nasty surprise for a NOC somewhere in the world at three in the morning. Because it's such a corner case, but it's a valid corner case. Kind of all of those, cumulatively, leave me feeling like, if we want to be using RPKI objects for source address validation,
D
I think I would prefer looking at defining new objects with those precise semantics, rather than trying to kind of shoehorn the stuff we've got already into this hole. I think it's a worthwhile thing to do, potentially, and I'd be happy to, you know, spend cycles on trying to get it done. But I don't love trying to reuse the existing object stuff.
E
Okay, thank you. So let me try to remember a few comments. So, the comment about implementation: yes, you're absolutely right, and basically, like I said in SAVNET, the devil is in the implementation.
E
As for the last comment: absolutely, having a purpose-built signal is much less of a hack than using another signal that was not built for the purpose. So I see it as a trade-off between doing one more new thing versus using things that already exist. Now, ASPA doesn't really exist yet, so it's an opportunity. So I agree. Thank you.
G
Yeah, yeah. I wanted to also continue on the point that Ben Maddison made (University of Atlanta, by the way), regarding what you expect the failure condition to be. Because we've seen that RPKI publication points don't have even 100% uptime availability at all times; there's probably some publication point out there that isn't quite working as it should, so you are not retrieving the ROAs, or, in the future, objects, from there. And what I've heard from you, or what I've understood...
E
Right, so that's the same comment about the implementation detail: how can it see... I mean, the first thing that comes to mind is that you cache your information, and you assume, if something disappeared, that it's still valid for, like, 24 hours or 48 hours; and only if it's still not there 48 hours later do you remove it, because you think maybe it's gone for a reason. I mean, that's just the first thing that comes to mind; maybe other people will come up with some more clever heuristics.
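E
One way to sketch that grace-period heuristic (purely illustrative: no validator is claimed to implement this, and the 48-hour figure is just the number floated above):

```python
import time

GRACE_SECONDS = 48 * 3600  # the 48-hour figure suggested above

class RpkiObjectCache:
    """Keep an object usable for a grace period after it vanishes from its
    publication point, dropping it only if it stays missing. A sketch of
    the heuristic suggested above, not mandated behavior."""

    def __init__(self, clock=time.time):
        self.clock = clock
        self.objects = {}  # uri -> (payload, missing_since or None)

    def observe_fetch(self, uri, payload):
        self.objects[uri] = (payload, None)  # seen again: clear the timer

    def observe_missing(self, uri):
        if uri in self.objects:
            payload, missing_since = self.objects[uri]
            if missing_since is None:
                self.objects[uri] = (payload, self.clock())  # start the timer
            elif self.clock() - missing_since > GRACE_SECONDS:
                del self.objects[uri]  # gone long enough: drop it

    def usable(self, uri):
        return uri in self.objects
```

The injectable `clock` is just there so the aging behavior can be exercised without waiting two days.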
H
Yeah, hi, Geoff Huston. In looking through this...
E
We can build something new, absolutely. But then, when we build something new, it has to have the property that it's cheap for early adopters, and early adopters get immediate value when they're the first network to adopt it. Otherwise you get problems; and, I mean, IPv6 kind of comes to mind, but let's not go there. So in your essence you're absolutely right, but we think that getting ASPA information and ROA information will enhance the state of the art we have and make it better. Will it get it perfect?
H
An overly restrictive view of the prefixes coming from this bounded set of networks will create filters that are too enthusiastic: there will be valid presentations of source addresses that aren't in the list that's created. The problem with going down this path is the operator pushback when an otherwise perfectly valid packet gets discarded because of some automated tool; that then becomes an operational cost, and that's the underlying concern.
H
When I review this work: you're starting from a small set that's constrained, rather than a large set that's maybe overly liberal but at least encompasses all of that connectivity bound. And, you know, there's more work to do here, obviously, but, you know, it's an interesting approach. It just struck me that policy and connectivity don't quite align the way you'd like them to for this work.
E
Exactly. So the goal, what this thing is doing on top of what exists, is that it's trying to expand the number of prefixes that it will find to put into your more permissive list. So it's trying to make the list more permissive. And will it get it perfect? No. And there was a suggestion: how about you create a new ASPA record that's specifically for this?
E
Maybe we could explore that. As far as the network goes, that was like: okay, somebody created an ASPA just because they know that they will never advertise to a particular provider, so they never listed them in the ASPA. Maybe now they will, because they think that this is... maybe it's a benefit; maybe it's a bad idea, because it goes against the original purpose of ASPA. So that's why I think another ASPA record type should be explored.
B
Hey, one quick question. Patel again, speaking as a working group member. You talk about the customer cone and the relationships there, but isn't this problem wider than that?
E
Yes, yes. So, when you're talking... when I said the algorithm works both on peering interfaces and customer interfaces: when you're looking at a peer interface, you're trying to discover their customer cone.
B
Got it. But it's quite possible that an ISP has not turned this on, and you're simply peering with that ISP, and you still have an attacker... and this attack can be generated from a service provider itself, right?
D
Then that kind of breaks this closure of the allowed-to-originate-traffic relation, and you end up not discovering potentially valid sources even under the expanded algorithm, because those paths don't show up. I think that's... but I don't think that's got anything to do with the RPKI-related stuff here; I think that's a fundamental problem with the algorithm's assumptions.
D
The adjacencies don't show up in the ASPA, because they're just peerings, and as a result that gets left out of the cone. I'm pretty sure that there is a gap there that we wouldn't catch. It only happens in the partial transit case, but I think it would have...
I
All right, thanks. So, to recap briefly on this: this document defines a new signed object, called a Trust Anchor Key object, or TAK object, and the idea is that it's used to signal to relying parties that the TA key or the root CA certificate URLs are going to change.
I
So the main goal here is simplifying key rollover. If we wanted to roll a key today, we would have to take our new TAL file, distribute it to all the different vendors, wait for people to upgrade their clients; if they're upgrading them, the upgrade process would have to involve getting that new TAL into place, and so on. So there are a few steps involved and a few things that can go wrong, or clients might have a custom TAL update process, maybe operating system packages or something like that.
I
Whereas, if we have a process like this, then it's all in-band, and even the people who aren't updating the trust store automatically
I
still get a signal that there's a change and they need to do something. So the more confidence we have in this process, the easier it is to do key rollover, and that helps with HSM vendor lock-in, which is the main goal here: not being stuck on one HSM indefinitely. And the secondary goal is the ability to update root CA certificate URLs, which just gives us a bit more flexibility around deployment.
I
So this was last presented at IETF 111, and one of the key feedback items was to look at other approaches to TA rollover and just see whether they might be relevant here. So one document that came up out of that was 4210, the Certificate Management Protocol, which has, as part of its TA transition process...
I
But there are a couple of things here which are different from RPKI. The first one is that TA distribution is out-of-band: clients might be using the old one or the new one, whereas with the signed TAL, TA distribution is in-band. And the other thing is that with CMP, or at least in that context, clients might receive certificates from other sources, whereas with RPKI it's all in the repositories.
I
So it's not clear that this model is applicable in the RPKI space.
I
The idea there is that you include the hash of the upcoming TA key in the TA certificate that you distribute, so that when a client sees that new TA certificate, it can compare the key with the hash and know that the new certificate is using the expected key, basically. Tim commented on this at the time on the list: one issue is that RPs may not ignore that extension, which could be a problem.
I
Another one is that if the new TA certificate replaces the old one, then there's no way to transition from previous TAL data once the certificate has been replaced, which is not ideal; it's good to have that transition available.
I
Another thing is that 8649 involves a TA certificate issued ahead of time that is presumably stable, whereas RPKI, because of the indirection of the TAL, supports arbitrary reissuance of that certificate. So if a model like this were to be adopted, there'd need to be additional guidance about what to do when the value changes, and so on.
I
So one of the key things from that document is this idea of an acceptance timer: a client sets an acceptance timer when it sees a new key, and it needs to continue seeing that new key for a period of time before it updates its trust store with that new key and starts relying on it. So that model has been adopted in the signed TAL document.
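I
The acceptance-timer idea could be sketched roughly like this: an illustrative state machine, not the procedure the draft specifies, with a 30-day period used as an example and "continuously seen" simplified to "the advertised successor has not changed since the timer started".

```python
from dataclasses import dataclass
from typing import Optional

ACCEPTANCE_SECONDS = 30 * 24 * 3600  # example acceptance period

@dataclass
class TrustStore:
    """Illustrative acceptance-timer model: a successor TA key must keep
    being observed for the whole acceptance period before the relying
    party rolls over to it. A changed successor restarts the timer."""
    current_key: str
    candidate_key: Optional[str] = None
    candidate_since: Optional[float] = None

    def observe(self, successor: str, now: float) -> None:
        if successor == self.current_key:
            return
        if successor != self.candidate_key:
            # new (or changed) successor: restart the acceptance timer
            self.candidate_key, self.candidate_since = successor, now
        elif now - self.candidate_since >= ACCEPTANCE_SECONDS:
            # seen for the full acceptance period: roll over
            self.current_key = successor
            self.candidate_key = self.candidate_since = None
```

Under this sketch, a briefly-injected attacker key never survives the window, because the legitimate key reappearing resets the candidate.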
I
Now, the acceptance timer period is 30 days, which is just arbitrary, and the idea is that it will help with some of the concerns that were raised around temporary key compromise.
I
As in: if an attacker has access to the key, even for a short period of time, they can transition everybody to a new key and a new publication point that are controlled exclusively by the attacker, and then it's game over. So this will help with that.
I
Something else that came up with -07 was the use of the term "revoked". The TAK object in -07 had this revoked flag, and when that was set, it was a signal to the clients to move to the successor key.
I
The problem is that it's not really "revoked" in the sense that that term is used in other contexts, because you still have to use that TAK object to get to the new key; so it's just not the right term to use. To address that, the TAK object no longer has a revoked flag; it just has a different model, and there's some advice in the document for TAs to reuse previous TA certificate URLs for new keys, once they've stopped maintaining the previous key.
I
The idea with that is that if an attacker gets access to a previous TA key and publication point, a client that connects to that will see a different certificate with a different key, and then, all things being equal, will think... will go and get new TAL data, and everything will be fine. It won't be possible for the attacker to exploit that client.
I
This is just a belt-and-suspenders type of thing, making sure that each publication point is operating in its expected capacity, yep, so that the successor key actually knows that it's operating as a successor for a specific predecessor. There's some discussion around the use of TAK objects as a substitute for TAL data, and after the cutoff there were some further updates around security and, I suppose, threat-model-type updates; but they didn't make the deadline, so they'll go into the next update.
I
Another thing that came up the last time this was presented was looking at the currency of validators, to see when it might be possible to rely on something like this in practice. So this is a graph of the validators that we see at our RRDP service, but probably every RRDP server sees something similar.
I
Each of the data points on the x-axis is from the last day of that month: the validators we saw on that day. On the y-axis we have the ASNs, and they're derived from the IP address information just by looking at BGP. So the clients, or rather the relying parties, that provide version information:
I
each of those is taken into account for one of the time figures in the legend on the right. Then there's rpki-client, which doesn't provide version information, unfortunately, so it has a separate section; and then there are other, unknown clients. The unknown clients actually exclude traffic that is coming from browsers, so yeah: it's software that appears to be connecting... well, is connecting to repositories for some reason, but it's not immediately apparent what that reason is. So, looking at this pessimistically:
I
if all those rpki-client instances are version 7.0, which was the first release of rpki-client that had support for RRDP, then it could be as much as 50% of the validator population that's more than 12 months old; more optimistically, it's still 15 to 20%. So it's a substantial number of validators that are fairly old, and then there are the unknown clients to consider as well. So that needs more looking into.
I
But in short: if we want to rely on TAK objects in practice, then after the RP code updates are done, there needs to be a fairly concerted effort around getting people onto versions that support those objects.
I
Okay, so, next steps: obviously feedback would be good; once that feedback is addressed, we will move to updating the prototype code, and then we'll go from there. And that's it, thanks.
J
Yeah, well, speaking... oh, that helps, okay. Well, let's move on to the next slide. I'm going to tell a short story, at least that's the idea. I wanted to talk about hosted RPKI services, delegated... I wanted to talk about delegated CAs and repositories, and then zoom in on a particular aspect of it. So, jumping ahead of it, I'll get there: I want to look at how we migrate from one repository to another.
J
Actually, let's start at the start. So, the different models that we see today: by far the most common model is what is here in the top left, which is where you have a parent and a bunch of child CAs in one system, provided by an RIR, and they're usually publishing all their content in a repository that is operated by that same organization, the parent organization.
J
Now, you can also run dedicated CAs, and then you have a choice of where you publish; at least, sometimes you do. You can run your own repository, for certain. A number of RIRs provide a service where a member of said RIR
J
can publish at them. And, we don't see this just yet, but it has been mentioned a few times in the past: the way the separation between the publication protocol and the delegation protocol (the provisioning protocol, if you will) is organized allows for essentially third-party content providers to also provide a service where people could publish. So I remember people saying: let's just publish at Google, Amazon, Cloudflare, I don't know, whoever; I'm not taking any sides, but that was an idea behind it. Moving on; next slide, please.
J
Yep. So, using a provided repository: well, what we have found is that in Brazil in particular, the RPKI uptake is all done through, well, people running their own system, because they don't have an option to use a hosted service; so yeah, maybe if they did, they would use it.
J
Okay. Now, the question I wanted to get to is: suppose I'm doing this, and suppose I set up my own repository, for example, but now I want to move to another repository. How do I go about that? Is that even possible? And we have actually implemented something (I think it needs improvement), and it's based on key rolls, an existing standard that we do have. So, next slide, I'll try to briefly discuss how key rolls work. Maybe... how are we for time?
J
Should I be quick, or... well, there's a lot of arrows here, and most of the arrows are actually missing, but it's to give an idea. Right before any key roll, the situation is: you get a certificate from your parent, you publish a manifest and a CRL, and then you have a bunch of objects, ROAs, etc., and you may publish CA certificates for grandchildren. They're all in one repository, right? So then the next phase is that you would... next slide, please.
J
Then the time comes to activate your new key, and what happens then, in the current key roll algorithm, process, or thing, is that you republish everything under your new key: the ROAs, I mean, and delegated certificates for grandchildren, etc., and you remove them from the list of objects that you publish under your old key. So there you just have a manifest and a CRL left over. You can publish this in one go, using a multi-element publication query, and this means that the relying parties will also see it as one delta.
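J
That "one delta" step could be sketched as the following bookkeeping (hypothetical URI naming and just the publish/withdraw sets, not the RFC 8181 wire format):

```python
def keyroll_delta(old_key_objects, new_key_objects):
    """Assemble the single multi-element publication update for the key-roll
    step described above: publish every product object under the new key,
    and withdraw everything under the old key except its manifest (.mft)
    and CRL (.crl), which stay behind until the old key is retired.
    Returns (publishes, withdraws) as sorted URI lists; the URI layout and
    extensions here are illustrative only."""
    publishes = sorted(new_key_objects)
    withdraws = sorted(
        uri for uri in old_key_objects
        if not uri.endswith((".mft", ".crl"))
    )
    return publishes, withdraws
```

Sending both lists in one query is what lets relying parties observe the whole roll as a single atomic delta.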
J
So they will see this as an atomic operation, more or less... well, this is true for RRDP; if you set up rsync with the right incantations, this is also true for rsync, yeah. And then, finally, you remove the old key: you ask the parent to revoke the certificate for your old key, and you can get rid of the manifest and the CRL. This can be done immediately after the previous step, so it may not even be visible as two separate steps to everybody.
J
Now, if we apply this to migrating repositories, what we've done — perhaps naively — is just, you know, let's just use a new repository for the new key, and we follow the steps as though it's a normal key roll. So here, in the off-green color, is the new repository: a new key, we created it, a new certificate we got, we published a manifest. We don't always wait 24 hours, to be honest, but we could.
J
The next step is undertaken manually, and that's just like any other system, you know: we activate the new key, meaning that we publish everything in the new location — and this, I think, we should change.
J
Next, please. Then, yeah, the final step is easy. Once that's all done, you remove the old key: you ask for a revocation of your certificate and you remove the objects associated with it. So that part should be relatively easy. One thing to realize as well: I don't think any of the relying party tools treat the AIA as other than informational — the back pointers, that is.
J
The Authority Information Access pointers, I mean — URIs that are included in objects; they point back to where a certificate is published. Now, if that changes here, then that might flag some things, or maybe we need to look at the provisioning protocol, RFC 6492.
J
All right, and I'll send one to the list soonish.
A
M
So I'm gonna briefly check it. Okay, yeah — I will try to share my screen, because what's uploaded is an outdated version of my slides; it's my fault. So if you can give me a chance to share my screen.
M
Okay, I hope you can see it. So, once again, hello everyone. My name is Alexander Azimov, and today I'm going to present a long-awaited update on the ASPA documents. Let's start with the profile document.
M
In the latest version, this scheme was updated: now the ASPA object carries not a list of providers but a list of lists, where each item contains a list of providers and may contain an address family. As far as I understand, the idea was to give a way to create a single object for both IPv6 and IPv4 policies.
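The two shapes under discussion can be contrasted with a toy encoding (the field names here are invented for illustration; the real objects are ASN.1-encoded and signed):

```python
# Toy encodings of the two ASPA profile shapes under discussion.
# Field names are invented for illustration only.

# draft -07 style: one object per (customer AS, address family),
# each carrying a flat list of providers.
aspa_v7 = [
    {"customer": 64512, "afi": "ipv4", "providers": [64496, 64497]},
    {"customer": 64512, "afi": "ipv6", "providers": [64496]},
]

# draft -08 style: a single object per customer AS carrying a list of
# provider entries, each optionally limited to one address family.
aspa_v8 = {
    "customer": 64512,
    "providers": [
        {"asn": 64496},                 # valid for both AFIs
        {"asn": 64497, "afi": "ipv4"},  # IPv4 only
    ],
}

def v8_to_per_afi(obj):
    """Flatten a -08 style object into the per-AFI provider sets that
    a router (or the -07 profile) would work with."""
    out = {"ipv4": set(), "ipv6": set()}
    for entry in obj["providers"]:
        afis = [entry["afi"]] if "afi" in entry else ["ipv4", "ipv6"]
        for afi in afis:
            out[afi].add(entry["asn"])
    return out
```

The flattening shows the equivalence of the two encodings for this example: the -08 object collapses into exactly the per-AFI lists of the two -07 objects.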
M
It's true that a significant number of networks have the same set of providers in IPv4 and IPv6, and the process seems to be converging towards this state. Still, there are networks that have different policies in different address families.
M
In my view, the same goal may have been achieved with fewer changes — just making the address family optional. Still, even this lightweight change is getting into the same trap.
M
I checked the thread this morning, and, if I'm not mistaken, we have a perfect split. Though the new format has interop testing on its side, I have a nasty feeling that the profile document and the new RTR document are running in different directions — and running surprisingly fast — and if we don't converge, the community will know whom to blame. So let's try to discuss this topic before jumping to the second part: how are we going to address this format change, with 8210bis in the RFC editor queue? Please advise.
D
Ben Maddison, Workonline. Look, I think I've probably said it better—
D
I think I've probably said everything that I need to on the mailing list already, but to reiterate: I think it's less work for the RP if there are fewer objects. I think that the overwhelmingly more common case is that networks have the same, or very close to the same, set of transits in both address families. And one of the things that I tried to point out in a recent email is that, for an operator that's used to a user interface that presents this kind of mental model — where the base assumption is that both address families are the same topologically — there's the day that that operator needs to go and read one of these objects to see what's actually being transmitted on the wire.
D
I think it's much less surprising if it doesn't diverge too much from that mental model. I think the other thing to point out is that there's been a lot of progress in new implementations over the last few weeks, and all of those are based on the version 8 profile. I think rolling that back is quite a lot of work for a fair number of people, so I'm quite strongly in favor of the version 8 change.
M
As I understand, it was coded for the previous version of the object.
D
So I don't think that there's — I mean, certainly the two formats, in the RTR protocol and the ASN.1, do diverge. I don't feel like that's a huge problem. I think that the overwhelming consideration for the RTR protocol is to make things as convenient as possible for routers to use in policy decisions, and I think the existing format is—
D
You know, mostly fairly well suited to that. Whether we do this translation when it arrives at the router or when it's being processed by the RP — I don't feel like that's — it's not a non-issue, but I think that the profile change can co-exist with the existing RTR spec pretty comfortably.
M
But don't you think that the debugging can become really complicated, when you will be at your router asking, "please give me the information about what the ASPAs are for the selected address family", and after that you may have problems matching that to the corresponding ASPA objects in the distributed database?
D
Yeah, so I think you're right, but I think it's inevitable that the kind of structure to use in provisioning tools and config management tools is inevitably different from what is convenient for the router to store in its internal data structures, and so that translation has to happen somewhere.
M
Reading the chat — and I still have a feeling that, as a group, we're not converging, because I see that Randy Bush is still opposing the change, and he's authoring the RTR document.
L
We have an internal implementation of the version 8 profile, and I'll kind of elaborate here on why I prefer the version 7 profile in hindsight. In the end, the user interface that people will present may or may not align very closely to the objects that people create; it's all about making sure that the right objects are created and that there's no confusion in these objects themselves. And what we realized after implementing this—
L
What we realized was that we could create a lot of edge cases in the content of the version 8 profile, where the content semantically overlaps and you need to take a union there, within the object, and covering this with a proper set of test objects — or test cases — was just very hard. And that's the main reason I prefer the version—
L
The version seven object, even though I really like the idea of having a single signed object per AS. I'm just afraid that covering all these cases, where v4 and v6 overlap or not, could lead to, yeah, interesting edge cases in implementations.
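The overlap the speaker describes can be shown with a toy provider list in the -08 style (field names invented here, mirroring the earlier sketch of the profile rather than its real ASN.1): the same provider can appear both AFI-limited and unlimited, and every implementation has to agree on taking the union.

```python
# A toy illustration of the semantic-overlap edge case described above:
# in a single -08 style provider list (field names invented), the same
# provider AS can appear several times, with and without an AFI limit,
# and implementations must agree on taking the union.

def provider_afis(entries):
    """Return {provider_asn: set of AFIs}, taking the union over all
    (possibly overlapping) entries."""
    result = {}
    for e in entries:
        afis = {e["afi"]} if "afi" in e else {"ipv4", "ipv6"}
        result.setdefault(e["asn"], set()).update(afis)
    return result

# Three entries for one provider, two of which become redundant once
# the union is taken — the kind of case that needs explicit tests:
entries = [
    {"asn": 64496, "afi": "ipv4"},
    {"asn": 64496, "afi": "ipv6"},  # together these equal "no AFI limit"
    {"asn": 64496},                 # semantically overlaps both above
]
```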
N
Kind of, yes. There is a possibility to add a third and a fourth AFI, and there is the complexity that Ties was pointing to, and, well, okay — just from that, I'm quite certainly not happy about moving to 08.
N
In my first understanding, I was expecting that the work on 8210bis would not have to be redone to fit this, which is adding more complexity to the whole system — because if you have different representations of the data structures for essentially the same content, that kind of means there are translations necessary, and that's more complexity in the damned system than if you can just straight copy.
N
And there's the question of whether we are actually delaying creation of the operational system.
N
Moving on to 08, I'm very unhappy about the added complexity.
N
Well, okay, ten years back we would have had a situation where the AFIs usually would not align very well, and if they align at this point in time, that does not mean it's going to stay this way. So, kind of, the argument is: well, okay, we are making people happier just for this time.
A
Okay — pop back into the mic queue when you're ready. But I think Tim, and then Warren and Ben.
J
Yeah, so, hi — yeah, Tim Bruijnzeels.
J
And I think it started with a desire to save space, even, and this AFI limit actually came to be as an additional thought in the process. So the first proposal that I did then was that we would have a single ASPA object with two distinct lists, one for each address family. Then the address family limit was introduced as a way to compress that even further, and then the idea came to be that this might actually reflect better what people want to do, all in all.
J
This can express exactly the same kind of data as, you know, you can express now with this 07 profile. So in that sense it is really a matter of preference, and I think it's something that we can, you know, keep on discussing until — well, how do they say — the cows come home. And I want to second what Rudiger said: I would really hate for that discussion to delay deployment of, and experience with, ASPA.
J
So that's what I wanted to have said. To comment on the data format versus 8210bis: I think there's prior art there. I mean, if you look at ROAs, you can have multiple prefixes in a single ROA object. You don't get this structure in your router: you actually have to validate multiple ROA objects and make a union of everything, and then that is what gets sent to the router. And similarly, whatever the profile is, this translation can happen at different levels.
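The ROA precedent mentioned here can be sketched like this (a toy model: real ROAs are signed CMS objects, and what actually reaches the router are the flattened validated payloads):

```python
# Toy sketch of the ROA 'prior art' described above: several validated
# ROA objects, each carrying multiple prefixes, are flattened into the
# union of simple (asn, prefix, maxlen) payloads sent to the router.
# Real ROAs are signed objects; these dicts are just for illustration.

def roas_to_payloads(validated_roas):
    payloads = set()
    for roa in validated_roas:
        for prefix, maxlen in roa["prefixes"]:
            payloads.add((roa["asn"], prefix, maxlen))
    return payloads

roas = [
    {"asn": 64496,
     "prefixes": [("192.0.2.0/24", 24), ("198.51.100.0/24", 25)]},
    {"asn": 64496,
     "prefixes": [("192.0.2.0/24", 24)]},  # duplicate collapses in the union
]
```

The structure of the signed objects disappears in the union, which is the point being made: the on-the-wire router format and the signed-object format already diverge for ROAs.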
J
It can happen in the RP, as it is currently done for ROAs already, and, yeah, it can also happen in the UI, where obviously I can present users with an interface that allows them to provide a common list, and then my software can work out how to make two distinct lists out of that — it's trivial for me to do that as well. So, yeah, I don't know if any of this is bringing it closer to a solution, but I guess my main message is that, you know, I just want this to work.
K
When I want to see what's being published, I don't look in the repository, I don't look on the wire — I look in my router, because that's where the rubber meets the road. Okay? What is in the router is going to separate v4 and v6, because that's what happens in routers — thank you, Steve Deering and Bob Hinden.
K
We wish they were aligned — we've wished they were for 20 years — they're not yet. This is especially seen in Asia, but it occurs here today. I'll ask: any operator in this meeting who actually uses multi-protocol BGP, so that they have v4 and v6 in a single configured session with their peer? Or is it like all the rest of us, that we have separate sessions for v4 and v6? It's not pretty; it's just reality.
C
Thank you. And Warren Kumari, relaying a comment from Rob — yeah, I can't read and look at the same time, so — oh, thank you, Rudiger. "I'm extremely uncomfortable with requiring transits on different AFIs to be in the same ASPA, when we damn well know that sometimes they are not." Maybe I misunderstood the question — the chair is looking confused. I can read that again.
D
Just the last thing — mostly in response to what Rudiger was saying: I don't think that we're arguing here about more or less complexity in the system as a whole; I think we're mostly talking about where that complexity should be dealt with. I don't think that any of this needs to be a showstopper, and my priority in all of this really is to try and get some running code out the door and into production sooner rather than later.
M
So I'm not sure that we have reached consensus here, hopefully. I believe that getting back to the 07 format can simplify the process; if we want to move faster, maybe it's the best way, especially taking into account the comments of one of the implementers, who was saying that there is a huge amount of tests and the format is getting complicated.
M
I'm not one of the implementers, so I can't comment further, but my personal view is that 07 is simple, and so it can fly faster than 08. Okay, but nevertheless, let's keep this discussion on the list; at least I will try to summarize it after the meeting ends. And as I have only 15 minutes left, let's move forward to the second document. It's about verification, and it has also suffered a lot of changes.
M
Let's see how these indexes help to detect problems for prefixes that are received by a provider, a route server, or a route server client. The rule is very simple: the invalid index defines the length of the first upstream segment, and, in the case of a correct path, it should be equal to the length of the AS path.
M
From this we can get a simple rule: if the invalid index is less than the length of the AS path, it's a route leak. It's important to note that leak detection at an IX — by a route server or a transparent route server client — is not a special case anymore.
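One possible reading of these index rules, sketched in Python — this is my own interpretation of the talk's terminology, not the draft's normative algorithm; `aspas` maps a customer ASN to the set of provider ASNs it has attested, and an ASN absent from `aspas` has no ASPA at all:

```python
# A sketch of the upstream check described above — an interpretation of
# the talk's "indexes", not the draft's normative pseudocode.

def hop_check(aspas, customer, candidate_provider):
    """'provider' / 'not-provider' / 'no-attestation' for one hop."""
    if customer not in aspas:
        return "no-attestation"
    return "provider" if candidate_provider in aspas[customer] else "not-provider"

def upstream_indexes(aspas, as_path):
    """as_path runs from the receiving AS's neighbor down to the origin.
    Walking up from the origin: the invalid index counts ASes before a
    hop is provably not customer->provider; the unknown index stops
    already at the first unconfirmed hop."""
    hops = list(reversed(as_path))            # origin first
    invalid_index = unknown_index = len(as_path)
    for i in range(len(hops) - 1):
        state = hop_check(aspas, hops[i], hops[i + 1])
        if state != "provider" and unknown_index == len(as_path):
            unknown_index = i + 1
        if state == "not-provider":
            invalid_index = i + 1
            break
    return invalid_index, unknown_index

def verify_upstream(aspas, as_path):
    invalid_index, unknown_index = upstream_indexes(aspas, as_path)
    if invalid_index < len(as_path):
        return "invalid"    # a route leak is provable
    if unknown_index < len(as_path):
        return "unknown"    # a leak cannot be ruled out
    return "valid"
```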
M
In the profile document, we added that, if a route server is not transparent, it must be added to the list of providers. With this, all parties at the IX are entitled to use the upstream verification procedure that we discussed just above. Now, detection of problematic prefixes that are coming from providers.
M
The correct downstream path may contain an upstream segment and a downstream segment; the invalid and reverse invalid indexes define their lengths, respectively. So, to detect route leaks, we need to check that the sum of invalid indexes is less than the AS path length. For me, it looks fairly simple. Now, let's discuss the unknowns. In the previous versions of the document, the unknown path was defined as a path that has autonomous systems that don't—
M
That is, that don't have an ASPA record. With comments from Sriram and others, the definition was transformed into the next one: the unknown path is a path that may have been leaked. And it proved also that the detection of unknown paths, under this definition, is very similar to the detection of route leaks. We again define two indexes: the unknown—
M
We also define a reverse unknown index, for the reverse AS path. So, the path may be leaked if there is enough space for a leak to happen. In the case of routes received by providers, by peers, or at an IX, it means that the unknown index should be less than the AS path length, and a very similar equation we get for routes received from providers.
M
If the sum of unknown indexes is less than the AS path length, we can't guarantee that the prefix was not leaked. Now, the overall rules. For providers, peers, and IXes: if the invalid index is less than the AS path length, it's invalid; if the unknown index is less than the AS path length, it's unknown; otherwise, it's valid. For the downstream—
M
For the downstream, the same thing: if the sum of invalid indexes is less than the AS path length, it's invalid; if the sum of unknown indexes is less than the AS path length, it's unknown; otherwise, it's valid — no surprises.
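Those downstream rules can be sketched the same way — again an interpretation of the talk's description rather than the draft's text, and the exact off-by-one boundaries depend on how the draft defines the indexes; `aspas` maps a customer ASN to its attested provider set:

```python
# A sketch of the downstream rules above. A downstream path is an
# upstream segment followed by a downstream segment, so indexes are
# computed from both ends of the path and their sums compared with the
# AS path length. Interpretation of the talk, not the draft's text.

def segment_indexes(aspas, hops):
    """hops[0] is the end the segment climbs from. Returns (invalid,
    unknown) indexes: AS counts before a provably-wrong, respectively
    unconfirmed, customer->provider step."""
    n = len(hops)
    invalid = unknown = n
    for i in range(n - 1):
        attested = aspas.get(hops[i])
        confirmed = attested is not None and hops[i + 1] in attested
        if not confirmed and unknown == n:
            unknown = i + 1
        if attested is not None and hops[i + 1] not in attested:
            invalid = i + 1
            break
    return invalid, unknown

def verify_downstream(aspas, as_path):
    """as_path runs from the receiving AS's neighbor down to the origin;
    routes from a provider may legitimately go up and then down."""
    n = len(as_path)
    inv_up, unk_up = segment_indexes(aspas, list(reversed(as_path)))
    inv_down, unk_down = segment_indexes(aspas, as_path)
    if inv_up + inv_down < n:
        return "invalid"
    if unk_up + unk_down < n:
        return "unknown"
    return "valid"
```

Note that a valley-free path over a transit-free peak still verifies even when the peak itself has published no ASPA, since the two segments together cover the whole path.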
M
This should be applied without regard to where the IX is placed — at the beginning or in the middle of the path. The implementation of the ASPA logic, with corresponding unit tests — the current implementation — you can find on GitHub. So, what do I need? I need input on these questions. I need volunteers who want to read the document. I need volunteers who want to code the ASPA logic, to check the specification.
O
Right, so I just want to make a comment that, when you have a provider that has no providers — like the Tier 1s — we have said in the draft that they will register an AS0 ASPA. That is fine.
O
I think we have also said that IXes — or the route server ASes — will also register an AS0 ASPA. That's also fine. You didn't mention it here, so I thought it worthwhile mentioning those. But one more, a little bit more tricky, thing that is not in the draft yet — but I think we perhaps should discuss it, between you and me at least, and include it in the draft — and that is about if you have a transit provider who happens to be present at an RS, at the route server, as a client.
O
So it's a Tier 1, a transit provider, and it happens to be present at an RS as an RS client. In that case, they should register an ASPA with the RS AS as a provider, just like any other RS client.
M
O
Okay, good, yeah. So the Tier 1s should not be misled into thinking that they just need to register an AS0 ASPA and they are done. They should be sensitive to the fact that, if they are present at a route server as a client, they should definitely include the route server AS in the ASPA. If you think the draft is clear about that, it's okay; if not, we can talk about it and perhaps put in a word to be sure that people understand that.
M
If you find that something is missing in this document, please send it in. You know, I'm trying to do my best to carefully read all the comments and push them into the document.
O
Sure, I'll help you. And just a little mention of the use of the upstream verification for the route server client: I have some examples where that doesn't seem to work correctly, but again, that is too complex to discuss here. We will discuss it between yourself and myself, and then we can take it to the mailing list if needed. I just wanted to mention that. Thank you.
D
Yeah — Ben Maddison, Workonline. So it's — it's certainly true that a transit-free network at a non-transparent IX route server would need to include that route server as one of its providers, but that is such a vanishingly unlikely scenario to actually come up in the real world that I really wouldn't call it out explicitly and waste prose on it.
D
I think that, in the interests of simplicity, what the document should do is emphasize the fact that a non-transparent internet exchange route server is just a transit provider. It just happens to forward on MAC addresses and not IP headers, but in any way that this document cares about, it is just a transit provider and should be treated indistinguishably from that case.
M
I think, if we are digging into the details, there is only one thing that is different. You are speaking about route server behavior — because if it is transparent, as the specifications suggest, we—
O
The only other comment is that it appears to me that, even if the route server is transparent, it may be worthwhile for the client to register an ASPA including even the transparent IX in the ASPA.
O
Yeah, yeah — again, in my mind, it helps to have the algorithm at the route server client.