From YouTube: IETF95-SIDR-20160404-1400
Description
SIDR meeting session at IETF95
B: Is there anyone willing to take minutes? We can't proceed without that. Anyone at all, please... oh, good. Thank heavens, Sue has done this. This is the fourth time in a row; somebody else has to be able to do that. There's an Etherpad for collaborative work; if you want to indulge in that, that would be helpful also. So please: having more than one minute-taker, that's also good. And we need a Jabber scribe. Please, a Jabber scribe, somebody. Somebody, please, begging, come on.
B: So, going through the working group status: the AS migration draft you may have seen is in IETF last call, so this is your last chance to say something if you didn't do it in the working group last call. The algs draft: the working group last call ended, and it's been waiting on 6485bis. The algs draft actually had exactly the same language, derived from RFC 6485, that the IESG objected to, and RFC 6485bis has changed that language.
B: There's working group consensus on that language, so I will have the algs draft author just replace the objected-to language with the new language. The ops draft has had no activity. The overview draft has actually expired; its working group last call ended and it's waiting on the protocol draft to progress. PKI profiles had some brand-new text, so we'll have to make sure that text gets consensus. The protocol draft's working group last call ended; it seems to be in good shape, no need for concern there.
B: It should finally be ready for progression. The rollover draft: new version, no real changes, more just a refresh and some updates of references. The delta protocol we're going to be talking about today. Origin validation signaling: working group last call ended, and then there was a request from someone to add a small addition; our group and IDR agreed to it, so that's ready for progression.
The use cases draft: new version, again just a refresh; not really certain where that's going. The publication draft finally has seen some new activity, and that's good to see. 6485bis: the working group last call ended, got a small number of responses but no objections. The topic of the bis has never been a problem, there's been no controversy in the group, so in our opinion that's ready to progress.
B: RPKI tree validation: a brand-new draft in the working group, and we're talking about that today. 6490bis has been published, yay. 6810bis, this is the RPKI-to-Router protocol, and publication is requested for that one. RPKI validation reconsidered: we have a brand-new editor, Tim Bruijnzeels, who is going to be talking about that today. RPSL sig: another publication requested. Router keying had a new draft last time.
B: I advise people, especially those of an ops kind of viewpoint, to take a look at that and see if that draft says what useful information you would need for keying routers in BGPsec. SLURM was a new draft at the last meeting and is still active, and two drafts are kind of dormant and have been dormant for a long time.
So we have 26 RFCs, one of them brand new; nothing in the editor queue or in IESG processing, but one in IETF last call, publication requested for two more, and past working group last call for five, three of which are just awaiting write-up; hopefully by Wednesday I will have that complete. Two drafts are expired and one is newly accepted, for a grand total of 18, or 19 if you count the overview draft, which is expired. So today we have Tim Bruijnzeels talking about the RRDP protocol.
B: Oleg talking about the RPKI tree validation draft, some discussion of validation reconsidered with our new editor, and a trust anchor applicability statement from Andy Newton, with Carlos also very much involved in that and in a bunch of the other RIR work. On Wednesday we have a lighter agenda: Thomas King is going to talk about route server use of origin validation signaling, and that is going to be a remote presentation.
C: The second topic I will talk about a bit more later. There is a rearrangement of the text without any significant changes, and yeah, that's basically it. I don't think we changed much in the implementation there, apart from this second topic, where we say that an implementation is now allowed to aggregate updates from several deltas into one delta file; more about that later. And so, here.
C: We'll start with a recap of RRDP, so that we are on the same page with everybody for the next things. What is RRDP? In essence it is basically three types of XML files. This is the notification XML, probably not very readable, but it is an XML file, and it has links, in the form of URLs, to the snapshot file, which is also XML.
C: The snapshot describes all the objects in the current state of the repository. The notification also has references to a bunch of delta files, and every delta file is XML itself as well, but that one contains both publish messages and withdrawals, so it represents a change from one version of the repository to the next.
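As a rough sketch, assuming the element names from the delta protocol draft (the session id, serial, URIs and hashes below are made-up placeholders), this is roughly what a notification file looks like and how a client might read it; the referenced snapshot and delta files carry the publish elements (and, for deltas, withdraw elements) with base64 object bodies.

    import xml.etree.ElementTree as ET

    # Illustrative notification file (all values are placeholders).  It names
    # the current session and serial, points at one snapshot, and lists the
    # deltas that are still available for download.
    NOTIFICATION = """\
    <notification xmlns="http://www.ripe.net/rpki/rrdp" version="1"
                  session_id="9df4b597-af9e-4dca-bdda-719cce2c4e28" serial="42">
      <snapshot uri="https://rrdp.example.net/42/snapshot.xml" hash="ab12"/>
      <delta serial="41" uri="https://rrdp.example.net/41/delta.xml" hash="cd34"/>
      <delta serial="42" uri="https://rrdp.example.net/42/delta.xml" hash="ef56"/>
    </notification>"""

    NS = "{http://www.ripe.net/rpki/rrdp}"
    root = ET.fromstring(NOTIFICATION)
    session, serial = root.get("session_id"), int(root.get("serial"))
    snapshot_uri = root.find(NS + "snapshot").get("uri")
    # Map delta serial -> URI so a client can pick exactly the ones it is missing.
    deltas = {int(d.get("serial")): d.get("uri") for d in root.findall(NS + "delta")}
    print(session, serial, snapshot_uri, sorted(deltas))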
C: The first case is: if the relying party doesn't have any initial state or previous state, what it does is fetch the notification file, figure out where the snapshot file is, fetch the snapshot file, parse it and store all the objects it finds there in its local store, and remember the session and serial number from the header, so it can continue and figure out what to do next. So what's next is the second step, and basically most relying parties, we expect, will be in this state: you use the notification file.
C: Again, you compare the remote session with your local session. If it's still the same, you compare the remote serial with your local serial. Then you figure out which delta numbers you need to fetch from the delta definitions, and you fetch and apply the deltas, starting from the local serial plus one, so the next number from what we previously had, all the way up to the latest serial you got in that notification file.
C: You process the deltas and you remember the new serial. And the third typical use case is basically the same as the previous one, except that the delta for your previous serial is no longer there, so you are not up to speed with the current state of the repository. You fetch the notification file, check for the deltas that you need, and figure out that the delta you want is not there. So what you do is fetch the snapshot file, similar to the first step. Now, the interesting part.
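Put together, the three cases form a small decision procedure. Here is a minimal sketch of it; State, fetch_snapshot and fetch_delta are hypothetical stand-ins for the local store and for the HTTP fetching, hash checking and XML parsing, not the actual implementation discussed in the talk.

    from dataclasses import dataclass, field

    @dataclass
    class State:
        session_id: str
        serial: int
        objects: dict = field(default_factory=dict)  # uri -> object content

    def apply_delta(objects, delta):
        # A delta is a list of ("publish", uri, content) or ("withdraw", uri, None).
        for action, uri, content in delta:
            if action == "publish":
                objects[uri] = content
            else:
                objects.pop(uri, None)

    def sync(local, notification, fetch_snapshot, fetch_delta):
        """The three relying-party cases, in order of preference."""
        remote = notification["serial"]
        if local is not None and local.session_id == notification["session_id"]:
            if local.serial == remote:
                return local                              # nothing new
            needed = range(local.serial + 1, remote + 1)
            if all(s in notification["deltas"] for s in needed):
                for s in needed:                          # case 2: replay deltas
                    apply_delta(local.objects,
                                fetch_delta(notification["deltas"][s]))
                return State(local.session_id, remote, local.objects)
        # Case 1 (no previous state) or case 3 (session changed, or the deltas
        # we need are no longer listed): fall back to the full snapshot.
        return State(notification["session_id"], remote,
                     fetch_snapshot(notification["snapshot_uri"]))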
C: If you look at the rsync repository that this represents, it's about 85 megabytes of filesystem data. If you tar it, you don't have all the filesystem size overhead, and it's 43 megs. But, unexpectedly for us, the same content in an XML file is only twenty-eight megabytes, even though it is XML and the content is base64, so not a very compressed form. Anyway, let's go further. This is how our current deployment looks.
C: The core CA system is behind it; then we send it into the cloud, to a publication server, an application server, and we talk the publication protocol here, another ID in the working group. The publication server basically produces just a bunch of XML files for RRDP, and then we distribute them through a CDN, and this is where we are currently.
C: So we already started to use a CDN for that, although there is no actual need, because we don't have that many clients and that much content so far. Yet another interesting thing here: you can basically see the whole path from the user to the relying party when it fetches here, and if you are interested in the total delay, basically how fast the relying party can get information, then it's not much. In this system, the core updates every ten minutes.
C: What you have is ten plus one minute until it gets to the CDN. Next, some graphs from the CDN: the first graph is the number of requests we have per day, and it's basically 1.1K per day, apart from these spikes you see here, which happened from the fourth till the 10th of February. If you split it down by countries...
C: Now we get to the questions that we figured out after putting it live. How often shall we generate delta files? Our initial approach was: we have one delta per published new set of files; basically, in most cases, that is a new CRL and manifest, maybe some changes in certificates and ROAs.
C: So that's a significant change, and essentially what we could do, since we say the notification file is cached for one minute anyway: we probably could extend it up to one minute of delay instead of 30 seconds, because nobody is going to see these changes anyway. Moving forward, the next question is how many deltas we should list in the notification file. Basically, with every new delta we put in another one, and we keep the list growing and growing; after some time it becomes big, or too big.
C: So our first approach, or first idea, was that we should keep enough deltas that their total size is not bigger than the snapshot itself; otherwise it doesn't make sense, because it would be faster if you just fetched the snapshot. We still do that, and we think this is a good approach, because then the relying party doesn't have to decide which one to fetch; we basically provide just the best option: if your deltas are still in our delta list, fetch deltas; otherwise, fetch the snapshot.
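A purely illustrative sketch of that pruning rule: walk the delta list newest-first and stop listing deltas once their combined size would exceed the snapshot, since beyond that point a fresh snapshot is the cheaper download anyway.

    def prune_deltas(deltas, snapshot_size):
        """Keep the newest deltas whose total size stays within the snapshot size.

        `deltas` is a list of (serial, size_bytes) sorted oldest-first; returns
        the serials that should remain listed in the notification file.
        """
        kept, total = [], 0
        for serial, size in reversed(deltas):      # newest first
            if total + size > snapshot_size:
                break
            kept.append(serial)
            total += size
        return sorted(kept)

    # Example with made-up sizes and a 28 MB snapshot: the oldest delta falls
    # off because replaying all three would cost more than refetching.
    print(prune_deltas([(1, 9_000_000), (2, 12_000_000), (3, 11_000_000)],
                       28_000_000))   # -> [2, 3]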
C: But with this approach, what we have now is that the notification is about 20K, and the set of deltas goes back roughly 17 hours. Which actually means: if we have a well-behaved relying party, as we say, one that keeps itself up to date regularly, more often than every 17 hours, then it is still forced to fetch these 20K every time it needs to see whether there is an update or not and what to fetch, although, again, it is probably only interested in the first couple of lines in there.
F: Jeff Haas. So I can't speak as a current operator, but I did operate my infrastructure a number of years ago. One of the things is that the snapshots-versus-deltas question is sometimes interesting, especially for difficult cases, and if the snapshots are retained long-term, not necessarily on the production server but easily accessible off a secondary server, it gives you the ability to see histories, and that ends up being incredibly handy for research later on. Okay.
F: And I think, given that the size of the deltas themselves is relatively small, and the fact that it doesn't take a long time to apply them, I wouldn't let the size of them bother you too much. Okay, I would actually suggest that a good procedure may just be to provide daily snapshots and to provide the deltas per day.
A: So I mean, if the notification is "hey, this was last updated here, and here's the history of when it was last updated for the last 30 days or whatever", then I can go: hi, I haven't seen you in eight days; let me go back to this one and get all the snapshots from that; and then I need to be able to look back and see the snapshots. I think it's probably worthwhile to have more than 17 hours. I would say: start with, like, a month's worth of data.
F: That covers these kinds of cases, okay. Going forward: what happens if you have an operational oops at the place where you're actually storing the files? If you happen to have an oops and you're doing it on monthly snapshots plus deltas, you can't reconstruct things if you're missing one day's worth of the data; but if you have backups that are always one day old, the amount of data you could be missing is very small, and given the lifetimes of the certificates, we probably want this to be a small window.
C: Well, I don't think we actually want to keep anything like that, because there is no point, at least in the protocol and in the operation, in having any historical data. What we are interested in is to provide the current state as fast as possible, and as efficiently as possible; that's what we try to achieve here, not to provide historical data from one month back.
G: Yeah, no, I was going to comment pretty much the same thing. I don't think we have anything necessarily against providing this kind of history; that's nice if it's useful for research or whatever. I'm just not sure that you should do it in this protocol here, because we just want to reduce the overhead to what makes sense. And if you look at a time of 8 hours now: I believe that rcynic has a default of revalidating every hour, and in our case it's even more frequent, so eight hours would give you an efficient window.
G: You have the most efficient path to stay up to date. If, for example, you need to reboot the server that runs your validation software, for whatever reason, or the outage period is longer, you may have to wait a bit longer when your validation software starts up, because it has to do a full resync. So that affects the operator that does that.
G: But if we would allow, for example, for 16 hours or something like that, it doesn't mean that everybody else has this additional overhead all the time. So, is that worth it: what's an acceptable penalty for having an outage? What kind of period of outage do we want to support, you know? And, that said, when you come back you can do a full resync, and it's not like it takes forever, either.
C: Okay, so the next point is: for how long shall we keep the files that we can no longer reference in the notification, so that relying parties could still download them? So imagine: you fetch a notification file, you discover what the URLs are, and next you start to fetch the snapshot. Let's say it's 20 megs; it takes a while. In the meantime new deltas came, we generated a new snapshot, we put up a new notification file, and we deleted the file that you are downloading.
C: So that's not nice: you're forced to fetch the notification file again, figure out the new snapshot URL again, and start again; and if we get a new version again, then it repeats and repeats. So, yeah, what we are doing now is keeping the files for one hour, and what that basically means is that seven snapshots are almost 100 megs, and the whole repository is then 230 megs. This is essentially not a problem for you, but more for us, or for whoever runs this software: how big our data footprint is.
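As an illustration of that retention policy (the names here are assumptions; the one-hour window is from the talk): superseded files stay downloadable for a grace period, so a relying party that started a download just before an update can still finish it.

    import time

    RETENTION_SECONDS = 3600   # keep superseded files around for one hour

    def collect_garbage(published_files, now=None):
        """`published_files` maps path -> (still_referenced, superseded_at).

        Files still referenced by the current notification are always kept;
        superseded ones are removed only after the grace period expires.
        """
        now = time.time() if now is None else now
        doomed = [path
                  for path, (referenced, superseded_at) in published_files.items()
                  if not referenced and now - superseded_at > RETENTION_SECONDS]
        for path in doomed:
            del published_files[path]   # in real life: delete from the web server
        return doomed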
C: So I guess my question is: is one hour good enough or not? I think we are pretty good on that, and the footprint is not that big, so we could do that even in the cloud. And I think this is the most interesting question: how much useful versus not-so-useful data we have. So basically, what happens every day is that every publisher updates its manifests and CRLs every X hours, and in parallel to that there might be some updates to certificates and ROAs and such.
C: So we were interested in how much really useful data, meaning certificates and ROAs, is there in all these deltas, and how much is just updates to manifests and CRLs. So what I did: I took all the deltas for the period we keep them ourselves, which is about 30 megs of data, and compared how many changes to every type of object we have there, and this is the result. We have 7,000, almost 8,000 manifests and almost the same number of CRLs, only 400 ROAs, and no certificates.
C: That is a lot of data just to figure out that there were 400 interesting objects for the relying party; all the other megabytes are just to keep the RPKI state in line with what all the RFCs tell us to do. So we keep re-signing CRLs and we keep republishing manifests just for that. Another thing: maybe the second column is not very representative. That was the end of the big sweep, where, probably while all of you were flying here, we were changing your CRLs; so the last column is basically the average for March.
C: So the question is: maybe we should do it differently. And maybe I should explain as well: what we have in our system is that the next-update time in the CRL is set to 24 hours, but we republish all the objects every eight hours, which gives us some time to republish again if we find a problem. So essentially we publish all the objects three times per day, and this is the result of that.
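As a quick worked example with the numbers from the talk: a 24-hour next-update combined with an 8-hour republish cycle means a failed publication run still leaves two further attempts before anything goes stale.

    NEXT_UPDATE_HOURS = 24   # validity window stamped into the CRL and manifest
    REPUBLISH_HOURS = 8      # how often the publication cycle actually runs

    runs_per_window = NEXT_UPDATE_HOURS // REPUBLISH_HOURS   # 3 runs per window
    spare_attempts = runs_per_window - 1                     # 2 retries if one fails
    print(runs_per_window, spare_attempts)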
G: What I was thinking, and this is by no means final, but anyway, let me just go on: if we have a certificate that has an SIA that points at HTTPS, it uses the HTTPS protocol, it points to a server. If you can trust the identity of that server, and that server is trusted in turn by the CA, then do we need this amount of protection against replays of ROAs and that kind of stuff?
G: If we can be sure that we're talking to the right publication server, can we not reduce this frequency of reissuing manifests and CRLs? Because we would then have a reliable method of knowing that an actual update is available. I'm not saying that we can go there completely, but I want to ask the question: if we could go there, you could reduce the volume here by potentially even a factor of 1000, and I think that's worth thinking about. So I don't have the final answer.
G: For reference, in our implementation we currently use the default trust anchors that ship with the Java Runtime Environment; your choice may differ, and we may want to change it later, but that's what we do today. And you can add individual certificates manually; we added this specifically to be able to work in a test environment. Just putting it out here.
B: Some of your comparison of one way you did it to the alternative was stated in terms of what the relying party had to download. It seems to me that one of the motivations for the RRDP protocol was concern about the load on the server, so I presume that this concern about what the relying party is downloading is really just masking that your concern is what the server has to serve, and that you are satisfied that the RRDP protocol is reducing the burden on the server as it was intended to.
B: Randy said something about how this is going to change over time, and you seem to be making design decisions on the basis of the current state, which isn't really as large as full deployment is going to be. It seems to me that when you were doing the comparisons, each of those alternatives was going to scale linearly as time goes on, so I'm not thinking that you're going to run into a case of "oh, we made this decision and it was too early", but I just wanted to hear you say no.
E: My question is about how to deal with one type of data. We know there can be some valid signed objects in the publication point that are not present on the manifest; so far, RFC 6486 doesn't mandate any specific relying party behavior, so I wonder: is anything suggested in your drafts to handle these?
C
No,
so,
basically,
the
piece
is
just
RDP
protocol,
which
publishes
whatever
was
sent
to
the
server,
and
there
is
nothing
about
how
to
validate
it.
There
is
another
draft
I'm
going
to
talk
about
it
later,
which
explains
how,
with
the
validation
in
our
validator
and
basically
what
we
do,
we
only
look
at
the
manifest
entries
and
we
ignore
any
other
objects
that
might
be
there,
but
this
is
not
related
to
this.
Just
thank
you.
A: Maybe. So it seems like your chief change here would be to make the manifest and the CRL get updated less frequently, all right. So first off, I think the manifest's purpose, maybe it's conflated with object security bits, but it's really: this is the stuff that I expect other people to see and use. I put a bunch of other crazy stuff in my repository; it doesn't matter to them, but the manifest matters. So that should be the only thing people care about: what's in it.
A: That should be: here's what gets used. And the CRL should be: oh, I made a mistake, please stop using this thing, right? We agree; that's great. I think turning the crank on your machinery on a regular basis, even more often than you would generate a new ROA, should tell you when there's a problem with your machinery, right, so I think that maybe more often is good. Also, I would say that ROAs and certificates probably don't get created particularly often after initial creation, because it really represents gain and loss of a customer, more or less, right?
A: So I would expect ROAs and certificates to be a relatively small number, say a percentage of the overall table; something like 10% of the table would be represented as change every day, right. I don't know if 10 percent is the right number, but let's just say it is, right. So if the whole table is 600,000 ROAs today, you should see 60,000 ROA changes per day, right. I'm sure somebody has much better numbers than 10%, but it doesn't really matter what that number is.
A: I still don't think that turning the crank on the manifest a couple times a day is particularly bad; maybe once an hour is overkill, but this also depends on the individual operator in question, right. Deutsche Telekom may gain and lose 10 customers an hour, right, so they will actually want to turn that crank three times an hour; but Google may never gain new customers for 2 or 3 years, so they may not need to turn the crank.
A: We want to make sure that all of our machinery still works. So maybe what you're asking here is really: what's the right default to set, plus some operator guidance of "you should turn the crank at least this often so you know that it works; however, turning it faster than this doesn't help you, turning it slower than this hurts you."
A: They say there could be some security issues. Yes, an asteroid could hit us, but I don't think that's the problem. You update the manifest when you say "I put something new there": you put a ROA or a cert, and somebody should say, aha, there's an update to your manifest, the checksum changed, let me go figure out what this is.
C: Definitely: whenever there is some useful change, we update the manifest. But what I'm talking about here, with all these numbers, is that 10,000 objects were changed without anything else in the manifest changing. So it's essentially the same manifest published again, and the same CRL published again, just with a new next-update date, yeah.
H: Well, okay. For optimizing this, one really just has to focus on analyzing what the exact reasons are for defining a maximum lifetime of the manifest and refreshing it regularly, because just from the logic, a manifest only needs to change when some object does change. And my take is that the quick refreshing of the manifests is a very naive implementation of keeping it fresh, and, while okay, actually a good thing for starters, to rather play it secure.
H: But when you start to optimize, well, okay, you really should understand what you are optimizing for. Now, my understanding is that the manifest is there to deal with potential inconsistencies of the data store that are particularly likely to happen when using rsync, and I would not make any naive prediction about whether those insecurities and inconsistencies change in any predictable way when you change the data store model, as you are doing.
G: And how long do you want to be vulnerable to that? My earlier point was that if we have HTTPS, and we can trust that to be notified of updates, we may not need that; and then we can still crank the machinery, but maybe we don't need to do it three times per day but once a week, or something like that, and that would reduce the traffic a lot already. But maybe let's take this to the list. Thank you for your comments.
C: So this is again the other draft that we worked on, and this is, yeah, the new name of the old draft. The draft was adopted as a working group item, so that's why there is a new submission and the new name. Essentially a new version, no big functional changes beside the name. We received some feedback from several people and we put it in; we added missing sections that are required, and that's basically it.
G: Okay, hello. I'll try to be brief; we talked about this before. The new text really explains things from the point of view of top-down validation, and specifically it tells you what would change, what is new, in the reconsidered approach. I hope that that is useful and will help people understand better what we've proposed, but it's really also for you to give feedback on. Now, on the content, quickly again: in this model here, this is an example of certificates that are all valid.
G: We think that there may be many reasons why this might happen, and we also feel that you cannot really adequately address all of them all the time. Transfers can happen and timing may be off; then again, I know there are efforts to describe transfers, but in general there may be, and probably will have to be, a provision where the parent says at some point: well, this is it, this is enough, we need to remove this resource. I know that, certainly for us, you know, with the rules that we have in address policy...
G: ...that is the case: at some point, things have to be reclaimed. Also, the parent can make a mistake. Now, you could enumerate other reasons here as well, but the commonality, I think, so far for us, is that we don't see this happening all the time. The likelihood of this going wrong is actually quite low, I believe, but the impact is quite high.
G: Now, what reconsidered proposes is that we limit the impact to just the resources that are being disputed or have disappeared. So in this case, the grandchild certificate that lists a particular set of resources not held by the child would have been rejected entirely; instead, we just keep a note that the specific set of resources, like the 10/20 here, is considered invalid. The ROA there does not mention this resource, so it's still considered valid.
G: If, for example, the grandchild were to create a ROA for the resource that was actually removed, that ROA would be considered invalid, because the ROA prefix has to appear, under our proposal, on the accepted resource set of the EE certificate. So there have been criticisms of this that I want to also talk about here briefly; other people can, of course, speak up, but I wanted to voice what I heard.
G: So one of the things I heard was that there is less incentive for the child, in this case, to clean up what they're doing. Well, that may be true; I would say there are warnings. But maybe more importantly, it may not be the child's fault that they are actually issuing certificates that are out of bounds, and they may not be aware. So invalidating them entirely, invalidating the ROAs of the grandchild entirely, to me seems quite drastic. Well, that's just it.
G: The other concern, which I actually raised back in Honolulu, is that with this approach you could think that, well, now you can pull very specific resources at the top of the certificate chain with less collateral damage. Let's say that can happen; but again, in my opinion, I'm not sure that that is a real-world issue, because I'm not convinced that, if we went down that path, invalidating the grandchild in this case entirely would stop people. And I do believe we would have a much bigger problem, which, time permitting...
G: ...I can comment on, because the whole RPKI system is built on trust, and it's people trusting that the statements there are, in fact, true, that they represent the holders of resources. If this system were used, or abused, as in "I'd like this thing gone", that trust would soon disappear, and I think its usefulness would be greatly reduced.
So I would urge, because this has come up in conversation on the mailing list and in these sessions before: I would say that we also have a responsibility to highlight that it may actually not be in, you know, an evil empire's interest to go down this path, because what you achieve is not very effective. You won't really knock out whole routes, because there's local policy and there's not 100 percent uptake; and on the flip side, you drive people away from a system that could actually protect your critical infrastructure. So thank you for the diversion, but I wanted to make that point. Now, going on: why do we actually think that reconsidered is a good idea? Well, I have alluded to quite a lot of this already.
G: We think it limits the impact of the inconsistencies that we see to just those resources. We believe that overclaims in this approach can never be seen as valid. We also have code running in our validator that implements this, so we have some practical experience with it, and yeah.
I: I would note that there are almost no grandchildren in existence, though I'm about to have one, as it were; I've got seven actually, but one with the RPKI, and we're not seeing this problem. But my issue is not with this. My issue is: we've spent massive time arguing this, and we've had some very real problems, with some very real failures to do with manifest lifetimes, etc., etc., and we're not addressing them. We're still arguing over this.
K: Doug Montgomery. Can you go forward to the first one? Sorry, I didn't read the latest draft. Could you answer the question that was discussed before: in the current, latest draft, if 192.168 was part of the ROA, are you treating the ROA atomically, or as individual resources in the relevant validation?
G: In the current text, the ROA validation says that all the prefixes have to appear on the valid certificate, the EE certificate of the ROA. We don't want to modify that; we just keep it simple. The only change is really that we say they have to appear on the verified set of resources of that EE certificate. So if there's any mismatch, then the ROA as a whole would be rejected.
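To make that concrete, here is a tiny sketch of the check using Python's ipaddress module (the function is a hypothetical illustration, not text from the draft): the containment test itself stays atomic, so one uncovered prefix rejects the whole ROA; what reconsidered changes is only which resource set gets passed in, the resources listed on the EE certificate versus the verified set that the chain above actually holds.

    from ipaddress import ip_network

    def roa_valid(roa_prefixes, accepted_resources):
        """Reject the ROA as a whole unless every prefix is covered.

        Under the current rule, `accepted_resources` is the resource set listed
        on the EE certificate; under validation reconsidered, it is the verified
        set.  The check itself is unchanged and atomic either way.
        """
        return all(
            any(prefix.subnet_of(resource) for resource in accepted_resources)
            for prefix in roa_prefixes
        )

    # Toy example: the verified set holds 10.0.0.0/20; a ROA also covering
    # 10.0.16.0/24 fails, and that single mismatch invalidates the entire ROA.
    held = [ip_network("10.0.0.0/20")]
    print(roa_valid([ip_network("10.0.0.0/24")], held))                      # True
    print(roa_valid([ip_network("10.0.0.0/24"), ip_network("10.0.16.0/24")],
                    held))                                                   # False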
F: Jeff Haas. I think it is a good idea to allow for the looser validation. Toward the argument that you want people to keep their house clean and that sort of thing: the analogy that we have from existing IRR databases doesn't really hold, because in the IRR you can't tell what is less valid, whereas with the ROA chain...
F: ...you can actually see that one component of this has been removed, and you can tell, even though it's a somewhat stale piece of information, what the proper piece of information actually is. So I think that's actually a good property, even though it doesn't encourage people to keep their house clean. The related question I have, again with apologies for not having read the current versions of the draft, is: if a ROA has been issued for some resource, like a /24, and the parent shrinks...
F: ...the space in question, so, for example, shrinks the /24 to a /25: would it be reasonable, potentially, in that case, to say that the covered /25 is valid for the ROA, even though that's not the ROA that's actually issued? I understand why we may not want that, the way this currently works, but sort of by a similar analogy...
G: Yeah, like I said, we just try to keep it as simple as possible, and I'm not sure that I can foresee the implications of that. So that's why I kind of want to stay away from it, because I think, if we do this, you already achieve most of what we want to achieve. But yeah, in theory you could take this further; like I said, I'm not sure that I have the full picture, so maybe we should just keep it simple.
G: Well, on that, and this may not be related to this actually: we have had questions from people who together, let's say, hold the bigger prefix, and they agree that the bigger prefix can be announced, and one of the parties announces a more specific; and with the current model you cannot model that in ROAs, yeah. But that's because there can only be one party signing a ROA, yeah.
N: We plan on going on with our trust anchors. So, the current state at the RIRs is that we're currently getting into a whole lot more inter-RIR transfers; at the moment we're doing transfers between APNIC, RIPE and ARIN.
N: So there's a lot more play in the space than there was previously. At present, the RIRs have three different models of how the trust anchors are published, and we've been talking about ways to come up with one single model of doing that; and in the process we've also talked about how to simplify the trust anchor model, not just to make it uniform but to simplify it and make it more operationally robust.
N: What this graph states is that, with the effective overclaiming that we currently have with the RPKI, mistakes are kind of easy to make, especially as we get into more transfers; and essentially, in a model where one RIR is subservient to another RIR, mistakes that one RIR can make can cause some pretty big impact to another region, and we don't think that's a very good thing to do.
N: What we want to do is move to a situation where each RIR has a TA that holds all the resources. So that's what the draft says, and this is the model we want to go to. We're going to start asking our individual communities whether they think this is a good idea or not, and go from there. So if you want to influence what the RIR communities say about this, we invite you to go to the RIR meetings.
N: Talk to board members, talk to people in those communities, to help give us input on the TA model. We do believe that this is a good thing to publish as an RFC, because for people who are relying on the RPKI, understanding how the TA model works at all RIRs is a good thing. So that's really the presentation; any questions?
H: On the trust anchors, okay, so let me just remind you that the trust anchors are pointing to root certificates that overclaim, in the sense you are saying. I think there is also a demand and a requirement for documentation of what the overlapping root certificates actually mean and how they are constructed. Of course, you might answer: please go to your particular RIR and ask them and their boards to define that, separately.
H
Well
in
Yokohama,
you
may
remember
that
I
did
an
improper
to
last
slide
where
I
reported
the
observation
that
the
various
who
certificates
that
are
in
the
game
and
did
have
overlaps
that
were
not
just
the
all-encompassing
root
of
all
the
resources.
But
it
was
actually
2000
something
particulars
and
for.
H: Inconsistencies or overlaps like that are something that we would like to have explained. (Okay, do you mean, in other words, explain why we're doing this?) No, well, okay: actually explain what you are doing and what, as a consequence, the semantics of the overlapping root certificates are. (I get the point. So, it's fine.)
G: I'd just comment that I think that is addressed in the document, actually, because what it basically says is that the RIRs say: we trust each other to overclaim each other's space. But if you look at what's practically being issued in the certificates further down and compare those, you should see no overlaps, yeah.
H: Well, the thing is, as a relying party I try to be so careful to actually detect when the data that I'm working with has inconsistencies and overlaps. Whether that happens for customers of AT&T and myself, or it happens between what the RIRs are doing, is technically kind of irrelevant. If there are inconsistencies that are put there by design and intentionally, I demand that I get explanations for them, and actually I request that such anomalies be minimized, so as to make the life of a discerning relying party easier.
I: I think another spin on what Volk is asking: yes, if I wish to be a really rigorous relying party, I need some algorithmic and programmable way of knowing what you folks are doing. I would also note that I have heard from one RIR that they are considering just rooting it at 0/0. I would also note that probably most of the people in this room and in the community who do not work for RIRs are wondering why we're discussing trust anchors in the plural.
O: Terry Manderson, no affiliation, as an individual. When I think back to the determination of the RPKI, uniqueness was an important thing in the structure. Now, if you're going to change that, and you're obviously coming together as a collective to do that, that's okay, but that kind of says the PKI is the wrong thing.
A: Chris Morrow. So even in the case of a single root: if we lived in a magical world where IANA had her root certificate and signed down a bunch of resources to each RIR, even in that world, to transition a transfer between APNIC and ARIN, there would have to be double accounting for make-before-break to work. That's a question. (Yeah, yeah.)
A: So even in that world, which is what we would all like, the utopia we would all like to be in if we have to have SIDR; if we have to do this, that's the one we'd like to be in. Even in that world we have to do double accounting in some cases, so we have to have some system to manage that. So maybe it's that data structure that's really what's missing from that discussion, or from the document. Okay.
I: If I may interrupt, yeah. This is why the RPKI works in make-before-break, and the DNS schemes that Tony Li and I and others proposed in '98 don't, because DNS can only do a single delegation. The RPKI can do multiple, so I give that address space to both the sender and the receiver during the transfer. What Ruediger is asking for is: when I am doing that, how do I differentiate between "that is broken" and "that is intentional"?
B: No, no, continue on the mailing list with the discussion of that, yeah. Okay, alrighty. Does anyone have anything else that they would like to bring up at this time? There's time on Wednesday to bring up other additional topics. So: anything that you would like to see brought before the working group in the next four minutes?
B: Yes? No? Hey, are you getting up to the mic or getting up to leave? Getting up to leave, okay. Alrighty, I'll give you three minutes of your life back, yay. Blue sheets: anybody who has not signed the blue sheets, please do so. We actually had a good crowd this time; I'd like to see everybody's name on it.