From YouTube: IETF94-SIDR-20151103-1710.webm
Description: SIDR meeting session at IETF 94, 2015/11/03 17:10
C: All right, so we also need to collect a Jabber scribe and a note taker, I think. Maybe Sue said she was a new one — which one? Note taker for Sue. Who's gonna do Jabber? Jabber — that XMPP thing that no one uses any more — that one, yeah. You remember it; it came along after IRC. Maybe a Jabber person — Wes? Oh, John. Also, maybe we'll have to— okay, all right, sweet, okay, so we'll be all set.
E: If it has to do with the IETF, it can be used against you in the court of human opinion. All the resources that you need to find out any information you want to know about SIDR and the work that it's doing; lots of different ways to communicate, between the audio stream and Meetecho — oh, and the slides.
E: The blue sheets are already going around and we already have volunteers, so we're done with that. Anybody have any agenda bashing to be done? No? Okay, working group status. We have a new version of the AS migration draft, now that the IDR AS migration draft is finally kind of baked — the Alex draft. We have new versions, and actually, as of this morning, this is behind.

E: We had working group last calls for the algs draft, the overview, and the PKI profiles. We had a late arrival: just before I was about to hit the button for "publication requested", a rather complex issue came up, so we need a new version of the BGPsec protocol draft. There will be a short working group last call to deal with just the changes that were necessary to address that issue; we're not interested in doing a wholesale review of the entire document.
E: There was an issue with the protocol draft having to do with the AS migration draft, which needs to be worked out with the protocol draft author. There also was a change made to the AS migration draft, having to do with "we will now do this on iBGP sessions", and we have to make sure that the protocol draft is now okay with that — otherwise it conflicts. And so I'll have to just make sure that those two drafts don't conflict with each other. Yeah.
E: Geoff's going to provide a reason for reconsideration of the validation algorithm; Tim's going to talk about the RRDP, the new transport protocol; Oleg is going to talk about a view of how to validate a repository in that system; Rob's going to talk about the out-of-band setup; and Randy's going to talk about router keying — and Randy said that he was going to talk without slides. And if we have time, Rob requested that he might be allowed to make a presentation today that was planned for Friday, because he has travel plans on Friday.
F: Right there — okay. Geoff Huston, APNIC. Still good? Still good. Yes, no slides today. I sent mail to the working group mailing list sometime last week, in response to some revisions that I put forward a little bit earlier. I'm kind of getting a sense of some frustration, personally, with this particular piece of work, insofar as it's come through this working group a number of times now — it seems to be two years, but I could be wrong — and it's not going anywhere.

F: There's a bunch of folk who don't like it and a bunch of folk who like it, and there's no real way of making any progress. I looked hard at the document, and as it stood, the reconsideration draft is not an appropriate action because it's trying to update a standard; and, as far as I understand the process, it should indeed be a standard itself if it tries to update a specification in the standard. So in that respect it's kind of a strange document.

F: It's a bit of a camel, because it is a whole bunch of motivational text: it's actually described as informational, but it's trying to update a standard. So I then floated a personal draft out there, which was the mechanics of the change to the algorithm, nothing more, and got back some comments on the mailing list going "wow, way too brief". So I'm sitting there kind of thinking: too brief, too wordy, what's going on here? I can't seem to find a middle ground and, quite frankly, I've run out of personal drive here.

F: So it's a working group document, but I'm no longer really motivated to press it on myself. So the offer was on the mailing list and the offer is here: I have the XML; if someone feels motivated — because it's still a working group document — take it on. If the working group wants to abandon it, that's just fine by me. I just don't feel like this thing is going to move one way or the other, personally, so I'm going to walk away from it. So I give you back time.
G: Tim Bruijnzeels, RIPE NCC. I'm one of the co-authors, but I agree with Geoff's assessment; I don't think this is going anywhere as it is. And, yeah, to be honest, it feels to me like it would be a waste of everyone's time to keep pushing it at this point. You know, without saying who's right or wrong — the argument is not about that.
A: ...on for years, and if we'd achieved consensus, or rough consensus, we could have shipped this one years ago, because it's a really simple statement. There are two ways of looking at resources in use: one says everything must be right all the time, and the other one says we understand that every question is asked contextually, about specific resources, which may or may not be covered. We tried to promote that; most of the room just doesn't want to go there.
B: Randy Bush, IIJ. So you've been working on this for a couple of years; for this working group, that's very short — and I mean that, and it's disgusting. Okay, there doesn't seem to be consensus; I agree. I'm the only one who stood up in the WG and said: here's a case for it. I'm not strongly for it, I'm not strongly against it, right? And I kind of understand where Steve Kent's coming from — you know, if you've swallowed this X.509 Kool-Aid.
A: Carlos from LACNIC. I remember getting into some very heated discussions at the very beginning of this, defending this draft, and I still think this is the right path to go. I think the other validation approach is just too brittle, but I sympathize with Geoff — I mean, it's been... the whole process has been hard and frustrating. Thanks.
C: I think there are probably a couple of things here. One is, for what it's worth, I appreciate the background information in drafts. I think this working group has a horrible problem with forking drafts into 15 different things, and I don't like chasing information; so I appreciated the original version that had the background information, regardless of the fact that it might run afoul of what the IETF believes a standards document ought to have in it.

C: If we have a sticking point where we have to at least find rough consensus — even if it's very rough — I look in your direction to help handle that. This shouldn't be a situation where Geoff has to stand at the mic and throw his hands up because we can't seem to come to a conclusion. If we can't come to a conclusion, do we need to rope in the ADs to fix it? You know, there's got to be something we can do here, if we agree that this is still a problem.
E: We have to pick one of these two, and that's where we're having trouble coming to consensus: picking one of the two.
F: The reason why the authors put this draft forward is that the authors felt — because they are part of the group that is issuing certificates — that a modified validation algorithm would be, as Carlos said, less brittle; because the semantic construct inside the existing validation procedure treats the bundle of resources as having some innate property as a collection, right? But that property is an artifice that is only related to X.509 and has no reality in terms of the operational use. I have three prefixes and two ASes; I use them independently.

F: I don't need to put all three prefixes and two ASes together every single time I use them — it never happens like that. And so what this draft was trying to do was to say: if you remove that bundling constraint, you actually have a situation where the failure of an encompassing condition down a validation path doesn't cause all subordinate certificates down there to be considered invalid a priori; it only affects the resource, right? So we've gone through all that, time and time again.
B
B
B
I
am
Lizzie
Doug
Montgomery
on
the
record
is
being
lazy.
I
came
over
here,
I,
remember
well,
one
in
in
concept.
I
certainly
greatly
support
the
idea
because
of
the
resistance
that
we've
heard
a
deployment
about
a
brutal
system,
but
I
remember
going
back
at
some
meeting
somewhere
there
was
discussions
of
implementations
and
I
thought
I
heard
conflicting
things.
G: So, in our case this was actually quite easy to implement. I don't think that should be a showstopper for this — but, yeah, there's the other discussion. Okay.
H: Rob Austein. I'm going to present one and a half positions here, okay? My own, as an implementer: it's actually fairly straightforward. I have not implemented this myself; I have looked at it; I do not believe it is particularly difficult. It's essentially moving a piece of code that we wrote years ago to a different place. It's a slight tweak; it's not that big of a deal either way.

H: One has to be careful, but it's not that big of a deal. I am now going to go out on a limb here and attempt to do an honest job of channeling somebody else who isn't here, okay? I have spoken to the folks at BBN. I think their opinion — and this is just my understanding of what I think their opinion is, so if they say I'm wrong, they're correct about that — I believe, the way they implemented this...

H: ...it would be difficult for their implementation, because they rely on having checked all the CA certificates all the way down and being able to skip those checks in the future. They're attempting to do some optimization that they think is appropriate, and this breaks that optimization. Now, cutting back to me personally: I hear that, I believe them; I'm not sure how much I care in terms of that optimization being important.

H: I don't know whether or not they have actual data — testing or modeling to demonstrate that that optimization is important. They might; I don't know, but I have not heard of any such. My own testing suggests it's not that critical, but there's a potential scaling issue here and there might be a cost. I don't know; I'm not particularly worried, but I believe they have legitimate concerns there, in terms of: they optimized something, and now this is going to break that.
I: Rüdiger Volk. Well, okay, I'm a little bit reluctant to throw this in, because — well, okay, I'm actually asking about things that are kind of slightly aside. Nevertheless, I wonder if the motivation for doing this actually is reinforced by the idea that the resource sets of the root certificates might, long-term, systematically overlap.
C: Thank you. And we can talk about this on the list, quickly, with some decision process there; that seems better than continued randomness here. So, who's up next? And thanks, Geoff — so I presume your offer to hand off the XML still stands, if somebody at the end of the mailing-list conversation says "oh ho, this really is a good idea, let's do that." Okay.
G: Once you look at it — beautiful, fresh, released yesterday. Okay, I'd like to talk about our experiment— experience implementing the delta protocol. So, next slide, please. Before I go into more details, I wanted to say that I think we have had very constructive discussions between the authors of this.

G: It may look like they have some issues to work out, but I think we're getting there. Obviously it's sufficient for us to discuss things one-on-one at times, but I think it's important that the working group is involved — so that's why I'm here again, and so that all of you have the opportunity to comment as well. Next slide, please. Because I want to avoid that we end up with something like this, that maybe we understand but nobody else does, right?

G: So, if anybody feels like reading this — how many people have read it, actually, apart from the authors? Well, you would do me a favor. Next slide. Going through the details of the current state: it has been working in our pilot environment for a while now — I think for about half a year, if I'm correct. Following that, and following some discussions here as well, some changes have been made to the document.

G: There is a change made in the recommendation about the caching of the notification file, and I have at least attempted to improve the readability — but this is also why I ask people to proofread, because you get blind about what you write: to me it's clear, but anyway. Moving forward: I have also spoken with the implementers of other validators, and we would very much like to deploy this to our production environments.

G: To get real-world experience, it would also be supported in our validator, but as something you have to enable — so, as an option — because we obviously want to have more experience before we call this done. Okay, moving on, next slide, please. So, a very brief recap; I won't go into all the details.
G: If you want to know more, you can talk to me privately, but essentially: a relying party validator is polling a notification file regularly, and that way it can find out about files that contain either snapshots of a repository or deltas, where a delta is always the difference between one version of the repository and the next. The notification file gets updated regularly.
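The relying-party side of this recap can be sketched as a small decision function. This is a minimal illustration only, assuming the notification file shape from the delta protocol draft (`session_id`/`serial` attributes with `<snapshot>` and `<delta>` children); `plan_update` and its return values are invented names, not the RIPE NCC implementation:

```python
import xml.etree.ElementTree as ET

def plan_update(notification_xml, have_session, have_serial):
    """Decide what a relying party should fetch after polling the
    notification file: nothing, a contiguous run of deltas, or a
    full snapshot. Illustrative sketch only."""
    root = ET.fromstring(notification_xml)
    session = root.get("session_id")
    serial = int(root.get("serial"))
    if session == have_session and serial == have_serial:
        return ("up-to-date", [])
    # Deltas listed in the notification, one per serial.
    deltas = {int(d.get("serial")): d.get("uri")
              for d in root if d.tag.endswith("delta")}
    wanted = range(have_serial + 1, serial + 1)
    if (session == have_session and serial > have_serial
            and all(s in deltas for s in wanted)):
        # Same session and no gap: step serial-by-serial via deltas.
        return ("deltas", [deltas[s] for s in wanted])
    # New session, or a hole in the delta list: fall back to the snapshot.
    snapshot_uri = next(s.get("uri") for s in root if s.tag.endswith("snapshot"))
    return ("snapshot", [snapshot_uri])
```

The point of the structure Tim describes is exactly this: the stable snapshot/delta files can be cached aggressively, while only the small notification file needs frequent refetching.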
G: The other files are very stable, and this is done to support caching. Next slide, please. One of the changes was HTTPS. In the previous version of the document, the notification file would be fetched over HTTPS, but the references to the other files were plain HTTP, and we had hashes for those files so that you could actually verify that they were okay. Now that we're using HTTPS everywhere, I've already removed this part — the hashes — but it's easy to put it back if people see another use for it.
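The verification those hashes enabled is just a digest comparison over the fetched bytes. A minimal sketch — SHA-256 is assumed as the hash algorithm here, and `matches_listed_hash` is an invented helper, not from any implementation discussed in the session:

```python
import hashlib

def matches_listed_hash(file_bytes, listed_hash_hex):
    """Integrity check for a snapshot/delta fetched by URI: compare the
    SHA-256 of what we received against the hash listed alongside the
    URI in the notification file. Case-insensitive hex comparison."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    return digest == listed_hash_hex.lower()
```

Note this only detects corruption or a mismatch against what the notification promised; it is not a substitute for the RPKI object validation itself.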
H: Rob Austein. Yeah, I am not sure we should be removing those. The reason is they're potentially still useful as an integrity check. I'm not talking about security purposes, really — this is basically just a checksum, right — but I think it's actually useful, when shipping these huge things around, to have some idea about whether or not you actually got what you expected to.
D: Not only do you want to have some ability to verify the integrity; the sort of case to think about, in the security space a little bit, is what happens if the data is subverted on the server itself. Now, this gives you some potential toward that end, but it also gives you, you know, some ability to potentially locally cache this stuff somewhere in your own network and use those hashes for that purpose.
G: Okay, yeah, I don't follow that well — I can put it back, that's okay. We thought that if you have the syntax checks for the XML, you already catch a lot of issues with corrupted files, stuff like that; but yeah, like I said, I can put it back, there's no problem. Well, next item: caching.
G: So, the document used to have a lot more on this. I tried to take out overly specific information, because caching can be set up in many different ways and I'm not sure that the document should actually tell people how to do it — but again, that's also something I'd like feedback on. If people think it's useful we can have some stuff there, but otherwise I would just leave it and focus on the thing at hand. When the document mentions caching, it is really talking about the time that it is cached.

G: So, next slide, please. First of all, caching is optional; you don't have to use it. If you are confident that everybody can go to your server straight away, that's what you can do. But for that notification file I mentioned: we used to have text saying five minutes of caching maximum, and we have lowered that to one minute now; and also the recommendation to the validator is to not try to fetch more often than once per minute.
G: But I would really like to have comments from operators on that. Next slide, yeah. Because, other than that — well, I mentioned we want to enable this in production, because, yeah, we really want to see that it works before we call it done; but from where we are standing — well, except for the hashes, and possibly comments on whether a minute is enough, or too much, or whatever...

G: Well, to me it looks pretty much done. So if you have any concerns about this, or if you are interested, I would really like to ask you to have a look now and comment, before we do go for last call — which is not now, because, yeah, again, I want to have proof of running this in production for a while before I go there. Okay, yeah, that's it. Questions, comments?
D: Jeff — a comment about the one minute. So, a lot of it comes down to what your distribution infrastructure is going to be. If you have something that is, you know, allowing this — Akamai, or something similar — it's less of an issue, because your real problem at that point is distributing across your CDN.

D: But if you don't have some sort of, you know, big distribution network in the back end, what you're actually encouraging people to do is hit the thing often; and when things do change, you'll have a potentially significant portion of the internet all converging on a single server — or what appears, from the outside, to be your single server — trying to grab the changes. So I encourage you, as part of thinking about your timings, to try to figure out how you actually want people to grab the stuff, and what happens when a significant number of people try to do this all at once.
H: Point of information — Rob Austein. There are actually two different timing values you want to think about here: there's the lifetime — the caching lifetime — of the notification file, and there's the polling cycle of the relying party. Okay, I've been assuming, perhaps incorrectly, that relying parties would still be keeping a somewhat slower polling cycle; I'm currently polling once an hour, and I don't see any strong reason to change that.
H: What the notification file caching timeout tells you is basically how much delay is built into the caching system. That's the thing we want to keep small. I think there's a case to be made for keeping the polling cycle significantly slower than one minute — more like half an hour or an hour, on that order — but keeping the notification caching time small, so we don't have a lot of delay built into the caching system.
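Rob's two-timer point reduces to simple worst-case arithmetic — this framing is mine, not something shown in the session:

```python
def worst_case_delay_minutes(notification_cache_ttl, rp_polling_interval):
    """Worst-case time (minutes) from a publication appearing in the
    repository to a relying party noticing it: the notification file
    can sit in caches for up to its TTL, and the relying party may
    have polled just before the change. Simple bound; ignores fetch
    and validation time."""
    return notification_cache_ttl + rp_polling_interval
```

With the one-minute cache TTL under discussion and Rob's hourly polling cycle, `worst_case_delay_minutes(1, 60)` gives 61 minutes — i.e. the relying party's polling cycle, not the cache TTL, dominates the delay, which is exactly why the TTL can be kept small without forcing everyone to poll fast.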
G: Yeah, we do get a lot of questions from people using our validator: "well, why does it take an hour" — or whatever it takes — "for my new ROAs to appear in the validator?" So that is a question we see regularly.

G: Another question is, of course: well, how relevant is this for routing? As opposed to — well, another use case is that somebody creates a ROA, and if they can also see that it appears and it's valid, that's the confirmation that everything works; so I do think people like to see this. Also, I was hoping — maybe wrongly, but I was hoping — that with a caching infrastructure in front of it...

G: Well, suppose that every AS in the world runs this: then we're talking like 40,000 hits per minute. I think that's something we can cope with. It is a lot of data — there's a cost factor there, I suppose — but you can just do a HEAD call to see whether things have changed, and all that. So I'm not sure that, even with a minute, it would result in a load that we cannot handle, if you use the right infrastructure. Yeah.
K: There was already a draft that discussed possible solutions for how to validate things when you have rsync and RRDP together, and some other stuff — that was revision 01 of this document. But we decided that what we want to go with is basically the description of the algorithm, and less discussion of possible things and whys. Next slide, please. So, yeah, what we have in this version:

K: For example, we do not discuss how we check whether something is signed correctly, or whether resource sets match between parent and child; we mainly fill the gaps that are not described in other documents. But we also try, as much as possible, to not rely on the URIs — the rsync URIs — in the objects; there is another approach. What it gives us is that, potentially, we could support multiple publication points — which is described in another draft — and potentially other retrieval protocols. So, next slide.
K: Yeah, about the name: why "local cache"? I think it might be a bit confusing, because some other documents use the same term with somewhat different meanings — they mean a cache of already validated objects. In our case, it's basically a store of objects that are fetched, either via rsync or RRDP, and not necessarily valid; so in the text of the document we use the term "object store", and, yeah, maybe we will reconsider the name of the draft to have something more meaningful.

K: Yes — so, as I said, because we use both HTTP and rsync, you could possibly have multiple objects with the same URI, because a URI in RRDP is not — it's just an attribute on an object, so, yeah, there could be some possibilities there. What I mean is that the rsync URI is not a unique identifier of an object anymore; and what we also do, to make this whole validation scheme work, is separate the retrieval of objects from their validation.
K: Yes, it is pretty simple: we have an engine that does validation, the fetcher that does fetching, and the store where we keep objects — looks pretty simple. These fetcher URIs could be anything: if in the future we want something else, it could be some other protocol; it could also be multiple protocols, so we could extract multiple URIs and pass them to the fetcher, store them in the store, and process them all in the same way.
K: But more importantly — next slide — what we could do with RRDP easily is have a watcher on top of it, which could basically check the notification XML URI that we have for a repository. In the case of RRDP, polling that file for changes is essentially just an HTTP HEAD request on that URI, and when we detect changes we could kick the engine to revalidate the repository again. So this is where the one-minute thing that we just discussed could be easily implemented — in this case, as opposed to rsync.
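The watcher described here could be sketched roughly as follows. Using the HTTP `ETag`/`Last-Modified` headers as the change signal is my assumption — an implementation could equally fetch the file and compare the serial in the body — and the function names are invented:

```python
import urllib.request

def head_validators(uri):
    """Fetch just the headers of the notification file with an HTTP
    HEAD request (no body transfer). Sketch only: no timeout, retry,
    or error handling."""
    req = urllib.request.Request(uri, method="HEAD")
    with urllib.request.urlopen(req) as resp:
        return (resp.headers.get("ETag"), resp.headers.get("Last-Modified"))

def changed(current_validators, last_seen_validators):
    """A change in either validator suggests the notification file was
    republished, so the watcher should kick the engine to revalidate
    the repository."""
    return current_validators != last_seen_validators
```

A watcher loop would then call `head_validators` once per polling interval and trigger revalidation whenever `changed` reports True — cheap enough to run every minute, since no payload is transferred.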
K: Yeah, as I also mentioned, we have a different approach now to how we discover objects. On the left side you see the traditional way: we have a certificate; it has an rsync URI pointing to the publication point, and an rsync URI pointing to the manifest, which is supposed to be at the same place; and then, from the manifest...

K: ...you have a list of entries that have file names, which are also supposed to be at the same publication point — and one of them is the CRL; and we also have the CRLDP pointing to the same place. We do not use that to discover objects in our new implementation; rather, we use the AKI/SKI fields for that. So when we want to fetch an object, we basically go to our store and look it up by hashes — either by the AKI, or by the hash which is mentioned on the manifest, instead of the file name.
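A store keyed the way described here — by content hash and by SKI rather than by rsync URI — might look like this toy sketch. The class and method names are invented for illustration; the actual implementation being presented is not shown in the session:

```python
import hashlib

class ObjectStore:
    """Toy sketch of a store of fetched (not yet validated) RPKI
    objects, looked up by content hash or by key identifier rather
    than by rsync URI, since a URI no longer uniquely identifies an
    object once multiple retrieval protocols are in play."""

    def __init__(self):
        self._by_hash = {}  # SHA-256 hex digest -> object bytes
        self._by_ski = {}   # SKI (hex) -> set of digests under that key

    def add(self, obj_bytes, ski=None):
        digest = hashlib.sha256(obj_bytes).hexdigest()
        self._by_hash[digest] = obj_bytes
        if ski is not None:
            self._by_ski.setdefault(ski, set()).add(digest)
        return digest

    def get_by_manifest_hash(self, digest):
        """Manifest entries carry hashes; look up by the hash, not the
        file name the entry happens to mention."""
        return self._by_hash.get(digest)

    def get_by_aki(self, aki):
        """A child's AKI equals its issuer's SKI: return the candidate
        objects published under that key."""
        return [self._by_hash[d] for d in self._by_ski.get(aki, ())]
```

This also shows why the approach tolerates duplicate URIs: two different objects at the "same" URI simply land under two different digests.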
K: Yeah, and — what's next? We want to keep improving the document; it's very much work-in-progress: provide more details, probably on the specific choices we made, why we do things in a certain way. We think it's useful for this working group to have such a document, not only for our users but in general; so we want to have your opinion on it. Plus, since we describe the algorithm: please look at it, think about possible problems with it, provide feedback. Thanks.
G: We think it's very useful to have a document, for ourselves and for our users, and we would very much like to make it a working group document — an informational document, obviously — because we'd also like the working group to scrutinize it. So if the working group is willing to do that, that would be great; otherwise, yeah, we could look into different ways of documenting this, and maybe it doesn't need to be an internet draft — but I would prefer that. Okay.
H: All these tall people — okay. If the chairs would be so kind as to go temporarily to slide four, we may be able to cut this short. Okay: that is all that has changed in the last two years — we added one URI.

H: Yes — do read the document if you actually want to know. Seriously, though: do people actually want me to explain how the protocol works? Because otherwise there are really just a few questions, and then we're done and can move on to the other presentation. Okay, I'm not hearing loud cries of "explain this". Okay, anybody who wants the overview: the slides are posted; you can review them. If we could go to the questions at the end, please.
H: Theoretically, if you click on that, it'll just take you there — there you go. So, one thing that has happened is: I wrote — and proposed to the working group mailing list, I believe — an XSL transform that can actually translate between the old version that other people have implemented and the version that's in the current specification. Anybody who's actually implemented the old one: please test this transform, so we know whether or not it really is possible to mechanically translate. I think it is; I've tested it.

H: When last I heard, I believe RIPE NCC's implementation, ARIN's implementation, and APNIC's implementation had partial implementations of this. As far as I know, I'm the only one who actually implemented the entire protocol, because nobody else was doing the publication protocol setup stuff; but I believe the other implementations all did the first part, which is what the transform covers.
H: So please test — please, please. Right; I'm guessing the working group still wants this, but it would probably be good to confirm that before we put a lot more work into pushing the draft out. The question is: do we actually have more work to do here? The one thing I know of that somebody requested — I believe it was Dr. Kent — was more explanation of how the BPKI certificates were set up. That's a fair request, but I'm not sure it belongs in this document. What this document is about is shipping the BPKI certificates we have around, and shipping a bunch of URIs that go with them.

H: Okay, they are what they are; they're what we're using. The fact that nobody's ever bothered to specify them — while arguably that is indeed a problem — doesn't change the fact that they're what we're using. So the question is: do we actually need to make this document wait for a specification of what the BPKI looks like? Open question; that's for the working group to decide.
H: At the beginning — okay, okay. So this is an update of a presentation I've done periodically; I haven't done this one in a couple of years, and some of the graphs — this one is more interesting than the others, but we can skip over the ones that people get bored by. Just as an explanation: for every single one of the graphs there are two pictures.

H: There's one that's a linear scale and one that's a semi-log scale, just because the numbers in some of them are sufficiently disproportionate, between some of the repositories and some of the other repositories, that without using both scales you can't actually see what's going on. Okay, so we've been running an RPKI validator on one particular server in Seattle for four years — well, a little under four years — and we've been keeping records of everything.
H: It's all really the same data that we were looking at all along. This is all still rsync, by the way; we do not have RRDP in this mix anywhere yet. Next slide, please. A brief overview of how RPKI validation works: read the slides — they're on the website — if you really need to know and you don't remember. Next slide, please. Okay, so this is the first basic one: that's just objects — any kind of RPKI object in the various repositories.

H: Now, we're not actually reporting every repository — we are tracking them, we're just not reporting them all — because at the moment nobody's really doing the up/down protocol; the ones actually worth reporting in this explanation are the RIR repositories, plus one of the ones that Randy and I operate, just for some contrast.
H: We are not trying to follow all the places where people are doing the up/down protocol off of test repositories and stuff like that, because it would be very noisy. When we get to a point where we're actually doing the up/down protocol, we're going to have to change the way we graph this, because it's not going to fit on a picture like this. But with just the RIR repositories, you can see RIPE has been on a pretty steady climb.

H: I'll explain what these vertical spikes are in a couple of slides. Historically, before any of this started, APNIC was the leader: APNIC had significant numbers of RPKI objects before anybody else did, but they've been pretty flat — actually, they're going up a little, but not very much, and RIPE actually passed them at this point. ARIN is not there yet, but they're heading up. Okay, next slide, please: same picture, semi-log scale.
H: AfriNIC still doesn't have a significant number of objects, and the brown one is the repository that Randy and I operate. Half of the stuff in there is actually for workshops, and there are going to be some idiosyncrasies you'll see in the data for that one, just because a significant number of the objects in that one come and go depending on workshop status.

H: Okay, next slide, please. So, those big vertical spikes are generally something really bad — either something really bad happening, or something really bad being cleaned up, which depends — you know, things like rekeying the entire universe because you missed something and your top-level certificate timed out. Oops: that caused a spike. Other than that it has been pretty steady. Good.
H: Okay, this is just rsync connection counts. Okay, so the most notable thing here is that RIPE was huge, and then RIPE stopped being huge. Okay, this was when RIPE moved over to — I don't know how many people still remember the terminology we were using for this — hierarchical publication. It refers to the structure of the repository, and this is basically an artifact of rsync: because of the way rsync works, and the way we're doing a tree walk of the certificate hierarchy...

H: ...you get a much more efficient use of rsync if you make all of the subordinate CAs be child directories of the parent CA — that's what we call hierarchical publication. We were advocating that for a long time, and for a long time nobody was doing it; and then one day RIPE changed to using hierarchical publication and, whomp, you can see they fall down to the bottom, where they're almost lost in the noise. Because now it takes — I think for the first six months they were doing it...
H: ...it took five connections to sync all of RIPE, and then they fixed some trivial bug and it went down to two: there's one to get the trust anchor and one to get everything else. LACNIC followed — same thing — I believe about six months later. APNIC? Well, they're stubborn, okay; they have their way of doing it and they haven't changed it, so I think it still works out to something like half a dozen objects per rsync session, roughly, and that's been very steady.
H
I'll explain what the vertical spikes on this are a bit later too. Okay, and AfriNIC is basically the same pattern as APNIC with smaller numbers, because I believe they're still using the same code base; I don't know whether they've made any administrative changes.
H
Same picture, semi-log scale. Okay, next slide, please. Ok! So this business with misinterpreted rsync exit codes is unfortunate. This is one of the big reasons that I will not be sorry if and when we all go to RRDP. We kind of have to treat rsync as a black box, but its error reporting is a little strange; there's more on this later as well. There's a lot of stuff for which it's like, well, you got error code 23. What does error code 23 mean?
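For reference, rsync's exit statuses are documented in the rsync(1) man page (exit code 23 is "partial transfer due to error"). A sketch of how a relying-party tool might at least translate them into something readable; the wrapper function and the rsync flags it passes are illustrative, not the speaker's actual code:

```python
import subprocess

# A selection of rsync exit statuses, per the rsync(1) man page.
RSYNC_EXIT_CODES = {
    0: "success",
    5: "error starting client-server protocol",
    10: "error in socket I/O",
    12: "error in rsync protocol data stream",
    23: "partial transfer due to error",
    24: "partial transfer due to vanished source files",
    30: "timeout in data send/receive",
    35: "timeout waiting for daemon connection",
}

def run_rsync(src: str, dest: str, timeout: int = 300) -> tuple[int, str]:
    """Run one rsync fetch and translate its exit status into text."""
    proc = subprocess.run(
        ["rsync", "-rtz", "--delete", src, dest],
        capture_output=True, timeout=timeout,
    )
    meaning = RSYNC_EXIT_CODES.get(proc.returncode, "undocumented exit code")
    return proc.returncode, meaning
```

Even with this table, codes like 23 only say that *some* part of the transfer failed, not which objects were affected, which is the black-box problem the talk is complaining about.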
H
The only thing I'd add here that isn't already covered is that it's important to remember that the whole hierarchical publication thing probably goes away with RRDP. There may be interesting things in RRDP land that we haven't discovered yet that are just as weird, but we don't know what they are. I don't think we realized the extent to which hierarchical publication was going to be an issue for rsync until we started doing at least initial deployments.
H
It's just one of these things that started biting us, as a whoops. So, you know, who knows what's lurking in RRDP, but we don't know of any of those yet. Ok, that's likely, yeah. This is one I have to be careful with, because one of the early reviewers of the slide deck massively misinterpreted it. This is literally the ratio of the number of objects to the number of connections: the number of objects divided by the number of connections. It's not how many objects were fetched on a particular connection; it's just the previous two slides as a ratio.
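The distinction the speaker is drawing can be stated in a couple of lines (illustrative only): each plotted sample is an elementwise ratio of the two earlier time series, not a per-connection count.

```python
def objects_per_connection(objects: list[int], connections: list[int]) -> list[float]:
    """Elementwise ratio of the two previous slides' time series.

    Note this is NOT "objects fetched on a given connection": each sample
    is the total objects in an interval divided by the total connections
    in that same interval.
    """
    return [o / c if c else 0.0 for o, c in zip(objects, connections)]
```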
H
It's just because we are now using a handful of connections and we're still handling the same number of objects from RIPE. As with the other slides, generally these huge spikes mean oops, something bad happened; it gets cleaned up, and then life goes on. And as with the other ones, you can see, unsurprisingly, that for the RIRs that have actually managed to lump most of their data into a handful of connections, the number of objects per connection goes up. Big surprise.
H
Ok, pretty much what you'd expect from what I was just mumbling about. The only thing I have to add to this one is that I don't know what the hell is going on with the brown one. That's our own repository, ca0. Something weird happened there; I don't know what this was. It might have been related to workshop data; it might have been related to us not being attached to that repository for a little while because we were busy with other projects. I don't know, I really don't know what this one means.
B
H
The cases where you've got the really high spikes here are generally where something is just stuck, a connection is stuck. Ok, there have been times we've had particular problems talking from Seattle to AfriNIC. We don't know that it's anything AfriNIC is doing wrong; it's just that something on the path, when one does an rsync to AfriNIC, is often unhappy. So we often see very, very poor results retrieving from AfriNIC; that's not necessarily anything they're doing. There was also a period of time when we saw very bad results from LACNIC.
H
Okay, oh yeah, the parallelism thing. This confuses people. It is possible to run more than one rsync connection in parallel. Okay, we generally do; this particular machine that's doing these samples is configured to run 10 in parallel. It was not when we very first started. Part of the reason we continue to present this as if it were just a single one running is that when we started, we didn't have that parallelism feature, so in order to get any kind of consistent reporting, we had to keep it that way.
H
It does not in fact take as long as you would gather from this, because we're doing 10 connections in parallel; that's for the ones where we can do 10 connections in parallel. The downside of the hierarchical publication thing, where we only need one connection to talk to RIPE, is that it's only one connection. You can't parallelize one connection; it takes however long it takes. Okay, that has not been a problem to date.
H
If we were to continue doing this and RIPE were to grow by another order of magnitude or so, it might be an issue, but we're hoping to be on RRDP by that point. The real reason for having the parallelism feature is actually not this anyway. The real reason for the parallelism feature is simply not to have validation get stuck when one particular repository just isn't answering. This used to happen to us a lot back in the early testing days. There was one particular guinea pig, who were really, really sweet people.
H
They were trying really hard, but they had some organizational thing: they could never quite get their repository reachable outside their firewall. It worked for them, but every time we tried to validate against them, we'd hang, you know, until TCP gave up, and it would do this every hour. So with the parallelism thing, at least we could go on and do everything else while we were waiting for that to time out.
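A minimal sketch of that parallelism idea, assuming a plain thread pool and a per-repository timeout; the helper names and rsync flags are illustrative, not the actual relying-party code:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed
import subprocess

def fetch_one(uri: str, dest: str, timeout: int = 300) -> bool:
    """Fetch one repository; a stuck peer only costs its own timeout."""
    try:
        proc = subprocess.run(["rsync", "-rtz", uri, dest],
                              capture_output=True, timeout=timeout)
        return proc.returncode == 0
    except subprocess.TimeoutExpired:
        return False

def fetch_all(repos: dict[str, str], workers: int = 10) -> dict[str, bool]:
    """Run up to `workers` rsync fetches in parallel.

    With a single serial loop, one unreachable repository would stall
    the whole validation run until TCP gave up; here it only blocks
    one worker while the others keep going.
    """
    results = {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = {pool.submit(fetch_one, uri, dest): uri
                   for uri, dest in repos.items()}
        for fut in as_completed(futures):
            results[futures[fut]] = fut.result()
    return results
```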
H
So, average connection duration. This shouldn't be surprising, right? The average connection duration is actually fairly long. Some of that may be error related, but some of this is just that there's a lot of data, so you would expect the connection durations for the bigger repositories to be longer than the others.
H
Connection durations to AfriNIC: again, it may just be connectivity issues between us and them; we don't really know. Next slide, please. And again, I really don't know what's going on with ca0 there. Other than that, the stuff is relatively consistent, except for the blizzard where people changed the publication algorithm, yeah.
H
There was a period of time when we were topping out: most of the connections never took longer than 300 seconds. 200? Yeah, 300 seconds. And when they took 300 seconds, it was because something had hung, so basically at 300 seconds we would kill a dead connection. These days we sometimes have active connections lasting longer than that, so that's actually a change.
H
We don't really know what's going on with the RIPE thing here; there's a bit more on that when we get to the next slides, with the error stuff.
H
Ok! This is probably the flakiest of all of the graphs, because it's actually fairly hard to measure errors. What we're doing here is essentially keeping track of a sliding window over the last N connection attempts. I don't remember what N is; it's something on the order of 10. But out of the last N connection attempts, how many failed, right? That's what we're graphing here. When that number goes up, it means we're getting a lot of connection failures; when it goes down, it means we're not getting a lot of connection failures.
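That sliding-window measurement can be sketched as follows; the default of n = 10 is an assumption, since the talk only says N is "on the order of 10":

```python
from collections import deque

class FailureWindow:
    """Failure rate over a sliding window of the last n connection attempts."""

    def __init__(self, n: int = 10):
        # deque with maxlen silently drops the oldest attempt once full.
        self.attempts = deque(maxlen=n)

    def record(self, succeeded: bool) -> None:
        self.attempts.append(succeeded)

    def failure_rate(self) -> float:
        """Fraction of the last n attempts that failed (0.0 if none yet)."""
        if not self.attempts:
            return 0.0
        return self.attempts.count(False) / len(self.attempts)
```

This is exactly as crude as the talk admits: a single number per repository that rises when connections fail and falls when they succeed, with no detail about why.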
H
Part of the reason we're using such a lame measurement is, again, rsync. We don't get a lot of data back from rsync; what we get back is, did it work or did it not work? So we're making do with what we've got. But you can see that in the period where we're starting to have trouble with longer connections to RIPE, we're also getting a higher error rate. I don't know what that's about.
A
H
Next, please. It's the same data, and at this point it's noisy enough that we really don't know what it means. Ok, we included it because we were able to measure it, and it gives a little bit of insight into some of the other things, like the traffic levels, but I wouldn't take the error rate here too seriously. It's more a hint of things to look for than anything else.
H
A
H
So we still don't have any measurement for freshness; I've still not figured out how to measure that. It's probably something we ought to be paying attention to, but we don't have a good measurement for it, and if somebody can come up with an algorithm for that, that would be awesome. Clearly we're going to want to do this kind of measurement for RRDP. There's no point in doing RRDP measurement now, because this is basically measuring deployed servers, and there are no deployed servers yet; there's nothing.
H
G
Rogers, RIPE NCC. A comment on the freshness: as far as I know it's not required by the standards, but we put a signing time on the EE certificate of the manifest whenever we publish, and I think the thisUpdate is also in there. So that might be something that you could use.
H
That's good for recent data. I probably should have defined the term better; the sense in which I meant it was, how close is what the relying parties are seeing to the master copy that's on the repository? How closely are they staying in sync? Okay, in theory, if everybody had perfect connections all the time, everybody is in sync and everybody's fresh, but, as we know, connections fail. All the implementations are designed to tolerate a certain amount of sliding through periods when you couldn't get a connection:
H
you just go with what you had from before and do the best you can. So the question is, how well are the relying parties doing at staying in sync with the repositories? And yes, definitely, certainly for anything that's recent, signing times and certificate issue times are a good hint, but there are probably some cases with existing data that hasn't been changed for six months yet is still the freshest thing there. So the question is, how do you measure that?
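One crude way to operationalize freshness, purely as a sketch and not a proposal from the talk: if a manifest timestamp such as thisUpdate is available both from the relying party's cache and from the repository's current copy, staleness is just the lag between them.

```python
from datetime import datetime, timezone

def staleness_seconds(local_this_update: datetime,
                      repo_this_update: datetime) -> float:
    """Seconds by which the relying party's cached manifest lags the
    repository's current one; 0.0 means fully in sync.

    This only works for recently changed data: an object untouched for
    six months shows zero lag even if the relying party never re-fetched,
    which is exactly the gap the speaker points out.
    """
    lag = (repo_this_update - local_this_update).total_seconds()
    return max(lag, 0.0)
```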
D
Effectively, what you're asking for is some sort of log file that just basically says how often the manifest has been updated. If you literally had nothing but a set of timestamps for a given version, that would sort of give you the data you're looking for. And I don't mean that as sort of a required, normative document; it's in the set of things that could be tracked as a bit of an operational thing you could pull from the server also.
H
J
Andy Newton, ARIN. Regarding the measuring of up-down: we had two organizations doing up-down with us; I think we don't have one now. And with one of them, if you were going to try to measure rsync connections with them, they were kind of letting their server go offline consistently, and when we contacted them about it, their response was that they had read the RFCs and, according to them, they didn't have to keep it up all the time.
A
E
The two examples most recently were crypto stuff, and I'm guessing that a lot of the people here just didn't want to take a look at crypto stuff, but even so, please do try to look at the drafts when they come through working group last call; it's ever so much better for results. I'm trying to poke some people right now to give some additional reviews for those things that have just recently gone through working group last call. Another item, on the routing side:
E
these things tend to be true when published, but they become quickly overtaken by events. We actually have a couple of drafts right now, an overview document and a use cases document, so when we consider publication requests for those, I'd just want the working group to give a little additional thought as to whether or not these things would still be of value five years from now, and whether there's enough use to put the publication system through the effort of actually producing an RFC.
E
I think that's about it; anything else I'll talk about on Friday. So on Friday we will have less to talk about, because Rob's presentation was today, so we'll have two talks about bad things CAs might do. So we're done here, and we'll see you on Friday. Okay, we'll be looking for new volunteers for Jabber scribe and note taker on Friday. All right. Okay.