From YouTube: IETF109-SAAG-20201119-0500
Description
SAAG meeting session at IETF109
2020/11/19 0500
https://datatracker.ietf.org/meeting/109/proceedings/
A: All right, hi everybody. We are at the top of the hour, whichever hour it may be for you in your own local time zone, so we shall get started. So, I am Ben Kaduk, and my co-AD Roman says hi, everyone. We are the Security Area Directors, and this is the Security Area Advisory Group.
A: So here's the agenda, or at least so far. If anybody has any bashing or adjustments or changes they'd like to propose, please join the queue and we can talk about those; and I will give a couple of seconds for that, especially as I catch up on the chat window.
A: But I don't see anyone joining the queue. So let's go ahead and go through to the working group reports. Sorry, let's go to the memoriam, my apologies. So, as you all know, and maybe saw at the plenary, Jim Schaad passed away in October, and it's been a huge loss to us.
A: Can we move on to the working group summaries? So I think most of these we can just sort of page through; we've gotten reports from most people. And let's see, I've got a local copy up, so I can sort of sneak ahead. So EMU just sent one, which I do not believe made it into the slides yet, but it is available. And if anybody has any additional comments that they want to make about what has been going on in their working group, by all means, please join the queue.
A: We've got a good response rate through the different groups, although there are a handful that did not meet and also did not send reports. So, it looks like secdispatch: we had the session on Monday, and we had just a couple of presentations. I don't know if anybody does want to come to the mic and talk about secdispatch particularly, or about any of the other related work.
A: I know Fernando has updated the numeric IDs security considerations, actually a couple of months ago, but I just got back to him yesterday, so that should be ready to go out for IETF last call. And then security.txt already did go through IETF last call and got a significant volume of comments.
A: I have attempted to go through them, extract out the technical bits, and work with the authors to make changes to address them; but I still have to do my final pass, to sort of go through the actual changes in the document and see whether or not the review comments have actually been addressed. So that's still waiting for my additional review follow-up, and would potentially either have more changes to the draft or potentially go to the IESG.
A: Depending on the results of that. Next, Roman mentioned OpenPGP already; we do have the rechartering, or chartering, process in progress for that, and so that will be going to the IESG again in December.
A: So if you think you might be interested, even if you're not sure, please reach out to me, and we can talk about what it would entail and whether it's a good fit. So that's the security area highlights.
B: In addition to the ACE working group chair opportunity: in general, if you are interested in being a working group chair, opportunities like this kind of always come up; and so, if you let us know in advance, we will keep you in mind when these opportunities do present themselves. So really do let us know, with a short note privately; just say: hey, at some point I'd be interested, and we'll see whether there's a match.
B: A couple of other area highlights: we've had a lot of discussion in recent weeks on various vulnerability disclosure guidance. So we first had the discussion about the LLC, so we have a published policy for the IETF infrastructure; and most recently, which I think we closed out early last week, we have also now documented how we do vulnerability disclosure in the IETF.
B: You can read that at the GitHub link there, and it should be posted relatively soon to the official website. The big addition, beyond everything that we already do, was first having it written down; but we now have a last-resort alias that will go to Ben and me if the discoverer, or the seeker, of the vulnerabilities can't find the right working group. The other thing Ben and I wanted to plug is that we keep a working list of what we call the commonly discussed topics.
B: So if you're working on a document, or if you are reaching out and helping with security issues in another working group, we have this working tick list of things to consider, beyond all the official documentation we have, as documents go forward. And if you have suggestions of other things to add to that list, we really do welcome that feedback.
B: The other thing Ben and I wanted to quickly highlight: our pipeline grows and shrinks, and we've been really putting in a concerted effort to make sure that we're working the documents as fast as we can. We know that we are a gating function to moving things forward. So we just want to highlight that we've been pushing hard, and we believe everything is through pubreq.
B: As of these screen captures from a couple of days ago, that is. And what we would also call out is that our reviews of these are kind of agile. If you have a document with us in the pipeline, we would typically handle it first come, first served; but if you have some constraints, where you think there are dependencies on the documents, or the way the authors have resources requires that we look at something sooner, please let us know.
A: Yes, I just want to second that. It is very helpful if there's a particular document that has some additional constraints; I know we're both happy to try and work to meet those, but we have to know about them in order to do anything. And just to sort of reiterate: the Publication Requested queue has gone to zero, but it used to be quite large.
A: So there's a little bit that's making its way through to the subsequent states: evaluation, last call, and waiting for the write-up. But we are working to keep those queues short as well.
E: And I tried to send a couple of chats, but they went into space, and the chat window remains empty; I think it's broken.
A: You can reload the page, and hopefully the chat will work after you ask a question.
E: Okay. So what I typed twice, but I'm not sure it got into the chat window, was: is there a second working group chair for CBOR, to follow, but not replace, the irreplaceable Jim Schaad, and help Francesca, who is rather active?
A: Thanks. That's a good question.
G: Great. Amazing what the volume knob does.
G: I had a question about the security vulnerability disclosure, just a logistical question, which is: I know that there is an OpenPGP certificate published, so people can make confidential disclosures to that catch-all address; but that is not published on probably some of the most popular key servers, and it's also not published via WKD.
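(For context on the WKD point being raised: here is a minimal Python sketch of the direct-method lookup, written from the OpenPGP Web Key Directory draft; the example address is illustrative, so treat the details as assumptions rather than normative.)

```python
import hashlib

ZB32 = "ybndrfg8ejkmcpqxot1uwisza345h769"  # z-base-32 alphabet

def zbase32(data: bytes) -> str:
    nchars = (len(data) * 8 + 4) // 5
    bits = int.from_bytes(data, "big") << (nchars * 5 - len(data) * 8)
    return "".join(ZB32[(bits >> (5 * i)) & 31] for i in reversed(range(nchars)))

def wkd_direct_url(address: str) -> str:
    # The key for local@domain lives at a well-known URL derived from the
    # z-base-32 encoding of the SHA-1 of the lowercased local part.
    local, domain = address.split("@", 1)
    digest = hashlib.sha1(local.lower().encode()).digest()
    return (f"https://{domain}/.well-known/openpgpkey/hu/"
            f"{zbase32(digest)}?l={local}")

print(wkd_direct_url("security@example.org"))
```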
B: Absolutely. If you'll notice, the write-up of that is actually still on GitHub; we have not yet done the necessary steps to put things on the website itself. Publishing the key in one or many key servers is still one of those last steps we need to do, in addition to a little bit of editorial polish we still have. So that's on the tick list before we launch.
B: So Ben and I just really wanted to reiterate: while we do a lot of document review, there's a tremendous amount of magic and effort that happens before things get to us, and things that happen that tremendously help us too, and that's only possible through the secdir review process. Folks on that team really give of themselves, and they shoulder quite a heavy burden: they do the IETF last call reviews, and they are there for the telechat reviews.
B: They are the resource we have when working groups want an early review and that iteration, and it frankly makes the document quality, by the time it gets to the IESG, so much better. We can't thank that team enough for what they do. And so we just wanted to quickly and publicly recognize all of the reviews that have happened, and who those reviewers have been, since the last time we got together at IETF 108. So, again: thank you, thank you, everyone; you're really helping us and the community.
A: All right, so the next agenda item was this sort of terminology question about, you know, on-path or man-in-the-middle, and what does this actually mean? So, if we go to the next slide: this was a thread that was started on SAAG by Michael Richardson, and I've got the link in the slides to the initial message of the thread; and then Michael also tried to do a summary as well, probably a few days ago.
A: At this point, though, the days are all blurring together. But to sort of reiterate some of that for this audience, if we go to the next slide: there were a few interesting points. So, obviously, many of these terms that we bandy around don't actually have a precise definition; man-in-the-middle has been used for probably decades, across sort of the whole landscape in which we try to use it.
A: So, if you could be on-path only some of the time, then that's another consideration as well. And, of course, there's this overarching point about how much we really care: should we be trying to design against the strongest attacker that we can imagine?
A: Well, that's not strictly true; but if we're trying to design against a very strong attacker, then the notion of on-path or not is in some sense irrelevant, because we want to protect against the attack no matter what it is. But there are in fact some cases where having the sort of distinction between different classes of attacks is useful, so that we can sort of be able to discriminate...
A: ...when one thing would be a problem and when it would not, and to be able to talk about the distinctions, versus just "there's an attack" and "there's not an attack". There can be a lot of context around that, in terms of how severe it would be, or in what cases it would be problematic. So, next slide.
A: But if we go to the next slide, there are sort of some questions about: are these really a useful taxonomy to have? Is there more detail in what's going on that we need to have terms to be able to talk about? And, of course, the pervasive question, or perhaps the theme of this talk: are these terms well defined and unambiguous?
A: First off, if anybody has an immediate response to either of these questions, please go ahead and join the queue. This is not supposed to be me talking at y'all for the whole time; the goal is to have a conversation. So, Paul, please go ahead.
H: Greetings. So, we're having some of these same questions in DPRIVE right now. So I would be willing to edit a document, as long as I was not the person deciding consensus.
H: This is obviously full of bike sheds, as we saw on the last slide, but it is real. I think it would be very important for the IETF in general to have a small set that are usable currently across working groups. We've had old definitions and suchlike, but I think doing something, and trying to finish it within the next nine months or a year, would be useful to a few working groups. So I'd be willing to edit, as long as maybe Ben and Roman would be the ones who were deciding consensus and suchlike.
A: Okay. I'm glad to hear that you think a document would be useful in this space to really settle things down; that was not even something I was sure about. And thank you as well for the offer to edit. Jonathan, please.
I: I think this is the kind of thing where we should just look to academia. I can't think of a particular paper right now, but I'm sure that there are very comprehensive taxonomies, and just limiting it to sort of three levels of attacker is not particularly useful; because, especially at least with the TLS analysis, we're also assuming an attacker that can compromise some subset of the participants in the protocol, at which point it can get much more complicated than three layers. So I feel like this is just something where we should look to the literature.
A: That's a good point. Do you know if you yourself, or maybe somebody else, might be able to take some time to go look through the literature?
I: I'm happy to look through the literature.
A: Thank you, that would be quite helpful. And presumably the results of that can be input for Paul, as we try to prepare a potential document, for specific usage in the IETF, that summarizes what we found from the literature. So, Jonathan, I assume you're done, so you can leave the queue.
I: I don't think I have the option to do that.
A: Joe?
F: Hello. So, I'm wondering if this is something that would go along with the MODEL-T work; it seems like the attackers might be in scope for threat modeling.
J: Hi, I guess you can hear me; I see the little bar there. So I'm a little bit confused about what problem we're actually trying to solve here. Is this a question of inclusive terminology, or is this an issue of... I thought that was kind of where this came from, but it seems like there's also an issue of trying to come up with a more fine-grained definition of what a man-in-the-middle attack is.
J: So I guess I'd like to understand that. And if it's in terms of the inclusive terminology, there's also this other work that's going on in the IETF terminology GitHub repository, and I don't know exactly who's managing that, or what working group or whatever is dealing with it; but that's going on as well, and I'm seeing it cited externally.
A: Okay, just an administrative note: I do have, I think, one or two more slides, so I will cut the queue on this slide, and we can drain this queue and then go on a little bit more. To answer Jim's question now: the primary problem here is that when you say "man in the middle", not everybody is thinking the same thing, and we would like people to all be thinking about the same thing for any given term. So we just want to make sure that we have accurate and precise terms that everybody knows the meaning of; any sort of inclusivity would be a secondary effect.
I: Just relaying from Jabber: Mohit asks, if we're going to work on taxonomy and terminology, is it worth revisiting RFC 4949?
K: Hi. I wanted to suggest that, if we come up with some new terminology, it should actually also be usable in some formal analysis. I think it would be unfortunate if we have all these different fine-grained terms, and then there's no equivalent way to use them in the underlying formal technologies for formal analysis, because nowadays we use those more and more often. So there had better be some mapping between them.
A: That's a very good point. I hope that the academic literature is framed in a way that would be consistent with that; but if we get some results back from that search, we can definitely keep that in mind as well.
H: So, yeah, just a quick answer to Jim Fenton. The other reason why having good terminology is important is not just so that we in the IETF, discussing security, understand it, but so that people reading papers that we write about our protocols do as well. A very good example: in a paper that I've written recently, I used "man in the middle", "machine in the middle", and "attacker in the middle", and got some interesting responses from security novices.
H: But for people who understand the internet: one issue with "man in the middle" is they asked, well, does the person have to be sitting there watching all the time? And so clearly we should make whatever that phrase is going to be clear that it's not a person, which is a completely understandable confusion. But the other issue, and this has come up a lot with the questions around DoH, is that it's really not "in the middle".
H: Almost all of these attacks are very, very near one edge or the other. I mean, I probably was "in the middle" 20 or 30 years ago, but now most of the ones that we are concerned about are in fact... I shouldn't say most, but many of the ones we are concerned about are in fact very near an edge. So saying "in the middle" hides some of the attacks that we actually care about, namely system administrators putting security-busting proxies near users.
A: So this, I think, was mostly just summarizing what had already been sent on the list, maybe with a little bit of brainstorming from me as well. And I just want to make sure that we have the slide up, to see if it would prompt any further thoughts from anybody, in terms of triggering new ideas.
A: But I think we seem to be getting some general support for doing something in this space. We will want to not do it in isolation from the work that MODEL-T is doing, and there are some other considerations to take into account as well.
A: So this is actually the last slide, so the queue is open again; sorry for the strange split with the question slide. But I think my overall takeaways here are that, yes, it's sounding like we do want to have a more fine-grained, more precise set of terminology that we can use; and there are definitely some constraints on how we should do that, to make it most usable for us and for others.
C: I'm wondering whether we need to make any distinction between pairwise communications, multi-user communications, and broadcast communications; that's where my focus is right now. Is that at all a factor in this discussion?
A: I think we are kind of at a brainstorming point in the process, so we're not even starting the discussion yet. So I think it's perfectly reasonable to consider whether that should be part of the discussion as well.
A: Roman, did you have a point you were going to make?
B: I just wanted to reiterate two of the other constraints we heard in the discussion from Jabber. I think Mohit was saying we should make sure we stare at the written guidance we already have in the IETF; and Jonathan noted, and Hannes reiterated, the idea that the academic literature may also have things to say, and we want to make sure that what's there is consistent and that we don't publish something that conflicts.
A: Yes. So I think we should move on to the invited talk; I believe I saw Ryan in the participant list. We can hear you, so please go ahead; Roman's driving the slides.
L: All right, perfect. Well, thanks for having me. I will give a warning here: these slides are packed dense. My intent is not to go over them all word for word, but if you find me exceeding 0.8 EKR, please yell and I will try to slow down. When it comes to some of these topics, I know that many people here participated in some of the discussions that I'll be covering. My intent is not to misrepresent things, so if you find something that you don't believe is accurately presented, please do correct me.
L: The only thing I would encourage is: could you save it to the end, to see how much it impacts things, just to help things progress. So, with that, we can go ahead to the next slide. All right. So, yes: this talk is intended for people designing, implementing, or using a protocol that uses certificates.
L: This is not meant to be the "how do you run your PKI for your organization" talk; it's more about how you run a PKI on the internet. I'm not trying to tell you how to run a PKI on the internet, but I'm trying to share some stories about what's worked and what's not worked. So, next slide. Part of where this talk came from was discussion about the use of what colloquially we call the web PKI, and there was some confusion.
L: You know: what's the problem with using the web PKI, and what are the gotchas? So this is interesting for you if you're not sure what the web PKI is, if you hate the web PKI, or if you love the web PKI; and there is some implied thinking that needs to be done if you think mutual TLS is all that great, or that it's an easy solution. So we can go ahead, and next slide.
L: So I said the words "web PKI"; I know I have a reputation in that space. This is not about how the web PKI is awesome, or how you should follow the model. It's about painful lessons, and how you can learn from these mistakes over the course of two decades and hopefully avoid them. Next slide.
L: So it's split into sort of two parts: an evolution of what I would call the internet PKI and the web PKI, and you'll see a bit more about what that means; and then a discussion of how that practically works out, in requirements for a PKI.
L: So you can go ahead, next slide; and I guess I don't need the covers, here's the next slide. So the first one that I'm going to talk about is the PKI that I'm calling PKI 0.1; it's sort of the alpha. It comes about with X.509 in 1988, and the important part to sort of capture here, when we talk about what the X.509 PKI was: it was part of the directory. It was merely an authentication protocol that fit into the overall X.500 series of specifications describing how the directory worked.
L: It was merely an asymmetric cryptographic exchange to authenticate to the directory; and, after that, everything you needed would be in the directory, or, if you needed to communicate with someone in the directory, you could use it to look up their information and communicate with them there. Next slide.
L: Privacy-Enhanced Mail made use of X.509 certificates, but without the inherent dependency on the directory. The first sort of approach, as you see in RFC 1114, was that there was a way to name things, but it was mostly determined by RSA DSI: they were going to run the PKI for everyone, and they would be able to certify what are known as organizational notaries, who would then be responsible, within the scope of that organization, for defining a namespace.
L: Now, not everyone was organizationally affiliated, so for those that were residential, RSA would also run a CA. And, despite there being organizational notaries, the suggestion was that RSA would in fact run these as well, in a practical sense, since running a PKI is hard. In the subsequent Privacy-Enhanced Mail RFCs, RFC 1422, it became a bit more complex.
L: The suggestion was that the ISOC would run what's known as the Internet Policy Registration Authority, which would be the root of trust for the internet PKI; and it would register policy certification authorities that defined policies appropriate for their communities, and within there, CAs would be evaluated on whether or not they met those policies.
L: Now, the source of information here, since you're not really using the directory, comes from the message headers; everything you need is sort of in the message that you get, and you don't need to go through any lookups, because it's a very flat structure, right? You have the policy registration authority, the policy CAs, and the CAs beneath them, but you don't really get much deeper than that. Next slide.
L: So around this time, Netscape comes to town with SSL, kicking it off in November 1994. I would say it was rough consensus and running code, but they sort of did a side-step around the IETF, because they were concerned it was going to take too long; so they threw up a rough draft spec, as well as shipping a very beta copy of Navigator that implemented SSL. And the reason for doing it this way...
L: ...was that they were reportedly deeply concerned about what Microsoft would do in terms of bringing commerce to the web, and so they wanted to be there first, to ensure that there wasn't a proprietary solution. Another part of this, though, was also that they were aiming for minimal change to existing software.
L: It needed to work on the computers people had, with the software people had; and so this largely meant doing things in userland, which is why it became the Secure Sockets Layer: no other changes. And they used X.509 because RSA said to use X.509, and that seemed like a good idea at the time (what else are you going to use?), and there were very few external dependencies. So the very first version of Netscape...
L: ...merely checked against a list of issuers that were baked into the binary, and every certificate was directly issued by one of those issuers. There were no chains; there were no changes to the protocol naming rules; there were no naming rules, in fact. There wasn't really anything going on whatsoever. So, in the early versions, there was nothing being checked; the idea was that users would manually inspect certificates and decide whether or not to continue. After they released the specification...
L: ...the thinking was: if that Eastlake and Kaufman idea worked out, which would become DNSSEC, then maybe we wouldn't need this whole PKI thing; and if that IPSP, which would become IPsec, worked out, then maybe we don't need this whole SSL thing. But we need something now, and we need to have it shipped, and it needs to be able to work; so we have it, and this became SSL 3 shortly thereafter.
L: IETF 34 was the first meeting of PKIX, and you can find a little bit about the history of why that was: the work on PEM had revealed that it's actually really hard to use CAs on the internet; there were a lot of limitations.
L: The ITU had proposed a number of changes that became X.509 v3, in particular extensions. Now, the goal with PKIX, and I hope again I am not misrepresenting anything, was: no presumed dependency on the directory; allowing multiple roots, without necessarily the ISOC running the root or having RSA run the root; and many ways to name things. There were some important goals set out in that first draft, which would become many of these RFCs: minimal configuration by users, and minimal interactivity requirements.
L: Things needed to be able to be automated and verified with automated tools; and, boy golly gee, wouldn't it be nice to have automatable certificate enrollment and management using well-defined protocols. Now, that took a number of years from when that work started to when that work was specified; and in all the meantime, Netscape and Microsoft were waging a very exciting war. Next slide.
L: So this is the period where I talk about web PKI versus internet PKI. The PKI that Netscape had deployed really started off with a presumption of user interactivity. They did add the common name checking, and would later add subject alternative name checking; but really the assumption was that users would be inspecting things, using the document information dialog.
L: The web PKI was focused on browsers, and it started off that way; but the movement by Microsoft to move IE into the operating system started to create what I would argue is the precursor to the internet PKI, because they wanted to expose these services as sort of general operating-system-level services, where anyone could write an internet-enabled application; whereas Netscape started off really focused on the browser.
L: Netscape would add support for SSL to some of their other protocols, like SMTP and FTP, with respect to the Communicator product; but this wasn't sort of how things started. And it blurred the lines between the web PKI and the internet PKI, because here's a set of CAs that can be used for a variety of protocols, a variety of internet services, and it just works. Is it supposed to be just for the web? Is it supposed to be for anything? It was a bit ambiguous. And the selection of CAs was, admittedly, very weak.
L: It was mostly based on organizational reputation and business risk, not necessarily security risk. So: you're a bank, you issue certificates, you seem pretty trustworthy, because you're a bank and there's a lot of regulation around banks. That was what made up a lot of the early CAs; the audits wouldn't come until later. Next slide. So I'm going to call this the precursor to the web PKI; this is when things started to get really messy. So: the CA/Browser Forum, coming about in 2006-2007, started blurring things.
L: The CAB Forum started because CAs were unhappy that certain competitors were issuing certificates with only domain names. They felt that this was fundamentally the wrong way to do certificates, because they had started out with legal names, even though PKIX was really oriented around internet naming. Browsers were unhappy with CAs, in terms of just how widely divergent everything was. So a meeting was held; it did not go the way that the originators intended, because, rather than forbidding domain validation, what ended up happening was the development of a new standard: EV.
L: Browsers were interested in combating phishing (Microsoft most of all, with Internet Explorer), and so they proposed this EV standard. And the reason why I call this web PKI 0.9 is that it developed a PKI that was very focused on user interactivity, and it was very focused on web browsers and web-browser-specific needs. The semantics of an EV certificate outside of a web browser really don't make sense: yes, there's some additional validation, but none of that is actually part of the processing model of a certificate in RFC 3280.
L: And so what I'm calling the birth of the web PKI was when the CAB Forum adopted the Baseline Requirements. So the context here: the CAB Forum was created because some CAs, I should say, were unhappy about domain validation in certificates, and created this new EV standard. So now they wanted to come back to minimum standards and figure out what is the absolute bare minimum for certificates; and the work on this, while it made progress, was incredibly contentious, and incredibly profane on the mailing list, I will say. And then DigiNotar happened.
L: DigiNotar happened in the fall of 2011, and it really shook trust in the system. And the version that had been circulated, which had been quite controversial and which allowed for both domain validation and organizational validation, came about; and it also set up a number of policies for how certificates could be issued and when they would be issued. Most importantly, for example, the deprecation of 1024-bit certificates was in that very first version.
L: And that way, they did not have to disable the 1024-bit code within their product until the majority of servers had migrated off their existing 1024-bit certificates. So it was a bit of a flag day, and a way of managing that. But, importantly, it also marks a point where browsers really began to tell CAs what they can and can't do, and that's why I call it the birth of the web PKI: because it was an application community defining policies for the CAs and for the PKI.
L: So it would not be correct to talk about web PKI 1.0 without talking about 2.0; and what I'm calling web PKI 2.0 is really the point at which people realized that there were in fact separate PKIs: that there was this concept of a web PKI being a majorly different thing; that it wasn't just about the software verification quirks within browsers, but in fact that there was a different set of policies and expectations and needs in user communities. And so: Symantec, the largest CA.
L: Their history of key material goes back to the days of RSA DSI and VeriSign, and I do see some participants here who were around VeriSign in those days. They had some issues, and ultimately got distrusted in browsers; but the problem was, the roots were embedded on virtually every device out there. Part of that was by design of RSA, which was, in the early days: if you wanted to get a license for the RSA patent, you had to include RSA's root key material; that would become the VeriSign group.
L: Part of it, though, was also just: that's what everyone else is doing, it seems like a good idea. So I call it the split, because we saw a lot of issues with this. The classic example, of course, is a single web server providing a single API endpoint; you see it there: non-updatable devices. A number of major media companies had set-top boxes, had TVs, where they only trusted a few CAs; but all of their users used the same web server, all talked to the same endpoint, and now suddenly you had a conflict.
L: If I have an old set-top box, it doesn't work with new certificates; but if I'm using a browser, it does not work with these old certificates that are being deprecated. And so there wasn't a good way. Now you had to think about the situation of: I have a single web server; I actually need different certificates for this; and, because I need different certificates, I need even different domains as a way to distinguish which certificates to send.
L: The other aspect, of course, was that this deprecation was not uniform across all of Symantec. So some operating systems would continue, still to this day, to trust Symantec certificates, or trust them for other formats, like email. And so this also revealed that different protocols have different PKI needs and security needs: the security issues that affected Symantec as far as the web PKI was concerned, some vendors decided, did not affect other protocols and other purposes. And so I call it web PKI 2.0.
L: It's the point where, I would argue, many people outside the IETF realized that there were in fact multiple PKIs, and it brought a lot of awareness here. So I'm now going to jump to talking about requirements for a PKI. Next slide.
L: As for how we got here: well, the reason why I mention it is that, throughout this history, there were various points where we saw branches: branches in approach, branches in need, branches in what was supported. And this affected the design of these different PKIs, the web PKI, the internet PKI, and what's capable in software; and these branches have implications if you're trying to use certificates that generally work, and you have to think about what the answers are for this, because they have material impact. So, an example, next slide, is thinking about the connectivity model.
L: Next slide, exactly; you can see Kerberos, right? This is my terribly hacky attempt at the classic Kerberos model, where the client talks to a server; it also talks to the KDC; and the assumption here is that the server and the KDC don't have to talk, because the client acts as the mediator. So that's what I mean by the connectivity model. Next slide. X.509 v1's connectivity model kind of assumed that everyone was talking to the directory, right? X.509 was just a subset of the overall directory protocols.
L: It was how you authenticated to the directory, and you would use the directory to find other certificates. Now, there was the allowance for out-of-band communication, but the idea here is that, say, CRLs would be stored in the directory; if you need to figure out certificate paths, that would be stored in the directory. Everyone talked to the directory; there was strong connectivity, and equally there was strong connectivity on the client-server side. Next slide.
L: The web PKI connectivity model is a little messy. We assume that the client can talk to the server, but one of the realities here is that we don't assume the client can talk to the CA. This was realized very early on in many of the web PKI implementations: that, in fact, it's really hard to get a packet from point A to point B on the internet reliably, especially from client computers; everything from local firewalls to dodgy internet connections that you don't really control.
L: So there isn't an assumed connection here from the client to the CA; but there is, to some hand-wavy degree in the modern web PKI, an assumed connection to the browser vendor. You need to find out if you have new updates, and so you've probably got a communication channel to talk to the browser vendor, and for the browser vendor to reach you.
L: This is somewhat of an oversimplification, but with PKIX, the 2459/3280/5280 design, there aren't really any strong assumptions whatsoever. Now, the client and the server: we presume that they have a path for exchanging messages, but everything else is fair game, whether it's one CA or many CAs; it's a bit of an unreliable-connectivity model. Now, this is important because, in order for many of the security guarantees to work, you actually have to figure out what the actual connectivity is. So, next slide.
L: So, an example of this, right: the server can't talk to external entities, the client can't talk to external entities, so you have no way to actually fetch or deliver revocation data. If the client and the server are the only two parties that can talk, and you have a certificate chain, there's no way to get that. So what's your solution? Well, a solution is short-lived certificates: you don't have to worry about revocation now; you just let the lifetime of the certificate be what it is. The downside is that this now introduces things into your PKI and your protocol design: you have to think about certificate lifecycle management. You also have to think now about how that relationship with third-party CAs works. If your clients don't have a way to change what CAs they trust, and your servers don't have a way to change what CAs they trust, that's a problem.
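(To make the short-lived-certificate option concrete, here is a minimal Python sketch using the third-party cryptography package; the seven-day threshold is an illustrative assumption, not something from the talk.)

```python
# Minimal sketch: when neither peer can reach the CA's CRL/OCSP endpoints,
# one option is to accept only certificates short-lived enough that their
# validity window itself bounds the revocation exposure.
from datetime import datetime, timedelta

from cryptography import x509

MAX_SHORT_LIVED = timedelta(days=7)  # illustrative policy knob

def acceptable_without_revocation(pem_bytes: bytes) -> bool:
    cert = x509.load_pem_x509_certificate(pem_bytes)
    lifetime = cert.not_valid_after - cert.not_valid_before
    now = datetime.utcnow()
    in_window = cert.not_valid_before <= now <= cert.not_valid_after
    # Short enough that "let the lifetime be the revocation" is tolerable.
    return in_window and lifetime <= MAX_SHORT_LIVED
```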
L: There's a problem, though, with assuming that OCSP stapling is viable, which is: if the certificate path the client built or verified is different than what the server sent, then those OCSP responses the server sent aren't helpful. And in order to solve this problem, you need strict requirements on how certification paths are built, so that your client and your server can agree: this is the right path.
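(A minimal sketch of the mismatch being described: stapled OCSP responses only help if they cover the chain the client actually built. The types here are illustrative stand-ins, not a real TLS or OCSP API.)

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CertId:
    issuer: str   # stand-in for the (issuer name hash, key hash) pair
    serial: int

def staples_cover_client_path(client_path: list[CertId],
                              stapled: set[CertId]) -> bool:
    # Every cert below the trust anchor in the path the client verified
    # needs a stapled response; if the client built a different path than
    # the one the server stapled for, this fails, and the client must fetch
    # revocation data itself (which, per the above, it may not be able to).
    return all(cert in stapled for cert in client_path[:-1])

server_stapled = {CertId("Intermediate A", 7)}
client_built = [CertId("Intermediate B", 7), CertId("Some Root", 1)]
assert not staples_cover_client_path(client_built, server_stapled)
```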
L: This means, to a degree, that you need to understand what your client's trust anchors are, or have a way to communicate that to the server, so that you can find that mutual path. I added this to the slide, and then I realized there was a discussion going on within EMU right now.
L: That was part of what prompted this. But EAP-TLS is this example, right, where you're bringing a client onto the network; they don't have connectivity, and you need to bring them up; and this creates design challenges for your PKI, when your client can't talk to the network. A real-world example of this, not trying to promote Google products: when early Chromebooks were released, they had a cellular modem in them; this was the CR-48.
L: The problem with that is, in order to use that cellular modem, you need to activate the cellular modem. So you'd go to a cellular provider's website, in this case VeriSign's, or sorry, Verizon's website, and you would activate your CR-48, and you would have cellular connectivity. There's one problem with this: Verizon only allowed connectivity to their servers; they forgot to allow connectivity to Symantec's OCSP servers. And so Chromebooks would fail to connect, and fail to be able to activate their cellular connection, because they had no ability to fetch the revocation data; and, for a brief window...
L: ...this was a hard failure on Chromebooks. So it created, at first, slow connections, where it would sit around for 60 seconds spinning, and then also potentially hard-fail. These are terrible user experiences when you just want to use a device; and this affects your PKI design, because these constraints change how you design your protocol and your PKI. Next slide.
L: Another thing that is not entirely obvious is thinking about the naming scheme. And so, I mentioned the web PKI started out with this idea of "show it all to the users". This naming scheme really leaned heavily on the idea of civil legal names, with the idea being that RSA would verify the name, for some value of "verify", or whoever had the RSA license would verify it, and that these would all be trustworthy; so there weren't really rules on verification. With PKIX...
L: ...there was a strong interest in how to build something for the internet, right: DNS names. But also, with the legacy of X.509 v1, there was the globally unique name, the distinguished name; it's "distinguished". However, naming is not so simple; I know I'm preaching to the choir here. DNS, of course, has many zones, and can involve many different naming authorities.
L: You have to decide: how do you reflect that? Is that something that should be reflected in the PKI? There is a model of PKI where, for every DNS zone split, there is a potential new sub-CA, which can allow delegation and control just within that namespace. If you look at, say, the early drafts of PEM, there was a very strong coupling, restricting those organizational notaries to a very specific global name space, which is the organization, but then allowing them free rein within that.
L: So, another way to think about it, looking at the DNS: the web PKI was a global naming scheme. You didn't have to work with the coordination of any of the zones; any CA could issue for any domain name on the internet. That is wonderful and terrifying. So you can see some of these trade-offs; but you also have to think about what it means for services and protocols.
L: So I mentioned here, you know: should an HTTP server on port 80 be able to obtain a certificate for your SMTP server? Just because I control the web server doesn't necessarily mean I am in control of, or authoritative for, the SMTP server; and if I can obtain a certificate, can I potentially spoof that connection? And so SRV names could help here; that is certainly one option. Another option: you could put it all in the DNS name; or you could use extended key usages that have semantic meaning.
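(As a small illustration of the extended-key-usage option just mentioned, here is a Python sketch using the third-party cryptography package. Note that the standard serverAuth EKU still does not distinguish an HTTPS server from an SMTP server, which is part of the problem being described.)

```python
from cryptography import x509
from cryptography.x509.oid import ExtendedKeyUsageOID

def usable_for_server_auth(cert: x509.Certificate) -> bool:
    try:
        eku = cert.extensions.get_extension_for_class(x509.ExtendedKeyUsage)
    except x509.ExtensionNotFound:
        # Absent EKU is often read as "any usage", which is exactly the
        # cross-protocol ambiguity discussed above.
        return True
    return ExtendedKeyUsageOID.SERVER_AUTH in eku.value
```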
L: The web PKI took a very different turn. It started off as roughly one-year certs, because that's how long organizational information could be considered verified; and between the combination of domain names and the quest to provide ever-cheaper certificates, certificates started getting longer; and by the time the CAB Forum came about and adopted the Baseline Requirements, there were some CAs issuing 10-year certificates for a domain name. I assure you, domain names don't usually last for 10 years.
L: So I mentioned a little here that domain names can change more frequently than annually; we can't just assume a one-year cert. Research here, like BygoneSSL, looked at how certificates were not revoked even though control of the domain name, or where the domain pointed, had changed; and so this creates new security risks.
L: So, as I mentioned earlier: the web PKI, the one Netscape shipped that became the web PKI of today, really was just a quick hack. The assumption here was: get it to market, fix it in post. There was strong confidence that IPsec would replace all this, as with DNSSEC, and this would just be a short-lived thing.
L: And, of course, there's this last bit: are our names unambiguously, interoperably machine-parseable? Domain names turn out not to be as clear as you'd hope, because you have specifications like RFC 5280 talking about the preferred name syntax, and then you have the running code that allows a lot more than that to hit the wire and actually be resolved.
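(A small sketch of the gap being described, in Python: a common reading of the preferred name syntax that RFC 5280 points at, from RFC 1034's letters-digits-hyphen rule as relaxed by RFC 1123, versus names that resolve in practice. The regex is illustrative, not normative text.)

```python
import re

LDH_LABEL = r"(?!-)[A-Za-z0-9-]{1,63}(?<!-)"  # no leading/trailing hyphen
PREFERRED = re.compile(rf"^{LDH_LABEL}(\.{LDH_LABEL})*$")

def is_preferred_name_syntax(name: str) -> bool:
    return len(name) <= 253 and PREFERRED.match(name) is not None

assert is_preferred_name_syntax("www.example.com")
# Underscore-labeled names resolve in real deployments (and show up in
# certificates on the wire) but fail the strict grammar:
assert not is_preferred_name_syntax("_sip.example.com")
```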
L: I'm not going to try to settle that discussion, other than to note that the threat model, of CAs being able to do that global issuance, is slightly tempered by the fact that clients can change how they do their DNS resolution, or potentially their IP connectivity. Will BGPsec and DNSSEC save us? Maybe. PKIX is a very robust model.
L: It allows the federation of all these PKIs, bridging, and all of these wonderful techniques, so that you can be very selective; and you can, unfortunately, get the lawyers involved and figure out how all this works. The downside, of course, is that, in practice, everyone ends up federating with everyone, and you have no idea who you're trusting and why you're trusting them; and this is how it has practically worked out in the web PKI.
L: So, next slide. Another one is: who's the policy authority? Who decides, who writes the rules? This is something that I think causes people the most pain when thinking about which PKI is the internet PKI, because: why should browsers set the rules? And this is because it is, to a degree, a browser PKI, for better or worse. So, in order to achieve the security and interoperability you need, you need folks to agree, and this is where SDOs can help.
L: But I think we know that the IETF is a bit shy around policy, and for good reason. So you need a policy authority, and I'm not trying to say that it can't be an SDO, just that in general it's not an SDO; because you need to be able to say that X is not secure, and if I say X is not secure and you disagree, I'm missing out on security; and of course, if I can't say that X is secure, then we don't have interoperability.
L: Unfortunately, this is heavily dependent on the assumption of local policy, and we want to get something that just works out of the box. You can't have a bunch of these local policies, and you don't have the ability to discover, say, policy mappings in a very easy and reliable way. And you can't help network effects, because network effects will always penalize the first mover: if you are going to break something, you're in trouble; and, equally, if everyone else is breaking something, you benefit from breaking last, because they will have solved all the problems for you.
L: We see this play out in browsers all the time when it comes to deprecating features. So, the policy authority: there is someone who can answer these questions, but that is itself a whole loaded question, as with any sort of matter of policy. But it comes down to this: if you want a PKI that works out of the box on the internet between multiple vendors, you have to figure out how you are going to make those decisions and get people to agree, and how you are going to handle it when things disagree. Next slide.
L: What I mean here, and I meant to make mention here of protocol maintenance: unfortunately, when everything is extensible, nothing ends up being extensible, because it's an overwhelming mess riddled with bugs. And PKIX has, unfortunately, because of its history, because of this co-evolution, the web PKI predating PKIX, PKI being a part of the operating system well before PKIX had finished its final draft, created issues where you have incompatible implementations.
L: You know, again, a classic example of this is Internet Explorer not supporting basic constraints; that's kind of a major thing, that was an "oops, our bad". But equally Firefox, you know, the Netscape that became Firefox, never supported the authority information access field, even though that's also part of RFC 5280. And so the idea here is that you have to use RFC 5280 as a starting point, but then you have to rip everything else out that you don't need; otherwise you will not achieve the interoperability that you're hoping for.
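(A minimal Python sketch of the basicConstraints check mentioned here, using the third-party cryptography package; it illustrates only the rule IE skipped, not a full RFC 5280 path validator.)

```python
from cryptography import x509

def can_act_as_ca(cert: x509.Certificate, ca_certs_below: int) -> bool:
    try:
        bc = cert.extensions.get_extension_for_class(x509.BasicConstraints)
    except x509.ExtensionNotFound:
        return False  # no basicConstraints: do not treat as a CA
    if not bc.value.ca:
        # End-entity certs must not issue; skipping this check was the
        # "major oops" referred to above.
        return False
    if bc.value.path_length is not None:
        # pathLenConstraint bounds how many CA certs may appear below.
        return ca_certs_below <= bc.value.path_length
    return True
```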
L: If you're using these external lookups, you have to figure out what protocols you're going to support; but, equally, say in a web browser, external lookups have privacy consequences that might not be appropriate, where in another protocol they might be perfectly reasonable. So there are a lot of challenges here. I think, though, to highlight: it's not just extensions.
L: I mentioned that Privacy-Enhanced Mail took a very simple, flat PKI structure: you had the IPRA, you had the PCAs, so you had this sense of "well, okay, I only have a few layers". You can look at Netscape, with the first version of SSL that they released, which had no chains whatsoever.
L: The length and complexity of your PKI graph is very coupled with how you're going to manage naming authority. Is everyone independent, or is there some central authority? Is there some shared global name space, like the web PKI? This affects a lot of the depth, and a lot of the complexity. But then there are also other issues: Let's Encrypt recently published a blog post talking about how their certificates are going to stop working on older Android devices, beginning next year, in September.
L: But this creates implications for interoperability when you think about it; and yes, it's terrifying, and I don't know how much more I can say about that. But this affects your PKI profile and design, because you have to think: how will you change the set of CAs that you trust over time, between devices? How will you manage that migration? If you don't allow cross-signing, that makes it much harder.
L: But if you do allow cross-signing, that adds a host of complexity to your security and threat model, because you allow the introduction of entities that you may not have vetted and that may not be part of your threat model. PKIX allows for both scenarios; it doesn't really tell you which is the right or wrong way, and that's both its strength and its curse. Next slide.
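(A minimal sketch of the migration mechanics behind this trade-off: a server holding both a direct chain and a cross-signed chain serves whichever one terminates in a root the peer trusts. The types and names are illustrative, loosely modeled on the Let's Encrypt situation mentioned above, not a real TLS API.)

```python
def pick_chain(chains: list[list[str]], peer_roots: set[str]) -> list[str]:
    # Each chain is leaf ... root, identified here by name for the sketch.
    for chain in chains:
        if chain[-1] in peer_roots:
            return chain
    raise LookupError("no chain reaches a root this peer trusts")

direct = ["leaf", "ISRG Root X1"]
cross = ["leaf", "ISRG Root X1 (cross-signed)", "DST Root CA X3"]
old_android = {"DST Root CA X3"}   # a trust store that never got updates
print(pick_chain([direct, cross], old_android))  # falls back to cross-sign
```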
L: So, yeah, apparently I just kind of hurried through most of this, so I would probably show you the next slide. I mentioned, in the case of EAP-TLS, and this is also true in the case of mutual TLS, that you have to think about how the certificate paths are going to be built, and whether what the client builds is going to be what the server agrees on. Recall that in X.509 v1, the assumption was that the directory would be the source of truth.
L: Everyone would always have connectivity to the directory, or at least enough connectivity that they needed, and you could discover the paths through the directory, forward and reverse, and so you could build a path that you would be sure your peer would accept. Fortunately, or unfortunately, depending on your take, the directory never materialized, and we had a lot of alternatives. LDAP is certainly an option, but I'm happy to tell you most web browsers do not implement LDAP, in all of its complexity; so it's not an option for, say, the web PKI.
L: It might be an option for your PKI; it depends on what you're doing. If you're doing it over LDAP, probably; if you're not building an LDAP client, probably not. So you have to work through these things. Oh, sorry, it looks like we jumped a few extra slides.
L: Sorry, I looked away. Yes, so one of the important decisions... sorry, you can go ahead, next slide; slide 34. All right, we're on the same page. Another element to the design of a PKI that you have to think about: do you like check boxes, or do you like descriptions?
L: Check boxes, ISO 17021, ISO 17065: they give you interoperability, they give you compliance testing; they're terrible for security testing, because you have to list everything bad as a negative test, and, frankly, listing everything that can go wrong is a very time-consuming process. You can go for a descriptive type of approach to audits; ISAE 3000 is certainly an approach, not the only approach. These are examples where you have the CA describe the system: how it works, how it's designed, how they meet your requirements.
L: But the answer here, for what's the right way: it's going to depend on what you're trying to do. Also, with audits, next slide, you have a whole host of questions; this is the auditing practice in general, if you're trying to certify something as compliant, or certify something as interoperable. Everyone has to go through this and think about this. Even if you are going to run the PKI yourself, and every other vendor is going to trust you to do that, you're probably going to have audits, because they're going to want to know: why...
L: ...should they trust you? So these are still questions you have to work through, which is: what are the criteria? Who decides the criteria? Who performs the audits? Who decides who performs the audits? How do you know whoever is performing the audits is good enough to perform audits? It's turtles all the way down; it's trust all the way down. And yes, it ties into a lot of civil legal obligations, depending on how you're doing it: if you're doing an ISO 17065-type system, you might not have a legal backing to rely on.
L: If you're doing ISAE 3000, you might have international treaties that can help you; it really depends. Next slide. So, I'm trying to rush through and be sensitive to time. So: conclusions. Last slide; one more slide; 37; yeah, there we go. The reason for this talk is to try to highlight that there isn't a single internet PKI, and that there really can't be just one internet PKI, because different security protocols are going to have different needs.
L: You can do that, but you shouldn't. It is possible to build interoperable PKIs that work out of the box; it's a lot of work. The web PKI has taken a tremendous amount of effort, and each of these questions is still a question that the web PKI is grappling with. It also means that the answers may not be what you need for your use case and, in fact, more often than not, are not; which is why I say: don't use the web PKI. Because, when there are these differences, you potentially get broken, and this creates conflict.
L: It creates conflict in security policy, and it creates conflict in... I hate breaking things; everyone hates breaking things, and these are things we should try to avoid. When you start thinking about a PKI, you really need to ensure that every single byte is critically necessary. You can't leave anything underspecified, because every area that is left open for extensibility has turned out, in almost every PKI, to be a pain point that bites us in the end.
L: Maybe that's not as important if you're doing a small enterprise PKI, or just a local enterprise PKI where you manage the same million devices; but when you think about interoperability, we have to think of PKIs like protocols, and realize that there's not a one-size-fits-all. And when you're doing protocol designs, this is, I would argue, a mistake of SSL/TLS, a mistake of Netscape's assumption that has lingered on: consider supporting multiple PKIs. S/MIME is good at this, right: you can have multiple signatures.
L: SSL/TLS is awful at this, because you have the single Certificate message. PKIX somewhat imagined the idea that you could branch the PKIs at the issuer certificate, right?
L: So, I realize that was a little bit of a disconnected talk; I realize there's certainly history there that a number of people were participating in. But I'm hoping this gives an insight into sort of what the web PKI is, and why it may not be appropriate for your use case; or, if you want a PKI, what the challenges are to actually get there, and to try to replicate the success of an out-of-the-box PKI that just works.
A: Yes; so, thank you very much, Ryan. That was a very fun talk, and hopefully enlightening in some parts. I know we had a good bit of chat in the Jabber about the specifics of the history; and history is, of course, always hard to get right, especially if you were not there when it happened.
A: I don't know if he wants to try and give some key points right now, or if we should just wait to do that later; because I will try to focus my enthusiasm, I guess, on sort of the last part of the talk, about what the operational needs for building a PKI are, and especially even on this last slide: that there's not a single PKI, and that you may need to do your own thing if your needs are different from those of the web PKI.
M: Okay, I'm on then. Yeah, I won't go into the history; I can't say that I don't agree with pretty much all of what was said on the history.
M: I mean, certificates expire, and that is not always what you want them to do; it's particularly not something that you want in general. If you look at the design of the RPKI: I mean, trying to squeeze that into it, it was not at all appropriate.
A: Thank you, and we do look forward to hearing the true history.
N: Thank you. I just wanted to sincerely thank Ryan for this talk. I personally think that we don't have enough historical talks at the IETF, and, you know, we're all caught in the moment of what's the next best thing, or the next thing we're building, and having kind of that context is really helpful. So, honestly, thank you; I learned a huge amount today. So, thanks; that's it.
A: All right. David, I think you have to manually take yourself out of the queue; I don't believe I have an option to do it.
A: Okay. So, Phil, I don't know if you're still intending to be sending audio and video, but Fraser is next in the queue.
L: All right. Yes; the reason why I say this is: grease is hard, because you need the cooperation of the CA, and so, depending on your model, you may not be able to easily grease things. Now, when you grease things, you need someone to get a certificate, and you need them to send that certificate to someone else; and so this is a three-party coordination problem. That makes it very hard to add this grease in.
L: It also makes it hard to randomize a lot of this; because if you start adding, say, extensions to make sure that your extensibility is maintained, and using TLS as an example: because you can only send one certificate, you sort of get an all-or-nothing shot. If that certificate fails, you've now broken the protocol for your peers, and you have to get another certificate; and you may not know you broke them.
L
It's a little bit easier with something like S/MIME, where you could add a regular certificate and a greased certificate and hope you sort of get the right results. But because it requires the coordination of multiple parties, whether you're thinking of cross-certificates, in sort of the classic PKIX model, or you're thinking of the very narrow case, whether it be the web PKI or, say, a localized PKI that you need, the multi-party coordination makes it hard for a single party to say, I want to grease things.

You need, say, a CA that's willing to give you a greased cert, a server that is willing to take the risk of the greased cert, and you need clients that are willing to allow greased certs to be issued, depending on their compatibility and security needs. So, yes, I apologize for not going into the grease side of things there.
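To make that three-party problem concrete, here is a minimal sketch, not from the talk, of what a greased certificate could look like, assuming Python's cryptography package: a certificate carrying an unknown, non-critical extension that a compliant verifier has to ignore. The self-signed issuer is purely for illustration (in practice, as just noted, a real CA would have to agree to issue this), and the private-arc OID is hypothetical.

    from datetime import datetime, timedelta

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.x509.oid import NameOID

    key = ec.generate_private_key(ec.SECP256R1())
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "grease-test.example")])

    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)  # self-signed for the sketch; a real CA must cooperate
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.utcnow())
        .not_valid_after(datetime.utcnow() + timedelta(days=7))
        # Non-critical extension with an OID no verifier knows (hypothetical
        # private-arc value): RFC 5280 says unrecognized non-critical
        # extensions must be ignored, so any peer that rejects this chain
        # has the kind of extensibility bug greasing is meant to flush out.
        .add_extension(
            x509.UnrecognizedExtension(
                x509.ObjectIdentifier("1.3.6.1.4.1.55555.99"), b"grease"
            ),
            critical=False,
        )
        .sign(key, hashes.SHA256())
    )
    print(cert.public_bytes(serialization.Encoding.PEM).decode())

Even with such a certificate in hand, the server still takes the all-or-nothing risk described above: if a client chokes on the extension, the whole handshake fails, and the server may never learn which peers it broke.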
A
Great. So I think the queue is showing as empty. We have a number of people in the chat thanking Ryan as well for the talk, so I will just echo that again: thank you, I enjoyed the talk. I guess, Roman, we have to switch slide decks in order to get to the one remaining slide in this deck, which is, of course, the open mic. So we've got a decent bit of time left.
P
Hi, thanks for the interesting talks today; I think they've been really good. I just wondered if we could use this time to talk about the vulnerability disclosure policy that the IETF, or the IESG, is soon to publish, and any sort of timescales for revisions. If you could just speak a bit about your future plans for it, because I think, as Roman said yesterday in the IESG, there's work to do on how best to harness the expertise that you have in the IETF to make it the most effective it could be. Thank you.
B
Great question. I think what we're about to publish is documenting the as-is on the ground; it makes no attempt, really, to change any behavior in the IETF, other than adding the alias. As folks have commented on the list, and as you well know, what we're describing is pretty far from what best practice would be.

If you were a product vendor, certainly, or anyone dealing in this class of coordination, we should be trying to optimize to make it as easy as possible to get those reports, and to provide as much support as possible in working them: triaging them and working them through the entire organization. And the current process, frankly, doesn't do that.

I think that pivoting what we do now to be a little closer to best practice is going to require work on consensus building, to take in and work vulnerabilities in, I guess, a more centralized way. I think the other component to that is that we should probably eat a little bit of our own dog food: we have security.txt almost published.
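For context on that bit of dog food: security.txt, as drafted, is a small plain-text file served at /.well-known/security.txt that tells reporters where to send vulnerability reports. A minimal illustrative file, with placeholder values and fields as defined in the draft:

    Contact: mailto:security@example.com
    Expires: 2021-12-31T23:00:00.000Z
    Policy: https://example.com/security-policy
    Preferred-Languages: en

Publishing something like this for the IETF's own sites would give reporters a machine-discoverable pointer to the alias mentioned above.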
B
I mean, I definitely think we're going to need a community-based approach. I think one of the things that came out of the feedback is: do we have a reasonable place to talk about what that future would look like? And I think it made a ton of sense to create a mailing list to have that conversation.
Q
Hi. So, have you considered this: one of the things I do outside of the IETF is work within the CVE community, as a CVE board member, and one of the things we're trying to do in CVE is recruit additional organizations as CVE Numbering Authorities: organizations that are able to assign CVE IDs on their own to things that are vulnerable. This is an interesting use case that we haven't really explored too much within CVE, but you could effectively issue CVE IDs against RFCs, and those could be referenced by potential implementers.
B
Yeah, I mean, that's certainly a good point. We haven't talked about that in the IESG, but that's come back as feedback: that's something the IETF could do. I think that's great fodder for further discussion of how much appetite there is to do that. I would just say that what we have now was not attempting to be as ambitious as that, and we would have to figure out quite a lot of workflow things too.
Q
Yeah, and if there's interest in doing that, I would certainly be willing to help however I can.
M
Okay, so I just wanted to say: blockchain is currently the hot topic in crypto. I would urge people to take a look at what's starting to come out of the threshold cryptography world. There has just been a NIST workshop on threshold cryptography, and the next administration in the US is going to be dealing with the consequences of the Snowden breach, insider threats, and a whole rack of other security issues for which threshold cryptography is the obvious technical solution.
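To give a flavor of the idea: threshold schemes arrange that no single insider ever holds the whole secret, so any k of n parties must cooperate to use it. A toy Shamir secret-sharing sketch, illustrative only and not a vetted implementation (the prime and parameters are arbitrary choices, not from the workshop material):

    import random

    P = 2**127 - 1  # prime modulus for the field; arbitrary choice

    def split_secret(secret, n, k):
        # Random degree-(k-1) polynomial with f(0) = secret; share i is (i, f(i)).
        coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
        def f(x):
            acc = 0
            for c in reversed(coeffs):  # Horner evaluation mod P
                acc = (acc * x + c) % P
            return acc
        return [(x, f(x)) for x in range(1, n + 1)]

    def recover(shares):
        # Lagrange interpolation at x = 0 recovers f(0) from any k shares.
        secret = 0
        for xi, yi in shares:
            num, den = 1, 1
            for xj, _ in shares:
                if xj != xi:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, -1, P)) % P
        return secret

    shares = split_secret(31337, n=5, k=3)
    assert recover(shares[:3]) == 31337   # any three shares suffice
    assert recover(shares[-3:]) == 31337  # ...any three

Real threshold cryptography goes further, performing signing or decryption jointly so the key is never reassembled in one place, which is what makes it relevant to the insider-threat problem.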
A
Thanks. Please do post the links to SAAG when they're available; I'm sure it will be an interesting topic for us, and I agree that we should start using those techniques.
R
I'll voice what I commented in the Jabber, kind of seconding support for automated ways to learn about vulnerabilities.
R
Right, it's certainly not simple, but there have been specific vulnerabilities in the RFCs themselves, and not just in implementations thereof. Right? Yes.
C
Okay, dkg, I guess you're up.

G
Thanks. I am more wary about the idea of assigning CVEs to RFCs.
G
I understand the impulse, but I think most of the vulnerabilities that we're concerned about are either in a specific subsection of an RFC, which any given piece of software might or might not implement, or they are about how certain RFCs are used in the world, in which case it's not clear whether your particular tool is going to tickle that particular problem, or the problem is in the intersection of two different RFCs. Right? When you run HTTP/1 over TLS 1.0, what are the issues, versus if you're doing this particular kind of DNS lookup while talking to XMPP services? This is where some of the problems that we run into are, and I don't see how we can adequately represent that in a way that a machine can consume, unfortunately.
R
Indeed. I think this is a common problem with CVEs, and it is absolutely not unique to RFCs in any way. The CVEs that I see when I look at CVEs against dependent libraries, very frequently, in fact most of the time, are not indicating a vulnerability in the way I'm using that library. And the same is going to be true for RFCs: if there is a vulnerability that only has to do with certain implementations and use cases, and the tool that implements this particular RFC doesn't tickle that particular issue, as he said, then yeah, it's not going to be a particular issue for you. But I don't know that that particular piece is either (a) actually possible to solve, or (b) something that we need to solve, because this exact same problem happens in lots of other places, and it doesn't prevent us from producing these easily digestible mechanisms.
B
Yeah, absolutely. And the other bit of feedback that has come in as the CVE-for-RFCs topic has come up is a reminder that RFCs come in a lot of flavors: some of them are protocol specs, some of them document operational practices. And, to the question of dependencies, we have RFCs that do updates on other RFCs, so when you tag something, it could mean a lot of different things. But I concur that it's quite similar to the equivalent situation in the current software system as well.
A
Okay, Dave, sorry to skip you.
Q
So the CVE program is actually working on this. They have an existing CVE record format that describes a lot of metadata about a vulnerability, and we're currently in the process of revising that format: the previous format was version four, and we're working on version five right now. Part of what we're trying to do is add a lot more contextual metadata. This problem of how we identify the interaction, or the aspect of a given protocol or library that's vulnerable, has definitely come up in that context, and it's something that we're interested in solving, either in version 5 or in a subsequent version.
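For a sense of what such a record carries: version-5 CVE records are JSON documents combining CVE metadata with content supplied by the assigning CNA. The skeleton below is an illustrative sketch only; the field names are recalled from the draft v5 schema, several required fields are omitted for brevity, and every value is hypothetical:

    {
      "dataType": "CVE_RECORD",
      "dataVersion": "5.0",
      "cveMetadata": {
        "cveId": "CVE-2021-00000",
        "state": "PUBLISHED"
      },
      "containers": {
        "cna": {
          "descriptions": [
            {
              "lang": "en",
              "value": "Example: accepting the 'none' JWS algorithm allows token forgery."
            }
          ],
          "references": [
            { "url": "https://www.rfc-editor.org/info/rfc7515" }
          ]
        }
      }
    }

The contextual metadata being discussed (which subsection, which protocol interaction, which deployment pattern is affected) would have to live in additional structured fields of such a record.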
Q
I would recommend that this community not go off and create an additional record format for representing a vulnerability; there's already a plethora of those out there today. But it would be great if we could find a way for this community to work with the CVE program to solve these problems, because then we could actually produce a more robust record format for describing a vulnerability, in a way that I think could serve this community as well as all of the other communities served by CVE.
B
So, Dave, I was going to ask a follow-up question: does the CVE program have other SDOs that are on board as well?
Q
No. I mean, this is kind of a new space; there are not a lot of SDOs out there that are issuing them. There's no SDO that's a CNA right now, and so any protocol vulnerability that has been issued a CVE ID, that was done by the CNA of last resort, which is operated by MITRE. So this is new ground, looking at an SDO actually issuing CVE IDs, and because it's new ground, we're going to have to work through some of these issues.
F
So I think the CVE format as it stands now, or the information it contains, isn't perfect, but it's still really useful, right? Some things you can do through automation, but there is a lot you still might not be able to do, because the information in the CVE is either of limited quality or lacking some details. And it's really encouraging that there's continuing work going on here, and I think it would be worthwhile to see how we could link up with that, because I think it would be better not to create yet another format. But I think some of the value that we would provide is just in thinking things through, and categorizing issues in that way, both for somebody who's looking for issues and for our own information, when we go back and look at which protocols have had problems and why. I think it could be really useful.
K
Yeah, in the OAuth working group we looked at various security incidents related to OAuth, and many of them were actually fairly complex. So I was trying to see how that disclosure process would actually work in practice with IETF protocols, because first we had the problem of even learning about the incident at a level of detail where we could make an assessment of what the problem actually was, which is often a challenge.

You hear that, oh, there's a problem with OAuth, but you don't get to know what the setup really was. And then some people extend it, take shortcuts, modify the protocol, or use it in an unexpected way, and understanding all of this requires a lot of time. Sometimes we are talking about months here.
B
Responding to Hannes: no question that there's complexity in overlaying that kind of coordination process on how we do things in the IETF, minimally so as not to break anything. But some of the feedback I've at least heard on the value, mentioned here and in previous conversations, is that, as we think about folks coming to our documents, would there be some easy way, through the CVE process, to just signal to implementers, with metadata, that there are issues? That could be one thing that's done.

And, you know, Hannes is also being modest. I wanted to reiterate that, in making the text we're about to publish, I surveyed the practices published across the IETF for vulnerability disclosure, and OAuth is the only working group, active or closed, that I could find that had, by charter, actually taken the time to publish how they would handle the intake of vulnerabilities. So kudos to the current OAuth chairs, and to the foresight of the previous ADs and chairs who did that.
A
It seems almost certain that the working group itself is going to have to get involved with that, and I know that if you're talking about confidential disclosure, that may not always be directly applicable; but with the way the IETF operates, with generally open community participation, we're going to have to get things to the working group at some point.
R
Yeah, I was just going to point out that, inevitably, there will be disclosures filed against working groups that have completed their work, and so we'll need to allow for the fact that there won't always be a live working group to handle the situation.
A
Sean, go ahead.

D
Hey, so that makes me think about trying to resolve errata from closed working groups. It's really difficult to get even the smallest things fixed; if you're trying to resolve a CVE against a long-closed working group, I mean, good luck. It's going to be very difficult to do that.
D
I guess the other thing I'll add is that at one point I was being copied on a lot of vulnerabilities for a particular working group that I was the chair of, and I basically told them to stop copying me, because there was nothing I could really do with them, and I was really concerned about trying to maintain the confidentiality requirements: it was, you can talk about it now, you can't talk about it then. And I was like, I really don't need to know about this; this sounds like an implementation problem. When you have come to an agreement to fix it, that's great; please come to the working group and tell us about it later. So there are definitely some things you're going to have to think about how to work through.
P
Yep. Dave, go ahead.
Q
One of the things we're trying to dispel in CVE is this notion that every vulnerability needs a fix. It's possible to publish a vulnerability without any fix, and one advantage of doing something like that is that it could be a further motivating factor for implementers to move away from older protocols that have a lot of inherent vulnerabilities toward newer versions that don't have them. So if there isn't a working group, and the community is working to try to move on from that protocol, issuing a CVE ID for something that isn't fixed could be a potential motivator to get implementers to upgrade.
K
Yeah, just to give you an example of a problem we ran into: OAuth uses JSON Web Tokens, and that work was standardized in the IETF, and there were lots of libraries available that produced these JWTs and the corresponding security mechanisms. But what many implementers did was not include the code for the policy handling. So if you included an algorithm that was completely insecure, like the 'none' (null) algorithm, then the software, if you didn't pay attention, would just accept it: you could basically modify the tokens any way you like by just saying 'none', and then there's no signature protecting them, and so on. So, obviously a problem. Is it a problem with the RFC? To a certain extent, maybe: there should perhaps be no such algorithm in the first place.

But that aside, we then wanted to reach out to the people who wrote those libraries, and it turns out that this is also extremely difficult, because the ecosystem for some of our protocols is so huge that we don't know who is implementing even a fraction of it. And then those people are deploying it, which is a completely different story again. So if we find a vulnerability, describe it well, and then make it available, without an ability to actually reach the implementers of those libraries or the people who deploy them, we actually make the situation potentially worse than before.

So just saying, oh, move on to something else, is, I think, not always a good approach, because our work stops too early: we move on, we always go to the next shiny new thing, but implementers and the deploying community need a lot of time, and they deploy technologies that we worked on five or ten years ago. So it's really tricky to do that, and I sometimes wish there were fewer implementations, but better-maintained ones, because people move on and leave the code out there, and it's actually a ticking time bomb.
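As an illustration of the missing policy handling just described: a minimal sketch, assuming the PyJWT library, of a verifier that pins its accepted algorithms instead of trusting the token's attacker-controlled alg header. The placeholder key and names are hypothetical.

    import jwt  # pip install PyJWT

    def verify(token: str, public_key_pem: str) -> dict:
        # The allow-list below is exactly the policy handling many libraries
        # left to the caller: only RS256 is accepted, so alg="none" (and
        # algorithm-confusion tricks) are rejected before any key is used.
        return jwt.decode(token, public_key_pem, algorithms=["RS256"])

    # An attacker-forged unsecured token: header claims alg="none", no signature.
    forged = jwt.encode({"sub": "admin"}, key=None, algorithm="none")

    try:
        verify(forged, "-----BEGIN PUBLIC KEY-----\n...")  # placeholder key
    except jwt.InvalidAlgorithmError:
        print("rejected: alg is not on the allow-list")

A library that instead read the algorithm out of the token header and dispatched on it would accept the forged token, which is exactly the failure mode the speaker describes.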
A
Okay, Dave, you're back.
Q
One thing for us to think about, though: I completely agree with what you're saying, and that's certainly a classic problem in this space. Another advantage of publishing a CVE ID is that it would also be a mechanism to publish guidelines on how to mitigate a given vulnerability. So where there are existing implementations that are vulnerable, and it may be difficult to replace or upgrade them, or whatever the more ideal path would be, it might be an opportunity for the IETF to publish guidance on how to put compensating controls in place, or other things for both implementers and users to consider to reduce the risk of the vulnerability being exploited. So it could be used as part of an awareness campaign to support that type of activity, too.
A
And as we leave the queue open, I do want to note that we have had some notes being taken in the CodiMD, and thank you very much to whoever is doing that. Please leave your name at the top as having contributed to the minutes, and then we can thank you properly when those get sent.
A
All right, the queue is still empty, so I think we should call it. Thank you, everyone, for attending; thank you, Ryan, for the talk; and thank you for the excellent discussion on all the points that we had. We've got some follow-up work to work on, so this was a good meeting. Thanks, everyone, and we'll see you in Gather Town.