From YouTube: IETF114-OPENPGP-20220729-1400
Description
OPENPGP meeting session at IETF114
2022/07/29 1400
https://datatracker.ietf.org/meeting/114/proceedings/
B: Should we start? Okay, good morning, and welcome to the OpenPGP working group at IETF 114. I guess we'll get started.
B: You're driving the slides; that's the contact, he thinks.
B: This is the Note Well. I guess you've probably seen it during the week; if you haven't, then note well. And in case you didn't notice, we are also all wearing masks today, so please do so. If you're presenting up here you don't have to, but otherwise please do. Next slide.
B: So this is our agenda. We did a working group last call, though it probably won't be the last one. We have a bunch of issues that we'd like to get through, and if we can reach some kind of resolution in the room, whether here or virtually, for those issues, that'll be great, and then we'll confirm them on the list later.
B: So that's most of the meeting today, and then we'll talk about next steps for the draft. Then Daniel, who I don't see in the room just yet, has a presentation, and Aaron, I think, is...
B: We have Tara taking minutes, so thanks, Tara, and we can keep an eye on everyone's chats.
B: Okay. So, like I said, we had a working group last call on draft 06, really just to see where we're at, because we had a design team, as we discussed last time, that was working really hard...
B: ...I think, in order to try and get the document into shape. If you go to that link, you'll find the working group last call issues, all labelled wglc, and what we'd like to do is go through those here with this highly energetic early-morning crowd and see if we can reach any resolution on them. If we can, great, and we'll confirm it on the list.
C: So yeah, the hope is that people can speak to these issues as they come up. If you're remote and you're on Jabber or Zulip or whatever, and you want someone to say something at the mic, just put it in there and say "mic".
C: So I think we'll just get started with issue 132, which is about padding.
C: So, as a reminder: this is about whether the padding packet should have all-zero content, random content, or some other scheme. There's a question about what we tell implementations to do if they receive AEAD packets, try to decrypt them, and get a failure; an issue about whether we include GCM in this draft or not; and a question about whether we use HKDF to bind keys to the modes in AEAD.
C: We're going to look at where certificate-wide parameters, like algorithm preferences and such, live in an OpenPGP certificate. Are they in the direct key signature, as the draft currently says? Then: whether we continue to disallow the revocation key subpacket for v5 keys, or whether we want to roll that back.
C: Are we okay with the changes to the IANA registries? The draft currently says that we're moving pretty much everything from RFC Required to Specification Required, which is a looser requirement for registering things in the registries.
C: Do we believe that the text on Argon2 is clear? Do we have sufficient guidance there, and is there anything that can be cleaned up? And how should we handle problematic keys that we've been seeing in the wild? There have been reports about some fairly popular keys that don't conform to any of the specifications, including RFC 4880; do we need to update the text to handle that better?
C: Do we need to modify the spec for v5 keys so that we don't have to worry about this kind of failure in the future? And then the last question, I think, is about how we want to scope the process going forward, if we can get through these and get a new draft out. Do we want to say: okay, when the new draft is out, if it handles all these things the way the working group agrees...?
C: So that's the queue of issues that we're looking at. I'm going to pop back to the first one so we have some time to discuss each of them, but hopefully that gives you a flavor of the types of things we're asking about. If you pull the slides from the datatracker meeting agenda, you get links there to each of the issues, and we'll pull up the issues here as well during discussion.
C: So, heading back to issue 132, about padding: we have this padding packet, and the question for the group is what the content of the padding packet should be. We've had different people with different perspectives on this on the list, and we've had a couple of different variants as this evolved through Git.
C: The status quo is that the padding packet should be full of random octets, and there's a little bit of guidance about where to get the random octets. But there's been discussion on the list about whether we should revert this, and we have a couple of different merge requests that offer different ways to handle it.
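The options under discussion can be sketched in a few lines of Python. This is purely illustrative: the packet framing is omitted, none of these function names come from the draft, and the deterministic variant only shows the general shape of "derive the filler from existing material via a KDF" rather than any specific proposed construction.

```python
import hashlib
import os

def random_padding(n: int) -> bytes:
    # Status quo in draft 06: n random octets. The draft notes this
    # randomness need not be key-generation quality.
    return os.urandom(n)

def zero_padding(n: int) -> bytes:
    # One proposed alternative: all-zero content.
    return bytes(n)

def deterministic_padding(seed: bytes, n: int) -> bytes:
    # Sketch of the "rigid construction" idea: derive the filler
    # deterministically (SHAKE-256 here is a stand-in KDF), so a
    # receiver could in principle recompute and verify it, closing
    # the covert channel that arbitrary octets would open.
    return hashlib.shake_256(seed).digest(n)
```

A receiver that enforces the deterministic variant would recompute the expected filler and reject the packet stream on mismatch, which is exactly the brittleness trade-off raised in the discussion.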
C: Either in the chat or here at the mic. Nobody's been in the queue yet; to get in the queue, if you're in the room, please use the Meetecho lite thing and click the raise-hand button.
D: So I don't think that... a rigid construction, based on something generated from the key but through a different KDF or whatever, seems to be the best solution, because then you don't give a chosen-plaintext or known-plaintext opportunity, and you don't give a covert channel.
B: So I think for this one we had two merge requests. One basically was saying: here's this deterministic construction. And the other one... I forget what you ended up saying last.
D: You only need to change the data really slightly to take away the leverage.
B: And I think the other point that came up in this discussion was that, with the deterministic one, you could ask implementations to try and verify that it was done correctly, or not bother. So that was the other flavor of this.
C: Right, so the receiving implementation needs to know what to do, and I guess we have this question of what a receiving implementation would do when receiving one of these things that's supposed to be deterministic but isn't. Should it reject the packet stream? What kind of brittleness are we willing to incorporate in order to defend against this particular covert channel?
B: So I guess I should try and figure out how the polling tool works. I think for this one, probably the easiest thing to ask is: should we have zeros? I think nobody's that keen on random, which is what the current draft says.
C: Well, maybe the question that you pointed out is a better one to add to the poll, which would be... oh, I see Aaron in the queue. Aaron, do you want to speak?
G: Yeah. I also think that there are many, many ways in OpenPGP to put in some side channel, to put in some covert data: just adding a packet with an unknown version, or an experimental one. And therefore I don't think allowing random content adds that much to the covert channel in OpenPGP.
C: So it seems like one question is: do we want receiving implementations to reject padding packets that are considered malformed? Because that would actually govern whether we are making a requirement about what goes in it or not, and we can make recommendations about what to put in it. But there is this underlying question: if we make a recommendation and it's not followed, are we going to expect the receiving side to reject it?
C: Random is currently recommended, and there's a separate discussion about that: this kind of randomness is probably the least important randomness of all the kinds of random. There's a section about how you generate randomness in OpenPGP, and it calls this out as a distinct flavor from, say, key-generation randomness. Okay.
B: He feels like repeating them doesn't add anything new; so what should we do? Please, please do repeat opinions, and send audio or text as you prefer, because I think we'd like to have the discussion here so it's on the record for everyone, and we can try and resolve it.
H: Yeah, so I don't actually have an issue with random data in the padding packet. The only issue I have with the current text is that it suggests that the random data protects against higher-level compression disguising, sorry, negating, the padding. In my opinion it doesn't, because the compression will compress non-random padding anyway and will reveal information about the length it has. So I'm fine with leaving it random, but just removing that text.
C: Yeah, let me just record the specific poll: eight to one. So let's move on to the next issue here.
C: Oh wow, this switching between the slides and the...
B: I think I actually understand this one; that's helpful. So, this one: when we're using AEAD algorithms and you decrypt, the issue is how to handle decryption failures. We have some chunking, so that these decrypt failures don't necessarily apply to an overall message, and currently the text in a few places says that if you get an AEAD decryption error there are "SHOULD fail" kinds of statements. I think the suggestion for this issue was to change those to "MUST fail" instead of "SHOULD".
B: Yeah, that was a good thought. Okay, so for this one there was some discussion, and I think mostly it was in the affirmative, saying that that change would be a good one.
C: No... yeah, that... oh, it's actually not linked. Sorry, it's actually not linked; I'll pull it up.
B: A poll, then: should we make this change, turning the "SHOULD" statements about how AEAD decryption failures are handled into "MUST" statements? If you think we should make the change, please click the raise-hand button.
B: So this one had a bunch of discussion, and I think good points were raised both for inclusion and non-inclusion.
B: So again, there are arguments for and against inclusion of GCM, and I think another salient point is that currently, with the Specification Required IANA rules, if we do not include GCM, somebody else can come along and add it next week as soon as we're done. So it seems like the opinions are all over the place, but let's give people a chance to speak to whether we should include GCM in this specification now or not.
D: But if I'm going to implement PGP, I'm going to implement the algorithms in the draft, and I'm not going to expect to be able to send mail in a scheme that is not in the draft. So not putting it... sorry: if GCM is not in the RFC, it will have a material impact on adoption, and it will encourage people to use OCB, which is a stronger construct, which we all should be using.
C: So, thanks for the reminder, for the clarification: OCB in the draft is flagged as must-implement, EAX is not flagged as must-implement, and GCM is also not mandatory to implement. Yeah, I hear what you're saying about presence in the draft, but there is a notable distinction between the modes, and there is an advertising mechanism where you can indicate what AEAD algorithms you support with what other ciphers.
C: So you won't accidentally encrypt using GCM to somebody who doesn't support it, and everybody has to support OCB anyway. So that's the baseline part of the interop that the current draft tries to do, whether that's right or not, or whether that's useful. But it sounds to me like you're speaking against including GCM.
K: Are you hearing me now? Yes, we are. Thank you; I'm sorry about that. So, right now, GCM is not a mandatory-to-implement, or mandatory to...
K: But it's still good in some cases. So that's why I think it's okay to make GCM not a mandatory one, but...
E: Paul, speaking as an individual. I don't think it matters very much whether GCM is defined in this RFC or in another one. The people that are going to use it are most likely the people that need to be FIPS compliant, and they have no other choice at this point, so they will need to implement these things anyway, whether they're in a separate document or in this RFC. So I don't think it matters where it is; but as the people who want FIPS have no other choice at the moment, they will...
E: They will do this anyway, so I don't think it's much of an issue where it is defined. But I would personally recommend just doing it within the group, so that we don't get somebody else off on their own writing a specification for GCM.
J: Hi, Jonathan Hammell, Canadian Centre for Cyber Security. So I'm in favor of including GCM. While we don't have requirements for using FIPS validation, it is our guidance, because it does provide implementation validation. It's not just a policy; it provides a security benefit, in that the implementations can actually be tested by an independent lab, and that's why I would like to see it included.
H: Daniel Huigens, with my OpenPGP.js maintainer hat on. I'll acknowledge that our requirements are somewhat peculiar, but the reason we want to use GCM is because it's in Web Crypto; it's the only AEAD mode in the Web...
H: ...Crypto API at the moment, which is natively implemented in browsers and gives much better performance. So yeah, we'll most likely implement it whether it's included or not, indeed not for FIPS-compliance reasons but for performance reasons, and we can either do that under a private algorithm ID, or in a separate document, or in this document. But most likely we'll be implementing it anyway.
L: Orie Steele, from Transmute. I agree with everything he just said. I probably would get support for it if they were to implement it anyway, but for the reasons stated, regarding FIPS and the expectation I think users will have that there will be at least one FIPS algorithm available, I think it should be included.
M: Ben Kaduk. So I feel pretty similarly to Paul, in that this is going to happen, and having the working group do it sort of lets us retain a little bit of control. I don't have a particularly strong preference at all, but I have a slight preference that the working group specify GCM in a separate document; if we keep it in this document, I can certainly live with that.
B: So I figured this one would actually go all one way or would be 50-50, but this is the better option.
C: So issue 136 is a little bit trickier. The way that we have AEAD specified in draft 06 uses HKDF as a key derivation function, which pulls in a chunk of metadata, including the choice of mode, so that your key can't accidentally be reused across modes.
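To make the role of HKDF concrete, here is a minimal RFC 5869 HKDF over SHA-256 (standard library only) with the AEAD mode identifier folded into the `info` input. The mode numbers and the exact `info` layout are invented for illustration, and the draft's real KDF inputs include more metadata, but the effect is the same: one session key yields unrelated subkeys for different modes, so material keyed for one mode cannot be reinterpreted under another.

```python
import hashlib
import hmac

def hkdf_sha256(ikm: bytes, info: bytes, length: int = 32, salt: bytes = b"") -> bytes:
    # RFC 5869: Extract, then Expand, instantiated with SHA-256.
    if not salt:
        salt = bytes(hashlib.sha256().digest_size)  # zero salt default
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# Hypothetical mode identifiers, for illustration only.
EAX, OCB, GCM = 1, 2, 3

def mode_bound_key(session_key: bytes, mode: int) -> bytes:
    # Folding the mode into `info` binds the derived key to that mode.
    return hkdf_sha256(session_key, info=b"OpenPGP AEAD" + bytes([mode]))
```

With this in place, `mode_bound_key(k, OCB)` and `mode_bound_key(k, GCM)` are unrelated, which is exactly the cross-mode separation under discussion.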
C: If someone is tampering with the construction that you have, there's some concern that key reuse across different modes could lead to some kinds of problems. I believe there are maybe other arguments for using the HKDF as well, but the issue raised on the list was that HKDF is an unnecessary additional construction to include here, and maybe it would be simpler if we were to remove it.
C: This is, again, issue 136. So I'm wondering whether folks would be up for speaking to this again. I know that we have had discussion on the list about it, and I know we've had discussion in the issue tracker about it, but we would like to have that discussion here in the room if we can; so now's a good time to state your opinions and preferences.
H: Yeah. So the most specific issue that I know of for why this was brought up was a paper where GCM ciphertexts could be converted into, for example, AES-CFB ciphertexts.
H: And then, if you have a decryption oracle for CFB, which has happened in the real world, then you also have a decryption oracle for GCM, which basically means that the security of the GCM mode gets reduced to the security of CFB, which is bad. So, in general, I think not being able to convert keys and ciphertexts between different modes is good. That was the most specific issue that I know of; so if we're keeping GCM, then I think we should definitely keep this as well.
C: So that makes it like a replay; it makes replay functionality, when the keys are not available, a possibility.
C: Anybody else want to speak to how they feel about the HKDF in this?
C: All right, and I'm putting that in the issue tracker as well; thanks, folks. Issue 137: questions about where certificate-wide parameters live. Certificate-wide parameters meaning things like your algorithm preferences, and things like the revocation... the expiration lifetime of your primary key, which is, by default, the expiration of your entire certificate.
C: The current draft 06 says those parameters, for v5 keys specifically, should live in subpackets of a direct key signature on the primary key. So the current draft says, for v5 keys: don't look for those things in the user ID self-sigs; just look in the direct key signature. This makes it a little bit more predictable, I think, was the rationale coming from the design team, and a little bit simpler to consider what's happening. But it is a change from v4, and so it makes v5 certificates look slightly different. The draft does not change v4 certificates.
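The v4/v5 difference can be shown as a toy lookup. The certificates here are plain dictionaries, not any real OpenPGP API; the sketch exists only to show the control-flow change: a v5 reader consults only the direct key signature, while a v4 reader first checks the self-signature on the user ID it used to find the key.

```python
def algorithm_prefs(cert, looked_up_uid=None):
    # Toy model: cert = {"version": 4 or 5,
    #                    "direct_key_sig": {"prefs": [...]},
    #                    "uids": {name: {"prefs": [...]}}}
    if cert["version"] == 5:
        # v5 (draft 06): certificate-wide parameters live only in
        # subpackets of the direct key signature on the primary key.
        return cert["direct_key_sig"].get("prefs")
    # v4 (RFC 4880): preferences may differ per user ID; fall back
    # to the direct key signature if the user ID carries none.
    uids = cert.get("uids", {})
    if looked_up_uid in uids:
        prefs = uids[looked_up_uid].get("prefs")
        if prefs is not None:
            return prefs
    return cert["direct_key_sig"].get("prefs")
```

Note how the v5 branch never consults `looked_up_uid` at all, which is the predictability argument made above.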
C: Oh, and then I guess the other wrinkle to this is that there's some amount of concern that, by putting these parameters in a direct key signature on the primary key, it might facilitate certificates that have no user ID: now you've got all the parameters there, and you can just go ahead and leave the user IDs out. And, over several generations of the draft, the user ID requirement has been moved from "must have one" to "zero or more" user IDs.
C: So there's some concern that this provides some pressure against user IDs, or pressure towards user-ID-less certificates.
C: So those are the concerns that are bundled into this question. I wonder whether anybody wants to speak either for the text that's currently in the draft for v5 certificates, or against it. I see Ben in the queue.
M: So I'm not actually speaking for or against at the moment, but asking a question. In the v4 setup, where you can have preferences at a per-UID level: do we have any reason to believe that someone was using this for some complicated setup where they have...
M: ...you know, a multi-UID key, but they have different implementations on their mail-reading setups for the mailboxes that receive those different UIDs, and somehow they wanted to express divergent preferences, specific to each UID, based on the different implementations that could process things; as opposed to: I have this key, it's in multiple locations, or maybe it's only in one location, and I can just handle what the key has. Do we think people use that flexibility in that way?
C: I mean, it is explicitly contemplated in RFC 4880. It says something like: if you look this key up by user ID Alice, then use the preferences that are bound to user ID Alice; but if you look it up by Bob, then use the preferences found on Bob's self-sig. I have not heard anyone say that they are using it like that themselves.
C: I don't know that anybody's done a survey of keys to see how many of them have different preferences per user ID, and I also don't know whether anyone's done any testing to see whether implementations that are sending actually respect that; because how do you know whether you're accessing the key via Alice or Bob? Maybe you've done some binding that says: when I send to Alice, use this fingerprint, or something like that. And then, from the OpenPGP implementation's perspective...
C: ...it just sees you passing the fingerprint, and doesn't know that you're looking for it in terms of Alice. So I don't know that we have that much data about whether it's been used. There certainly has been nobody stepping up to say: if you rip this out, I'm going to be sad, because I have two user IDs that I expect to have divergent parameters, and I want to see those things present. So I don't think we have much data, but we definitely don't...
M: Okay. I guess that puts me slightly leaning towards keeping it just in the direct key sig, and...
C: Right, but we could mandate a user ID without this change also. That would mean one additional signature in a certificate, though, because you'd have to have the direct key sig for the parameters and then the user ID binding sig. Okay.
L: So I think the question about this is: what are you thinking about for, you know, v7 or v8? Because it sort of signals a direction you're planning on taking. Like you mentioned, if you have to support v4 and v5 you're going to have to account for this anyway, but in future states, where you're supporting v7 and v8 after years of this change, this could be a lot simpler. And I am also in favor of not requiring user IDs. So those are my thoughts.
H: Yeah, I was also going to say, regarding user-ID-less keys: it can also be seen as an advantage that this allows them. There are some use cases for not having a user ID; for example, if the key doesn't have anything to do with email, or if, let's say, you don't want to publish the user ID in some contexts, on WKD or something like that, or if you have a catch-all address where you want to serve some key for many different email addresses, you don't know in advance which ones you want to serve it for, and you don't want to keep the private key on the server.
I: So I also like user-ID-less certificates, but I want to say, to Ben's question: if someone has such a use case, they are better off having two distinct certificates. The complexity that comes with having the preferences on user ID bindings is huge, and the behavior of implementations is basically unpredictable. It also complicates things like simply adding a user ID, because adding a user ID now potentially changes the preferences of the certificate.
B: Okay, I think we seem to have drained the queue, and this is the last of the poll again. This time it's essentially as it was on the slide: certificate-wide parameters live in a direct key sig for v5; should we keep that, as per draft 06? So I'll start the session.
B: It's a more complicated question; being on the fence is probably more reasonable on this one. So we have a show of hands, and it seems to have stabilized at eight hands raised and zero hands not raised, which is an indication, but I guess again we'll confirm on the list. And if somebody does turn up with some use case, as Ben kind of indicated, then we might have to revisit; but for now we have an opinion. That's good.
C: Okay, noting that in the issue tracker as well. Okay, so issue 138. In RFC 4880 we have a subpacket that indicates that the key holder is willing to accept revocations from a third party. This is the revocation key subpacket; it contains a fingerprint of the authorized revoker. In the draft...
C: ...we say that this subpacket is invalid for a v5 certificate, and we describe an alternate mechanism for doing a sort of delegated revocation, which is just to make a revocation signature and send it, encrypted, to the person that you're willing to allow to revoke your key. That mechanism is obviously available to anyone with v4 keys as well; it's not a novel mechanism, but it's the first time it's been described explicitly in the draft.
C: So the question is whether we keep disallowing the revocation key subpacket for v5 keys. Some of the reasons arguing for the removal are that, given that it's just a fingerprint, you might not even have a copy of the key, and you might see a revocation and not know whether it belongs or not; so there's a logistical issue. And, secondly, implementation support for the revocation key subpacket may not be as universal and robust, so relying on it seems a little bit dangerous compared to relying on an actual revocation signature.
C: So I think those are... but the concern here, of course, is that it's still valid for v4, so your implementation that does v4 and v5 will have to do both. And some people say that this is actually a concretely useful thing to have; they want to have it available for v5, and they're not comfortable with the escrowed keys. So hopefully people can ask clarifying questions if you've got them, or speak in favor of the removal for v5, or speak in opposition to it; this is a good chance.
C: If I mangled that explanation of the problem, I hope people will correct me as well.
M: So you could get into a scenario where the mere fact that this revocation signature exists causes it to get uploaded prematurely, when it wasn't intended to be; it was just: I have some data, it's associated with this key, let me publish that, type of thing. This seems pretty unlikely to me, and I'm not super concerned about it, but it is an additional risk of this proposed mechanism that I don't know if we've talked about before. I don't remember.
C: Thanks; I think that's a valid point. The original removal here was accompanied by a proposal for a replacement for the revocation key subpacket that actually included the key material of the delegated revoker, and that was seen as either too complicated, or out of charter, or too distracting; which is why we're in this removal-only phase. That's why this is where it is.
C: The proposal where it is here would solve the logistical problem, and it also potentially solves the, I guess, linkage problem. Justus, do you want to speak to it?
I: So I'm in favor of removing that, because it just acknowledges the fact that virtually no implementation supports designated revokers, and I think that's due to the fact that it requires non-local reasoning about multiple certificates.
C: Can you talk to how... when you say, you know... you're maintaining the interop test suite; can you speak to the support? You said few implementations support it; can you describe the test that...
I: I don't think we have a test for that, sadly. We should have one, but we didn't implement it because it was very complicated. So I doubt many implementations out there support it. Maybe Daniel can shine some light on his implementations.
H: Yeah, so we also don't support it. Personally, I'm not opposed to the idea of including the revocation key in the subpacket directly.
H: We can argue again about whether that's in charter, but I think, if we're removing it here, which I think is reasonable, then I also think that adding a replacement is reasonable. But I'm also fine with just removing it without a replacement, other than having a revocation certificate, which is already a mechanism that exists and is used, and which we already support as well. So I think that's reasonable also.
M: Ben Kaduk again; thanks, everybody, for the extra discussion. I think that has sort of solidified my thinking in favor of removing the revocation key subpacket for the v5 keys. Just acknowledging the deployed reality, that you can't rely on it to work, seems like something that's really appropriate for us to do in the -bis document, and so I would like to see it go that way.
M: I'm okay doing that even if we don't have a replacement mechanism. Specifically, I would also be okay, as Daniel is, with having this mechanism carry the actual full revocation key in the key, because the size issue is only an issue if you actually use it; if you don't use it, you're not affected. But I'm also okay just removing this and not providing a replacement right away.
B: Okay, so that seems to stabilize: ten hands are raised to keep the scheme in draft 06, and zero hands are not raised, so that's good input. So the next one is a slightly easier one to get your head around.
B: So this is IANA. In 4880 there's a whole bunch of IANA considerations, and registries that require IETF consensus for change. The suggestion in most cases, not absolutely everywhere, but in most cases, is that draft 06 basically changes those to Specification Required. For those of you who are not IANA nerds, the impact there is that we're moving away from a situation where you have to get a document through the IETF process.
C: The two exceptions there that remain RFC Required are packet type and packet version; so the places where the versions show up still require an RFC, and a new packet type does also.
B: So I think it requires IETF consensus; IETF consensus, sorry, because an independent-stream RFC doesn't have that. So yeah, there are a couple of exceptions, but the basic movement is towards allowing Specification Required, which is kind of loosening things up. A lot of other groups have done this over the years, but it's worth checking, because, for example, it means that a vanity cipher code point could end up there; national algorithms could end up there, which may be good or bad.
B: There's already one registry that requires, like, the expert, who does not exist. Okay, so this...
N: Yeah. So yes, actually a designated expert is needed, and I think it would be very useful to keep instructions for the designated expert about what kinds of specifications are acceptable. We probably want to have some kind of stable reference, you know, published, some other standards organization's paper, not just some webpage somewhere. And usually designated experts are quite free to interpret the instructions they are given, so we don't have to write very specific instructions.
N: Okay, I think most of the RFCs, for those, obviously... for example, I think they actually just say that there's an expert, and there's actually no specification required. But I have always, as an expert, required there to be a specification before I actually go forward and say okay. A 3GPP specification, for example, is okay; for some webpage, I will say: okay, I don't know if I would want to allocate numbers based on that.
M: Ben Kaduk. I actually wanted to make basically the same point that Tero made, that we should have some guidance for the experts, but I...
M: ...have an example of a case that I think does this pretty well. I'm actually the sort of ghost editor for the COSE drafts that are in AUTH48 at the moment, because Jim is no longer here to do the AUTH48 himself. And so we have some text in there about: the expert is designated an expert for a reason, because we should trust their expertise and give them leeway, but also guidance that the proposed mechanisms or algorithms need to actually be appropriate for the requirements of the registry that they're trying to use.
M: If it's supposed to be a signature algorithm, it has to actually provide the signature functionality. And I think there's also some guidance there that it needs to meet the community's requirements for security; so if somebody wanted to propose null security and a null cipher algorithm, essentially, that doesn't meet the requirements of the community.
M
There are also some pointers in there — for example, that CFRG is a good resource. And in addition to answering the question that you asked of Tero, I wanted to also say that in general I do support this. In other areas like TLS — and I think IPsec is actually in the process of doing this as well — opening up the registries to lower the burden is good for the ecosystem.
M
I guess I also wanted to mention a counterexample, in the sense that the expert is not given very much leeway: I believe in the QUIC registries the policy is basically a "shall issue" policy, and the expert is there to apply some back pressure if somebody is asking for a lot of code points, or asking for a specific code point that might be problematic — but the guidance to the expert is basically "you've got to approve this."
C
L
So I'm an editor/maintainer of a registry that was recently opened up in this way, and I think it is a positive thing that's happening — like you mentioned, it's sort of a trend in the space. But the guidance to those maintainers is critical, and if you don't give good guidance to them, you open huge political cans of worms for them, and they're likely to resign.
L
So you ought to just make sure you give them good guidance. I also think it's important, as a community, to support the sort of mental shift that goes along with this kind of process, because if it's happening it should be supported by the community. So: have a designated expert, give them good, solid advice, and then cheer for them while they handle the onslaught of registrations.
C
C
B
Great — and Roman has posted some more about that COSE stuff in the chat.
B
B
So I created this issue yesterday, just in response to some list discussion about Argon2, and I think the issue essentially is: is the text sufficiently clear?
B
So if people have opinions, or have been watching this thread on the mailing list, then please do speak to it. I'm not sure we'll do a poll on this one; I think it's a case where we'll probably just have to go and check.
C
Yeah, and one of the ways that we can determine quantitatively whether it is clear is whether we have functioning interop. Sorry to keep poking at you as the maintainer of the interop test suite, but can you report back on whether Argon2 has been tested? Is this something that might be in the lineup?
I
F
B
Okay, so I think the situation is: I believe it was said that it's supported, and Justus is saying that it interoperates, so that seems to say that the text is not horrible. So I don't think this one needs a poll. I think this is one where we should go back and check, essentially, because Argon2 does have a bunch of parameters, and clearly at least one person on the list has found that confusing.
B
So I think the resolution of this one is: we just need to go back and check, and see if there is something that needs improving.
B
E
B
C
It's also — I mean, it's the winner of the Password Hashing Competition; that was why it was selected.
B
Yeah, okay. So I think the resolution for that one is — again, if people want to talk to it now, that's fine — that it doesn't need a poll; we need to look at our text, convince ourselves it's clear, and fix it if not.
C
And if a third implementation could become interoperable, or if we could get proof on the test suite that these things are capable of being interoperable, that would be useful evidence for clarity. All right, we're getting through this, folks; this is good. So, we're on the second-to-last one — thanks, everybody, for bearing with us. We've had some reports on the list recently about OpenPGP certificates that are fairly widespread and do not follow either RFC 4880 or the current—
C
—this draft, the crypto refresh, because of some changes in some of the metadata that they have. We have two potential things that we could do as a community of specifiers.
C
One thing is, we could adjust the text in the revision to explain a little better how this metadata should be prepared, and to describe what an implementation should do if it discovers that the metadata has been ill prepared. That's just documentation cleanup. Another thing — which we actually have a merge request for — is to strip out those pieces of metadata that are apparently ambiguous from v5 keys, so that they simply wouldn't be present in those structures and you couldn't make those mistakes.
C
It might still be worth adding documentation about how to deal with v4 artifacts that have those pieces of metadata. This is the GitHub key — apparently GitHub's key itself has this weird behavior — and we've also seen some keys generated by OpenPGP PHP.
C
C
It's a checksum — and is there one other piece, or is it just the checksum that we're talking about?
C
I'm reading away from the table here — there's a checksum that indicates, you know, what has been signed here.
C
So this is supposed to be a checksum, but the—
B
I
So I'm aware of two issues. First, there is the hash digest prefix in the signature packet, and the question that came up initially was: what should implementations do if the 16-bit prefix is wrong? I think that happens with the PHP implementation and the GitHub key. And GitHub's PGP implementation has a different problem, where they produce malformed multi-precision integers.
H
So for the first issue, I have one fairly orthogonal argument for removing it entirely, which is that if you have a crypto API that lets you hash and sign in one operation, then it's nice to be able to use that without needing to get the intermediate hash — which is what Web Crypto does. So again, it's fairly specific, but I think that's useful. And also, if implementations are not checking it anyway—
H
—well, some implementations are not — then it's perhaps not very useful to put it there anyway. We do check it, in fact, so I'm also not opposed to just having some text that says you should check it, or something.
C
F
C
H
C
Some implementations accept them and some don't.
H
So again, the only real reasons I see for not checking it are: one, you have such an API where you can hash and verify in one step; or two, there are keys out there in the wild that are broken, so you ignore those bytes for that reason. Both of those would, to me, speak for either removing it, or saying that you should or must check it and reject the signature—
H
—if it's invalid, yeah. Personally, I would slightly prefer just removing it entirely, but yeah — if we keep it, then I think we should check it.
L
Again, I agree with everything he just said — it's becoming a bit of a theme. Yeah: if it's there, it should be checked, and if it doesn't match, it should be rejected. I'm in favor of removing it. I'm not sure exactly of the impact on the keys in the wild based on that statement, but sometimes, when you've made things that are broken, you have to make new ones.
F
I
So my theory is that this is, or once was, an optimization: you could skip the heavy lifting of doing the asymmetric operation when you can determine from the hash prefix that the signature didn't check out. But I want to highlight that we have a bit of a heuristic based on that, where we use it to—
I
—reorder certificates that are somehow mangled in transport by a key server. Here we can use the hash digest prefix to see if we can find the correct location for a misplaced signature, even if we don't have the full context, like the issuing key.
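The quick-reject optimization described above can be sketched as follows. This is a minimal illustration only, not code from any OpenPGP implementation: packet parsing is omitted, and the data to hash is taken directly rather than assembled per the signature-hashing rules of the spec.

```python
import hashlib

def quick_reject(signed_data: bytes, stored_prefix: bytes, hash_name: str = "sha256") -> bool:
    """Return True if the signature can be rejected cheaply: the stored
    left 16 bits of the hash don't match the recomputed digest, so the
    expensive asymmetric verification can be skipped entirely."""
    digest = hashlib.new(hash_name, signed_data).digest()
    return digest[:2] != stored_prefix

# The signer stores the left two octets of the digest in the signature packet:
data = b"hello"
prefix = hashlib.sha256(data).digest()[:2]

assert quick_reject(data, prefix) is False        # prefix matches: still must verify fully
assert quick_reject(b"tampered", prefix) is True  # mismatch: reject without any crypto
```

Note that a matching prefix proves nothing by itself; the point debated in the room is only whether a *mismatch* should be treated as grounds for rejection.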
I
C
So if we're going to do a poll: it seems like we've got at least two separate issues here, and we've spoken mainly to the first one. So maybe we want to try to do a poll to resolve this first one, about the checksum in the signatures.
C
I'm trying to think how we want to poll this. So one of them is, I guess: should we remove the checksum from v5? We've heard Justus argue against it, and Daniel, I think you were mildly for removing it.
C
F
B
C
I'm seeing two people who are in the "do not remove" hand-raise, and I wonder whether anybody wants to speak to why they would prefer to not remove it.
B
G
Okay, so I think that it provides debug information too. At times it can be very useful to have these two bytes, even if we don't check them on a regular basis, to understand whether an implementation is broken or not, or whether the data that has been signed is different.
J
C
So the three arguments for keeping it that I've heard are: one, it allows you to reject signatures faster; two, this debugging argument — it gives you some extra hints about where the problem might be in the emitting implementation; and three, Justus's point about being able to reorder certificates more efficiently without doing the heavy lifting of the crypto piece. I see Justus is out of the queue now.
B
Okay, so that's input — it's a bit of a rough consensus, if consensus at all. So again, we'll verify on the list and see where we go. And then the second poll we wanted was—
B
B
C
C
A
C
And then what we haven't discussed yet is the malformed MPIs in certificates, like the GitHub certificate. The MPI specification is very clear but not necessarily always followed — well, apparently it's not as clear as it could be. It says that it indicates the length in bits.
C
It's supposed to indicate the position of the highest bit that is set to one, but in this situation we're looking at certificates which contain MPIs — or which make signatures that contain MPIs — that don't do that. Is it signatures or certificates that we're only seeing this in, anybody? Aaron, are you still in the queue from last time? Sorry.
C
So it's in the signature. Okay, so the question is: what do we do if the MPI in the signature is malformed, in that it indicates a larger length — it's basically counting bits by full byte, I believe.
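For reference, the encoding being discussed can be made concrete. An RFC 4880 MPI header counts *significant* bits, not whole bytes; below is a sketch of the correct encoding next to the byte-rounded variant that the problematic signatures appear to use (illustrative only, not taken from any of the implementations mentioned):

```python
def mpi_encode(n: int) -> bytes:
    """RFC 4880 MPI: a two-octet big-endian count of significant bits,
    followed by the minimal big-endian representation of the integer."""
    bits = n.bit_length()
    body = n.to_bytes((bits + 7) // 8, "big")
    return bits.to_bytes(2, "big") + body

def mpi_encode_byte_rounded(n: int) -> bytes:
    """Malformed variant: the bit count is rounded up to a whole number
    of bytes, as in the signatures under discussion."""
    body = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return (len(body) * 8).to_bytes(2, "big") + body

# 511 = 0b1_1111_1111 has 9 significant bits but occupies 2 bytes:
assert mpi_encode(511) == b"\x00\x09\x01\xff"
assert mpi_encode_byte_rounded(511) == b"\x00\x10\x01\xff"  # header claims 16 bits
```

The body bytes are identical in both cases; only the two-octet header differs, which is why "fixing" such a signature is a pure re-serialization of the same integer.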
C
So it seems like we ought to have some guidance, so that we can point an implementation at what it should do about this. Just leaving it unaddressed doesn't seem very useful to me. I don't know what sort of guidance people would want: one option is, you could tell people to reject the signature; another is, you could tell people to clean up the signature if the MPI has this particular alternate form.
L
Cory: so I'm not sure exactly of the internals, but unless there's a way to warn in a softer but very annoying way about this, I think the signature should be rejected.
B
B
B
M
I think Daniel beat me to actually getting in the queue, but I hit unmute. So my understanding here is that, for these problematic signatures, you can modify the ciphertext — well, modify the metadata, really — so as to make it a valid signature, and I believe that would not really be something that could constitute an attack, if you can just fix it yourself.
C
F
M
It's not a cryptographic failure on the signer's part; it's just an implementation failure to serialize properly. And because the cryptography still checks out, I think the practical thing, in favor of better interoperability, would be to just fix it — and maybe complain loudly, if you have the ability to do that.
M
But saying you should reject this just feels like breakage for the sake of breakage.
H
I think if we want to say that implementations should or must reject it, we could do that for v5 signatures and keys, but for v4 I don't think we can. For full disclosure: very old versions of OpenPGP.js also used to produce malformed MPIs in some cases, particularly if there were leading zero bytes. So it was a slightly different issue than this one, but still — I don't think we can be super strict for MPIs in v4 signatures, but for v5 we could, if we want to.
I
So my concern is that if you have a system that is composed of multiple components, and they use different implementations, and those behave differently, you may be able to confuse the system as a whole: one of the implementations would say it's a valid signature and the other one may think it's an invalid signature, and that may be able to create problems.
N
So I think the issue that GitHub, or somebody, generates signatures that are wrong is not really the issue in that case, because I think they can fix their signature generation faster than we can get implementations to check these signatures.
N
That the old signatures are out there — that's true. But actually, as I said — somebody was saying it would be really nice to get a warning, and that's actually one of the things: we could fix the signature before we keep it. So we could have an implementation reject the signature, and then check whether it's a signature that, if we fix it, actually works.
N
That would give user interfaces or programs a way of having a separate handling method. Or the other option is to just reject them all for version five. But actually the most important thing, of course, would be to get the people who are doing this to fix it — and unless you start rejecting them, I don't think GitHub is going to. Is GitHub still generating those signatures?
N
So I think so, because they don't see any reason to change until somebody actually starts breaking things. And I think it would be better not to have this kind of corner case, because, as was pointed out, you can have one implementation that checks this and another one that doesn't — and it might be happening at a very low level, in the crypto library. Your crypto library might be saying: oh no, no—
N
—"this is not an MPI" — probably in the bignum layer — "because the first bit isn't set to one." And then you might not be able to fix it at all, and that might cause these kinds of issues, where one part that was supposed to do something based on this signature does it, because it sees a valid signature, and the other one doesn't, because it sees an invalid signature.
L
L
Some implementations will fail to verify the signature, some won't. In practice, the way I've defended our code from this is to manually normalize the signature before emitting anything outside of our library. So for the libraries that are handling this, they could each decide: every time I see this thing that's a problem, I'm going to fix it for myself. But everyone has to decide to do that, and it's a mess in the code.
F
I
I
C
And that means that fixing a signature by twiddling the bits in the MPI header will change the bytewise representation of the signature, which means that it will, in turn, invalidate any outer signature that is made over the signature itself.
C
So, for example — if we're talking about MPIs generally — twiddling the bits in the MPI of, say, a public key will actually change the fingerprint of the certificate, which is not insignificant.
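The fingerprint point can be illustrated directly: a fingerprint is a hash over the serialized key material, MPI headers included, so two serializations of the same integer hash differently. This toy sketch hashes bare MPIs rather than a full key packet, and uses SHA-1 only as a stand-in for whatever digest a given key version uses:

```python
import hashlib

def mpi(n: int, claimed_bits: int) -> bytes:
    """Serialize an integer as an MPI with an arbitrary (possibly wrong) bit count."""
    body = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return claimed_bits.to_bytes(2, "big") + body

n = 511
correct   = mpi(n, n.bit_length())  # header says 9 significant bits
malformed = mpi(n, 16)              # header rounded up to whole bytes

# Same integer, different serialization -> different hash, hence a
# different fingerprint if this were real key material:
assert hashlib.sha1(correct).digest() != hashlib.sha1(malformed).digest()
```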
C
F
B
B
B
Yeah. So, would somebody be willing to try and write merge requests to—
C
B
B
B
We'll probably want to do another working group last call at some point. Do we want to raise the bar for that, for future work — to basically ask people to only really be looking at the diffs from, let's say, seven to eight, or eight to nine?
B
You know, if somebody finds some new facts or comes up with some new information, then we'd have to look at things. But should we raise the bar to try and get ourselves done, by essentially encouraging people to only look at the diffs beyond, say, seven to eight, or whatever draft actually resolves these issues? Paul.
E
Paul Wouters, AD, speaking to a former AD here. Stephen — can you do that as a process? I think not, right? We do a working group last call, and it goes over the entire document.
E
J
B
Okay — but it's a question of, you know, what the working group agrees to. I think HTTP did this a few times: they basically agreed that they wanted to get stuff out, so they were asking people to look at the diffs. If people look at other things, you've got to deal with it, but I think we can do it if people want to do it.
B
So nobody's in the queue; that may indicate a lack of enthusiasm.
B
Okay, so for this one I'll bring it up on the mailing list.
C
M
Sorry — you called so passionately for people to be in the queue. Ben Kaduk. So yeah, I think it's probably worthwhile doing this. And to Paul's question: you could certainly frame this as saying we did a working group last call on this previous version, and we believe — based on whatever evidence — that it has consensus, so bear in mind when reviewing that we have a presumption that this other stuff already has consensus, and that focusing your review on—
M
—these changes would be most fruitful. But as Stephen says, of course, if somebody does come up with a real issue, even on text that has been reviewed already, you do need to handle it; you don't just ignore it.
B
I'll raise this on the list when we think we've got the next draft out, and we'll see how people feel about it then. So there's no poll for this one. I think that was — yeah, that—
F
C
Yep. So we are not chartered to do work beyond this crypto refresh, but there has been interest expressed on the mailing list — and here at IETF 114 — about the potential for rechartering.
C
So if folks have other things that they want to raise about the crypto refresh, beyond the issues that we've just gone through, now is a good time to raise them. If you don't, then we want to give the remainder of the time to some discussion about the potential topics that would come up after a recharter. Does that sound right?
B
Yeah — and I guess what we neglected to say is that we'll obviously take the minutes, with these resolutions from the issues, to the list, and then we'll depend on our kind editor to make the changes and get a draft seven out. We'll talk to Paul about when that's possible.
H
Yes, thank you, Stephen. And yeah, just to reiterate: this presentation is explicitly out of scope for the crypto refresh. It's more meant to provide some ideas and some motivation to get the crypto refresh out the door and get to working on new stuff. Of course, that only works if there's actually interest in these ideas; otherwise—
H
Well, I hope there will be. So yeah, next slide, please. First, an overview of the status quo in OpenPGP: we have asymmetric keys, which are typically long-lived and managed in a keyring, and we have symmetric keys, which are either session keys or derived from a passphrase or password, and which you can encrypt messages and keys with. Asymmetric cryptography in general — at least the algorithms that we currently have in OpenPGP — is more vulnerable to quantum computers and also slower, whereas symmetric cryptography is typically less vulnerable, at least if you use sufficiently large keys, and is also faster — or at least the performance for a given security level is better. Next slide, please.
H
So this leads us to the conclusion that there is sort of a missing middle, or a missing type of key. If you want to encrypt stuff with a symmetric key — if you don't need asymmetric cryptography — it would be really nice if you could store a symmetric key in a long-term keyring, to use to symmetrically encrypt stuff, but maybe also to symmetrically sign stuff using an HMAC or a CMAC, if it's just for yourself or for local storage.
H
Next slide, please. So the use cases of symmetric encryption that we have in mind for this are: symmetric file encryption, or file storage where you want to store files symmetrically encrypted — for backup or long-term storage, or on a USB stick, or whatever — or symmetrically re-encrypting the messages, the emails, that you got, for example for long-term archival: you can decrypt them asymmetrically as they come in, encrypt them symmetrically, and store them like that.
H
So yeah — it's smaller, it's faster, and if you want to retrieve them in the future, decryption will be faster and, in general, more secure against quantum computers. In this way it lets us prepare for quantum computers, even if they aren't here yet — at least then we don't have a giant body of RSA-encrypted and ECDH-encrypted messages lying around. And then, finally, for drafts: before you send an email, it would be nice to be able to symmetrically encrypt it as well.
H
H
So if you have a symmetric key that you want to store, and that's only for symmetric usage, then you might also want symmetric key binding signatures, just to be able to store information about the key — expiry and everything else that we have.
H
Similarly, you might want to store some signature over a file, to make sure that it's authentic and hasn't been tampered with — or, if you put it on a USB stick and retrieve it later, to check that it's still the same file. That might be useful. Or you might want to store the signature verification result.
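The local symmetric-signing use case sketched above can be done today with a plain HMAC outside of OpenPGP — the proposal is essentially about giving such a persistent key a home in the keyring with OpenPGP framing. A minimal stdlib sketch of the underlying operation:

```python
import hashlib
import hmac
import secrets

# The persistent symmetric key that the proposal would store in a keyring.
key = secrets.token_bytes(32)

def sign_file(data: bytes) -> bytes:
    """Produce a MAC to store alongside the file, e.g. on a USB stick."""
    return hmac.new(key, data, hashlib.sha256).digest()

def verify_file(data: bytes, tag: bytes) -> bool:
    """On retrieval, check that the file is still the same (constant-time compare)."""
    return hmac.compare_digest(hmac.new(key, data, hashlib.sha256).digest(), tag)

backup = b"contents of a file archived for yourself"
tag = sign_file(backup)
assert verify_file(backup, tag)
assert not verify_file(backup + b"!", tag)
```

Unlike an asymmetric signature, anyone holding the key can forge such a MAC, which is why the presentation frames this as "just for yourself or local storage."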
H
H
Our idea is to define two new public key algorithm IDs, namely AEAD and HMAC. The reason we are proposing that is that the semantics of public-key cryptography in OpenPGP today match much more closely what we want to achieve, in the sense that you encrypt a message with some key that you refer to by a fingerprint, for example, or you sign a message with a key — whereas for symmetric cryptography in OpenPGP today, you encrypt a message with a password and derive a key from that.
H
There's no way to refer to a long-term key, let's say. So in that sense, it's much more convenient to be able to stick a public key algorithm ID in a key packet, a signature packet, and a public-key encrypted session key packet — despite, of course, the name not matching what it actually is. So that's one idea.
H
We could retcon the name of — well, the algorithm registry, perhaps, and also maybe the public-key encrypted session key packet and the symmetric-key encrypted session key packet. The latter, by the way, I would argue is already misnamed, because it's more a password-encrypted session key packet — but okay.
H
So we could rename those, perhaps to "persistent-key encrypted session key packet" and "derived-key encrypted session key packet", or "personal key" and "shared key", or something like that. Or we could leave it as is and just live with the hackiness of having symmetric algorithms in the public key algorithm registry.
H
So, despite that guidance, we do have experimental implementations, in a fork of OpenPGP.js and a branch of go-crypto respectively.
H
So this is not meant to be used already, but just to see how it would work. And we have a draft specification — again, it's still very much up for debate.
H
H
If you want to export your archive of email and import it somewhere else, it's useful if other implementations also support it. So then I would somewhat argue for actually defining an algorithm ID — but you might also argue that this solution is way too hacky and that we should define new packets with better names for this, or something like that. Next slide, please. So the questions I have for the working group are: first of all, is there interest in this?
H
H
A
A
H
So, no, that's not in the draft. No, there is no specific mechanism for how to encode that or how to store that.
A
A
B
So, thank you. Aaron?
G
G
Okay, so today I wanted to present an idea we've had for doing automatic email forwarding. This means that, with the user being offline, we can forward email to another OpenPGP user — with, clearly, this protocol having been set up beforehand.
G
So this is the outline. I'm going to try to keep it short, especially the mathematical part of it. This is the concept of automatic email forwarding — I think we're all familiar with it — and the problem is that, usually, to do that with OpenPGP you've just got to share the key: you take your key and send it to the forwardee, so that they can read the ciphertext.
G
So the idea is basically the following. Alice, the original sender, does not know that there's a forwarding going on, and she sends an encrypted OpenPGP message to Bob, as in all the setups we've seen so far. But Bob has created a proxy transformation parameter, this k_BC, and uploaded it to the proxy, and the proxy can use this transformation parameter to transform the ciphertext into a message for Charlie, as if Charlie had received it directly; d_C is the private key that Charlie will use to decrypt.
G
We have a proof in the paper that this is actually a safe operation: any attacker that is not Charlie colluding with the proxy is pretty much left with an ECDH instance.
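The core transformation can be sketched with a toy Diffie-Hellman group — a small multiplicative group mod p standing in for the ECDH curve the actual scheme uses, with all numbers here purely illustrative. Bob hands the proxy k_BC = b·c⁻¹; the proxy raises Alice's ephemeral value to k_BC; Charlie then recovers Alice's shared secret using only his own key:

```python
# Toy multiplicative group mod p standing in for the ECDH group.
p = 1019            # small prime; the full group has order p - 1
q = p - 1
g = 2

b = 123             # Bob's private key (coprime to q)
c = 457             # Charlie's private key (coprime to q)
B = pow(g, b, p)    # Bob's public key, the one Alice encrypts to

# Bob computes the proxy transformation parameter k_BC = b * c^-1 (mod q)
# and hands it to the proxy; on its own it reveals neither b nor c.
k_bc = (b * pow(c, -1, q)) % q

# Alice encrypts to Bob: picks an ephemeral v, sends V, derives S = B^v.
v = 789
V = pow(g, v, p)
S_alice = pow(B, v, p)

# The proxy transforms the ephemeral value without learning any secret:
V_prime = pow(V, k_bc, p)

# Charlie decrypts with his own key and obtains the same shared secret,
# because k_BC * c = b (mod q), so V'^c = V^b = g^(b*v) = S.
S_charlie = pow(V_prime, c, p)
assert S_charlie == S_alice
```

This also makes the collusion risk mentioned later concrete: anyone who knows both k_BC and c can compute b = k_BC·c mod q, i.e. Bob's private key.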
G
G
The scheme is composable, which means that you can basically set up a chain of forwardings: you can set up multiple forwardings for Bob, and Daniel could also set up his own forwarding to Frank. So the idea is that this chain can theoretically be built infinitely long.
G
Now, the problem with the OpenPGP implementation is that we can't use vanilla OpenPGP, because Charlie requires a modification in his implementation to handle the transformation. It is transparent on the sender side — that is probably the most important part, because we can't go and get someone else to update their OpenPGP implementation — but it is not transparent for Charlie's implementation.
G
So, when you accept being a forwardee, you need to accept a special key that says you are receiving forwarded email. The reason this is necessary is that, in the key derivation function, there is a binding to Bob's key: if we do not know Bob's fingerprint, we cannot decrypt the message — we cannot obtain the same shared secret. Diving a little bit into the OpenPGP changes that we think are most relevant:
G
what basically happens is that, in this KDF parameter inside the hash, we have the fingerprint of Bob's key, in order to—
G
There are two ways to work around this that we thought of. One is to add the fingerprint to the ESK — basically to say, "hey, this message was originally intended for this fingerprint." But this makes the message distinguishable: someone observing the message on the wire might be able to tell that it is a forwarded message. And second, it basically means that for each message you've got to acknowledge that the message was forwarded.
G
Well, instead, if you add this information to the forwardee key, it becomes a little more transparent: once you accept the forwarding key and add it to your keyring, your implementation can tell you "this key is derived from Bob's key, and you will receive only email encrypted to Bob that has been forwarded to you." Since this key, as we've seen before, is generated from Bob, Bob knows the secret of this key.
G
And since you're receiving this key from someone else, we thought it would be better to have a key that you know can only be used for forwarded messages, so that no one can send you a message encrypted directly to this key.
G
So these are the reasons that pushed us to put this extra information into the forwarding key. Now, as I said before, the threat model basically assumes that Bob is always honest and never leaks his private key — leaking it would clearly break any OpenPGP protocol. But as soon as the server and the forwardee party collude, they can recompute Bob's private key with a simple operation.
G
G
And yeah, basically this is it. We've written this paper, which you can find under this address — you can also download it from the slides — describing how to set up this forwarding scheme. I would like to ask the community whether there is interest in standardizing this kind of procedure — it was already briefly discussed at the OpenPGP summit — and, if there's interest, I guess I volunteer to write an RFC for this.
G
B
Great, thanks, Aaron. I think we're pretty much out of time, and, as and when we get to the point of rechartering, I think it's these kinds of ideas that would be up for discussion. I guess — yep, sounds—
B
Thank you. And with that — thanks for all your patience with us this morning as we worked through those issues. I think it was fairly productive; we'll take it to the list, and I think we're set for this meeting. Yep.