From YouTube: IETF110-OPENPGP-20210311-1430
Description: OPENPGP meeting session at IETF 110, 2021/03/11 14:30
https://datatracker.ietf.org/meeting/110/proceedings/
A
Okay, I guess we're at time. So, as dkg said a minute ago, we have one hour today. The agenda is relatively full, so we'll try and scoot on through it. My name is Stephen Farrell; I'm one of your chairs.
B
So this is the Note Well; we are well into the IETF. Hopefully you've seen this before; it covers rules about what is discussed in this meeting. Hopefully you've already read it. I think we'll skip to the next slide here.
B
We have a talk about key extraction attacks through encrypted private key corruption, a discussion about the interoperability test suite and about using simple octet strings for elliptic curves, and then, in the remaining time, we have an opportunity for the editors of the crypto refresh draft, which in particular will be Paul today, to talk through the status of the draft and any issues, and hopefully, by the end of this, we will be able to discuss an upcoming interim.
B
I want to give folks a chance to look at this agenda and raise concerns. I see a note from Paul in the chat that says, am I talking today? We have a chance to go through open issues with the draft.
B
I think we can just pull up the GitLab issue tracker and give people a chance to talk about it if we have the time, but I want to note that the presentations we have are going to consume most of the meeting. So, any other agenda bashing, or shall we move on to a discussion of the overall plan, just so we're set on the same page?
B
So, just a brief reminder about how we're working with this. Before the group got rechartered, there was a lingering RFC 4880bis draft 10.
B
That was from a previous iteration of the working group that failed to produce an RFC. In trying to figure out how we could get consensus on a new RFC that still meets the in-charter scope for the new revision of the working group, what we've done is, we have sort of reset to RFC 4880.
B
We picked a new draft name; it's now draft-ietf-openpgp-crypto-refresh, and we are trying to restore the changes that were in 4880bis-10 topic by topic, so each revision tries to address each one. So far we've had three revisions: -00 is basically 4880 with some minor formatting changes; -01 fixed all the published errata, added the Camellia cipher, and updated some terminology, along with one change about whitespace.
B
We brought in Curve25519 for ECDH, and we deprecated some of the older stuff that people shouldn't be using anyway, and we reserved a bunch of code points for what had been in bis-10. In the upcoming revision (I guess Paul can maybe speak to this later) I think we're talking about staging EdDSA and maybe v5 keys and v5 fingerprints. And thank you to everyone who's reviewed these changes on the list.
B
It has been really useful to get a sense of what the working group thinks about them. It's pretty clear that we will fix up some parts of it. Yeah, I see Joao's comment there about just how legacy the stuff is that we are trying to put in here, but yeah, it's 2021, let's get OpenPGP up to date. So that's sort of the direction that the working group is going. Next slide, Stephen; just a bit about the mechanics for how we're doing it.
B
So the draft is being developed in git. We are using a central repository as a place to keep track of the changes; we're using gitlab.com, that is, not github.com.
B
4880.md is a markdown source version of the actual original 4880; 4880bis.md is a markdown version of 4880bis-10, that is, the previous draft from the previous iteration of the working group; and crypto-refresh is the draft that we are working on.
B
So there are opportunities to do diffs and comparisons between those three documents. As Paul mentions in the chat, rfcdiff is useful for comparing these things; I find that plain diff, given the markdown sources, is also useful for comparing the different documents. And if anybody wants to talk later, probably not in this group but in a separate conversation, I'm happy to walk you through it; if you're interested in trying to help edit or propose changes, I can help you with some of that. We're using the issue tracker where possible.
B
I think that's it for the recap here and, if possible, I'd like to go ahead and get started with the next presentation, unless folks have questions.
A
So, just on that, I think it's maybe worth emphasizing, since we're likely to do a lot of the kind of detailed issue tracking and processing via a set of interim meetings, and there's probably a slightly larger group here today.
A
It's probably worth emphasizing that we're not really in a position where the working group is open for fresh new ideas just now. Really, what we're doing is playing kind of catch-up on the set of diffs versus 4880 that we already nearly know for sure we want to include, but we're making sure we get consensus on those before we go forward. So it's a slightly bookkeeping exercise to some extent, but with technical parts.
A
But the main thing is: if you have a super good idea for something new to do with PGP, now is not really the good time. By all means drop it onto the list, but expect that we might say, let's get to that after we've already got a bit of success with the current work. Just want that to be clear. Yep, and if there are no other questions on the plan, then we will move to key extraction.
C
Okay, great. Yeah, sorry. Okay, hi, I'm Lara Bruseghini, a security engineer at Proton Mail, and as part of my master's thesis I worked with Kenny Paterson and Daniel Huigens, and we looked into the security of PGP. One of the questions we focused on is: how far can an attacker go if they have access to the encrypted private key of the victim?
C
In practice, this means that the victim will not check the key fingerprint before using the key. Next slide, please. So the requirement on key decryption is relevant because in OpenPGP the private key is not fully encrypted. Here you can see the schema of an RSA key, for instance, and when you encrypt the key, only the fields in grey are both encrypted and authenticated.
C
So those include the secret key parameters, and if an attacker tampers with the encrypted data, then the key decryption will fail. But there are also some parameters that are stored in cleartext; in the case of RSA, those are the public modulus n and the public exponent e, and if the attacker corrupts those values, then key decryption will work fine. Klíma and Rosa in 2001 already noticed this partial integrity check issue, and they exploited it.
C
They showed how an attacker could corrupt the public parameters of a DSA key; then they wait for the victim to sign using the corrupted key, and from the faulty signature they could extract the DSA secret exponent. It looks like this. Now, at the time, OpenPGP was not fixed to address this specific attack vector.
C
So we have looked at the same attack idea again, and we have found that it's not just DSA keys: any key type is potentially vulnerable to this kind of key corruption attack. Specifically, when it comes to faulty signature attacks, DSA, EdDSA and RSA keys can be directly compromised, and the attacker needs to corrupt the key only once; then it takes as little as one faulty signature to extract the secrets. But also any encrypted curve...
C
So, aside from these attacks that target signing, we've also found some attacks that exploit decryption, but those are a little bit more involved, and we will be sharing more information about the decryption attacks, as well as all the mathematical details of the signing attacks, soon enough. We are finishing up a paper and we will be publishing it and sharing it with the community.
C
So, next slide, please. Just to review what the existing protocol-level protections are that are relevant for the attack: as mentioned, the issue is that only some of the parameters are authenticated.
C
This is either with AEAD or CFB, but in both cases the public values can be corrupted by the attacker and key decryption will succeed. And when the attacker has control of the public values, it means that the attacker is also free to replace the subkey binding signatures, for instance, to make sure that they still verify with the corrupted public key parameters.
C
But to reiterate, that's why, in our threat model, we really consider a user that is not careful enough and basically ends up using the key without inspecting the key fingerprint. Next slide, please.
C
And so currently, if an OpenPGP library wants to prevent key corruption attacks from happening, what it needs to do is perform some kind of key validation after decrypting a private key and before using it for security operations.
C
A validation method that does not work is trying to use the key, for instance to sign and verify a message, or to encrypt and decrypt a message. Depending on the key type, even if the operation is successful, you have no guarantee that the private and public parameters correspond, so that is not strong enough. Instead, to carry out private key validation, one should really check the mathematical relationships between the different parameters, and this means carrying out some algorithm-specific checks.
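The algorithm-specific checks described here can be illustrated with a small sketch. This is my own toy example, not code from the talk or from any OpenPGP library: for RSA, it verifies that the cleartext public values n and e (the ones an attacker can corrupt) are actually consistent with the decrypted secret values.

```python
# Illustrative sketch: consistency checks for an RSA private key, run after
# decrypting the secret parameters and before any signing or decryption.
# In OpenPGP RSA secret keys, d, p, q and u = p^-1 mod q are encrypted,
# while n and e are stored unauthenticated and could be corrupted.
from math import gcd

def validate_rsa_key(n, e, d, p, q, u):
    if p * q != n:                  # public modulus must match the secret primes
        return False
    if u != pow(p, -1, q):          # stored CRT coefficient must be consistent
        return False
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # Carmichael lambda(n)
    if (e * d) % lam != 1:          # e and d must actually be inverses mod lambda
        return False
    return True

# Toy key: p = 11, q = 19, n = 209, lambda(n) = 90, e = 7, d = 13
p, q = 11, 19
n, e = p * q, 7
d = pow(e, -1, (p - 1) * (q - 1) // gcd(p - 1, q - 1))
u = pow(p, -1, q)
assert validate_rsa_key(n, e, d, p, q, u)
assert not validate_rsa_key(n + 2, e, d, p, q, u)   # a corrupted modulus is caught
```

The point of comparing parameters directly, rather than doing a sign-then-verify round trip, is exactly what the talk notes: depending on the key type, a round trip can succeed even when the public and private halves no longer correspond.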
C
We have reviewed a number of popular OpenPGP libraries, and none of them is safe against all of the attacks that we've found. So even if validation is implemented, it's often missing some key checks, and in terms of end-to-end attacks, we have found two real-world applications that are vulnerable to these key corruption issues.
C
So, next slide, please. In light of what we think is the seriousness of the issue, and the fact that it's difficult and inefficient for implementations to address the problem...
C
The second, CFB-based solution is the same one that was proposed back in 2001 when the first DSA attack was published. The advantage of this kind of protocol-level solution would be, security-wise, that you don't delegate anything to the implementations, and you have a solution that works across all algorithms and that is really much, much faster.
A
Great, thank you. I guess we have time for a couple of questions. I'm sure one of them will be: when can we expect to see the full detail, and where? Or will you just let us know?
D
Thank you for the presentation. You said you found two real-world apps where this was vulnerable; did you report the vulnerability through their vulnerability disclosure programs, did you get in touch with the apps themselves? Thanks.
E
Oh, just to say: thanks. We did responsible disclosure, I guess, is the short way of saying it, and so all the affected libraries have been notified and have had a chance to patch and, as I already said, both of them...
B
So I just wanted to say thanks to the researchers, to Lara and Kenny and your colleagues who worked on this, for bringing this to the attention of the working group. It sounds like you did a pretty good review of existing implementations, but obviously there may be implementations that you don't know about, so being able to talk about it here should hopefully give people a chance to have a heads-up if they have an implementation that they haven't published. We're much appreciative of you bringing that to the group.
A
So, just whenever you want the next slide, just let me know and I'll click it for you. Great.
F
So why should we do that? We needed a way to verify our implementation, and if we have to do the work anyway, we can make it useful for others too. This is in line with our mandate of improving the ecosystem, and it's good for other implementations too, because writing tests is a lot of work and they get free tests, and there are secondary effects too.
F
The tests are black-box tests, and there are two kinds of tests. The first one I call consumer tests, where artifacts are produced by the test suite and then consumed by all the implementations. The other kind are producer-consumer tests, where the artifacts are also produced by the implementations being tested, so you get this nice interop matrix. The test suite uses an interface called SOP, the Stateless OpenPGP interface, that has been extracted by dkg, and you can see some examples.
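The producer-consumer idea can be sketched roughly like this (my own illustration with toy stand-in backends, not the test suite's actual code): every implementation produces an artifact for a scenario, and every implementation then tries to consume each artifact, which yields the interop matrix.

```python
# Sketch: building an interop matrix from producer/consumer combinations.
from itertools import product

def interop_matrix(implementations, produce, consume):
    """Return {(producer, consumer): bool} for one test scenario."""
    results = {}
    for prod, cons in product(implementations, repeat=2):
        artifact = produce(prod)
        results[(prod, cons)] = artifact is not None and consume(cons, artifact)
    return results

# Toy backends: "new" emits v2 artifacts; "old" emits v1 and accepts only v1.
def produce(impl):
    return {"old": "v1", "new": "v2"}[impl]

def consume(impl, artifact):
    return artifact == "v1" or impl == "new"

m = interop_matrix(["old", "new"], produce, consume)
assert m[("old", "old")] and m[("old", "new")] and m[("new", "new")]
assert not m[("new", "old")]     # the old consumer rejects the v2 artifact
```

In the real suite the produce/consume steps shell out to each backend through the SOP command-line interface instead of calling Python functions.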
F
So here we generate a key, then we use it to encrypt some data. Data is read from standard in and the ciphertext is produced on standard out, and to decrypt you do the same: you feed the ciphertext in on standard in and get the plaintext on standard out. Currently the test suite uses just five very simple operations and gets out a lot of information that way. On the right you see an example test.
F
Every test has a heading, and it has a stable link that you can refer to in bug reports. You get a description and maybe additional artifacts like certificates, and there is a little button next to artifacts; if you have the right font, you also get a little magnifying glass in the button, and if you click on that, you get taken to a packet dumper to inspect the artifact.
F
Most tests start with a base case, to make sure we are all on the same page, and then there are some variants. Some of the variants, where the interpretation or the expectation is not clear, don't have any expectations, so you get a kind of white row. And sometimes, mostly in producer-consumer tests, the producer fails to produce an artifact, or the produced artifact does not meet the expectations, so you can get a cross mark there too.
F
This is an example of a signature verification test. Gniibe started looking into encodings of ECC artifacts, and to aid that I decided to write this test. The test verifies an EdDSA signature, and it starts with the base case: that's green for all implementations except for GnuPG 1.4, which does not do ECC. And then you see a variant where the S value of the signature is zero.
F
This is an example of a producer-consumer test, or maybe we should call it a scenario test, and this models key generation, encryption and decryption. So you generate a key with implementation A, encrypt for that key with implementation B, and then decrypt it with implementation A, and you can see...
F
So I consider that a success, and we improved implementations across the board, not only Sequoia, in accordance with our mandate, and we also improved the understanding of the ecosystem. During our implementation effort we asked these questions, how are implementations handling this or that, and we just wrote a test to find out.
F
There are some tests where the implementations vary wildly, and this may point out where implementers need more guidance. Next slide, please.
F
If you look at the chart on the right, a significant percentage of the points you see are about algorithm support. Then there are problems. For example, if you consider signature subpackets, it's unclear what an implementation should do if there are multiple, maybe conflicting, subpackets.
F
It's sometimes not clear how to handle missing subpackets, missing timestamp subpackets, what to do if a signature's creation time is in the future, or what if the signature's creation time predates the signing key's creation time, or what if, at the time the signature was supposedly created, the signing subkey was not bound or was revoked, or the certificate was for some reason not valid. And there are unknown packets.
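As an illustration of the temporal questions above, here is a minimal sketch (my own, with an arbitrary policy; implementations disagree on exactly these choices) of checking a signature's claimed creation time against the clock and the signing key's creation time:

```python
# Sketch of two of the temporal checks the presentation says diverge
# across implementations; timestamps are Unix seconds.
import time

def signature_time_plausible(sig_created, key_created, now=None, skew=0):
    """Reject signatures whose claimed creation time is in the future
    (beyond an allowed clock skew) or predates the signing key."""
    if now is None:
        now = int(time.time())
    if sig_created > now + skew:       # claimed to be created in the future
        return False
    if sig_created < key_created:      # claimed to predate the signing key
        return False
    return True

key_created = 1_600_000_000
assert signature_time_plausible(1_600_000_100, key_created, now=1_600_000_200)
assert not signature_time_plausible(1_700_000_000, key_created, now=1_600_000_200)  # future
assert not signature_time_plausible(1_599_999_999, key_created, now=1_600_000_200)  # predates key
```

A full policy would also consult binding-signature validity and revocations at the claimed time, which is exactly where the suite observes the widest divergence.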
F
On expirations, implementations vary widely with respect to the question of whether signatures expire: in the context of certificates, do component binding signatures expire, do primary key binding signatures expire, and do certifications expire?
F
Finally, the really bad news: many implementations accept signatures from weak algorithms. A third of implementations accept and successfully verify MD5 signatures, five of nine signature implementations are fine with SHA-1 signatures, and four of nine are happy with RIPEMD-160 signatures. And just for fun, I included a test that takes signatures over the SHAttered collision, and five of nine implementations are fine with that.
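A rejection policy for weak digests can be as simple as a check on the hash algorithm ID before verification. The IDs below are the ones RFC 4880 section 9.4 assigns, but which algorithms to treat as weak is a policy choice; this sketch is mine, not any implementation's.

```python
# Sketch: refuse to verify signatures made with broken digest algorithms.
# RFC 4880 hash algorithm IDs: 1 = MD5, 2 = SHA-1, 3 = RIPEMD-160,
# 8 = SHA-256, 10 = SHA-512.
WEAK_HASH_IDS = {1: "MD5", 2: "SHA-1", 3: "RIPEMD-160"}

def rejected_weak_hash(hash_algo_id):
    """Return None if the algorithm is acceptable, or the name of the
    weak hash so the caller can report why verification was refused."""
    return WEAK_HASH_IDS.get(hash_algo_id)

assert rejected_weak_hash(8) is None          # SHA-256: fine
assert rejected_weak_hash(1) == "MD5"         # refuse before verifying
assert rejected_weak_hash(2) == "SHA-1"
```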
F
So if you want to add tests or propose tests, talk to me or open an issue. Most artifacts are generated on the fly using Sequoia, but some artifacts are just stored as test data, and if you generate artifacts, please use the common test keys, the Alice and Bob keys. And if you want to get your implementation tested, that's really easy: you just need to implement the Stateless OpenPGP interface, and then we can plug it into the test suite. To do that, just talk to me.
F
One area where help would be greatly appreciated would be the presentation of results because, as we add more and more tests, the results become more and more unwieldy. So having a front end, maybe one that looks at the JSON data and then presents it in a nicer way, would be very helpful.
B
Yeah, I just wanted to thank Justus for the presentation and for the tremendous amount of work that's gone into this. I think it has actually highlighted a lot of stuff, and I know that many bugs have been fixed as a result. We have a little bit of time for questions, but we're a little over time right now; do folks have anything they want to raise?
B
You can just put yourself in the queue by raising your hand. I want to encourage people to take a look at this: if you're in the working group and you have not looked at the interop test suite, you're missing out. This is something that I think really makes it useful for implementers, and for those of us who are working on the spec itself, to get a sense of where things might be confused or confusing. So, thank you, Justus.
G
Hello, hello, I'm gniibe. I have been working on the GnuPG implementation for years, and today I'd like to...
G
...present a proposal about SOS, the Simple Octet String. The background: in 2019 I tried to implement support for Ed448 and X448, and I found that OpenPGP is kind of messy about ECC, especially for EdDSA.
G
So I fixed GnuPG when I introduced...
But for ECC, an EC point is encoded into an MPI. I think that is the cause of many troubles. For curves whose format is big-endian it is okay, but these days modern ECC uses little-endian format, and because of that we have the zero-stripping problem: any existing implementation which supports Ed25519 has to deal with re-adding the stripped leading zeros.
G
The SOS definition is simple: it just changes the definition of the MPI.
G
For classic ECC (by the words "classic ECC" I mean NIST curves or Brainpool curves, big-endian ECC), existing specifications can just replace the word MPI with SOS; then it is compatible.
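A minimal model of the difference (my own reading of the proposal; the exact SOS wire semantics are defined in the draft, not here): an RFC 4880 MPI strips leading zero octets and records the bit length of the remaining integer, while an SOS-style encoding keeps the octet string exactly as given.

```python
# Sketch of why MPI encoding is awkward for fixed-length little-endian
# curve data, versus an SOS-style length-prefixed octet string.

def mpi_encode(octets: bytes) -> bytes:
    """RFC 4880-style MPI: 2-octet bit count, leading zero octets stripped."""
    stripped = octets.lstrip(b"\x00")
    bits = (len(stripped) - 1) * 8 + stripped[0].bit_length() if stripped else 0
    return bits.to_bytes(2, "big") + stripped

def sos_encode(octets: bytes) -> bytes:
    """SOS-style: 2-octet bit count of the string as given, octets unchanged."""
    return (len(octets) * 8).to_bytes(2, "big") + octets

point = b"\x00\x17" + b"\xaa" * 30          # a 32-octet value with a leading zero
assert len(mpi_encode(point)) - 2 == 31     # MPI silently shortens it to 31 octets,
                                            # so the reader must re-pad to 32
assert sos_encode(point)[2:] == point       # SOS round-trips the octets exactly
```

This is the "zero-stripping" problem: an Ed25519 point or signature half is a fixed 32-octet string, and once it has passed through an MPI the consumer has to know the expected length and restore the stripped zeros.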
G
Currently, in RFC 6637, we have a definition of the encoding into MPI for an EC point, but using SOS we can just defer to other specifications, not inside the OpenPGP specification. Next slide.
G
Now I use the idea of SOS to support Ed448 and X448; because new modern curves use little-endian format, SOS is better.
G
But many implementations of OpenPGP have already adopted Ed25519 and Curve25519, and...
I was about to adopt the easiest approach, basically following the existing Ed25519 and Curve25519 handling, but I changed my mind because of these pros and cons. It was easiest for GnuPG, but...
Yes, yes, yeah. Next slide, please. So we have another approach, the per-curve approach: defining a data format per curve. Next slide, please. And the best one would be just an octet string, but it would require a new algorithm number. Yes, and next slide. So my conclusion is that SOS is a compromise to introduce other modern ECC curves without deviating from existing implementations.
G
It's a bit strange, but backward compatibility is good. That's my presentation. Basically, my point is that when we try to introduce EdDSA things into the standard, you will see problems. I don't have a strong opinion about pushing SOS into the specification, but we can share the experience of the current EdDSA problems.
A
Yeah, I guess we don't have that much time, so, Phil, if it's a quick one-question kind of thing, that would be good. Thank you.
H
Yeah, I was just going to say there is another way that you can skin this cat, and that is the one I do in UDF, which I dropped into the comments, where I use a random seed and a per-algorithm key generation mechanism. It's defined for Ed25519 or whatever; they specify how to generate the key from random information.
H
If we adopted that as the approach throughout the IETF, then you push the whole onus of all this tagging and bagging onto the creators of the new algorithms. We don't need to revisit this, because the only thing that the coder ever needs to deal with is the seed. So, just a different way of looking at it.
B
Phil, that seems to me like it has pretty serious interoperability issues with deployed clients, and I'm not sure how it actually applies to the elliptic curve representation that gniibe was talking about here.
B
So we are pretty close on time here; the session has, I think, three minutes left. Paul, I don't know whether you want to talk about any of the outstanding issues in the draft or about your plans for the next revision. But gniibe, thank you for the presentation.
I
Hi. So I'm not sure if there's any point in rushing through this, but just a generic thing for the people who haven't followed the list and who are now in the working group meeting here. What we've done, as Daniel said in the beginning, is we are pulling in a lot of changes in digestible chunks, for people to reconfirm the consensus, so that we can get to a state where everybody agrees on the bis document. That is, for instance, Yoav...
I
...why you are seeing Triple DES as a MUST algorithm: we haven't gotten to the part where we're going to rewrite that section again; that is scheduled to be in the next, or the second-next, update to the draft. So in general, I'm mostly interested in whether people see anything in the process that we can improve on so far.
I
So we're basically trying to present this in little chunks, with one or two weeks of time for the working group to give us feedback, and then move on to the next one, in the hope that once we've consumed all the items that you see listed here, we are basically in a similar position as the older bis document, but we have confirmed the consensus on all of these.
B
...by raising hands, if you want to speak to that, but we can also follow up on the list if folks have suggestions about other ways to improve the process.
I
The previous presentation is interesting in that, if we would pull this into the bis document, then we'd have to see, because that would help us in specifying some of the newer curves that we're about to merge in; we would have to do that in this newer way.
I
So I'm just the editor of the document, so I would hope that the working group will at least discuss this SOS mechanism in the next two weeks, so that we can figure out how to incorporate that, or not, in the crypto refresh.
A
Great, thanks, Paul. And I guess the last topic we wanted to cover, really, was that our assumption was that we'd have one or a couple of interim meetings, depending on the cadence of production of new drafts.
A
I don't think there's too much to say about that here now, but I just wanted to raise that that's the plan, basically, on how we think we'd be working between now and IETF 111.
A
If anybody who wouldn't otherwise be taking part wants to, they're more than welcome, and we'll probably do some doodles and so on on the list for dates and things quite shortly. I'm guessing we'll aim for one or two interims between now and July. I think I just want to thank the presenters, and then I'll hand over to dkg, who can close us out.
B
I think that covers it; I'm not sure I've got too much else. I'm hoping that we can get more people to give some feedback and proposed fixes on the list, and hopefully the editors will be able to drop a new revision sometime in the next week, once you've had a chance to recover from the IETF.