From YouTube: IETF109-CFRG-20201117-0500
Description
CFRG meeting session at IETF109
2020/11/17 0500
https://datatracker.ietf.org/meeting/109/proceedings/
A
You can find the participant guide and some assistance, and you can report issues via the links that you can find on the slides. Notes are generated automatically based on the datatracker information, and the minutes will be available using this link.
A
I assume you know this well, but if not, please follow these links.
A
We don't have any new RFCs, and no drafts in the RFC Editor's queue. We have a document in IESG review; in fact it's past the IETF conflict review, but there were some minor concerns, so we hope that after the meeting the authors can update their draft and the Argon2 draft will finally go to the RFC Editor's queue.
A
Some updates are needed, but all replies from the authors have been approved by the reviewer, so we think we can go to research group last call pretty soon. KangarooTwelve has gone through a second review. The VOPRF draft is updated, and we'll have an update on this document today at the meeting. We also have the HPKE document updated; its research group last call is done now, and it's waiting for the shepherd's review.
A
We have status updates today at the meeting for the two PAKE documents that were the winners of the PAKE selection process, OPAQUE and CPace. For both of them we have a status update today, and the work is now in the active phase, so information will be given in the status update slides about the related work items. We don't have any news about the C2PQ draft, and we have two drafts in discussion.
A
Also, we have some errata on RFCs that have been published before.
C
I just got like 13 text messages, okay, yeah! So thanks everyone; I'm just going to give a sort of brief update on OPAQUE.
C
Since the last meeting, we submitted the first official IRTF version of the draft, and we have been working with Hugo, and Kevin from Facebook, on getting this ready for, you know, actually shipping software down the road, with test vectors, reference implementations, and a lot more protocol specification clarity. But there are still some more issues to be covered and resolved. So what I'll do today is give a sort of high-level overview of the protocol, highlight some of the aspects that have changed since Hugo's initial version, point to some open issues that I'm hopeful people here will chime in and give us feedback on, and then discuss any next steps afterwards.
C
All right, so here's a high-level summary of OPAQUE for those who aren't familiar with the topic and haven't heard of the draft. OPAQUE, as it says here, is an aPAKE, an augmented or asymmetric PAKE. It effectively takes an assortment of cryptographic constructions, one of which is an OPRF, along with a hash function, a memory-hard function such as Argon2 or scrypt, and an authenticated key exchange protocol, and glues or compiles them all together into a strong aPAKE.
C
By asymmetric we mean that the client and server don't both share a copy of the same password, unlike CPace, which is symmetric in this sense. OPAQUE consists of two phases, or two rounds. During the first phase, registration, clients use their password to register a set of credentials with the server. This is a set of public-key credentials that are used to key the underlying authenticated key exchange protocol to actually derive a new secret. And then there's the actual online authentication flow, which runs sort of the same set of cryptographic operations.
C
So there's this nice decoupling and separation between the two, and I'll describe that visually a little bit later on. The registration flow is effectively three messages back and forth between the client and the server.
C
The first round trip is a request and response between the client and the server. Its purpose is to basically compute an OPRF over the client's password, pw in this particular case.
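As a rough illustration of this step, here is a toy blinded OPRF evaluation over a tiny multiplicative group (a small safe prime, with squaring as a stand-in hash-to-group). All names and parameters here are mine for illustration; the draft uses elliptic-curve groups such as ristretto255 and the ciphersuites of the IRTF OPRF draft, not this construction.

```python
import hashlib
import secrets

# Toy parameters: safe prime P = 2Q + 1, so the squares mod P form a
# subgroup of prime order Q.  Far too small for real use; illustration only.
P = 2879
Q = (P - 1) // 2  # 1439, prime

def hash_to_group(pw: bytes) -> int:
    """Map a password into the order-Q subgroup by hashing, then squaring."""
    h = int.from_bytes(hashlib.sha256(pw).digest(), "big") % P
    return pow(h, 2, P)

def blind(pw: bytes):
    """Client: blind H(pw) with a random exponent r, hiding pw from the server."""
    r = secrets.randbelow(Q - 1) + 1
    return r, pow(hash_to_group(pw), r, P)

def evaluate(blinded: int, k: int) -> int:
    """Server: raise the blinded element to its per-user private OPRF key k."""
    return pow(blinded, k, P)

def finalize(pw: bytes, r: int, evaluated: int) -> bytes:
    """Client: unblind with r^-1 mod Q and hash to get the final PRF output."""
    unblinded = pow(evaluated, pow(r, -1, Q), P)  # equals H(pw)^k
    return hashlib.sha256(pw + unblinded.to_bytes(2, "big")).digest()

k = secrets.randbelow(Q - 1) + 1   # server's per-user OPRF key
r, blinded = blind(b"hunter2")     # client blinds its password
y = finalize(b"hunter2", r, evaluate(blinded, k))
# The client obtains a PRF of its password under k without learning k,
# and the server never sees pw or the output.
```

Two different blindings of the same password yield the same final output, which is what makes the result a well-defined function of (pw, k).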
C
That part is cut off on the slide, but there's a per-user OPRF key that's stored alongside the user's password file in the password table on the server side. And then, from the output of that OPRF:
C
The client constructs what's called an envelope, which effectively just carries credentials that are necessary to complete the authenticated key exchange later on. The server instructs the client in terms of, you know, what credentials need to be encrypted, what credentials need to be authenticated, and what needs to go in this envelope, and then the client just uploads that to the server, so the server can store it alongside this per-user OPRF key. Very simple.
C
The credentials are extensible, so to speak, and they are composed of a combination of what we call secret values and authenticated values. The secret values are those that are encrypted, so the server does not have access to them; minimally, that includes the private key of the user, skU in this particular case. Authenticated values are those that simply need to be authenticated in order to get the proof of security that OPAQUE requires.
C
So in this example pkS, which is highlighted in green, is authenticated and included in the envelope. Applications are free to choose whichever credential structure is important for their particular instantiation or use case. Some applications might additionally want to specify both an identity of the user, idU, as well as an identity of the server, and include those in the envelope, or they might just want to use
C
raw public keys as the identities of each peer. The important bit is that the way this credential structure is constructed is flexible: the server basically tells the client what to put in it, the client obeys and happily provides the encrypted and authenticated response to the server, and the secrecy and the authenticity of all of this is protected by the output of the OPRF.
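To make the envelope idea concrete, here is a hedged sketch (the function names and layout are mine, not the draft's wire format) of encrypting the secret values with a pad derived from the OPRF output and authenticating the rest, in the one-time-pad-like style described elsewhere in this session.

```python
import hashlib
import hmac
import os

def derive(oprf_output: bytes, label: bytes, n: int) -> bytes:
    """Toy labeled expansion of the OPRF output (not the draft's KDF)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha512(oprf_output + label + bytes([counter])).digest()
        counter += 1
    return out[:n]

def seal_envelope(oprf_output: bytes, sk_u: bytes, pk_s: bytes):
    """Encrypt the secret value skU with a derived pad; MAC it together
    with the authenticated value pkS."""
    pad = derive(oprf_output, b"pad", len(sk_u))
    auth_key = derive(oprf_output, b"auth", 32)
    encrypted = bytes(a ^ b for a, b in zip(sk_u, pad))
    tag = hmac.new(auth_key, encrypted + pk_s, hashlib.sha512).digest()
    return encrypted, tag

def open_envelope(oprf_output: bytes, encrypted: bytes, pk_s: bytes, tag: bytes) -> bytes:
    """Verify the tag over encrypted || pkS, then strip the pad to get skU."""
    auth_key = derive(oprf_output, b"auth", 32)
    expected = hmac.new(auth_key, encrypted + pk_s, hashlib.sha512).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("envelope authentication failed")
    pad = derive(oprf_output, b"pad", len(encrypted))
    return bytes(a ^ b for a, b in zip(encrypted, pad))

oprf_out = os.urandom(64)                  # stand-in for the OPRF output
sk_u, pk_s = os.urandom(32), os.urandom(32)
enc, tag = seal_envelope(oprf_out, sk_u, pk_s)
assert open_envelope(oprf_out, enc, pk_s, tag) == sk_u
```

Only someone who can recompute the OPRF output, i.e. someone who knows the password and talks to the server, can open the envelope; a wrong OPRF output fails the tag check.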
C
Importantly, however, in addition to the OPRF output, so to speak, being sent back from the server to the client, the server also sends the envelope that corresponds to the particular user's identity.
C
For obvious reasons, perhaps: the client uses the OPRF output it just got (sorry, hold on, just muting notifications, someone texted me), as well as the envelope, to decrypt the credentials and derive some additional keying material, which we call an export key. We'll get into that.
C
It's discussed in the draft. The client then runs an authenticated key exchange using those credentials. Typically, in an actual instantiation of OPAQUE, you would run these two pieces together: you would run the authenticated key exchange inline with the actual execution of the OPAQUE protocol, and you do it in such a way that the OPAQUE protocol messages are included in the transcript for the AKE in question.
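A minimal sketch of the transcript-binding idea: the session key is derived over a hash that covers the OPAQUE messages along with the AKE flights, so tampering with either changes the key. The function names and the HMAC-based derivation here are hypothetical placeholders, not the draft's key schedule.

```python
import hashlib
import hmac

def session_key(ikm: bytes, credential_request: bytes,
                credential_response: bytes, ake_messages: list) -> bytes:
    """Derive the AKE output key over a transcript hash that includes the
    OPAQUE messages (credential request/response) and the AKE messages."""
    transcript = hashlib.sha512()
    transcript.update(credential_request)
    transcript.update(credential_response)
    for msg in ake_messages:
        transcript.update(msg)
    # HMAC stands in for an extract/expand step over the shared AKE secret.
    return hmac.new(ikm, transcript.digest(), hashlib.sha512).digest()

ikm = b"shared 3DH secret material"  # stand-in for the DH outputs
k1 = session_key(ikm, b"req", b"resp", [b"ake1", b"ake2"])
k2 = session_key(ikm, b"req-tampered", b"resp", [b"ake1", b"ake2"])
assert k1 != k2  # modifying an OPAQUE message changes the derived key
```

Binding the OPAQUE messages into the AKE transcript is what makes an attacker's modification of the OPRF exchange or envelope show up as a key mismatch rather than go unnoticed.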
C
So, for example, if you're running OPAQUE with 3DH (triple Diffie-Hellman), which is one of the variants described in the draft, you would simply include the credential request and response, which are effectively the messages that carry out the OPRF evaluation as well as transfer the envelope from server to client, in the flight from the client to the server and, respectively, in the flight from the server to the client. The rest of the AKE-specific bits, those particular to 3DH, remain unchanged.
C
The draft goes into details for 3DH, one of the instantiated AKEs, about how to derive the keys necessary to encrypt parts of this particular handshake, if you were to use this, how to authenticate parts of this handshake, and, importantly and ultimately, how to derive a session secret that is then used as the output of the AKE, which is the whole point of running OPAQUE. In the draft, because we tried to describe OPAQUE in sort of a modular way, we have this notion of a configuration.
C
A configuration fully specifies basically everything you would need to instantiate in order to implement a particular version of OPAQUE. As I was alluding to before, this includes minimally the OPRF, a particular group and ciphersuite (it references, in fact, the other IRTF draft on OPRFs), a cryptographic hash function, a memory-hard function, and an AKE. So two examples of this might be:
C
You know, the OPRF instantiated with ristretto255 using SHA-512, a variant of Argon2 with fixed or application-specified parameters, and 3DH. Alternatively, another instantiation might use one of the NIST curves with SHA-256, scrypt, and TLS 1.3 as the AKE, if you have access to the slides and you're following along via the link here.
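The two example configurations just mentioned can be written down as simple tuples. This is only an illustrative data model; the draft identifies these components formally, not with these strings.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OpaqueConfiguration:
    """Everything needed to pin down one instantiation of OPAQUE:
    an OPRF ciphersuite, a hash, a memory-hard function, and an AKE."""
    oprf_suite: str   # references a suite from the IRTF OPRF draft
    hash_fn: str
    mhf: str
    ake: str

# The two examples from the talk, spelled out as configuration tuples.
cfg_a = OpaqueConfiguration("OPRF(ristretto255, SHA-512)", "SHA-512",
                            "Argon2 (fixed or app-specified params)", "3DH")
cfg_b = OpaqueConfiguration("OPRF(P-256, SHA-256)", "SHA-256",
                            "scrypt", "TLS 1.3")
assert cfg_a != cfg_b and cfg_a.ake == "3DH"
```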
C
So from an application perspective, there's really not much you need to do in order to make use of OPAQUE as it's currently specified. For starters, minimally, you have to actually select a configuration, which is one of these tuples previously presented.
C
You have to choose a credential structure that makes sense for your application, although we can just specify some sane defaults in the draft and hopefully encourage applications to use those defaults.
C
One of the defaults might be just simply using raw public keys as identities and, you know, punting other kinds of identifying information outside of OPAQUE, as it's strictly unnecessary. And, excuse me, finally there's this additional output of OPAQUE that I was alluding to earlier, called an export key, and its purpose is really that it's meant to be used as something that
C
There are a number of open issues on the draft; these are some of, I think, the more important ones to resolve right now. The first is, you know, how much detail in terms of the actual AKEs we need to go into in this particular document, without, hopefully, litigating the whole issue here.
C
This basically asks the question: do we fully specify an AKE and its wire format in this document right here, or do we simply provide a template for other AKEs, so that applications can build out AKEs like 3DH, SIGMA-I, or SIGMA-R if they wanted to? And if we go down either path, what sort of design choices or defaults do we pick, and what sort of test vectors do we need to provide?
C
I think this is a fairly fundamental and important piece to resolve fairly quickly, so if people have thoughts about which way to go on this particular issue, it would be lovely if you could share them.
C
Another issue that's worth discussing, perhaps, is how parameters for memory-hard functions are negotiated for a particular application use case: whether they're transferred in band during registration, or communicated outside, in sort of the wrapper protocol for OPAQUE. And then there are some other issues as well. There's a link to the GitHub repository here, as well as on the title slide; I encourage you to check it out, file more issues, and help us improve the draft.
C
As far as implementations are concerned, I'm aware of at least three that are coming up to speed, and a fourth one recently, not listed here, just happened over the weekend. There is a reference implementation written in Sage, using the reference implementation from the OPRF draft.
C
Okay, anyways, basically I'm saying it's at a state where we can implement it. We have various interested people bringing up implementations, and we're working to converge on interoperability. But of course, feedback in terms of presentation, design decisions, APIs, all that sort of thing is, of course, welcome.
F
Quick question while that's being sorted out. I'm wondering, and this is kind of a dumb question, about who initiates the AKE. I'm wondering if it's possible for the server to initiate the AKE and send the first message along with its response from the OPRF evaluation.
C
The server needs the client's input for the OPRF to actually produce its response, so I'm not sure how that would work on the wire.
A
It's a three-round protocol because you need key confirmation from the client to the server.
A
Chris, one question from me: during the PAKE selection process we had a lot of security reviews, and Hugo has made a lot of effort to provide a security assessment of OPAQUE. My question is whether something more is needed regarding the security assessment of OPAQUE, maybe some detailed security proofs in some models, or is everything ready in this sense?
C
No; in fact, I think the changes that we've made since the initial individual draft have made the analysis a lot easier. You may recall that one of the requirements for early versions of OPAQUE was a key-committing AEAD.
C
However, we've sort of worked around that issue by using a one-time-pad-like construction for authentication, or rather encryption, of envelope values. So in some sense the design decisions we made have simplified the lives of those who might analyze it.
C
However, various instantiations of OPAQUE, like OPAQUE in TLS 1.3 in particular, deviate a bit from OPAQUE in its purest form as written, and I think if that were to come to fruition, it would require additional analysis, either from people on the crypto review panel or from the community. In particular, one of the TLS integration options is effectively using OPAQUE to authenticate the already-established session secret from TLS, rather than to derive a new output session secret key, which is a subtle but important difference between the two. So, TL;DR.
A
Okay, thanks a lot, makes sense. Please, any other questions? Okay, thanks a lot, Chris. Then we have the next presentation, from Björn Haase. Björn, please share the slides.
G
So in the newly formed editor team, the first step we've been doing is to make a plan for our objectives and the deliverables that we want to produce. Of course the RFC document is one important aspect, but we also aim at providing public code and scripts for the test vectors, and also a public reference implementation for some curves, for some variants of the protocol. And we also agreed on that for the CPace draft.
G
Currently, there are two types of Diffie-Hellman protocols considered in the draft. Firstly, a single-coordinate Diffie-Hellman on Edwards and Montgomery curves using Elligator, which have a small cofactor; and then we have a specification for x-coordinate-only Diffie-Hellman on short Weierstrass curves such as P-256. And we agreed that we should be adding a third alternative, which operates on the full group using both coordinates.
G
For instance, you might want to use CPace with ristretto. Looking at these variants, you have slight variations between the different types and different instances, specifically regarding the properties of the hash-to-curve algorithm and the map-to-point algorithm, and we need to have an explicit security analysis of all the slight variations as a first step, before accurately specifying it in the RFC.
G
On Curve25519 you have the clamping function, which is something that doesn't apply to the short Weierstrass curves, and we aim to consider all of these details so that existing libraries can be reused: there is, for instance, no need for an X25519 variant that uses uniformly sampled scalars.
G
So the security analysis is the main work item that we are working on in the editor team at the moment. As a preliminary result, I can give you the information that we found an option for providing tight bounds for all the different variants.
G
We saw that we could reduce the assumption set from the gap simultaneous Diffie-Hellman and gap computational Diffie-Hellman problems to weaker assumptions. In this step, we saw that we could reduce security for curves which have a small cofactor to the corresponding problems on the prime-order subgroup.
G
We need the property that there must be an efficient algorithm for finding an exponent such that the scalar multiplication of a given point by this exponent lands in the image of the map. That's a property which is provided by all currently discussed mappings from hash-to-curve, and those used for ristretto and decaf, where on average every second exponent will do. With this property, it's possible to reduce the hardness of the simultaneous Diffie-Hellman problem on the subset of points that could be generated by the map to the simultaneous
G
Diffie-Hellman problem on the full group or the prime-order subgroup, respectively. Discussion of these aspects was not considered suitable for the RFC, and we are preparing a separate paper discussing all these implementation details as a first step. That's currently the main aspect of our weekly meetings in the editor team.
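The property described above can be seen in a toy setting: take the multiplicative group mod a safe prime, let the "image of the map" be the squares, and sample random exponents until the scalar multiple lands in the image. Since the squares have index two, on average every second exponent works, mirroring the remark above. This is only a model of the argument, not CPace code; all parameters are illustrative.

```python
import secrets

# Toy setting: Z_p^* for a safe prime P = 2Q + 1.  The squares mod P play
# the role of "the image of the map"; they have index 2 in the full group.
P = 2879
Q = (P - 1) // 2

def in_image(x: int) -> bool:
    """Euler's criterion: x is a square mod P iff x^((P-1)/2) == 1 mod P."""
    return pow(x, Q, P) == 1

def find_exponent(point: int):
    """Rejection-sample exponents until point^e lies in the image."""
    tries = 0
    while True:
        tries += 1
        e = secrets.randbelow(P - 2) + 1
        if in_image(pow(point, e, P)):
            return e, tries

# 7 is a non-square mod 2879, so 7^e is in the image exactly when e is even:
# each random exponent succeeds with probability 1/2, so ~2 tries on average.
e, tries = find_exponent(7)
assert in_image(pow(7, e, P))
```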
G
Any suggestion regarding how to prepare this reference implementation, maybe for Python and ANSI C, would be welcomed by us. It would be good to have a small, self-contained library which doesn't need to pull in a lot of code, and of course it should be high-quality, constant-time code. So any suggestion would be helpful and would be welcome.
G
So, as a summary, I'd like to say that we have mainly organized our team and started the work. Unfortunately, we don't have the speed that I'd like to have, partly because the pandemic situation is slowing things down. The current activity is providing tight security bounds and reductions for all the tiny implementation differences that we considered, and the next important step will be to have scripts, test vectors, and reference implementations for the alternative variants.
A
Björn, one question from me. Maybe I have missed something, but could you tell me: are there any ideas, or maybe drafts, or some discussions about integration of CPace into existing IETF protocols? We have such documents for OPAQUE and for SPAKE2. Do you have any process here for CPace?
G
I have a list of persons to approach about this, but I didn't find time to prepare a draft for a TLS integration. That would be the next step.
A
Okay, thanks a lot, Björn. Then we have the next presentation, from Henry de Valence, with respect to ristretto255 plus decaf448.
I
And hopefully that slide shows up.

J
I see it.

I
Sorry about that. All right, so I'm going to give an update on ristretto255 and decaf448; that's the name of the draft. First things first: what problem are we trying to solve here?
I
Yeah, maybe I can use the... no, I'm just trying to figure out how to get the... okay.

A
Let me share the slides; I think you can see them now, so just tell me when to advance.

I
So here we are again. Next.
I
So the first thing is: what problem are we trying to solve? Next. You're an implementer of a cryptographic protocol that has just come out, and you start opening up the paper that describes it. It says: "let G denote a cyclic group of prime order p." Now, as an implementer, you have to translate this from the abstract requirement to some concrete instantiation, probably using an elliptic curve group. Next. This means you need to choose what kind of elliptic curve to use. Next. And there are two common choices that people have mostly settled on: a Weierstrass curve or an Edwards curve.
I
Next. Weierstrass curves give you prime-order groups, which is what you want, whereas Edwards curves only ever give you a prime order times a small cofactor; but on the other hand, they have the fastest formulas. Next. And they have complete formulas, so they work with all inputs and it's easy to implement them in constant time. The kind of sad face is there because on Weierstrass curves, although complete formulas do exist, they're much slower. So, okay, we only have a small conceptual mismatch here.
I
How bad could that be? What's wrong with a small cofactor? Well, the big problem is that the security analysis for the abstract protocol does not apply to its concrete implementation. If you want to do full validation that some point is in the prime-order subgroup:
I
You need to do essentially a scalar multiplication, and that negates whatever speedup you would get. Or you can try to do some kind of ad hoc protocol tweaks, like: oh, for this protocol we'll handle the cofactor by multiplying by the cofactor in this part of the protocol, and then we'll also multiply over here, and so on.
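The "multiply by the cofactor" tweak can be illustrated in a toy group: in the multiplicative group mod a safe prime p = 2q + 1, the element p - 1 has order 2 and plays the role of a small-order curve point. Two elements that differ by this 2-torsion component are distinct on the wire, but multiplying by the cofactor (here, squaring) collapses them to the same value. None of this is ristretto or decaf code, just a model of the underlying issue; the parameters are illustrative.

```python
# Toy group: Z_p^* for a safe prime P = 2Q + 1, so the group order is 2Q
# and there is a "cofactor" of 2.  The element P - 1 (= -1 mod P) has
# order 2, playing the role of a small-order point on an Edwards curve.
P = 2879
Q = (P - 1) // 2

x = pow(7, 2, P)          # some element of the prime-order subgroup
low_order = P - 1         # the 2-torsion element: low_order^2 == 1 (mod P)
x_twisted = (x * low_order) % P

# The two "points" are different on the wire...
assert x != x_twisted
# ...but multiplying by the cofactor (here: squaring) collapses the
# small-order component, so the cofactor-cleared values agree.
assert pow(x, 2, P) == pow(x_twisted, 2, P)
# This ambiguity is what ad hoc cofactor tweaks paper over, and what a
# prime-order encoding like ristretto removes by construction.
```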
I
But that means that these tweaks not only have to be analyzed; they also cause kind of subtle, quote, "features". As an example of this, RFC 8032, which standardizes Ed25519, does not actually require conformant implementations to agree on whether or not a signature is valid. Next. This is ultimately downstream of different behavior between batch and single verification, and that's ultimately downstream of the cofactor.
I
If you're building, say, a distributed system, that would be important. And the tweaks that you have to do, like this key clamping, cause a lot of issues whenever you try to do something beyond just the basic protocol. And this is the happy-path case, right? The bad case is, next, you just have a catastrophic failure where something is totally insecure.
I
So how do you fix this mismatch? The answer is: you can use decaf and ristretto. What are these? These are constructions, originally by Mike Hamburg, of a prime-order group. Next. What they do is use a non-prime-order curve internally to implement all of the group operations with no overhead, and then all of the work happens in this canonical, non-malleable encoding of group elements that provides you with a prime-order group. And it comes batteries-included with a single hash-to-group method.
I
That fully encapsulates all of the elliptic-curve-related math. So it's agnostic to, you know, how am I doing domain separation? Am I using SHA-2? Am I using SHA-3? Et cetera, et cetera. But all of the intricacies of the curve math are contained. Next.
I
So how does this work? The full explanation you can find on the ristretto group website, so this is just kind of a conceptual overview. We're going to work with three families of curves. Next. There are Montgomery curves. Next. They have this formula that you might have seen before; here I'm writing it in terms of u and v, but elsewhere you might see it as x and y. Next.
I
These are famous because they support very fast pseudo-multiplication, where, given the u-coordinate of one point, you can get the u-coordinate of a scalar multiple. They're also convenient if you do zero-knowledge proofs, because in an arithmetic circuit they require very few constraints. Next. Then we have Edwards curves; they also have this formula.
I
These are birationally equivalent to Montgomery curves. They have the fastest known formulas for curve operations. Next. And the really cool thing is that those formulas also allow parallelism within curve operations, so you can do very fun things with SIMD. That's very nice for implementations.
I
The third is Jacobi quartic curves, which are more exotic and not really used very much. The reason they're relevant here is that it's very easy to write down their four points of order two, and that means you can efficiently encode a point modulo this two-torsion of order four, and so that gets rid of a cofactor of four. We don't actually compute on Jacobi quartic curves.
I
So the next thing that we do is link up these different curves using isogenies. The curves are connected in this isogeny graph, where the curve in the center is a Jacobi quartic for some choice of parameters a and d. It's 2-isogenous to two different Edwards curves, as well as to a Montgomery curve, and that Montgomery curve is equivalent to a third Edwards curve.
I
So there are a lot of different curves floating around. Next. But the idea is that we're going to use these isogenies to do the encoding. We already have this nice encoding on the Jacobi quartic; we specify an encoding there, and then we use isogenies to transport that encoding to different curve shapes.
I
So we can define the encoding starting with this Jacobi quartic, where it's very easy to write down, and then we transport that over to a curve that we might want to use for actually doing the group operations. Next. Finally, there's an extra step to handle cofactor eight instead of cofactor four. Next. And this lets you see, at a conceptual level, the difference between decaf and ristretto.
I
Using this diagram again, next: decaf starts with a curve that was defined as an Edwards curve, edwards448, and it transports the encoding from the Jacobi quartic to that Edwards curve in the top left directly. Whereas ristretto was designed to work with curve25519, which was originally specified as a Montgomery curve, so instead it places the Edwards curve in the position in the bottom right, and it also handles cofactor eight. Next.
I
Although this is a conceptual picture, when we tell implementers what to do, we provide them with concrete formulas that collapse all of this into just "do these operations". It is also possible for implementations to use any curve in this graph and remain totally wire-compatible with all other implementations, which is very cool. So we have two concrete parameterizations. One is ristretto255; this can be implemented using curve25519, in either the Edwards or the Montgomery form, or you can implement it with a different isogenous curve.
I
It's at the 128-bit security level; the security is exactly the same as the security of curve25519, and we've already seen a lot of adoption for zero-knowledge proofs, private set intersection, PAKEs, etc. Basically any crypto protocol that you could think of that would be fun. And then there's decaf448.
I
This uses the edwards448 "Goldilocks" curve. Next. So you get a 224-bit security level, and you would use this basically wherever you would use Ed448. Basically, all the same criteria for choosing between Ed25519 and Ed448 as a signature scheme would apply to decaf448 versus ristretto255. Next. So, the current status of these things.
I
We rewrote a lot of the language in the draft last year to address a first round of feedback from mailing list participants, and we've also added an explicit section on decaf448, adding that into the spec along with parameters and test vectors for it. Next. And we don't think that there are any more outstanding issues with the spec.
I
So it is, in some sense, ready to go for the next round of feedback and comments, the next round of the standardization process. I'm sorry about the screw-up with the slides, but I'd be totally happy to answer any questions, if anybody has them.
A
Maybe Björn is dealing with some issues. Henry, can I ask you to remind the chairs about the next steps, namely the security review next week if no comments from the group occur this week? So please send us a reminder next week if no comments occur this week.
G
Okay, sorry, do you hear me now?

A
Yes, yes.

G
Okay, so the question that I'm having is: do you plan to synchronize your hash-to-curve operation with the algorithm from the hash-to-curve draft, or do you expect that decaf and ristretto will have slightly different hash-to-curve algorithms in the future?
I
I think, ideally, what we would do is... so, as I mentioned, the division that is in the draft that we have now is basically at the boundary of the symmetric crypto.
I
But in order to have compatibility with existing implementations and systems that make slightly different choices, I think it probably makes sense to keep the same kind of interface: this is the place where you put in the uniform bytes, and then the curve math happens.
I
So, ideally, maybe it would be possible for the hash-to-curve draft to say: if you want to do hash-to-curve to ristretto, you first do these symmetric operations, and then you plug that into the uniform map from the ristretto RFC, or draft, or spec.
C
Yes, yeah, thanks Henry; the hash-to-curve draft basically does what you suggest now. We have basically specified two ways to take arbitrary input, along with domain separation stuff, and produce an input that you can just pipe right into, basically, the from-uniform-bytes function, or whatever the function name is in your document, for ristretto.
C
So I think everything is good to go from a hashing-to-ristretto-and-decaf perspective. Cool.
I
So, just to highlight the reason that I think this is a really good approach: the uniform map that's defined currently in the decaf and ristretto spec is made so that implementations that use different internal representations can still be wire-compatible on their hash-to-curve. You could imagine, for instance, that an implementation could use the Montgomery form and still be compatible.
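The layering described in this exchange, symmetric expansion first and then an encapsulated map into the group, can be sketched as follows. Here `expand_message` stands in for the hash-to-curve draft's expansion and `from_uniform_bytes` for the group's uniform map, with a toy subgroup of squares mod a small prime in place of ristretto255; nothing here matches the real constructions.

```python
import hashlib

# Toy group: squares mod a small safe prime stand in for ristretto255.
P = 2879

def expand_message(msg: bytes, dst: bytes, n: int) -> bytes:
    """Stand-in for the hash-to-curve draft's expansion: domain-separated,
    fixed-length pseudo-uniform bytes.  Not the real XMD construction."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha512(dst + bytes([counter]) + msg).digest()
        counter += 1
    return out[:n]

def from_uniform_bytes(uniform: bytes) -> int:
    """Stand-in for the group's uniform map: all curve math lives here,
    behind this one boundary."""
    return pow(int.from_bytes(uniform, "big") % P, 2, P)

def hash_to_group(msg: bytes, dst: bytes) -> int:
    """The split from the discussion: symmetric crypto first, then the
    encapsulated map into the group."""
    return from_uniform_bytes(expand_message(msg, dst, 64))

a = hash_to_group(b"hello", b"EXAMPLE-DST-V1")
assert a == hash_to_group(b"hello", b"EXAMPLE-DST-V1")  # deterministic
assert 1 <= a < P
```

The point of the split is that two implementations agreeing only on the boundary bytes stay wire-compatible regardless of how each represents group elements internally.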
K
Yes, apparently I want to share my screen. There we are, yes. This should be relatively short; there's not a lot going on here: a quick status update and a couple of points on the open issues that we have. I think Felix did most of the work here, but we've taken in the changes.
K
The plaintext limit is going to be split, and we can provide two inputs to the functions that we have. That will allow people who have use cases where there is more authenticated than encrypted data to get tighter limits in their applications.
K
There are two things that I think we probably need help with, and one of the things that I was looking for here was help. I don't think that we currently have any good analysis of SIV, and there's been some discussion about adding an SIV mode section.
K
We don't have anything there, and we don't have anything currently planned for dealing with PQ in this space. We do know that once quantum computers are around, there are going to be very different limits if we assume a quantum computer of a certain size, but we don't currently have any plans to do anything about that one. My proposal here would be to proceed and say that the document is done, just including some caveats and notes about these things, rather than having a complete analysis of those in light of things like PQ.
K
I notice that Dan asked in chat whether this would be limits for SIV generically or for the specific GCM-SIV mode. What's in the draft currently is very specific to particular instantiations of usage, even to the point where we have the randomized nonces.
K
The draft currently has very specific limits in terms of specific instantiations of the AEADs, including things like the nonce randomization that we use in TLS and other settings. So I would expect that if we were going to do anything for SIV, it would be specifically AES-GCM-SIV, with a particular set of bounds on all the parameters, rather than something very generic.
E
Okay, so just relaying from Jabber, it says: I just read the draft and I'm missing terms like gigabytes. It has things like p, the probability of an attack, which I don't know where people can take from. So I think it should have some kind of table with the actual gigabytes you need before you need to re-key.
K
That is an interesting point, but unfortunately, the way that these ciphers are used depends very much on specifics like the number of bytes in each message, the number of messages, and a bunch of other parameters. So we can't do something as simple as "this many gigabytes" unless we have some bounds for that analysis.
K
So if you look at the way that some implementations are doing that, they need to have an understanding of things like typical message sizes and other constraints on the implementation, like the split between AAD and plaintext. So yes, we could make a very big table with many dimensions, but that's pretty much self-service. The other thing is that we don't know what people's tolerances are for attack probability.
K
We don't actually have any real good guidance here on whether two to the minus 57, which is a number that TLS used, or two to the minus 40 would be acceptable, and so that's something we're leaving up to people to decide on their own.
K
Yeah, so there's some discussion in chat there. Chris, with a little bit of help from me, put up a thing on GitHub that allows you to move a little slider back and forth and see what the limits might be under certain assumptions.
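The dependence on message counts and sizes can be made concrete with a small calculator. This is only a sketch under a deliberately simplified assumption: a generic birthday-style confidentiality bound of the form (q·l)²/2¹²⁸, not the exact formulas from the usage-limits draft; the function name and shape are mine.

```python
def max_messages(max_blocks_per_msg: int, target_prob_log2: int) -> int:
    """Illustrative single-key AES-GCM message limit.

    Assumes a simplified birthday-style confidentiality bound
        advantage <= (q * l)^2 / 2^128,
    where q is the number of messages and l the maximum number of
    128-bit blocks per message.  Solving for q at a target advantage
    of 2^target_prob_log2 gives q <= 2^((128 + target_prob_log2)/2) / l.
    This is NOT the exact bound from the usage-limits draft.
    """
    q = 2 ** ((128 + target_prob_log2) / 2) / max_blocks_per_msg
    return int(q)

# Example: 16 KiB records (2^10 blocks) and an attack probability of
# 2^-57 (the figure mentioned for TLS) allow roughly 2^25.5 messages.
limit = max_messages(2 ** 10, -57)
```

Even under this toy bound, halving the tolerated probability or doubling the record size visibly changes the limit, which is the speaker's point about why a single "gigabytes before re-key" number can't be given.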
N
All right, so, okay, my name is Armando. I'm going to present the status of oblivious pseudorandom functions (OPRFs) using prime-order groups. Just to recall, OPRF is short for oblivious pseudorandom function; as Chris also mentioned in the talk about the PAKE, this is one of the fundamental parts of OPAQUE, so I'm going to explain a little bit how this protocol works.
N
So an OPRF is a two-party, one-round-trip protocol between a server and a client. In this case, they want to collaboratively compute the output of a pseudorandom function. The client has some input x and the server holds a private key, and we want that, at the end of the protocol, the client learns the output of the PRF in such a way that it is oblivious.
N
That means that the client only learns the output y but doesn't learn anything about the key, and at the same time, we don't want the server to learn anything about the client's input or output.
N
In addition, the server can commit to the key that is used for computing this PRF and return a proof, which is used by the client to verify whether the PRF was computed with the server's key.
N
When we are in this verifiable mode, we say that we are in the case of a VOPRF. Okay, so the protocol has a setup phase where, basically, the client and the server agree on a ciphersuite ID, and the server computes the secret key that is used for the protocol. There is also some additional context information for both client and server.
N
That context is used during the protocol. So, in the online phase, the client starts the communication by blinding some input that it has; we can think of this input as an array of bytes. This blind operation produces two outputs: a blinding scalar r and a blinded element.
N
This blinded element is sent to the server, which operates on it using the Evaluate function; this applies the private key, producing the evaluated element.
N
This evaluated element is sent back to the client, which uses the Unblind operation; this reverses, in a certain way, the blinding that was applied, to get the unblinded element, which finally is used together with the input and some additional information to produce the final output, which is the whole point of the OPRF.
N
This whole protocol is just the OPRF. In the VOPRF, during the Unblind step, the client is able to verify the proof that is sent inside the evaluation made by the server.
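As a rough illustration of the Blind/Evaluate/Unblind flow just described, here is a toy OPRF over a tiny multiplicative group (a 10-bit safe prime) with exponential blinding. The draft uses prime-order elliptic-curve groups such as ristretto255 and precisely specified encodings; the group, encodings, and helper names below are simplifications and are in no way secure.

```python
import hashlib
import secrets

# Toy prime-order group: the order-Q subgroup of squares modulo the
# safe prime P = 2*Q + 1.  For illustration only; far too small.
P = 1019
Q = 509

def hash_to_group(x: bytes) -> int:
    """Map an input to a nonzero square mod P (a group element)."""
    h = int.from_bytes(hashlib.sha256(x).digest(), "big")
    return pow((h % (P - 1)) + 1, 2, P)

def blind(x: bytes):
    """Client: pick a random scalar r and blind H(x) as H(x)^r."""
    r = secrets.randbelow(Q - 1) + 1
    return r, pow(hash_to_group(x), r, P)

def evaluate(k: int, blinded: int) -> int:
    """Server: raise the blinded element to its private key k."""
    return pow(blinded, k, P)

def finalize(x: bytes, r: int, evaluated: int) -> str:
    """Client: unblind with r^-1 mod Q, then hash input and element."""
    unblinded = pow(evaluated, pow(r, -1, Q), P)  # equals H(x)^k
    return hashlib.sha256(x + unblinded.to_bytes(2, "big")).hexdigest()

# One run: the client never sees k, the server never sees x or y.
k = secrets.randbelow(Q - 1) + 1
r, blinded_elem = blind(b"password")
y = finalize(b"password", r, evaluate(k, blinded_elem))
```

The obliviousness shows up in the algebra: the server only ever sees H(x)^r for a fresh random r, and the client removes r again, so two runs with different blinds still land on the same H(x)^k.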
N
The latest changes on the draft include two new ciphersuites, for the ristretto255 and decaf448 groups, and we also updated the details about hashing to the group, because inside the protocol we need to do hashing to the group and hashing to a scalar, and this is fully explained in the draft now. We added a section about additive blinding; this kind of blinding allows the client to produce faster blinding than the natural way.
N
Also, as I mentioned, there's a complete specification of ciphersuite parameters, there are test vectors available, and there have been a lot of editorial improvements.
N
There are two ciphersuites based on ristretto255 and decaf448, and three ciphersuites corresponding to three NIST curves and the corresponding security levels. Also, there's a range for experimental or additional ciphersuites.
N
Some other compatible implementations are in these languages: mainly in Go; there are some implementations in Rust, and also in C and C++, which is supported in BoringSSL. Well, finally, I will open it for questions, and any feedback is welcome. Thanks.
B
Thanks a lot. Any questions?
J
So what are these problems? Mainly, it's difficult to choose secure parameters, or a combination of secure parameters, and that's very difficult. Also, it's very difficult for the libraries to establish secure defaults, so as to have it secure in the future: it's easy to make it secure one time, but it's a little bit more difficult to make it future-proof. And it's difficult to make the libraries misuse-resistant, so that everybody can use the great algorithms we just heard about before this presentation.
J
And so there are several parameters that you have to choose as a developer. For AES, it's: what key length do I choose, which block mode do I choose, do I need a padding algorithm, which tag length do I choose, which nonce length, and how do I generate the content of these parameters? And then the next question: how can I change these settings in the future and still decrypt everything I encrypted before? A second example:
J
This is about Argon2, which is the more recent standard, which we already heard something about today as well. There are a lot of other parameters: the format of the input string, the length of the nonce, the number of threads, the length of the tag, the number of bytes, the number of passes; and then there are even three different types of Argon2. How do I choose a secure combination, or a sufficiently secure combination, of these algorithms? And what is great about this standard?
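For a sense of what a "secure default" for these parameters might look like, the sets below are written from memory of the two general-purpose recommendations in RFC 9106 (Argon2id with one pass over 2 GiB, or three passes over 64 MiB); treat the exact numbers as an assumption to verify against the RFC before relying on them.

```python
# Sketch of Argon2id parameter sets in the spirit of RFC 9106's two
# general-purpose recommendations (verify against the RFC before use).
# m_kib = memory in KiB, t = number of passes, p = parallelism (lanes).
FIRST_CHOICE = {"variant": "argon2id", "t": 1,
                "m_kib": 2 * 1024 * 1024, "p": 4,   # 2 GiB of memory
                "salt_len": 16, "tag_len": 32}
SECOND_CHOICE = {"variant": "argon2id", "t": 3,
                 "m_kib": 64 * 1024, "p": 4,        # 64 MiB of memory
                 "salt_len": 16, "tag_len": 32}

# The trade-off behind the two options: if you cannot afford the
# memory, compensate with more passes over a smaller region.
assert SECOND_CHOICE["m_kib"] < FIRST_CHOICE["m_kib"]
assert SECOND_CHOICE["t"] > FIRST_CHOICE["t"]
```

Publishing named sets like these, rather than raw knobs, is exactly the kind of default configuration the speaker is arguing for.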
J
It already has a section which clarifies this a little bit. Yet, in practice, this is still very difficult for developers, to choose any of this; so, in practice, developers usually rely on example code which they find through Google, and then, most often, they find insecure example code, and they are still not experts.
J
This is what I'm going to come to here. So I'm talking about software developers' expectations versus the algorithm flexibility that we want to have with all that we develop in the CFRG. My assumption is that developers are not security experts, and this secure crypto config I'm talking about is also not for the experts. There should always be the algorithm flexibility and the performance-tuning things that you can tune to make it really perfect, but I'm really talking about the average Joe, the developer.
J
That is, someone who is not a security expert. They expect, because that's what they learn and that's what we describe to them, that symmetric encryption uses a key and a plaintext, and that's it. But, as I showed you in example one, you need to specify a lot more parameters to really use it in practice, and the same holds for hashing.
J
You only expect to put in a plaintext, not to select various different parameters for Argon2. Okay, and what is the secure crypto config now, in my proposal that we work on? The secure crypto config is comprised of three things. First, it's a process that is repeated at least every two years (that's my proposal), in which a new set of default configurations for standardized cryptographic primitives is published in a standardized, machine-readable format.
J
Then, a secure crypto config interface is described that has common APIs to use cryptographic primitives in software, offered by the standard libraries. It's always easy to create a separate new library, but that would become outdated very soon, so the best thing is always to improve the standard libraries. And finally, the third part is to use COSE; at least, that's what I found.
J
It could be the best choice for the format, because you can store the parameters in the output of the cryptographic primitive and then, later on, use these headers and keys and parameters from the output as input, and derive everything from there, instead of having it hard-coded in your application code where you can't change the parameters anymore. Below, you see the links where I have worked on this already, together with a student. And what does the process look like? To make it a little bit more visual:
J
We see at the top various organizations; these are just some of them, and of course it can also be other participants. This was an early suggestion from me: the BSI, NIST, IETF, IRTF, CFRG, you name it. They come together and agree on a certain parameter set for different classification levels. This is optional; I have a question afterwards about this.
J
How can I use AES in Java, for example? This is something that has to be done by cryptography library developers. And this is the previous draft: as you see on the right in this description, I previously used the Cryptographic Message Syntax (CMS), but someone at the presentation told me that COSE would be a better choice.
J
So I changed it for the secure crypto config draft. And when these libraries are updated, then applications can obviously use these libraries in an easy way and only specify their required classification level, or derive that from the document. Maybe they want to encrypt or hash or sign, and then they automatically use the correct algorithms from the library, and they don't have to choose all these parameters.
J
It would be best to use the COSE IANA registry, because they already have names for these combinations, and at least it's important that they really specify all required parameters, so developers only need to put in the key and the plaintext, and that's it, in the case of encryption.
J
Yes, in the output again; it's possible as input again, because if you want to decrypt, you need to know what algorithm was used, what the nonce length is, etc., and you need to get all this information from the output. And this is currently not there in any implementation I know of, at least not in the standard implementations.
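The "parameters travel with the output" idea can be sketched with a password-hashing example. Here scrypt from the Python standard library stands in for Argon2, and the `$`-separated encoding is a made-up format in the spirit of PHC strings: every parameter is embedded in the output, so verification parses them back instead of relying on the library's current defaults.

```python
import base64
import hashlib
import os

def _b64(b: bytes) -> str:
    return base64.b64encode(b).decode()

def hash_password(password: str, n: int = 2 ** 14, r: int = 8,
                  p: int = 1) -> str:
    """Hash with scrypt and embed every parameter in the output string,
    so later changes to the defaults don't break old hashes."""
    salt = os.urandom(16)
    dk = hashlib.scrypt(password.encode(), salt=salt, n=n, r=r, p=p,
                        dklen=32)
    return f"$scrypt$n={n}$r={r}$p={p}${_b64(salt)}${_b64(dk)}"

def verify_password(password: str, encoded: str) -> bool:
    """Recover the parameters from the stored string and recompute."""
    _, scheme, n_s, r_s, p_s, salt_s, dk_s = encoded.split("$")
    assert scheme == "scrypt"
    n, r, p = (int(s.split("=")[1]) for s in (n_s, r_s, p_s))
    salt, dk = base64.b64decode(salt_s), base64.b64decode(dk_s)
    got = hashlib.scrypt(password.encode(), salt=salt, n=n, r=r, p=p,
                         dklen=len(dk))
    return got == dk
```

Because the parameters travel with each stored hash, the library can raise its defaults in a later release and everything hashed under the old defaults still verifies, which is the backwards-compatibility property described here.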
J
There are libraries, but they would quickly become outdated, so I'm not considering those. And what could this look like in practice? I already worked on an interface with the student, and you see just a quick view here: for encryption, you only need to specify the key and the plaintext, and the same for decryption.
J
Okay, coming to a summary: what does the secure crypto config offer? It defines common use cases for cryptographic operations. Then it offers a standardized configuration set for all required parameters of widely available cryptographic algorithms, for each common use case mentioned above. And then it establishes a process to continuously provide updated configuration sets, based on existing standardization processes of the IETF.
J
Also, the SCC then allows cryptographic algorithms and libraries to use these standardized output formats for cryptographic operations. Also, the SCC allows cryptography libraries to change their defaults, because it's all put in the output format, so they can change their defaults in the future to stay secure; right now, there's no easy way to do that without putting everything in the output format by default, and this would ensure backwards compatibility. Yes, and there are some open questions which I want to pose here, just quickly: how to proceed with standardization?
J
Is this even a good idea, to make this easier for developers? Is COSE the appropriate format to use for this? Which is the appropriate IANA registry to specify the algorithms? I think COSE is already very good. What are common cryptography use cases for developers that should be supported by the SCC? And what are appropriate security levels, because you have to select some parameters for a certain use case?
J
And what format should be used for the cryptographic signature of the published configurations? Because it should also be secure and not be changed by someone. And there are more difficult questions, but they might come afterwards.
D
Thanks for the presentation. Oh, hang on a second; there we go. I think there are some interesting ideas here, and I think you're onto something important in saying that it's hard to pick the right parameters, so I think some process for getting parameter recommendations would be valuable. I think I'm less sanguine about the idea that you're going to have machine-readable and machine-updatable profiles. That seems like a real recipe for interop failures.
D
I see what you're saying about encoding them on the wire, but the fact is: imagine we have two COSE implementations, one of which is just implemented the normal way and doesn't allow this secure update, and one of which takes this thing. So now, basically, you know, the real probability is you generate parameters which the other person can't read, because who says the other person supports every possible AES key size, for instance?
D
So, you know, we spent a lot of time trying to have manufacturers interoperate, and this would basically break that. So I guess I think the part about trying to specify good parameters is a good plan, but I think the part about trying to have it be machine-readable and self-updating is overreach.
B
Thanks for the feedback. Any other comments or questions?
O
Yeah, hi, I'll just mention (oh wow, let's turn off the video, that looks cool): Red Hat is doing a lot of work on standardizing crypto profiles. I don't recall if it's in Red Hat or Fedora, but it might be worth taking a look at that to see the formats and what they did; they were pushing to apply it to all crypto-using programs.
O
Yeah, I can post it; it'd be a Google search query, but I'll see what I can find. Okay, Bob Moskowitz just said: Fedora 33, he's got it in beta.
A
Then, maybe some other business. We have some discussions in the chat; I would like to comment on two questions raised in the chat. First of all, about the SPAKE2 draft: there was a question from Jonathan Holland about moving on with the SPAKE2 draft. The situation here is that the SPAKE2 draft predated the PAKE selection.
A
A lot of ideas about how to implement SPAKE2 inside TLS, for example, were developed before the PAKE selection process. So, after some discussions, we decided not to stop the SPAKE2 draft, and we discussed this at a CFRG meeting.
A
There is a disclaimer in the SPAKE2 draft now, stating that the draft is not a result of the selection process.
A
So we understand the question, but it was a result of a lot of discussions between the chairs. And the second question, from DKG, was about the order of the presentations.
A
Slides recalling what OPRFs are would have been useful before OPAQUE; but since we thought that it would be just a status update, slides with definitions, etc., wouldn't be needed before OPAQUE, so we decided to start with the two new documents, OPAQUE and CPace, which were adopted after the selection process. But I agree, and we'll try our best to address this concern in the future.
L
No comments? Thanks, everybody, for coming out at this time, even though we're not physically in the time zone.