From YouTube: IETF110-CFRG-20210312-1200
Description
CFRG meeting session at IETF110
2021/03/12 1200
https://datatracker.ietf.org/meeting/110/proceedings/
A: I hope everyone can see the slides and that you can hear me. This is CFRG, and we'll start with the group status. Alexey Melnikov, Nick Sullivan and I are the chairs. This session is being recorded automatically. We have a minute taker — thank you very much to Alexey. We have a Jabber relay, thanks to Jonathan Holland.
A: On today's agenda we have an update by the OPRF team — about RSA blind signatures, about FROST, about key-committing AEAD, and about some possible modification of the OPRF that may be needed for Privacy Pass — and, at the end, any other business or agenda items.
D: Yes, now we see the slides.
A: Okay, so we'll proceed, not in full screen — sorry. So, to the slides, with some administrivia first.
A: Only yesterday there was a new version, so we hope that Argon2 will be published before the next meeting, because it has undergone conflict review and there was only a small comment; I hope that every concern is addressed. Now to the HPKE document.
A: The draft has been updated and the last call has finished successfully; now we are waiting for the shepherd's review — which is my review.
A: We'll have presentations about most of them today. The first draft has been adopted and will have a presentation today. The Crypto Review Panel is still working — it is still providing good reviews, and is still happy to do reviews for the CFRG or for any other requests from the ISE or IETF groups.
F: So — you can hear me? Yes? Okay. So, for HPKE: my apologies, I know this has been waiting on me for a while. I have not forgotten; the pandemic is rather getting in the way, but I hope to get to it shortly. For the Argon — is it Argon2? I'm forgetting the acronym — if the chairs could just confirm that the changes are satisfactory, then I'll get that moving quickly.
A: Argon2 was updated only yesterday, so we hope to see it in a couple of days. And about the HPKE comments: thanks a lot for the many comments from your side, because I think a lot of improvements to HPKE were made after your first review. So I hope that no more big issues with HPKE will occur before the next stage.
A
Now
we
only
say
that
screen
share
has
been:
oh,
yes,
we
see
it
so
please
jail
kangaroo
12!
Please
start.
K: Okay, thank you. So, yes, my name is Gilles Van Assche; I work for STMicroelectronics in Belgium, and I'm one of the editors of the draft on KangarooTwelve. I would like to thank you for this opportunity, and I would like to give a brief status of the draft. Can you go to the next slide, please?
K: So, the differences between SHAKE128 and KangarooTwelve — the two main differences — are: first, we reduce the number of rounds from 24 to 12, and second, we added a mode of operation that exploits parallelism, and in particular SIMD instruction sets. The result is a design that is at the same time secure and fast. It's not my purpose to discuss again why it's secure and fast, because I think Benoît did that a few times already, so that's not the goal.
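Like SHAKE128, KangarooTwelve is an extendable-output function (XOF). KangarooTwelve itself is not in Python's standard library, but `hashlib`'s SHAKE128 illustrates the same kind of interface the draft builds on — arbitrary-length output, where shorter outputs are prefixes of longer ones (a sketch for comparison only, not K12 itself):

```python
import hashlib

def xof(data: bytes, out_len: int) -> bytes:
    """Extendable-output hashing with SHAKE128. KangarooTwelve exposes the
    same kind of interface: request as many output bytes as you need."""
    return hashlib.shake_128(data).digest(out_len)

short = xof(b"hello", 16)
long = xof(b"hello", 32)
# A defining XOF property: a shorter output is a prefix of a longer one.
assert long[:16] == short
assert len(xof(b"", 64)) == 64
```

KangarooTwelve keeps this interface while halving the round count and adding a tree-hashing mode for parallelism.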
K: I'm just going to give you a couple of hints, starting with security. The key point is that KangarooTwelve reuses the cryptanalysis of reduced-round Keccak, or SHA-3: the round function is exactly the same, which means we can rely on more than 10 years of cryptanalysis by the community. For instance, as of today, the best collision, preimage and second-preimage attacks against SHAKE128 work only up to five rounds.
K
The
same
goes
for
kangoo
12.,
so
about
the
speed
it's
faster
than
md5
sha
one
or
chat
two,
especially
when
the
processor
has
as
ind
instruction
sets
and,
for
instance,
if
you
take
the
skylake
x
or
cascade
lake
architecture,
then
can
go.
12
tops
at
about
half
a
cycle
per
byte,
which
is
roughly
seven
times
faster
than
sha
one
on
that
platform
and
well
in
general,
it's
it's
faster
than
chat
two
and
chat
three.
K: Can you go to the next slide, please? Thank you. So, here is a log of the draft: it became a working group item on August 29, 2019. Then we fixed some typos.
K: We had the chance of getting some reviews from the Crypto Review Panel, from Formasso and Thomas, and we integrated their comments in versions 02 and 03, together with the comments from Stephen Farrell. We also received some requests for clarification from Pascal, and these were integrated in version 04.
K: At least, we proposed something that would address it — simply one sentence to be added at the end of the document — but he hasn't confirmed yet whether that was okay for him. So that's the status, I would say. Globally, we've received positive feedback, but not that much feedback, especially after the last call. So we thought that at this point it would be a good idea to stress again our motivation for suggesting KangarooTwelve.
K: The first main arguments are about the implementation properties. We think these implementation properties are quite unique among hash functions, or symmetric crypto primitives in general. Specifically, our round function uses only bitwise degree-2 operations — XORs, ANDs and rotations — and this means that when it's implemented in hardware, the critical path is really short. This explains why Keccak is fast in hardware as well as energy efficient. But even if you're not interested in hardware implementation per se —
K: — a CPU is, of course, a piece of hardware, so this may have an impact in the long run. To illustrate: if you take the AVX-512 instruction set, there is a new instruction that can perform any bitwise operation on three operands — you just specify the truth table of the operation. This is an example of an instruction that, I think, is possible thanks to the short critical path of such operations.
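The three-operand instruction he refers to is AVX-512's `vpternlogd`/`vpternlogq`, which takes an 8-bit immediate encoding the truth table of an arbitrary function f(a, b, c). As an illustrative sketch (the bit-ordering convention below, with the first operand as the high selector bit, is the one I believe the instruction uses), the immediate for Keccak's chi step, `a ^ (~b & c)`, can be derived like this:

```python
def ternlog_imm(f) -> int:
    """Derive the 8-bit immediate for a 3-input bitwise function f(a, b, c):
    bit (a<<2 | b<<1 | c) of the immediate holds f at that input."""
    imm = 0
    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                imm |= (f(a, b, c) & 1) << ((a << 2) | (b << 1) | c)
    return imm

# Keccak's chi step combines three lanes as a ^ (~b & c); with vpternlog,
# one instruction replaces the XOR + AND-NOT pair.
chi = lambda a, b, c: a ^ (~b & c)
assert ternlog_imm(chi) == 0xD2
```

This is why a short critical path of degree-2 operations maps so directly onto such an instruction: the whole per-lane step is one table lookup in hardware.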
K: Another important implementation aspect is when one has to protect against special kinds of side-channel attacks, like differential power analysis (DPA). We know that these protections are quite expensive, but the degree-2 operations make them less expensive than on ARX designs, or even the AES.
K: The second set of arguments is more, I would say, economical — it's about reuse of things that were already done. First is cryptanalysis: cryptanalysis takes significant time and effort, and KangarooTwelve can leverage these investments. We can reuse all the cryptanalysis done on Keccak since 2008.
K: Another point is the reuse of implementations. Obviously, we can reuse code from SHA-3 and adapt it to KangarooTwelve quite easily, but I also have in mind a recent trend of vendors adding hardware assistance for different standard hash functions. For instance, Arm has defined some SHA-3 instructions — optional instructions — in their 64-bit processors. KangarooTwelve can benefit from these instructions and, as a matter of fact, they are present in the latest Apple A14 and M1 processors.
K: And concretely, if you're interested in some figures: based on benchmarks from Andy Polyakov, I expect that KangarooTwelve can run at less than one cycle per byte on the M1; and, to compare apples to apples, it would be at least twice as fast as hardware-accelerated SHA-512 on the same platform. So, to sum up, I think these points — at least, they convince me — show that KangarooTwelve would be something interesting and valuable for the community. And actually, that's all I wanted to say.
A: Okay, thanks a lot. Thank you. Next, please: Björn Haase, with the presentation about the current status of the CPace draft. It's the balanced PAKE that was the winner of the PAKE selection contest last year, and the CPace draft was updated recently. So, Björn, please tell us about its current status.
L: So, hello — you hear me? Yes? Okay, so I'll take you through the slides. I'd like to give you an update on the current progress regarding the CPace draft.
L: Basically, the news is that we have now completed the security analysis for CPace in its different variants, and we have published it in a paper which is online and which has also seen reviews — so we are convinced that the new results in the paper are sound.
L: The results, in a nutshell, are that the different CPace protocol variants that we consider in the current draft provide strong security guarantees. We have composability and adaptive security under relaxed assumptions — the strong simultaneous Diffie-Hellman assumption, which is less than we had previously; it's a decisional Diffie-Hellman-type assumption.
L: Previously, the maps had been considered as a random oracle; in our analysis we were able to remove this requirement and extract the actual requirements that we have to impose on the mapping primitive.
L: When trying to integrate CPace, we have identified three ecosystems and chosen three tailored variants. The first is implementations on single-coordinate ladders such as X25519; the second ecosystem is implementations that use group abstractions such as ristretto; and the third is implementations that use conventional short-Weierstrass approaches, for instance NIST P-256.
L: These are secure also for CPace — and that's the case. There's also no need for explicit additional point-verification code, since X25519 and X448 have twist security; we just need to check for the neutral element. So existing X25519 and X448 code is suitable for CPace. Similarly for ristretto and decaf: we would be using the built-in uniform map construction that comes with the abstraction, and the slightly non-uniform sampling of secret scalars that is commonly used there is also no problem for CPace; point verification is enforced by the encode and decode primitives.
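To make the overall shape of CPace concrete — and only the shape; this toy is not the draft's construction, which uses X25519/ristretto/P-256, a proper map-to-curve, and a transcript-hashed session key — here is a sketch in a stand-in group (squares modulo a small safe prime). Both parties derive a password-dependent generator from the password-related string PRS and channel identifier CI, then run a Diffie-Hellman exchange on it. All names here (`map_to_group`, `cpace_side`) are illustrative, not from the draft:

```python
import hashlib
import secrets

# Toy group: the squares modulo the safe prime p = 2q + 1 form a group of
# prime order q. Tiny and insecure -- for illustrating the flow only.
q = 1019
p = 2 * q + 1  # 2039, prime

def map_to_group(data: bytes) -> int:
    """Hash to a group element by squaring (toy stand-in for map-to-curve)."""
    h = int.from_bytes(hashlib.sha256(data).digest(), "big") % p
    return pow(h, 2, p)

def cpace_side(prs: bytes, ci: bytes):
    """One CPace party: derive the password-dependent generator, pick a scalar."""
    g = map_to_group(b"cpace-toy" + prs + ci)  # generator depends on PRS and CI
    x = secrets.randbelow(q - 1) + 1           # uniform secret scalar in [1, q-1]
    return g, x, pow(g, x, p)                  # (generator, secret, public share)

prs, ci = b"password", b"A|B|session1"
g, xa, ya = cpace_side(prs, ci)  # party A
_, xb, yb = cpace_side(prs, ci)  # party B
# Each side raises the peer's share to its own secret: both get g^(xa*xb).
assert pow(yb, xa, p) == pow(ya, xb, p)
```

The point about uniform scalar sampling above shows up here too: `secrets.randbelow` gives the uniform sampling that the analysis relies on.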
L
So
this
will
make
cpase
implementations
most
straightforward,
found
foreign
sample
implementation
online,
which
we
plan
to
use
for
for
deriving
test
vectors
for
short
wire
stress.
We
would
be
using
the
non-uniform
encode
to
curve
construction
from
the
hash
to
curve
draft
and,
of
course,
in
this
subsystem,
conventional
point.
Verification
is
mandatory
and
also
some
care
has
to
be
taken
when
she's
using
existing
libraries,
which
target
ecdh
only
at
the
moment,
because
for
cpas
one
should
really
make
sure
that
the
secret
scholars
are
sampled
uniformly
so,
for
instance,
rejection
something
for
the
session
keys.
L: Regarding implementations, there is not much new from our side, but we found several implementation prototypes in Go and Rust online, which we are looking at at the moment. Our main topic at the moment is that we're working on a reference implementation for the short-Weierstrass variant.
L: But in addition to the work that we are doing as an editor team, we are actively looking for partners who would be interested in preparing a CPace integration into different crypto libraries.
L: We believe that it might, for instance, be an interesting, instructive student project.
L: If somebody raised their hand and said, "okay, we think that might be an interesting topic and we'd like to join" — we are contacting different teams and universities, but if anybody is interested, you would be welcome to get in touch with us. We think that having others implement it would be a good real-world test for the readability and completeness of the draft, and would of course enable interoperability tests.
L: So, summing up, we think that the main news is the security analysis. It might be interesting for the OPRF team to have a look at the proof strategy that we chose for CPace, because in my perception, dealing with the map is one of the more difficult aspects — which is also a topic for the OPRF constructions that use different elliptic curves.
L: Secondly, regarding the hash-to-curve draft: we have reviewed it and, in our perception, I think it's very well written and very clear. It would be nice to have the exact requirements that we need, security-wise, for CPace integrated — basically the invertibility property, which is already there in a subsection but might be made a bit more prominent. That's on our wish list.
L: Okay, and finally, I'd like to invite — I think it's a good idea to join forces on reference implementations with the VOPRF and hash-to-curve teams. I've already been in touch with Christopher Wood, but I'd like to ask anybody to give us a short note when you're working on reference implementations using the maps; we would keep you informed, and would be happy if you keep us synchronized on your advances there.
L: So, indeed, the present protocol specification in the draft integrates the identities, as part of the channel identifier, into the protocol. We have discussed that in the editor team, and we currently see two ways to deal with it. In applications where we have the identities established beforehand —
L: — the straightforward way is to integrate them into the channel-identifier field at the beginning. And in applications where they are only established as part of the —
L: — protocol, one could not easily integrate them at the beginning — and at this point you lose the fact that the identities are authenticated as well — but one could insert the identities in a final key-confirmation round, as is done, for instance, by TLS. So in our opinion it's not a real problem, as they will become part of the authenticated hash of the overall transcript in the higher-level construction.
L: So that's the present state of this discussion, and there has also been some feedback on the list. In the current version of the draft, the identities are marked as recommended, but not strictly mandatory — and, as part of the proof, of course, everything which is not integrated as part of the channel identifier will not be authenticated.
M: Hi — you mentioned that it would be desirable not to clear the cofactor in the hash-to-curve draft. In which cases is this an issue?
L: The advantage is this: if you use a map such as Elligator, you receive the result of the mapping operation in affine coordinates. If you then start by removing the cofactor — you basically multiply by the cofactor — you receive the result in projective coordinates, and you need an additional inversion operation to get it back to affine coordinates, so that you can use the most efficient scalar multiplication.
N: With Elligator specifically — well, okay, we don't need to get into that here. I do worry what happens when someone comes along and defines another elliptic curve. How much work is there? Do you need to go write a new spec for this new curve, or is it generic enough that, if it's in one of the categories of the hash-to-curve draft, things will just work and people will know what to do — and it's really only the examples where these three integrations matter, and the test vectors?
L: So, basically, the news is: all of the different constructions are secure and there's no risk security-wise — it's just ease of implementation. For instance, if you're using ristretto, you probably won't be using the map which is currently defined in the hash-to-curve draft, because they have a different construction and a different built-in primitive, which does not match the implementation in the hash-to-curve draft. So security-wise we don't have a problem; it's just ease of implementation.
B: All right, thanks. Yeah, so this is just an update on OPAQUE — next slide, please. Thanks. I've already kind of given it away, but OPAQUE, for those who are unfamiliar, is basically an asymmetric password-authenticated key exchange protocol that takes a number of cryptographic primitives, glues them together, and builds something quite nice. Next slide, please.
B: The protocol basically runs in two phases. There's an offline registration phase, wherein clients use their password to register public-key credentials with the server and store them there; these are then used later on in an online phase for authentication, wherein clients use their password again to recover the credentials and then perform a mutually authenticated key exchange. There are a number of different AKEs and a number of different credentials you could use, but this document takes a sort of opinionated stance and specifies a single AKE based on 3DH, with sufficient accommodations for future authenticated-key-exchange protocols if we want them later on — so perhaps something based on SIGMA, or on HMQV. The delta between what's there currently for 3DH and what's needed for SIGMA, HMQV or similar variants is quite small, and really only varies in how the AKE-specific piece does authentication with the public-key credentials, and in how the key-exchange transcript is computed.
B: So the rest of OPAQUE — the core — remains the same, and should be shared across all instantiations. Next slide, please.
B: So, between the last meeting and now, there have been a number of updates to the draft — some major and some minor. The major piece is that the whole thing was massively refactored, based on feedback from Eric Crockett at AWS, to make it clear what is the offline phase, what is the online phase, and what is essentially to be done for the online phase.
B: That is, both the credential-recovery step as well as the authenticated-key-exchange step — so hopefully it's much clearer now. We also previously had sort of an odd balance in terms of things that were parameterized and things that were pinned, or baked, into the document.
B: We had this hard dependency on HKDF and HMAC, and so on. So, to accommodate particular implementations that may want to use different MACs — maybe CBC-MAC or something like that — we've now abstracted over all of the underlying primitives, such that anyone can plug in things that conform to these various interfaces, and the protocol should work and be safe to use.
B: We've also done a lot to prune and fix the wire format of the protocol, so that implementations have a less difficult time parsing things, serializing things, and so on. With the exception of, I think, two fields off the top of my head — the identifiers that are used for client and server, and the arbitrary application info that's exchanged during the authenticated-key-exchange step — everything else is a fixed size and easily parsable, so that should help. Some minor things as well:
B: Previously, the transcript for 3DH was minimal in terms of what it actually included, for security. This was sort of an implementation headache, in that there were two different transcripts — one for key derivation, and another for computing the MAC in the key exchange for key confirmation.
B: We also added a suggested password-file serialization format. The password file is the thing the server stores, which has the user's — the client's — credentials in it. The purpose of this is that OPAQUE can be treated as a black box on the server side that outputs password files, which are written to disk as just blobs of bytes, then read back from disk and put into the relevant APIs to be parsed.
B: We use these test vectors to interop with several different implementations. There's a reference implementation in Sage — based on the underlying Sage reference implementations for the OPRF as well as hash-to-curve — and that has achieved interop with ristretto and with, I believe, P-256 and maybe another curve, and with a separate Go implementation. I'm aware of others on the way as well that haven't yet achieved interop but are quickly approaching. So things are looking good on the implementation front.
B: Right. So there are basically two big-ish open issues right now and, I think, we have clarity in terms of what we want to do for both of them, so I'll talk about both of them now. The first is how we actually store private keys.
B: Currently in the document, the credentials as stored on the server always encrypt a private key that the client provides to the protocol. This is interesting because, if you're a user interacting with the server and you know the username and the password to use, you can interact with the server, get the credentials, and recover the private key. In doing that, what you have to do is run an OPRF protocol with the server, and that OPRF output is used to derive a key.
B: That key is then used to decrypt the credentials. Basically, if you know the right inputs, you can compute this OPRF output, decrypt your private key, and then use that to authenticate.
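The blinded-OPRF exchange just described — the client blinds its password, the server evaluates under its key without seeing the password, and the client unblinds and derives a key — can be sketched in a toy prime-order group. This is not the draft's OPRF (which is the CFRG VOPRF construction over ristretto255 or P-256); all parameters and names here are illustrative:

```python
import hashlib
import secrets

# Toy prime-order group: the squares modulo the safe prime p = 2q + 1.
# Tiny and insecure -- for illustrating the message flow only.
q, p = 1019, 2039

def hash_to_group(x: bytes) -> int:
    return pow(int.from_bytes(hashlib.sha256(x).digest(), "big") % p, 2, p)

# Server's long-term OPRF key.
k = secrets.randbelow(q - 1) + 1

# Client: blind the password before sending anything.
pwd = b"correct horse"
r = secrets.randbelow(q - 1) + 1
blinded = pow(hash_to_group(pwd), r, p)       # H(pwd)^r

# Server: evaluate on the blinded element; it learns nothing about pwd.
evaluated = pow(blinded, k, p)                # H(pwd)^(r*k)

# Client: unblind with r^-1 mod q, then hash down to the key that will
# decrypt (or, in internal mode, directly derive) the credentials.
unblinded = pow(evaluated, pow(r, -1, q), p)  # H(pwd)^k
assert unblinded == pow(hash_to_group(pwd), k, p)
oprf_output = hashlib.sha256(pwd + unblinded.to_bytes(2, "big")).digest()
```

Note that the final value depends only on `pwd` and `k`, not on the per-session blind `r` — which is exactly why "if you know the right inputs, you can compute this OPRF output".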
B: So the observation was made that, rather than storing any encrypted version of the key on the server at all, why not just use the output of the OPRF — which protects the key as it's stored — to derive a key inline, i.e., to deterministically derive a key. That would allow applications without any sort of external key-generation function to just use OPAQUE out of the box, derive keys, and have one less input.
B: So the proposal here is to shuffle things around slightly, such that there are effectively two high-level modes: one which we're calling an internal mode, wherein keys are deterministically derived based on the output of the OPRF protocol; and another which is an external mode, wherein applications that have keys existing externally to OPAQUE can import them, have them stored in the credential file, and then recover them later on.
B: And this should accommodate, I guess, both types of applications — and potentially, if there are weird AKEs in the future that don't have nice deterministic key-generation functions, they can just use the external mode. Richard?
O: Just a quick clarifying question on this external mode. This seems like it complicates the API; it complicates the architecture a bit. Is there a reason that folks are — that someone's interested in using external keys here? Because otherwise, it seems like an unnecessary complication to have another interface here.
B: So, the primary motivation was trying to keep this future-proof: potentially there might be AKEs in the future wherein the actual deterministic key-derivation function is not specified. I agree that it is a bit more complicated and, no, we don't currently have a use case in mind specifically for 3DH right now.
B: We know how to deterministically derive keys for these groups, and we could get away with just using internal mode; but external mode is there specifically and solely for future use cases wherein that sort of derivation is not clear.
B: Yeah, we discussed that. The details are subtle enough in terms of how you actually protect the envelope, because you need certain requirements for how you encrypt the envelope — effectively, you need a key-committing AEAD.
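"Key-committing" means a ciphertext should only be accepted under the one key that produced it, so a wrong password cannot silently decrypt the envelope to a valid-looking second plaintext. The OPAQUE draft's envelope is its own design; purely as an illustrative sketch of the idea (all names and the construction itself are this example's, not the draft's), one folklore approach is to bind a MAC to the key and check it before decrypting:

```python
import hashlib
import hmac

def seal(key: bytes, plaintext: bytes) -> bytes:
    """Toy committing encryption: XOR-stream from SHAKE256 plus an HMAC tag
    that commits to the key. Illustrative only -- not the OPAQUE envelope."""
    stream = hashlib.shake_256(b"enc" + key).digest(len(plaintext))
    ct = bytes(a ^ b for a, b in zip(plaintext, stream))
    tag = hmac.new(key, b"commit" + ct, hashlib.sha256).digest()
    return tag + ct

def open_(key: bytes, sealed: bytes) -> bytes:
    tag, ct = sealed[:32], sealed[32:]
    # The tag is keyed by `key` itself, so a second key cannot produce a
    # ciphertext that also verifies: wrong keys are rejected outright.
    expect = hmac.new(key, b"commit" + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expect):
        raise ValueError("wrong key or corrupted envelope")
    stream = hashlib.shake_256(b"enc" + key).digest(len(ct))
    return bytes(a ^ b for a, b in zip(ct, stream))

sealed = seal(b"k1" * 16, b"client private key bytes")
assert open_(b"k1" * 16, sealed) == b"client private key bytes"
```

A generic AEAD such as AES-GCM does not, by itself, give this property, which is the subtlety being discussed.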
B: We kind of work around that in a certain way — we can talk about that later. So we felt it was appropriate to deal with it now, rather than delegate it to future specifications that might be tempted to do that encryption incorrectly. — Yeah, thanks, makes sense. — Thanks. Next slide, please.
B: An attacker can make certain username and password guesses, and learn whether or not a user exists — and then, if a user does exist, whether the password is correct or not. Compare this to other authentication systems, wherein it's possible for the server to reply indistinguishably whether the user doesn't exist or the password is wrong.
B: This is somewhat of a regression, and there are two things we can do to mitigate it. The first is to specify optional server-side behavior wherein, if a user doesn't exist, the server generates, quote, a "fake" response — a deterministic response, but a fake one. This means that if an adversary, given a snapshot of all the users at a point in time, queries the server for a user that doesn't exist, it will still get back an answer.
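The "deterministic fake response" idea can be sketched as deriving a stand-in record from a long-term server seed, so repeated queries for the same unknown username always look the same. The helper names here are hypothetical, and a real server would shape the fake record exactly like a genuine credential record:

```python
import hashlib
import hmac

# Long-term secret held only by the server (illustrative value).
SERVER_SEED = b"long-term secret seed held only by the server"

def fake_record(username: bytes) -> bytes:
    """Deterministic stand-in credentials for a non-existent user. The output
    depends only on (seed, username), so repeated queries are consistent."""
    return hmac.new(SERVER_SEED, b"fake" + username, hashlib.sha256).digest()

# Same unknown user -> same response every time; different users differ.
assert fake_record(b"mallory") == fake_record(b"mallory")
assert fake_record(b"mallory") != fake_record(b"trudy")
```

The weakness noted next follows directly: if "mallory" later registers for real, the response for that username changes, and an attacker comparing before and after learns something.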
B: The problem with this is that if, later on, this user is added to the system — or if an existing user's credentials change under the hood, or something — the server's response changes, and the adversary can learn some information.
B: So the second option is to augment the protocol slightly to make this non-optional, wherein the server effectively encrypts its entire response to the client. What this allows the server to do, in the case of a non-existent user, is to just generate random bytes — random garbage. So, from the attacker's perspective, they effectively learn no useful information as to whether or not the username exists,
B: whether the password was right or wrong, and so on. And we seem to have found a fairly minimally invasive way to do the second option, which requires minimal storage on the server, and we think it's pretty reasonable to implement; the details are in the linked issue.
A: One question from my side regarding the client-enumeration issue. Could you please go back one slide, to issue number 84? Yes, thank you — oh sorry, 22. Yes, about the next items, thank you. So, about the second option: Chris, could you please tell us more — do the changes in the protocol have any influence on the security proofs or security assessments, et cetera? Or are they purely technical changes that do not affect the security proof at all?
B: Yeah, the changes don't affect the security proofs — Hugo has confirmed that. This is more of, I guess, an application-layer attack sort of thing.
B: Yeah — I don't know how much detail you want about the particular change; I'll simply say that things —
C: — are unaffected. That's good enough, if you say that.
M: ...an input that is only known by the client. In this sense the protocol is oblivious, because the client only learns the output without learning anything about the server key and, at the same time, the server doesn't learn anything about the client's input and output. Another mode is the verifiability mode of this construction, in which the server also gives a proof to the client that the output was actually computed using the key k, and the client has the ability to verify that the proof is correct.
M: Right — so this is the list of the latest changes made in the current version. The first two come from removing one of the inputs, which was trying to enforce domain separation as part of the input; we give a recommendation for that. Issue 239 is some updates on the proof generation — basically just updating the interface, in order to have a clear interface for the discrete-logarithm proof.
M: The last issue there is about using the SHAKE function for the decaf group. This happens because implementations for Curve448 are likely accompanied by SHAKE, so it makes sense to change from SHA to SHAKE. Next slide, please.
M: Yeah, so there are some current issues that we're discussing right now. One proposal is a verifiable mode which doesn't use zero-knowledge proofs at all; this construction was published on ePrint (2020/72). Basically, no proof is transmitted, and it only introduces changes on the client side — the server still computes
M: this scalar multiplication, k times t0. Basically, on the left is what is in the current mode, and on the right is the new proposal. It adds one additional point that can be used as a mask, and this can be cleared using the public key of the server. So, in order to verify a set of tokens, or a set of —
M: Next slide, please. There's another, very recent issue — or warning, let's say — about how to do blinding. Okay, so, to set some notation: we work in a group in multiplicative notation. Basically, the main result of the paper being pointed to is that exponential blinding is safe, but there are some use cases in which multiplicative blinding could be unsafe. So there is one proposal on how we can depart from —
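The two blinding styles under discussion can be sketched side by side in a toy group (squares modulo a small safe prime, multiplicative notation as in the talk; parameters illustrative and insecure). Exponential blinding sends H(x)^r and unblinds with r^-1; multiplicative blinding sends H(x)·g^r and strips the mask using the server's public key pk = g^k. Both recover the same PRF value H(x)^k — the paper's point is about the *security* of the second style in some settings, not its correctness:

```python
import hashlib
import secrets

q, p = 1019, 2039          # squares mod safe prime p = 2q + 1: order-q group
g = 4                       # a square != 1, hence a generator of the order-q group

k = secrets.randbelow(q - 1) + 1      # server's OPRF key
pk = pow(g, k, p)                     # server's public key

h = pow(int.from_bytes(hashlib.sha256(b"input").digest(), "big") % p, 2, p)

# Exponential blinding: send H(x)^r, unblind the reply with r^-1 mod q.
r = secrets.randbelow(q - 1) + 1
exp_result = pow(pow(pow(h, r, p), k, p), pow(r, -1, q), p)

# Multiplicative blinding: send H(x) * g^r; the reply H(x)^k * g^(r*k)
# is unblinded by dividing out pk^r = g^(r*k).
r2 = secrets.randbelow(q - 1) + 1
masked = (h * pow(g, r2, p)) % p
answer = pow(masked, k, p)
mul_result = (answer * pow(pk, (q - r2) % q, p)) % p   # multiply by pk^(-r2)

assert exp_result == mul_result == pow(h, k, p)
```

Multiplicative blinding is attractive because it avoids a variable-base exponentiation on the client, which is exactly why the "could be unsafe in some use cases" result is a warning worth flagging.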
B: Yeah, so this is a proposal for bringing something that people have used quite widely, and have been using for a long time, to CFRG — to see if there's interest in standardizing this quite useful tool. So, next slide, please.
B: So, motivation: Armando just gave a talk on VOPRFs, but I'll summarize the high-level functionality here. The VOPRF is effectively a multi-party protocol for computing this PRF F, given a server secret key k and a client input x, in such a way that the server learns nothing of the client's input x and the client only learns the output y.
B: And this tremendously simple functionality has found its way into a number of different privacy applications — Privacy Pass perhaps being the first one at larger scale — but there have also been a number of alternative applications that require a similar functionality, with different constraints. Tor has come up with a proposal for applying something that has the shape of a VOPRF for denial-of-service prevention, to help check that things connecting to relays are authenticated.
B: At some point there was also a proposal for using something like VOPRFs for click-fraud prevention in the W3C space. And VOPRFs don't quite work for some of these cases, primarily because, for them to be useful, you need either widely shared secrets — because only the owner of the private key can actually verify that the PRF was computed correctly — or you centralize access to this key and induce heavy load on the thing that is actually performing the VOPRF evaluations.
B: — so you potentially run into scaling issues. Next slide, please. So, blind signatures can help here. Blind signatures are functionally quite similar to VOPRFs, with the distinction that the outputs are publicly verifiable, which is quite nice: anyone who has an input message and a signature pair can verify it.
B
So
you
don't
have
to
share
secret
keys
quite
widely.
You
can
have
other
entities
in
the
system
verify
the
signatures
that
are
not
the
signer,
and
that
seems
like
a
release
check
two
of
the
item
or
two
of
the
applications
on
the
the
previous
slide
off,
and
there
are
a
number
of
blind
signature
schemes
that
exist
in
practice.
B
So we have blind Schnorr signatures, with a small asterisk because some of those are not in such great shape right now; blind BLS, based on pairings; and then, of course, Chaum's blind RSA; and there are others as well that I'm sure I've not included here. I don't mean for this list to be exhaustive. Next slide, please. In looking at the landscape, there are a lot of trade-offs and considerations that we might take into account. So, for example, take BLS, which is really quite elegant.
B
It's very easy to specify, very easy, I guess, to reason about, but it's based on pairings, which are perhaps not as widely distributed or as widely used as we would like. They're not in, you know, most major libraries like BoringSSL, ring, etc. That's sort of a barrier to adoption.
B
They also have more expensive signing and verification costs. Schnorr is also really nice: super lightweight, threshold friendly (see FROST; Chelsea's going to talk about that later).
B
It does have the downside that it requires three messages: first the server commits to a nonce, and then the client and server engage in the actual signature computation. Working around that
B
perhaps requires state on the server side or additional computation overhead. But perhaps more importantly, there's this pretty great paper from folks at Google and elsewhere, where they effectively broke a number of variants of blind Schnorr signature schemes, and I think it's at Eurocrypt this year where it was accepted. Let's say this gives us pause about blind Schnorr right now.
B
There is, though, a very promising sort of patch that was published last year that does seem plausible as a way forward, but given the state of affairs it gives us some concern. And then we have RSA. RSA is, of course, widely supported, for obvious reasons. It does have the downside of large signature sizes, larger than elliptic curves.
B
It is not easy to threshold; it's RSA; there are several cons. But some of the very nice properties about it are that it does run in a single round, so issuance is completely stateless on the server side. I can run the protocol without any additional state and scale as needed, which matters when considering the impact on the ecosystem of adopting something like blind RSA signatures.
B
So the protocol basically works as follows: the client gets, out of band, the server's public key, pkS in this particular case, and with this public key and a message it calls the Blind function to produce some blinded output. It sends that over to the server, who then signs it with its private key and sends that back to the client, who removes the blind to produce the signature, and can verify the signature in real time and drop it
B
if it's invalid. It's quite simple, and it's effectively a one-to-one mapping with the OPRF, or really the VOPRF, API, because the client needs knowledge of the server's public key in order to actually verify things.
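The blind/sign/unblind flow just described can be sketched with textbook RSA blinding. This is a toy illustration only, not the draft's construction: it uses a full-domain-hash-style encoding and tiny hypothetical parameters rather than the PSS encoding and real key sizes the draft discusses.

```python
import hashlib
import math
import secrets

# Tiny demo RSA key -- hypothetical toy parameters, NOT secure.
p, q = 1000003, 1000033
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))

def fdh(msg: bytes) -> int:
    # Toy "full domain hash": hash the message and reduce mod n.
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def blind(msg: bytes):
    # Client: multiply H(m) by a random r^e so the server never sees H(m).
    while True:
        r = secrets.randbelow(n - 2) + 2
        if math.gcd(r, n) == 1:
            break
    return (fdh(msg) * pow(r, e, n)) % n, r

def sign_blinded(blinded: int) -> int:
    # Server: ordinary RSA signing on the blinded value (stateless, one round).
    return pow(blinded, d, n)

def unblind(blinded_sig: int, r: int) -> int:
    # Client: strip the blinding factor to recover a plain RSA-FDH signature.
    return (blinded_sig * pow(r, -1, n)) % n

def verify(msg: bytes, sig: int) -> bool:
    # Anyone with the public key can check the (message, signature) pair.
    return pow(sig, e, n) == fdh(msg)

blinded, r = blind(b"token input")
sig = unblind(sign_blinded(blinded), r)
assert verify(b"token input", sig)
```

Note how the server's step is a completely ordinary RSA signing operation on an opaque value, which is why issuance can be stateless.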
B
Of course, being RSA, there are a number of potential pitfalls.
B
Perhaps first on everyone's mind is: what encoding function do we use? There are a number of variants, via PSS, full domain hash, or PKCS#1 v1.5, and in looking at the different options and at the literature, it seems safest to go with either PSS or full domain hash. The determination between which of those two we went with was largely dependent on availability in existing libraries, especially in the verification path, wherein PSS signature
B
verification code is definitely in most of the major TLS stack libraries by virtue of TLS, and it also supports the randomized and deterministic variants, whichever the application wants, by changing what the salt is. Of course, we could probably go either way here.
B
We don't feel super strongly about PSS versus FDH, and there was a lively discussion on the mailing list about which of these encoding functions is best, so I'd be curious to hear what people think about that. Next slide, please.
B
So, the current status: we have several interoperable implementations and test vectors in the draft. It's a very simple thing to implement, many people have done it, and we think the spec as written is quite stable. And it solves this specific charter item in the Privacy Pass working group, that is, the ability to invoke Privacy Pass and produce tokens that are publicly verifiable.
B
So the two questions for the group are, I guess, first and foremost: is there interest in working on blind signatures? Given the activity on the list, the answer would appear to be yes. And the second question is, assuming yes: is there interest in adopting this document as a starting point? With that, I will pause for questions.
A
Thanks a lot, Chris. So do I understand correctly that you would like to ask the group whether there is interest in adopting the document, and otherwise in working on the topic itself, right?
B
P
You said that this solved the Privacy Pass charter item to support public verifiability, so I just wondered: have you had discussions with the Privacy Pass group about this? Do you know that this is a solution they would want to pursue? Has it had any airing there?
P
B
Yeah, there was a thread on public verifiability in the Privacy Pass mailing list. There was no clear consensus in terms of which construction was actually desired, but I see Ben's in the queue, so maybe he, being the chair, perhaps has a better answer.
Q
Hi, Ben Schwartz, Privacy Pass chair. So yeah, I'm just going to say the same thing there.
Q
The working group, at least overall, is interested in at least looking at blind signatures and having a concrete format to talk about them.
P
Q
So I think that Privacy Pass is not really likely to formally exclude any particular suitable algorithm.
B
Yeah, just to follow up on that: in working on the Privacy Pass protocol spec, we're trying to make high-level accommodations for any underlying cryptographic primitive that has different properties in terms of what sort of public metadata it supports and whether it's publicly verifiable or not, and we'll talk about those in the session later today. But this seems very much like a situation where CFRG would produce a candidate and then, provided that it's suitable, Privacy Pass would just wrap around it.
C
S
Hello, this is Tommy Pauly from Apple. Blind signatures is definitely something that we are interested in, and I have a strong interest in having at least one good solution in this area. To the overall point of adoption, I want to highlight a comment from the chat from John Mattsson, and I think the key point here is that, yes, it looks like doing blind signatures is important.
S
I view this, for all the reasons that Chris pointed out, as a very easy one to specify that could get testing and deployment, and that we don't think has obvious problems with security or attacks. But it would be essentially the first flavor of it. So for people who want to do early blind signatures that are publicly verifiable, for Privacy Pass or other applications, we could specify it, use it, ship it, but then probably also expect to specify other blind signature algorithms in the future that people would shift to.
B
Yeah, just to follow up on that: a variant of blind RSA is being used in production for a number of products; the Google VPN that came out recently uses this. And yeah, it seems safe; people are comfortable enough with RSA right now. We definitely don't want to rule out future improvements, but this seems like something that we could do now that would solve pretty real use cases. So, Ben.
Q
So, no hat.
The VOPRF draft here specifies an abstract VOPRF API. There's obviously been tremendous success with the same approach with HPKE, specifying a generalized set of requirements and an API for talking about them. What do you see as the final state for that with blind signatures? Do we just reuse the VOPRF API? Do we need to adjust the VOPRF draft to specify that this API covers both basic VOPRFs and blind signature schemes?
B
The intent was to make the shape of this API very similar to that of the VOPRF, but I'm hesitant to try to fold it in specifically, because the semantics of the two constructions are different, with this public verifiability difference. Ideally, or at least ideally to me,
B
the difference would be at the layer above that calls into one of these APIs. So we might accommodate the public verifiability aspect, or whether or not there's public metadata in the computation, at the Privacy Pass level, and not in the VOPRF doc and not specifically in the blind RSA doc.
Q
Okay. So just as a sort of layering question for Privacy Pass: I think it would be great to have a clear abstraction boundary that we're supposed to use, because Privacy Pass probably shouldn't be directly referencing blind RSA per se.
B
Yeah, part of the problem with Privacy Pass is that we haven't yet gone through the exercise of composing it with different underlying cryptographic constructions. But now that we have one candidate in blind RSA signatures, and there's also PMBTokens as well, and then Facebook's private stats protocols,
B
well, we now have the opportunity to figure out what the API boundaries are. So right now it seems like nothing will change in these particular documents, but maybe it will, depending on how that exercise goes. So thanks for pointing that out.
N
There is a difference between the semantics of a blind signature and an OPRF. We can talk about this on the list, but I don't think they should have the same API, because they provide different properties. It's going to be very similar, but they're subtly different.
B
I guess one of Ben's potential points is that you could instantiate or simulate an OPRF with a blind signature. In fact, one of the results basically demonstrates that you can do so safely. But I agree the semantics are different enough at a lower level that perhaps folding them into the same API is not best. Fred?
G
I just wanted to make a note about the fact that we don't want to generalize the construction too quickly either, especially because some of the alternatives in other papers require multiple-step issuance, or there might be some semantic differences there. So maybe we can see a bit what other alternatives there are to RSA, and once we have confidence that we could come up with the right abstraction for it, then we could move towards adopting that.
G
But if in the short term we're going to be using RSA, I think that building an abstraction on top of it, when it's not clear that a future scheme would provide the same abstraction, doesn't seem wise.
B
A
I hope that you can take it to the list. I think that there is quite a fruitful discussion and I think it should continue, so I will send a message to the list about possible adoption as a working group item or something. Thanks a lot. Please, Chelsea.
P
Okay, great. Hi, I'm Chelsea, and I'm giving the update on our draft of FROST. FROST stands for Flexible Round-Optimized Schnorr Threshold signatures.
P
So, a quick summary of what FROST is, in case you're not familiar with it. It's a two-round Schnorr threshold signing protocol, or it can be optimized to a single round with preprocessing. One thing that's kind of interesting about FROST is that it specifically trades off robustness for round efficiency.
P
Signing operations in FROST are secure even when they're performed concurrently, so this improves upon prior work in the literature, against an adversary that controls up to t minus one signers. In FROST, key generation can be performed either by a trusted dealer, just by straight Shamir secret sharing, or by a distributed key generation (DKG) protocol.
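As a concrete illustration of the trusted-dealer option, here is a minimal sketch of Shamir secret sharing over a toy prime field. The field and parameters are hypothetical; FROST itself shares a signing key in the scalar field of the chosen curve.

```python
import secrets

P = 2**127 - 1  # a convenient Mersenne prime for the demo field

def deal(secret: int, t: int, n: int):
    # Dealer: random degree-(t-1) polynomial with f(0) = secret;
    # share i is the point (i, f(i)).
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation
            acc = (acc * x + c) % P
        return acc
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 using any t of the shares.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = deal(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789
```

Any three of the five shares recover the secret; two shares reveal nothing about it.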
P
So that's FROST in a nutshell. As for the current status: it was adopted as a working group item at the end of January, so relatively recently.
P
With that, we're working on the first draft, and we're focusing on the implementation details that are not specified in the paper. So really, we're going through right now and fleshing out v1, talking to people who've implemented FROST, and trying to flesh out those details.
P
Another thing, external to this effort, is that we're writing a second proof of security using standard assumptions. Something that came out of FROST and some similar protocols, like MuSig and a few others, is that the approaches to proving these kinds of quite complicated schemes, like multi-signatures, are not quite standard, so we're working on a way to standardize these proofs of security. And something that's exciting is that there are actually several parallel implementations of FROST, in both Rust and Go.
P
I was recently notified of one that was put out by JP Aumasson's group, so there's definitely a lot of independent interest in FROST, and people are implementing it right now. I think that's a really positive sign.
P
I'm going to summarize some of the feedback that we got from the call for adoption. The paper for FROST assumed a prime-order curve. The feedback that we got in the call for adoption is that there was desire for EdDSA compatibility, so over curves like 25519 and 448. One thing I want to point out is that compatibility in this case will be for verification of signatures
P
specifically. This isn't super widely known, but using EdDSA-style deterministic nonces is actually insecure in a multi-signer setting. We've had questions like: is it possible to do deterministic FROST? It's not. So our draft will specify signatures that are compatible with verification per RFC 8032, which specifies these signature formats.
P
Another piece of feedback we got is that preprocessing might be difficult to perform securely. That first round might be tricky, because it's super important that nonces are only used once. This is something we're thinking about, and, as I'll discuss later, we're planning to specify just the two-round scheme in our draft.
P
So, the next steps for us: we'll be fleshing out our v1, so expect to see a full draft soon. We do need to start testing interoperability between implementations, and then, as I said before, we need to adapt the draft to curves with cofactors.
P
We've looked at this only very briefly, but the things we need to do are to specify point validation during signing, as required by these curves, and to make sure that our signatures are verification compatible. So expect to see that in our next update.
P
As for our roadmap for the draft: within the core draft, like I said, we'll be specifying key generation with a trusted dealer. I think that's the easiest key generation option and, I think, commonly desired. We'll just be doing the two-round scheme, for simplicity. We do have the concept of a signature aggregator, and we'll be specifying them as a non-signing party; they could be a signing party if they want to, but we'll be keeping these concepts separate.
P
Another thing is specifying the DKG. This DKG could possibly be more generally useful beyond just FROST, so we wanted to put this out there as something that the working group or the research group might be interested in, but kind of more general. And then, finally, something that's forgotten about all the time, but that's really extremely useful for threshold signatures, or just threshold schemes in general, is share recovery, or the ability to add new participants.
P
These algorithms exist; they're well known and have been around for a long time. So I think maybe considering this as an extension might be helpful for failure recovery in practice. And with that, I'll take questions.
T
Yeah. You said you're focusing on the dealer version; I'd like to encourage you to also consider the case where you have a crypto co-processor built into the CPU, and you want an application program to be able to securely delegate a partial threshold operation of a signature to the CPU, so that we can bind the operations to the CPU. The reason for that is that I believe there's going to be a requirement out there fairly soon for that type of processor, and signature is going to be one of three functions.
P
Okay, so are you advocating for the DKG, then, so that the key generation is distributed, or?
T
Yes, the key generation will be distributed, but it doesn't need to be distributed using Shamir or whatever. It's just a question of being able to bind a signature key to a particular device, so that you know that if the device is lost, the ability to create that signature is also lost. You can't take a backup and then use that to create signatures unless you've also got the device.
O
Your suggestion here is that it might be useful in other places. What I wonder is whether we might be able to simplify this draft by not including the DKG, or if you think that the DKG is kind of structurally intertwined with the signing algorithm and worth keeping for that reason.
P
No, it's definitely not intertwined, and actually, yeah, I completely agree. Within the core draft, I think we should just specify the trusted dealer and then say: if you want to do something more exotic, like a DKG, here's what we expect for all the participants at the end of keygen. That's pretty simple.
P
I do think the DKG probably belongs in its own realm, because there are a lot of things you can use it for, and as long as we get keys for all the participants that fulfill certain properties, then I think it's fine for it to be external.
U
Okay, thanks. So today I'll be laying out how real-world vulnerabilities have arisen because of non-committing AEAD schemes, as well as discussing the current key-committing AEAD landscape and how we can hopefully head towards a standard. Next slide, please.
U
Here I'll broadly discuss authenticated encryption and, to keep things simple, I'll ignore nonces and associated data in this presentation. In this setting we have a client and a server, and they both share a secret key. The client chooses a plaintext message and encrypts it using its secret key to produce a ciphertext. Next slide,
U
please. The client sends the ciphertext over to the server, who can then decrypt to recover the plaintext using its own secret key, and, of course, we expect that an attacker who sees the ciphertext won't be able to recover the plaintext without having access to the secret key. Next slide, please. There are many popular AEAD schemes out there, such as AES-GCM and ChaCha20-Poly1305, among others, and they're popular for good reason: they are efficient, they're standardized, and they're widely supported by many libraries.
U
They also guarantee many security properties, such as confidentiality and integrity, and they've been proven chosen-ciphertext-attack secure. But there's one security property that they don't target, which is, next slide, thanks, robustness, also called committing AEAD. Next slide, please.
U
This becomes an issue in what we call partitioning oracle attacks. This is a new class of attacks that I've worked on with my co-authors, Paul Grubbs and Thomas Ristenpart, and it will appear at USENIX Security this year. In this attack, you can create a ciphertext that decrypts under multiple keys, not just two, and its goal is to perform efficient password or key recovery.
U
Next slide, please. Here the server was able to successfully decrypt because its key was in the set, so now the attacker learns that it can eliminate half of the keys in its key set, thereby reducing it. Next slide, please. Notice that this means the attacker can learn one bit of information about the key per query, because of the non-committing AEAD scheme that's used here. Next slide, please.
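The one-bit-per-query leak can be shown with a toy simulation. The oracle below stands in for sending one crafted splitting ciphertext that decrypts under a chosen half of the candidate key set; everything here is hypothetical and only illustrates the query pattern, not how splitting ciphertexts are constructed.

```python
import math

candidates = [f"password{i}" for i in range(1024)]  # hypothetical dictionary
server_key = "password337"                          # unknown to the attacker

queries = 0

def oracle(key_subset):
    # Stands in for one splitting ciphertext: the server's accept/reject
    # reveals whether its key lies inside the chosen subset.
    global queries
    queries += 1
    return server_key in key_subset

# Binary search: each query eliminates half of the remaining candidates.
lo, hi = 0, len(candidates)
while hi - lo > 1:
    mid = (lo + hi) // 2
    if oracle(candidates[lo:mid]):
        hi = mid
    else:
        lo = mid

assert candidates[lo] == server_key
assert queries == math.ceil(math.log2(len(candidates)))  # 10 queries for 1024
```

So a dictionary of k candidate passwords falls in about log2(k) online queries instead of k.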
U
We found that these attacks were practical and looked at a few schemes in depth, including Shadowsocks proxy servers as well as early implementations of OPAQUE, and we found that there were vulnerabilities, or possible partitioning oracles, elsewhere, including in HPKE as well as the age file encryption tool, among others. There have been other vulnerabilities stemming from non-committing AEAD as well, including in Facebook Messenger's content-moderation message franking scheme, as well as a few services by Google and Amazon, which were found by researchers at Google and Amazon last year.
U
So really, this all shows that there's a growing body of evidence that non-committing AEAD can lead to vulnerabilities. Next slide, please. Then the natural question is: what do we use for a key-committing AEAD scheme? This is a bit hard to answer currently, because there's no standardized scheme. Next slide, please.
U
But there have been a few schemes suggested. This includes the zeros block check scheme, which modifies an AEAD scheme to check that a block of recovered plaintext is an all-zero string. This has currently been adopted by libsodium, but it incurs extra overhead by adding 64 bytes to each ciphertext. There's also the hash key check, which modifies an AEAD scheme to check a SHA-256 hash of the key during decryption.
U
This has been adopted by the AWS Encryption SDK, but again, it incurs extra overhead over the base scheme by adding 32 bytes. Another scheme suggested is single-key Encrypt-then-HMAC, which doesn't have any extra overhead, because it is the base scheme, but it could potentially be less efficient than these alternatives.
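A hedged sketch of the single-key Encrypt-then-HMAC idea (ignoring nonces and associated data, as in this talk): derive separate encryption and MAC subkeys from one key, encrypt, then MAC the ciphertext. The toy HMAC-counter keystream below stands in for a real cipher such as AES-CTR; intuitively, because the tag is a PRF of the key, a ciphertext that verified under two different keys would imply an HMAC collision, which is the committing property.

```python
import hashlib
import hmac

def _derive(key: bytes):
    # One input key, two derived subkeys.
    enc = hmac.new(key, b"enc", hashlib.sha256).digest()
    mac = hmac.new(key, b"mac", hashlib.sha256).digest()
    return enc, mac

def _keystream(enc_key: bytes, n: int) -> bytes:
    # Toy counter-mode keystream built from HMAC (stand-in for a real cipher).
    out, ctr = b"", 0
    while len(out) < n:
        out += hmac.new(enc_key, ctr.to_bytes(8, "big"), hashlib.sha256).digest()
        ctr += 1
    return out[:n]

def seal(key: bytes, plaintext: bytes) -> bytes:
    enc, mac = _derive(key)
    ct = bytes(a ^ b for a, b in zip(plaintext, _keystream(enc, len(plaintext))))
    return ct + hmac.new(mac, ct, hashlib.sha256).digest()

def open_(key: bytes, sealed: bytes):
    enc, mac = _derive(key)
    ct, tag = sealed[:-32], sealed[-32:]
    if not hmac.compare_digest(tag, hmac.new(mac, ct, hashlib.sha256).digest()):
        return None  # the tag commits to the key: a wrong key fails here
    return bytes(a ^ b for a, b in zip(ct, _keystream(enc, len(ct))))

boxed = seal(b"k1" * 16, b"hello committing AEAD")
assert open_(b"k1" * 16, boxed) == b"hello committing AEAD"
assert open_(b"k2" * 16, boxed) is None
```

The "no extra overhead" point is visible here: the 32-byte HMAC tag is the scheme's only expansion, the same as a conventional AEAD tag.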
U
And while the implementations, to our knowledge, don't have any sort of side channel, it could be that the zeros block check and hash key check could have side channels if they're implemented incorrectly, and the zeros block check would also need to be implemented and analyzed separately for each AEAD scheme. So each of these schemes has its own pros and cons, and currently there are some open questions in this space. Chris Wood, Paul Grubbs, Thomas Ristenpart, and I are planning to start a draft for key-committing AEAD.
U
We would love to hear your thoughts about needs and requirements surrounding this. Thank you very much, and I'm happy to take any questions.
U
But there's also, for instance, Encrypt-then-HMAC as a standalone AEAD scheme, which would work. We could also, in the draft, specify these sorts of transforms and be specific about the trade-offs among the multiple schemes. But that's one way to go. Thanks.
O
Yeah, just on the question of what schemes to standardize here: I think it might be plausible to take a kind of short-term and long-term view. I think just Encrypt-plus-HMAC is probably the easy quick win here, because there's already a fair bit of prior deployment out there
O
that would benefit from standardization and security analysis, to make sure that the construction is put together the right way, because there are a few different ways that people put together encrypt and HMAC to come up with what they believe to be an AEAD, which it is not in all cases. So having a standardized construction for that, I think, would be useful and fairly straightforward to put together. But I think it could also be useful to explore some of the design space here.
O
U
Yeah, that's a great suggestion. I would love to see a single-key Encrypt-then-HMAC standard; there aren't any to my knowledge, so having a source for people to refer to for committing AEAD work would be great.
H
O
There's a link in the chat; I would love to get that analyzed and vetted by someone who's better at this than I am.
A
R
Thanks for going through this. The question I had was pretty simple. The two options you had, the zero pad and, I forget the other one, yes, the hash key check, had fixed overheads, and it seems to me obvious that for the hash key check, the strength of that is based on the number of bytes that you provide.
R
U
The reason it's 64 bytes is because libsodium is using, I think, XChaCha20-Poly1305, and their single block of ciphertext is 64 bytes, so they just made an all-zero string of 64 bytes. But you certainly don't need that many bytes, so it would be helpful for a standard to say, first, how long of a zero string you would actually need.
Q
U
Yeah, that's a great suggestion. I'll take note of that.
A
B
Okay, yeah, sorry, paging everything in. So this is an extension that colleagues and I put together for extending the existing VOPRF construction to accommodate public metadata in the output of the OPRF evaluation.
B
The motivation for this is primarily Privacy Pass in this particular case, which has an issue, referenced here, for accommodating metadata both on the client side and on the server side. I guess the dimensions of the metadata are still a topic of discussion in Privacy Pass, but generally, there is not yet a construction that exists for safely folding in these types of metadata, and there are lots of different types of metadata you might want to use.
B
So, for example, if you wanted to bound the sort of the scope... oh, I'm sorry, go back to the previous slide.
B
You might want to bound the scope of the OPRF evaluation by including, say, an expiration timestamp as public metadata, or something else to constrain or augment the output. We'll note that there's a naive way to do this, and that is to just have a unique public key for each bit of metadata, but that doesn't scale very well, and for privacy reasons we don't want to have many keys floating around. Next slide, please.
B
Right, so the basic idea is, rather than just run the VOPRF with a fixed public key, to derive the public key based on public metadata.
B
In this construction there is a main public key and a main private key, and there are attributes that are represented as bit vectors. From each bit vector you can derive a unique key pair that can then be used directly in the existing VOPRF construction, without augmentation at all.
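A rough sketch of the shape of such a derivation, in a toy group. This is not the proposal's actual construction: the scalar here is an unkeyed hash of the metadata so the consistency check is easy to see, whereas the real scheme derives it with a server-held PRF key and proves the derivation correct, as discussed next. All parameters and names are hypothetical.

```python
import hashlib

# Toy group: order-q subgroup of Z_p* with p = 2q + 1 (hypothetical parameters).
p, q, g = 179, 89, 4

main_sk = 57                  # server's main private key (illustrative)
main_pk = pow(g, main_sk, p)  # corresponding main public key

def derive_scalar(metadata: bytes) -> int:
    # Toy stand-in for a PRF: hash public metadata to a nonzero scalar mod q.
    return int.from_bytes(hashlib.sha256(metadata).digest(), "big") % (q - 1) + 1

def derive_keypair(metadata: bytes):
    t = derive_scalar(metadata)
    sk_m = (main_sk * t) % q      # metadata-specific private key
    pk_m = pow(main_pk, t, p)     # metadata-specific public key
    return sk_m, pk_m

sk_m, pk_m = derive_keypair(b"expiry=2021-04")
# The derived pair is a consistent key pair in the same group, so it can be
# dropped straight into an unmodified (V)OPRF evaluation:
assert pk_m == pow(g, sk_m, p)
```

Each distinct metadata value yields its own key pair while the server stores only the main key.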
B
If you advance one slide: the key point here is that the derivation from the root, or main, public key to an attribute- or metadata-specific key pair is verifiable, using discrete log equality proofs similar to those we have in the VOPRF doc right now. Excuse me.
B
It's based on well-understood PRF constructions, and it folds in very nicely; it extends the existing proof construction mechanisms in the VOPRF document. All the key derivation stuff can actually happen offline, which, depending on the size of the metadata, could be quite advantageous,
B
considering that the proofs needed for checking that a particular derivation is correct scale with the number of bits in the metadata. Typical or expected applications might use anywhere from eight to 32 bits of metadata, which is reasonable to do offline and then cache on the server or on the client.
B
All this is to say: you can handle metadata quite easily offline and then run the super-efficient OPRF protocol online, and the output, the result of combining these two, will be a PRF with public metadata folded in. Next slide.
B
I'm not sure what I meant to say here; I guess it doesn't make much sense to go into the details.
B
Yeah, I guess we can just talk about the comparison. As I was saying earlier, it's possible to do this sort of key-based metadata construction right now in a naive way: you can have a single bit of metadata for a particular key.
B
However, that scales, as I said, quite poorly: for n bits you have n public keys, and there's no proof that the key is actually correct. But this would, of course, be compatible with the VOPRF draft, because you just have a pile of keys and you pick which one to use depending on the value of your metadata.
B
Compare this to something like Pythia, which is a pairing-based OPRF that also includes public metadata: it has constant-size public keys and compute overhead, but it is based on pairings, and as a result has different hardness assumptions and different availability and implementation difficulty.
B
The attribute-based VOPRF, which is the construction right here, has a logarithmic public key size, and depending on, I guess, the number of bits in your metadata, that can be, I guess, good or...
O
B
And, as I said, all this can kind of happen offline. There was also another recent result, a couple of weeks ago, I don't recall exactly when it came out, that has a different construction for folding in public metadata. It does not derive it from the keys itself, but rather augments the OPRF evaluation key based on the metadata, and then slightly changes how the discrete log, the VOPRF, proof is generated.
B
This is great, and the public key size is still constant; there's no extra cost for each attribute. But it does not fold easily into the VOPRF draft currently, because it requires changes to the proof. It also requires changes to the OPRF private key evaluation inline with the protocol, and also there's no API slot for folding in metadata right now at the API level of actually doing the evaluation on the server side.
B
Whereas with the proposal here, the metadata could be folded into the public key generation and verification step that happens before the VOPRF protocol is run, which is a nice alternative. It was also pointed out on the Privacy Pass list that, as an alternative to the single-bit-per-key solution, one could optimize that slightly by using Merkle trees.
B
So, for example, imagine you had a list of keys where each key is a leaf in a tree, and imagine you built a Merkle tree on top of that and you gave clients the signed tree head, or effectively the tree head in this particular case. Then, in running the VOPRF protocol,
B
what you give them as proof of the key to use is the leaf public key itself, which is just a single element, along with the path to the root, or the co-path to the root.
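The Merkle-tree optimization can be sketched as follows: hash the per-metadata leaf public keys into a tree, publish the root (the tree head), and hand the client one leaf key plus its co-path as the inclusion proof. All names and parameters here are illustrative.

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def build_tree(leaves):
    # All levels, leaves first; assumes the leaf count is a power of two.
    levels = [[h(b"leaf:" + x) for x in leaves]]
    while len(levels[-1]) > 1:
        lvl = levels[-1]
        levels.append([h(lvl[i] + lvl[i + 1]) for i in range(0, len(lvl), 2)])
    return levels

def prove(levels, index):
    # Co-path: the sibling hash at every level up to the root.
    path = []
    for lvl in levels[:-1]:
        path.append(lvl[index ^ 1])
        index //= 2
    return path

def verify(root, leaf, index, path):
    # Recompute the root from the leaf and its co-path.
    node = h(b"leaf:" + leaf)
    for sibling in path:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

keys = [f"pk{i}".encode() for i in range(8)]  # stand-ins for leaf public keys
levels = build_tree(keys)
root = levels[-1][0]
assert verify(root, b"pk5", 5, prove(levels, 5))
```

The client stores only the root; each issuance carries one leaf key plus a log-sized co-path instead of the whole pile of keys.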
B
So they can verify that it was correctly included in the tree. This is more just an optimization in terms of working around the huge pile of keys in the naive method, but maybe something worth documenting. Next slide, please. So the question for the group is, first: is there interest in a VOPRF with public metadata?
B
There certainly seem to be a number of instances that I'm aware of where being able to fold in something public, that's cryptographically bound to the OPRF output, would be really nice.
B
So I hope the answer to this is yes. And I guess, if so, what sort of assumptions are we comfortable relying on to support this? For example, the attribute-based OPRF in this particular presentation has a different hardness assumption from the more recent anonymous tokens draft.
B
Or the paper that was discussed on the Privacy Pass mailing list. It's unclear what the difference in hardness assumptions is, what the high-level impact is on the security of the scheme, and which of these we're comfortable with standardizing now, also because this draft sort of extends the existing APIs and the shape and mechanics of the VOPRF draft without any modification.
B
I guess we're just generally interested to know if this is something the research group would be interested in working on and adopting, either as a separate draft or by folding it into the existing draft; I think both seem pretty reasonable. So apologies for cobbling that together, I didn't expect to present.
A
Thanks, that was a great presentation, despite the fact that you hadn't planned to do it. Thank you again. We have time for only one question. I've seen...
B
There's a note from Nick that the term "attributes" might be super confusing. Totally agree. It's officially called the attribute-based OPRF in the paper, but it's really just an OPRF with public metadata, so yeah, we can fix the name. Fred?
B
I didn't catch the first couple of bits, but what I think I heard you say is: could we generalize this particular verifiable key derivation mechanism and perhaps use it in other contexts, like for signature schemes or whatnot? I believe the answer is yes, because the key derivation mechanism is not specific to the OPRF in any way; it's just
B
A way to deterministically and verifiably derive keys that can be used in OPRF protocols, but could also be used in other EC-based things, for blind signatures and whatnot. So yeah, I think it's general in that way. And that's actually a good point, and perhaps a reason why it might be best left out of the VOPRF draft right now.
A
T
To ask about the threshold encryption: I put in a draft, and I was told there was going to be a thread on the list, and it never happened; and then I was promised it again, and it never happened, and it's been over a year.
T
Now, it's not for lack of prodding the chairs of the working group. You know, when we went over the threshold stuff this time last year, we agreed that we would take it to the list and there'd be discussion there, and there was discussion, but it was never put to the list whether the group should consider the threshold encryption draft.
A
I
Yeah, so we discussed threshold signatures as a topic, and we haven't gotten to threshold encryption; that's my understanding, right?
T
I
Yeah. Does anybody else in the working group, or the research group, sorry, have interest in this? I think this is something that we can bring to the list. I think the reluctance to do this immediately had to do with a lack of enthusiasm beyond the requests from one person. So, if other folks in this meeting have interest in this particular topic as something that CFRG should approach, I encourage you to step to the mic line.
B
Yeah, this was just a follow-up to something Jordan was discussing and proposed earlier in the CPace presentation, specifically around standard reference implementations for the VOPRF, hash-to-curve, and the PAKE protocols. Should we be doing something here?
B
I guess, sort of from the CFRG's perspective, trying to collect standard reference implementations for each of these things that people can reference. For the drafts that I assist with, I've been using Sage, because we want to make it very clear that this is code to be used to understand the protocol and the specification, and definitely not something you should ship, and I think Sage is annoying enough to make that very obvious.
B
But there's a good argument to be made that perhaps there should be, you know, standard C implementations for most of these things, or Rust, as is sort of common in these cryptographic competitions and whatnot. So I'm curious to know, (a) whether people are interested in this sort of thing, and (b) how we might sort of collaborate and make it happen, potentially.