From YouTube: IETF114-CFRG-20220725-1400
Description
CFRG meeting session at IETF114
2022/07/25 1400
https://datatracker.ietf.org/meeting/114/proceedings/
C
All right, attempt number 2001, so very quick updates on documents. We have quite a few documents in flight. We have one document that will be going to IRSG review, which is hash to curve.
C
Mattsson's CFRG deterministic signature with noise document was very interesting to watch on the mailing list because of IPR concerns. Even though the IPR has expired, the chairs are consulting with the IRTF chair to decide what the best way forward is on this. People seem to be interested in the topic, but a lot of them said that they only want to work on the document conditionally. So I'm not entirely sure, as a chair, how to process this information.
A
Right, here's our first presentation. It's about the BBS signature scheme. We're obviously very tight on time here, so we'll sort of limit the questions and take the conversation online. But here is our presenter.
C
The difficulty is, they cannot see you at the moment, right?
E
Okay, that's right! Hi! My name's Tobias, and I'm here to talk today about a signature scheme called the BBS signature scheme. So, next slide, please. So, at a high level, what are BBS signatures, in a few words? Effectively, it's a digital signature scheme supporting, sorry, next, there's a couple of transitions, selective disclosure, or multi-message signing as we call it. Next one.
E
Thank you. Thank you. Proof of possession enabled, and, finally, unlinkable proofs through a zero-knowledge proof protocol. Next slide. So, just a bit of a timeline around where some of this work originated. The first appearance of this short group signature scheme was documented in 2004, which is where the name BBS is derived from: the work of Boneh, Boyen and Shacham.
E
There have been multiple revisions in academia since then, in a paper in 2006 and again in 2016, and then, as a working group at the Decentralized Identity Foundation, we started work on documenting the scheme at the Applied Crypto working group in 2021. Next slide. So, at a high level of the scheme, next slide: it is pairing based, so it uses type-three pairings, with two subgroups, G1 and G2, of prime order in the current cipher suite that's defined in the specification today.
E
So there's obviously a problem with this slide, but effectively, at a high level in terms of concepts, there are three parties involved in this protocol, a signer, a prover and a verifier, and there are two kinds of core constructs, which are signatures and proofs.
E
So if we move on to the next slide, the highest-level summary of how this protocol works is: a signer signs a set of messages and a header. Messages and headers are the two kinds of application-level payloads we can sign with the scheme. Loosely, a header is information that must always be revealed in any proof generated under this protocol, and the messages are, well, a set of messages.
E
The prover can then selectively disclose messages, create a proof and send that to the verifier for verification. Next slide. So, just moving into the lower-level properties of how sign works. As I said on the last slide, there are essentially two levels of application-level payloads here: the header, which is effectively a message or payload that's always disclosed in any proofs generated, and the messages, which are selectively disclosable.
E
The sign operation is effectively the signer taking all of that information together, signing it and producing a signature. Some notable properties: the signature is constant size, even under n messages, and the signature is essentially a combination of a single EC point and two scalars, which, in the context of the BLS12-381 cipher suite, is 112 bytes in size.
E
The
slide
has
cut
off
on
the
efficiency,
we're
trying
to
say
that
it's
of
n
efficiency
or
constant
time,
if
no
selective
disclosure
messages
are
used
in
terms
of
the
efficiency
of
the
sign
operation
and
another
thing
that
to
note
is
in
the
scheme:
you
can
either
sign
just
a
header
or
just
a
set
of
messages
or
both
so
the
scheme's
flexible.
In
that
regard
next
slide,
please.
E
So
verification
this
is
this
is
a
operation
that
would
primarily
be
done
by
approver,
because
the
signature
itself
would
never
be
revealed
to
a
verifier
directly.
That's
what
proofs
are
for,
but
effectively
to
verify
signature.
It
behaves
much
like
a
normal
digital
signature.
You
take
all
the
information
protected
by
the
signature,
which
is
the
n
number
of
messages
in
the
header
and
the
signature.
The
sign
is
public
key
and
you
get
a
result
around
whether
or
not
the
signature
is
valid.
Again.
That's
oen
efficiency
or
constant
time.
E
So
the
proof
protocol
is
effectively.
The
approver
receives
a
signature
with
a
set
of
messages
and
a
header,
and
then
they
decide
which
messages
they
would
like
to
selectively
reveal.
If
they
are
present
in
the
signature,
they
produce
a
presentation
header
which
can
be
used
to
put
in
announce
or
scope
the
proof
to
a
particular
domain
or
audience
combine
that
with
assigned
as
public
key
and
generate
a
proof.
So
the
size
of
the
proof
is
linear
to
the
number
of
hidden
messages.
There's
an
equation
there.
That
gives
you
an
idea
of
how
that
scales
again.
E
Again, this operation is O(n) efficiency, or constant time if no selectively disclosed messages are used, and there's obviously a choice for the prover to make around which messages to disclose. Lastly, next slide, please: BBS verify proof. This is the operation that's performed by a verifier. They receive essentially the set of revealed messages and header, the presentation header and the proof; they combine this information with the signer's public key, and that produces a result of whether or not the proof is valid. Again, O(n) efficiency, or constant time.
E
So
the
proof
just
a
note
on
what
the
proof
provides.
It
proves
integrity
and
authenticity
of
the
revealed
messages,
also
the
header,
and
it
proves
possession
of
the
signature.
The
presentation
header
is
essentially
brown
to
the
proof
as
well,
and
the
proof
is
essential
is
said
to
be
unlinkable
because
effectively,
the
proof
is
it's
generated
under
a
non-interactive,
zero
knowledge
proof
protocol
next
slide.
E
So
these
are
some
some
benchmarks
for
the
for
our
implementation
that
we
have
most
recently
done
for
the
most
recent
draft.
We
are
still
refining
some
of
these,
but
this
gives
an
idea
of
the
characteristics
that
that
we've
found
so
the
benchmarks
you've
got
a
logarithmic
scale
on
the
bottom
there.
So
this
really
shows
that
performance
scales
linearly
to
the
number
of
messages
in
the.
E
In
the
smallest
case,
where
you
are
maybe
just
signing
a
header
you're,
not
using
the
selected
disclosure
feature
afforded
by
the
bbs
signature,
you're
looking
at
a
signed
verified
times
of
you,
know,
sub
half
a
millisecond
and
obviously
the
proofs
just
for
note
on
these
benchmarks,
as
they
go
up,
were
generated
with
50
of
the
messages
were
disclosed
in
the
generator
proofs
and
there's
some
details
on
the
machine.
Those
benchmarks
are
run
on
next
slide.
Please
so
just
a
brief
section
on
some
of
the
use
cases
we
see
for
this
scheme.
E
The
first
is
privacy,
preserving
anonymous
credentials
so
effectively
the
ability
to
leverage
the
selective
disclosure
feature
to
produce
attribute-based
credentials
that
include
personal
information
about
someone
where
they
can
derive
proofs
of
the
subset
of
that
information
signed
by
the
original
issuer
to
different
relying
parties
free
from
the
risk
of
correlating
themselves
and
presentations,
and
also
providing
the
ability
for
them
to
granularly
manage
what
information
they
share
to
different.
Relying
parties
next
slide.
E
Another
application
is
what
we
call
proof
of
possession
enabled
security
or
access
tokens.
So
this
is
a
what
we
believe
is
a
feature
that
allows
you
to
say
issue
a
security
token
or
an
access
token.
E
Essentially, the client can prove what that token underlies, what it actually proves or grants, to different servers without being correlated through the cryptographic layer, say via a fixed signature, because again the proofs are indistinguishable from random. And just a quick note there: the indistinguishable-from-random property is not an artifact of the presentation header, which is set by the client; it's actually built into the protocol.
E
So,
just
a
quick
note
on
why
we
believe,
while
we're
presenting
this
work
to
the
cfr
gen,
why
we'd
like
to
continue
it
at
the
cfrg?
We
think
it
fits
with
numerous
existing
work
items
already
here,
which
is
notably
the
peering
friendly
curves
draft,
which
includes
the
definition
of
the
curve
that
we're
using
present
and
the
current
cyphers
we've
defined,
which
is
bls
1231.
E
It
also
bears
a
lot
of
a
relationship
and
and
structure
and
nature
to
the
existing
bls
signatures
draft,
and
we
also
make
heavy
use
of
of
hash
to
curve
in
our
draft
next
slide.
Please
so
as
effectively
we'd
like
to
call
for
adoption
of
this
draft-
and
you
know,
we
believe
that
it's
sufficiently
just
evolved
today
to
describe
the
scheme.
E
However,
it's
you
know
incomplete
as
a
draft
we've
got
numerous
issues
still
in
in
our
repository,
such
as
you
know,
we
would
appreciate
a
broader
review
of
the
scheme,
the
cipher
suite
kind
of
definition
that
we
have
within
the
spec
today,
which
allows
kind
of
extensibility
in
future
cipher
suites
to
be
defined.
We
we
still
have
a
series
of
refinements
to
do
there
and
then
beyond
that
also
clarify
some
of
the
extensibility
points
that
we
have
built
into
the
scan.
E
Next slide, please.
C
E
Yeah, so this afternoon there is a BoF, a birds-of-a-feather session, for a new call for what's called JSON Web Proofs, which proposes to essentially extend the JOSE family of cryptographic data representation formats to allow and accommodate schemes like BBS. So yes, related work at the IETF.
F
E
Yeah, thanks. I've definitely seen that work and I think that's fantastic, and there's definitely intersection and overlap with applications like Privacy Pass.
E
So,
in
conclusion,
essentially,
it's
bbs
is
an
efficient
multi-message
digital
signature
scheme
supporting
select
disclosure
and
zero
knowledge
proofs.
It's
had
quite
a
long
line
of
research,
backing
it
out
improving
its
security
properties
and
efficiency.
There's
been
multiple
iterations
in
academia
over
the
years
as
well.
E
We
believe
there
are
multiple
use
cases
for
which
bbs
can
be
applied
and
the
current
cipher
suites,
based
on
the
bls
1231
curve.
Obviously
there's
already
work
going
on
at
the
atf.
G
That's not exactly a question, but I'm very interested in seeing this work continue, especially in the context of securing cyber-physical supply chains, given the performance and the size of the proofs. Thank you.
B
E
Because the public key represents the public key of the issuer, you've got herd privacy within it. It's essentially a group signature: say one issuer issues to a herd of, you know, 10,000 or a million signatures; then any party that possesses a signature within that group has anonymity within that group.
A
Okay, thanks. We're not going to do an adoption call here, but we'll definitely talk with the chairs and we'll decide whether to take it to the list.
A
All right, Deirdre.
H
Hello there, I am, yes, cool. Hi, I'm Deirdre. I work at the Zcash Foundation. This is FROST. Next slide, please.
H
What is FROST? FROST is a Flexible (next, please; next, please) Round-Optimized (next, please) Schnorr Threshold signature scheme. This is two-round FROST: if you're familiar with any of the FROST work that has been published elsewhere, the thing that we're specifying in this document is the two-round variant of FROST. That's where the flexible comes from; there were two variants.
H
I thought I would have a clicker. For every signature you create, you can use a threshold t of n possible signers, where those n possible signers have shares of the secret. Next, please. That's the signature scheme, and the resulting signature is indistinguishable from a single-signer Schnorr signature, like an EdDSA-type signature. Next, please.
H
So
high
level
you
have
already
done
some
keygen
and
then
the
document
that
we
are
proposing
is
the
signing
protocol
after
you've
already
done.
Keygen
first
round
involves
generating
dances,
rather
your
contributions
to
the
nonces
and
then
commitments
to
those
and
publishing
them,
and
then
the
second
round,
you
actually
use
those
and
all
the
other
signing
participants
commitments
with
your
key
share
to
generate
your
signature
share
and
then
the
coordinator,
whoever
that
is,
takes
all
these
shares.
H
H
Someone is operating as a coordinator. We've seen implementations where anyone, everyone, can be the coordinator; in the thing we're describing, there is a coordinator. Next, please. As I described, you generate nonces, you create these commitments to your nonces, and you publish them on an authenticated channel, not a confidential channel, but an authenticated channel.
H
Next, please. And then the coordinator, or whoever, having distributed all the commitments to these nonces to all the parties who are signing, you use them to generate your signature share, with those commitments and your signing share, your key share. Next, please. And then you send that signing share back to the coordinator, and you aggregate all those signing shares back into the final signature. Next, please. So, it's fully specified; we've been cranking on it.
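The two-round flow just described can be sketched end to end. To be clear, this is a toy illustration only: the tiny Schnorr group, the SHA-256 hash-to-scalar and the trusted-dealer keygen are all stand-ins made up for the example (the draft leaves keygen out of scope, and its real cipher suites are ristretto255, P-256, Ed25519 and Ed448, not an 11-bit group):

```python
import hashlib
import secrets

# Toy prime-order group: the order-q subgroup of Z_p^* with p = 2q + 1.
q = 1019          # group order (prime)
p = 2 * q + 1     # modulus (also prime)
g = 4             # generator of the order-q subgroup

def H(*parts) -> int:
    """Ad-hoc hash to a scalar (a stand-in for the draft's H1/H2)."""
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen(t, n):
    """Trusted-dealer Shamir sharing of a Schnorr secret key."""
    coeffs = [secrets.randbelow(q) for _ in range(t)]   # f(0) is the secret
    shares = {i: sum(c * pow(i, k, q) for k, c in enumerate(coeffs)) % q
              for i in range(1, n + 1)}
    Y = pow(g, coeffs[0], p)                            # group public key
    return shares, Y

def lagrange(i, S):
    """Lagrange coefficient for participant i over signer set S, at 0."""
    num = den = 1
    for j in S:
        if j != i:
            num = num * j % q
            den = den * (j - i) % q
    return num * pow(den, -1, q) % q

def sign(msg, S, shares, Y):
    # Round 1: each signer publishes commitments to two fresh nonces.
    nonces = {i: (secrets.randbelow(q), secrets.randbelow(q)) for i in S}
    comms = {i: (pow(g, d, p), pow(g, e, p)) for i, (d, e) in nonces.items()}
    # Round 2: binding factors tie each share to this signer set and message.
    rho = {i: H("rho", i, msg, sorted(comms.items())) for i in S}
    R = 1
    for i in S:
        D, E = comms[i]
        R = R * D * pow(E, rho[i], p) % p               # group commitment
    c = H("chal", R, Y, msg)                            # Schnorr challenge
    z = sum(nonces[i][0] + nonces[i][1] * rho[i]
            + lagrange(i, S) * shares[i] * c for i in S) % q
    return R, z                                         # aggregated signature

def verify(msg, Y, sig):
    """Ordinary single-signer Schnorr verification: g^z == R * Y^c."""
    R, z = sig
    c = H("chal", R, Y, msg)
    return pow(g, z, p) == R * pow(Y, c, p) % p

shares, Y = keygen(t=2, n=3)
sig = sign("hello", S=[1, 3], shares=shares, Y=Y)
assert verify("hello", Y, sig)
```

The point of the final check is the property mentioned earlier: the aggregated output verifies exactly like a single-signer Schnorr signature, so the verifier cannot tell it was produced by a threshold of signers.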
H
For eight months now. We have four cipher suites in the document: the ristretto255 prime-order group, P-256, Ed25519 and (can you go back?) Ed448.
H
We have specifically specified these cipher suites, and the API of the document, to be compatible with Ed25519 and Ed448 verification, and we also have multiple interoperable implementations: several in Rust, at least one more elsewhere, and our proof-of-concept or reference implementation in Python and Sage, over multiple cipher suites. Ristretto is popular, and Ed25519 is very popular. Next, please.
H
One: we updated the protocol, updated the specification, to include this, but there was some further analysis that showed this introduced an inter-round malleability in the set of signers, not malleability of the resulting signature, but it wasn't clear that the people who started the signing in round one were provably the same set that finished in round two. So we backed this out: we decided to eat the cost of the further scalar multiplications and just went back to the version where that was no longer an issue, so that is in the current version.
H
That's in v7 of the draft. And another big update is that we were having trouble having a consistent subroutine definition of how to do verification for these signatures, because the Ed25519 document and other documents, as to their verification subroutine, they don't really have one. They have two lines of text suggesting things that we would rather were MUSTs, and they're sort of just, sort of, like... anyway. We have made this easier by having per-cipher-suite verification, which is basically the same operation.
H
It's what you would do for single-signer verification, so we're just trying to remain compatible, while also giving strong recommendations, for some of these cofactor curves, of how we would like to see implementers implement verification. And then for general Schnorr signatures over prime-order groups like ristretto255 and like P-256, which don't have an equivalent verification equation defined anywhere, we give one: if you have a prime-order group, here's your verification procedure for a signature that uses the prime-order group. So we include that. Next, please.
H
So we think it's in a good place, and we're seeking crypto panel review and wider CFRG review.
H
We've been cranking on this and talking to many people in the broader cryptographic community, but we think we are ready for the CFRG to really take a look at it now, because we've cranked out a lot. Is there anything unclear in the draft? Is there anything ambiguous in the draft? Is there anything technically incorrect, insecure or unsafe in the specification?
H
One thing we haven't added yet, but are interested in adding, is another cipher suite over secp256k1, which is another prime-order curve that we would like to see supported. That hasn't been done yet, but we'll see if anyone else is interested. Next, please. And that's pretty much it. Do you have any questions?
I
Hi there, Daniel Kahn Gillmor from the ACLU. Thanks for working on this. I'm wondering whether you think that we need to update RFC 8032, because of the looseness of the Ed25519 signature verification. Like, do you see a path forward for that? Is that something that you would be interested in seeing happen? What are the roadblocks to that? I mean, this is not the only place where this has come up.
H
I would be very interested in seeing that happen. I don't know what the roadblocks are, but I will say that every implementation of Ed25519 worth its salt, and I've worked on a few, is doing the cofactored check, as opposed to the cofactorless check, which is sufficient but not strictly required. The one that actually multiplies by the cofactor is the one you need for, for example, consensus interoperability.
H
And if you are using this anywhere that low-order points or torsion points are an issue, which is a lot of places, you should just do the cofactored verification, and if the extra cost of that is an issue, I don't know what to tell you; you can run into serious security issues if you're not doing this cofactor check. I would very much like to see that document updated, as an implementer and a person who does cryptography that gets deployed in the real world.
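For reference, the two Ed25519 verification equations being contrasted here, in RFC 8032's notation (B the base point, A the public key, (R, S) the signature, and k the hashed challenge), are:

```latex
% Cofactorless check (RFC 8032 permits this as "sufficient"):
[S]B = R + [k]A
% Cofactored check (multiplied through by the cofactor 8; the one the
% speaker recommends for consensus-critical interoperability):
[8][S]B = [8]R + [8][k]A
```

The two equations agree on honestly generated signatures but can differ when R or A contains a low-order component, which is exactly the torsion-point issue mentioned above.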
A
Okay, one more question from inside the room, and then we have one from Meetecho. Great.
D
Mike Ounsworth. So I'm working on the composite work, sort of combining PQ and classical together for security stuff within...
H
Yeah, FROST makes a lot of basic group assumptions, and the security proof reduces to one-more-Diffie-Hellman-type assumptions, so we can talk, but there are some security reductions that may not be available to PQ systems.
A
Okay, the question online is no longer here. Okay, so the queue is closed. All right, thanks, Deirdre. Thanks.
F
I'm going to do both of them. In which order... all right. So this is just an update on the key blinding for signature schemes draft that was recently adopted. Next slide, please.
F
So, just to remind folks what this draft is all about: imagine the setting where you have what we call a prover and a verifier. The prover wants to sign some message and have the verifier check that this particular signature on this message is fine. The prover gets as input a secret key, a public key and a message as well.
F
It does the signing thing, and sends the message, the public key and the proof, or the signature, all the way over to the verifier, which just determines whether or not it's valid.
F
Obviously we don't want the signature itself to be forgeable, so the prover can't trick the verifier into accepting this particular proof or signature without actually running the signing algorithm. Next slide, please. But there are other interesting applications wherein, you know, I might not want the verifier to learn anything else about the prover. Like, you might not want the verifier to learn that this specific public key, that this specific prover owned, was used to verify this message. This application exists in Tor.
F
So imagine you had a slightly different scenario, where you now have two provers, each of which has their own distinct private and public key pair. They're both signing a message: they run their signing independently and they send the results to some middleman or intermediary we call the mediator. The mediator chooses a bit at random and then sends the message, the public key corresponding to the bit and the signature corresponding to the chosen bit to the verifier.
F
The signature itself is not forgeable, which is great. Next slide. But we don't have this other property of unlinkability, wherein the verifier doesn't learn which prover generated this particular signature. Because the verifier is assumed to have as input the public keys corresponding to prover zero and prover one, they could just simply check: does this validate with public key zero, or does this validate with public key one? And that determines the bit.
F
So that's the property we want. Next slide, please. And so this signing-with-key-blinding extension of a digital signature scheme is basically how we go about constructing this, and it basically has two fundamental properties. The first is that the per-message public keys that are produced are independently distributed from what we call the long-term public keys of a particular signer.
F
So those are the pk0 and pk1 in the previous slide. And, moreover, the signatures that are produced don't leak any information about the long-term signing keys, and this is what the construction allows us to achieve. Next slide, please. So I said it was sort of an extension of a basic digital signature scheme, and the extension basically adds three new functions on top of your classical key generation, signing and verifying.
F
The three new functions are: a function to generate a blinding key, BlindKeyGen; a function to blind a public key; and then a function to sign with a blinded key. The correctness property that we want is sort of listed at the bottom: you can run the verification algorithm against a blinded public key, using the output of the blind-key-sign function with that same blinded key, and everything should work out just fine.
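The three added functions, and the correctness property, can be sketched with a toy Schnorr scheme. To be clear, this is not the draft's EdDSA/ECDSA construction: the small group, the hash-derived multiplicative tweak and the exact function names are stand-ins chosen only to make the shape of the API, and the unblinding inverse discussed next, concrete:

```python
import hashlib
import secrets

q = 1019          # order of a toy Schnorr group (p = 2q + 1, both prime)
p = 2 * q + 1
g = 4             # generator of the order-q subgroup of Z_p^*

def H(*parts) -> int:
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def keygen():
    sk = secrets.randbelow(q - 1) + 1
    return sk, pow(g, sk, p)

def sign(sk, msg):                        # ordinary Schnorr signature
    k = secrets.randbelow(q - 1) + 1
    R = pow(g, k, p)
    c = H(R, msg) % q
    return R, (k + c * sk) % q

def verify(pk, msg, sig):
    R, z = sig
    return pow(g, z, p) == R * pow(pk, H(R, msg) % q, p) % p

def tweak(bk, context):
    return H("blind", bk, context) % (q - 1) + 1   # nonzero scalar tweak

# --- the three functions the extension adds ---
def blind_key_gen():
    return secrets.randbelow(q - 1) + 1   # fresh ephemeral blinding key

def blind_public_key(pk, bk, context=""):
    return pow(pk, tweak(bk, context), p)          # context-bound blinding

def blind_key_sign(sk, bk, msg, context=""):
    return sign(sk * tweak(bk, context) % q, msg)  # sign under tweaked key

# --- the optional fourth function: invert the blinding ---
def unblind_public_key(bpk, bk, context=""):
    t_inv = pow(tweak(bk, context), -1, q)
    return pow(bpk, t_inv, p)

# Correctness: a signature made with the blinded key verifies against
# the blinded public key, and unblinding recovers the long-term key.
sk, pk = keygen()
bk = blind_key_gen()
bpk = blind_public_key(pk, bk)
assert verify(bpk, "hello", blind_key_sign(sk, bk, "hello"))
assert unblind_public_key(bpk, bk) == pk
```

Note how the context string feeds the tweak: a party who knows both the blinding key and the context can unblind, which is exactly the optionality the draft controls through the context parameter.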
F
The verifier is totally unaware that this particular procedure took place behind the scenes; it just knows that the signature is valid and was not forged, which is great. Next slide, please. There's also, optionally, the ability to add another function to this particular construction: the ability to unblind a public key.
F
So if I give you a blinded public key, as well as the blinding key that was used to produce that blinded public key, you can invert the operation and recover the long-term, or the input, public key that was not blinded. So here, these are just basically the inverses of one another. And I don't know what happened to Meetecho, but I'm just going to keep talking. So the question for the draft, specifically, was: how do we achieve this sort of optionality in practice?
F
Next slide, please. And so the proposal that we came up with was to sort of contextualize the blind-public-key function itself. So previously it just took as input a long-term public key and a fresh ephemeral blinding key, skB, and that was it; it would just produce a blinded public key. But now we slot in this additional context string, where the context can be entirely application-defined. So, for example, in the Privacy Pass application it's empty.
F
It doesn't have any particular value, and that allows this unblinding operation to take place, because the person who wants to unblind knows what the context is, and they can do so.
F
In the context of Tor, there's a slightly different construction for specifying what the context is: it includes the long-term public key and, like, a timestamp or whatever; I forget exactly how it's encoded, but that's functionally sort of what happens. And that prevents this unblinding step from taking place, because as a verifier you don't know either of these; you don't know the long-term public key.
F
The proposed change is up in a pull request. It's been reviewed by a couple of people, and it seems pretty straightforward and obvious, so we'll likely land that after we get some implementations and new test vectors going. Next slide, please. Okay, and then, just to remind people about the implementation status: we have implemented this for all of the signature schemes that are in the draft, both EdDSA and ECDSA, and these different implementations are interoperable.
F
We do have a security analysis, specifically of the unlinkability and unforgeability of both EdDSA and ECDSA, completed. It's under peer review right now, so it's not publicly available yet, but if you, you know, ask the right people, we can make it available. Next steps for this are to hopefully merge the PR that I just talked about, and then start soliciting early review, ideally from the crypto review panel, because at this point we think the document is feature complete. Next slide. I think that's it, actually. Yeah, questions?
A
If there are no questions... the queue is open online too; anybody joining? It seems not. Thank you, Chris. Let's move on to the next presentation.
F
Okay, so this is a really quick update, thankfully, in the interest of time. For a while now we have had this draft in the group on specifying blind signatures based on RSA, which have been used in a huge number of applications, so it was about time we wrote down how to do this correctly. Next slide, please.
F
The main insight was that previously, when trying to reason about the blindness property of blind RSA, folks did not consider what happens if the signer's public key is generated maliciously, because you're trying to define blindness against this person who's verifying with the public key, and if they construct this public key in a particular way, in a malicious way, information can leak when the client engages with the protocol, which is not great. And this is specifically, or especially, problematic for the deterministic variants in the draft, specifically those that use the full-domain hash for the actual encoding of a message, and any other sort of deterministic variant, like PSS with a zero-length salt or whatever. So it's important to note that this is only a problem with low-entropy inputs.
F
So
if
you
are
using
blind
rsa
for
a
like
a
voting
application
where
you
know
you're,
there's
like
five
people
or
whatever
in
your
pool
running
the
protocol
with
this
type
of
input,
could
leak
your
vote
or
bits
of
your
vote.
But
if
you
have
applications
where
you
have
a
high
entropy
input
like
privacy
pass,
that
has
a
like
freshly
chosen.
32
byte
nodes
as
input-
it's
not
an
issue.
F
It's
also
worth
noting
that
it's
also
not
an
issue
if
you
have,
for
example,
proof
that
the
private
or
the
public
key
was
not
chosen
maliciously,
you
can
in
fact,
like
generate
non-interactive
zero-knowledge
proofs
that
someone
can
check
specifically
the
client
that
wants
to
engage
in
this
protocol
to
see
whether
or
not
the
the
public
key
was
constructed
correctly,
but
they
are
not
standard
and
they
only
give
us
sort
of
like
probabilistic
guarantees
that
things
are
okay,
so
that
said,
next
slide
a
resolution
to
this
issue
and
to
deal
with
this
emerging
security
analysis
is
basically
to
start
stripping
away
things
that
we,
we
included
for
the
purposes
of
you
know
having
the
draft
be
maximally
useful
for
a
number
of
applications
now,
the
first
of
which
is
we're
going
to
like
remove
all
the
deterministic
variants
from
the
draft
and
just
sort
of
enforce
it
require
that
pss
with
a
salt
length
that
matches
that
underlying
hash
digest
is
the
way
to
go,
perhaps in the future. If there's, you know, some new insight that we get that allows us to bring this, or some of the deterministic variants, back, we can do so in a separate draft; I don't think it needs to block this particular work. And we also added text to talk about what happens if you're an application that has low-entropy inputs, specifically what you should do to add entropy to your input, generated locally on the client.
F
This is interesting because, if there's randomness that's generated locally before invoking the signing protocol, that randomness now has to be part of the message that the verifiers will consume and verify. So it does have API implications if you're using the construction in that particular mode, where you're generating local randomness or local entropy. But if you're an application like Privacy Pass that has high-entropy inputs, you can use the scheme as it was previously, without any problems.
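The flow being discussed, including the "add locally generated entropy to the message" fix for low-entropy inputs, can be sketched as follows. Note this is a toy: the draft mandates RSASSA-PSS encoding and real key sizes, whereas this sketch uses a bare hash encoding and a tiny key, purely to show the blind/sign/finalize algebra and where the locally generated randomizer slots into the message the verifier consumes:

```python
import hashlib
import math
import secrets

# Toy RSA key (far too small for real use; the draft requires full-size
# keys and RSASSA-PSS -- this hash-only encoding is for illustration).
p_, q_ = 10007, 10009
n = p_ * q_
e = 65537
d = pow(e, -1, (p_ - 1) * (q_ - 1))

def encode(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

# Client: prepend fresh local randomness to the message, then blind.
def blind(msg: bytes):
    prefix = secrets.token_bytes(32)          # the added local entropy
    while True:
        r = secrets.randbelow(n - 2) + 2      # blinding factor, coprime to n
        if math.gcd(r, n) == 1:
            break
    blinded = encode(prefix + msg) * pow(r, e, n) % n
    return prefix, r, blinded

# Server: signs the blinded value without ever seeing the message.
def blind_sign(blinded: int) -> int:
    return pow(blinded, d, n)

# Client: strip the blinding factor to get an ordinary signature.
def finalize(blind_sig: int, r: int) -> int:
    return blind_sig * pow(r, -1, n) % n

# Verifier: must now check the signature over randomizer || message,
# which is the API implication mentioned above.
def verify(msg: bytes, prefix: bytes, sig: int) -> bool:
    return pow(sig, e, n) == encode(prefix + msg)

prefix, r, blinded = blind(b"low-entropy vote: yes")
sig = finalize(blind_sign(blinded), r)
assert verify(b"low-entropy vote: yes", prefix, sig)
```

Because the 32-byte prefix is chosen by the client, even a maliciously constructed public key cannot correlate two runs over the same low-entropy message, which is the attack the resolution is addressing.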
F
So at this point, before we actually merge the change, I wanted to, you know, do the check with folks to see if anyone has a real, concrete application where deterministic signing is actually a hard requirement, such that, if we ripped it out, it would break your use case. There are a lot of reasons to pull this out; the security concerns are perhaps paramount. And there are two open pull requests.
F
They basically propose a way to enact the proposed resolution that I just went through. One of them, I forget the number, I should have included it here, my apologies, basically introduces an external API on top of the internal API, or on top of the existing API in the specification, and this external API would be the thing that's responsible for sampling local randomness and then also passing it to the verifier, such that it can verify randomness plus message. And that's sort of the approach.
A
Okay, great. Anybody have any feedback on this, a question for the RG?
F
I should note, also, that at this point, once we merge this resolution, like the key blinding draft, we're basically going to be done; we will have the security analysis and everything complete. So whatever the next step is at that point, yeah.
K
Well, I'd like to give you a short update regarding CPace. Essentially, we have new content changed in the current draft six, which I uploaded yesterday after the server got open again. If you go to the next slide: basically, what we have been doing when changing from draft version five to six is that we modified the write-up. It became clear in the discussions that we had on the list, and off the list, that we probably should focus on a clear set of readers and a clear audience, as we have completed the security analysis of CPace in the AsiaCrypt paper, which clearly covers the scope and the cipher suites that we are discussing in the draft.
K
So, following the feedback that we got for the version four and five drafts, we rewrote the introduction completely and focused it on implementers, and that's basically the change which we have applied, which covers the new draft version that is online. We would welcome a broader review, and also maybe a review from a native English speaker regarding the draft, and the next step, in our perspective, would be a review from the crypto review panel and from other potential users and people that would be using CPace. So, if you go to the next slide, that's essentially the message.
L
Yes, thank you. Hi, everyone. Today we're just giving a quick presentation, with Thom Wiggers, about the post-quantum cryptography NIST process. My name is Sofía Celi.
L
Thank you for having us here. Next slide, please. Okay, so just to give you a little bit of a disclaimer slash preface here: what we're going to be presenting is not a complete overview of the whole NIST process, because that would take a really long time to do; we're just mainly summarizing the results of the first milestone that this process has actually reached, and there's some bias that you're going to hear from us.
L
But we try to make the presentation as neutral as possible, and we're also not pointing to, or actually choosing, any specific direction, but rather treating this presentation as a conversation starter. If you want to learn more, our recommendation would be that you read NIST's report on the first milestone that they reached. Next slide. And now...
M
Yeah, so if you saw the report, or some of the discussion on Twitter, then you might have seen that Kyber is the KEM that has been chosen for key exchange in a post-quantum way, and there are also three signature schemes. We're going to provide a very, very brief overview here, basically just taking the report and summarizing it very, very briefly. So if you go to the next slide: Kyber is the KEM, the key encapsulation mechanism, that has been chosen.
M
There are a few others that are still in the running that might be standardized later, but that will probably take a while. Kyber is quite fast, quite efficient, and is based on lattices. Something important in the context of building protocols and putting this into things is that KEMs are not the same as Diffie-Hellman. So while Kyber slots into, for example, TLS very straightforwardly, because KEMs are interactive you cannot put one, for example, into Signal. Next slide, please.
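The KEM interface the speaker contrasts with Diffie-Hellman can be sketched as three operations: key generation, encapsulation, and decapsulation. The toy below (illustrative only, deliberately insecure, and emphatically not Kyber) shows why the flow is interactive: the sender cannot derive a shared secret until it has seen the receiver's public key.

```python
# Toy KEM flow: keygen / encaps / decaps. NOT a real KEM and NOT secure;
# it only demonstrates the message flow, which is one round trip rather
# than the non-interactive shape of Diffie-Hellman.
import hashlib
import secrets

def keygen():
    # In a real KEM (e.g. Kyber) this is a lattice keypair; here the
    # "public key" is just a hash of a random secret.
    sk = secrets.token_bytes(32)
    pk = hashlib.sha256(sk).digest()
    return pk, sk

def encaps(pk):
    # The sender picks fresh randomness, "encrypts" it toward the receiver,
    # and derives its own copy of the shared secret.
    r = secrets.token_bytes(32)
    ct = bytes(a ^ b for a, b in zip(r, pk))   # toy "encryption" of r
    ss = hashlib.sha256(r + pk).digest()       # shared secret
    return ct, ss

def decaps(sk, ct):
    # The receiver recovers the sender's randomness and re-derives the secret.
    pk = hashlib.sha256(sk).digest()
    r = bytes(a ^ b for a, b in zip(ct, pk))
    return hashlib.sha256(r + pk).digest()

# One round trip: receiver publishes pk, sender encapsulates, receiver
# decapsulates. The sender must wait for pk first: that is the interactivity.
pk, sk = keygen()
ct, ss_sender = encaps(pk)
ss_receiver = decaps(sk, ct)
assert ss_sender == ss_receiver
```

Note the contrast with Diffie-Hellman, where both parties can publish static public keys up front and later derive a shared key without exchanging any further messages; that property is what Signal relies on and what KEMs do not directly provide.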
L
Okay, all right. So I was just talking about the key encapsulation mechanisms, which are basically what is needed in order to establish confidentiality in the face of quantum computers. But NIST also decided to select certain algorithms for standardization to protect authentication in the face of quantum computers, and those are the signature schemes. Three were chosen. The first one is Dilithium, which is very general purpose, and it's the main recommendation that NIST provides.
L
Next slide, please. The second signature algorithm that NIST is going to standardize, and that is recommended for use, is Falcon, whose sizes are indeed better if you compare them with Dilithium, and which could indeed be used in certain protocols. The problem with it is that, in order to instantiate Falcon, you have to use floating-point arithmetic, and that can be really cumbersome for certain implementations, because the implementation has to be done exactly correctly.
L
So the assumption is that you could use this scheme, but only if you implement it correctly. Next slide, please. And the last signature algorithm that is going to be standardized is SPHINCS+, which is a stateless hash-based signature that maybe you're already a little bit familiar with, because the CFRG has already standardized XMSS and LMS.
L
But, as you can see here, the algorithm has nice sizes for the public key, but not so much for the signature, and indeed, because it makes a lot of hash-function calls, it is pretty slow. There are also many parameter sets, which will ultimately be settled by NIST, but as it stands right now this algorithm has a lot of parameters. The reason it was chosen is that the previous schemes, Dilithium and Falcon, are based on lattice assumptions, and this one is based on a different set of assumptions.
L
So it provides a wider range of security assumptions across the selected schemes. Next slide, please.
M
Yeah, so to briefly reflect on those current IETF standards: it's very important to realize that those are stateful hash-based signatures. So although they are quite a bit smaller than what SPHINCS+ gives you, that comes with a condition: you have to keep track of state.
M
And if you, for example, restore your key from a backup, you're completely screwed. So that's why it's important to note that this is definitely a very significant difference.
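The failure mode described here (restoring stateful signing keys from a backup) is easiest to see with a toy one-time signature of the kind these schemes are built from. In this Lamport-style sketch (illustrative only, not XMSS or LMS, and not production code), each signature reveals half of the secret key, so signing two messages with the same one-time key leaks enough material to enable forgeries. Stateful schemes track which one-time keys have been used, and losing that counter in a restore causes exactly such reuse.

```python
# Minimal Lamport one-time signature over SHA-256. Signing reveals one of
# the two secret preimages per message bit, so a key must be used ONCE.
import hashlib
import secrets

def hash256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def keygen():
    # 256 pairs of random secret preimages; the public key is their hashes.
    sk = [[secrets.token_bytes(32), secrets.token_bytes(32)] for _ in range(256)]
    pk = [[hash256(s0), hash256(s1)] for s0, s1 in sk]
    return sk, pk

def bits(msg: bytes):
    digest = hash256(msg)
    return [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]

def sign(sk, msg: bytes):
    # Reveals one preimage per message-digest bit: after one signature,
    # half of the secret key is public; after two, often both halves.
    return [sk[i][b] for i, b in enumerate(bits(msg))]

def verify(pk, msg: bytes, sig) -> bool:
    return all(hash256(sig[i]) == pk[i][b] for i, b in enumerate(bits(msg)))
```

A quick round trip: `sk, pk = keygen(); sig = sign(sk, b"msg"); verify(pk, b"msg", sig)` returns True, while verifying the same signature against a different message fails.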
M
So I already briefly mentioned that there are a few more KEMs that are still in the race. If you read the report, it more or less feels like NIST is either waiting for more research, waiting for certain schemes to mature a bit more, or otherwise isn't exactly sure what to do. So: Classic McEliece, BIKE and HQC are all based on codes. Classic McEliece has very tiny ciphertexts but public keys in the megabytes, so NIST isn't sure whether that's actually useful, and asks you to tell them if you actually would use it.
M
SIKE is very fancy isogeny math, but it's very new and still quite slow, though actually very, very small compared to Kyber, so that is very nice. BIKE and HQC are sort of similar but have slightly different trade-offs. You can see that these schemes are all not lattices, which seems like the main feature of these alternates. And then there are going to be some signatures as well, and that's what Sofía is going to tell you about on the next slide.
L
Yes. Finally, as Thom said, the NIST process has not ended; rather, they have reached the first milestone. One thing that they also announced is that they're going to keep advancing certain algorithms in a fourth round, and they are also calling now for a new round of proposals specifically for digital signature algorithms. The reason is that, as you saw, the majority of the digital signature algorithms have rather big sizes that might be cumbersome when actually putting them into networking protocols.
L
So this new round is going to come. There is already some hope for certain algorithms, for example MAYO, which seems very nice but still needs certain security checks to actually attest to the security of the algorithm. And of course, just to let you know, don't expect any standards from this new call before 2030, because the next call in the process is going to take at least several years, as it did with the previous call.
L
Next slide, please. Okay, if you are actually very interested and want to really start running code, there are already all of these libraries, specifically liboqs. It's a really great library, because it performs a lot of safety checks on the post-quantum algorithms, and it has bindings to many different programming languages. There are also PQClean and pqm4, the latter specifically for embedded applications. Next slide, please.
M
This slide, I think, should have been dropped, but yeah. So these are some of the questions that you might ask today, basically.
D
M
So here we have a collection of links, notably the NIST report that we, well, plagiarized here, and also some other fun research papers, as well as some blog posts that have real-world measurements, and a nice talk from some colleagues of mine that explains Kyber for a more general audience, if you're not very into lattices.
M
We were trying to keep this slightly short to enable asking lots of questions, or perhaps to start a discussion, so I guess we'll be taking questions now, and there's one comment I can make already, responding to the chat.
M
The non-interactive versus interactive thing with KEMs is a very important issue, and it's important to realize that right now there is no solution for non-interactive post-quantum key exchange yet that we have a lot of confidence in. There is CSIDH, which is not in this standardization project and will probably not enter it. Right now CSIDH's security is hotly contested, and although it's quite small, it's also very, very, very, very slow, even at the very aggressive parameter sets that exist today, and there's some work that I've seen
M
that suggests that if you want to go conservative for that scheme, you're looking at tens of seconds of operation time, which is not nice. But CSIDH, yeah, it's very new; I don't think it will be standardized anytime soon. So it looks like, for now, we will not have non-interactive key exchange post-quantumly.
L
M
Yeah, so these are all, of course, the minimum schemes; if you need threshold or fancier stuff, I think there's a lot of research still to be done there. And I see Falcon being brought up, and I think it's important to emphasize that NIST explicitly wrote that Dilithium is the primary candidate for signature schemes, because Falcon is very easy to mess up.
N
NIST has some agreements with the various patent holders, but they have not published them, so we do not know if they'll be acceptable to everybody. If that's not the case, it sounds like we need to actually have an alternative, most likely NTRU, even though it was not selected by NIST.
L
Thank you for raising that. That would be true, and specifically in the NIST report announcing the algorithms selected for standardization, NIST does indeed note this, and I think they say that if the patent issues have not been resolved by the end of this year, they will be advancing NTRU instead of Kyber.
N
M
Well, we're both not lawyers, and, as you said, a lot of the information isn't out there yet. I don't think I can speak for any company's lawyers, but I do kind of expect that the fact of the matter is going to be that the next FIPS standard will require Kyber, whatever happens, if NIST does reach some agreement. I have no idea what that means for adoption, et cetera. I think it's a worthwhile discussion to have, especially as more information reaches us, but yeah.
M
N
L
Yeah, for the moment, for FIPS certification, if you concatenate a classical algorithm with an experimental one for the key-exchange part, I think it was FIPS 140, it's compatible. Whether that will change, I don't know. So maybe this is something where a discussion could indeed be started, but I don't think we will get a lot of clarity in the coming months either, so my recommendation would be to kind of see where the different discussions around this are going.
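The concatenation approach mentioned here is typically realized by running both key exchanges and feeding both shared secrets through a single KDF, so the combined key stays secure as long as either component does. Below is a minimal sketch of such a combiner, assuming the classical (e.g. X25519) and post-quantum (e.g. Kyber) shared secrets have already been produced; the function name, context string, and choice of SHA-256 are illustrative, not taken from any standard.

```python
# Hybrid shared-secret combiner sketch: hash the classical and post-quantum
# shared secrets together with a context label. Real protocols (e.g. the TLS
# hybrid key-exchange work) define the exact concatenation and KDF.
import hashlib
import secrets

def combine_shared_secrets(ss_classical: bytes, ss_pq: bytes,
                           context: bytes) -> bytes:
    # The output depends on both inputs, so an attacker must break both
    # component exchanges to learn the combined key.
    return hashlib.sha256(ss_classical + ss_pq + context).digest()

# Stand-ins for the outputs of an X25519 exchange and a PQ KEM decapsulation.
ss_classical = secrets.token_bytes(32)
ss_pq = secrets.token_bytes(32)
key = combine_shared_secrets(ss_classical, ss_pq, b"example-protocol v1")
```

One caveat worth noting: production designs use a proper KDF (HKDF or the protocol's own key schedule) rather than a bare hash, and bind in transcript data as well; the plain concatenation above only conveys the general shape.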
N
Yeah, personally I'll be putting together a draft submission to the CFRG for NTRU. So let's see if we can make that into an RFC as well, just in case the NIST approval process, or rather the patent process, fails.
O
All right, I was curious: you noted some of the difficulties with Falcon implementations, et cetera, and I'm working downstream on some of the post-quantum serializations, more on the JOSE and COSE side of things. A question about how we should be thinking about the non-lattice-based stuff, because it was pretty clear that there are going to be steps down some of those paths. Are there two or three things we should be looking at early, just to be aware of, on the non-lattice-based approaches, and prioritizing some investigations around them?
M
I think I could comment on that. So, Rainbow, a multivariate scheme, was broken recently.
M
But Rainbow is a refinement of UOV, and the way that the call for new schemes is written, it's pretty clear that they're basically calling for a UOV submission. So then, computationally, you're looking at something that is going to be somewhat okay, for certain values of okay, but public keys are going to be massive, 400-kilobytes massive, with very small signatures on the other hand. So I think if you want to play with something right now, there's some code for Rainbow still out there.
M
I expect that UOV code will also be available soon, and there was already some discussion on the pqc-forum, the NIST mailing list, about parameter sets for UOV, so you can get some idea of the sizes there. But I think that is the most significant thing.
M
I would bet money on UOV making it into the on-ramp for new signature schemes. There's some other stuff that we put in the slides, and it's much less certain whether that's going to make it, because those kinds of schemes are much newer, and NIST did specify that they want mature cryptanalysis of your scheme to exist. UOV itself has that.
M
Rainbow was a variant of UOV, and the attack lies in that variant; I think the confidence in UOV itself is quite high. So I think that will probably be accepted into the process.
A
Okay, thank you, Thom and Sofía. The last presentation of the meeting is going to be Bas Westerbaan.
B
P
There are... so it will take some time before NIST actually has it standardized. They have just said that they will standardize it. We expect probably in 2024, but we never know; there are still changes likely to Kyber. But I expect many early adopters before 2024.
P
Next slide, please. That's why we would like to have an RFC, or a draft, but we want to match NIST's final standard.
P
A different advantage is that NIST will not include a machine-readable specification, and we intend to, in this case in Python, but maybe something else. Also, it will get the IETF closer to the standardization, as the Kyber team is very eager to help out on this draft, and it will unlock early usage in, for instance, TLS, because code points are cheap.
P
So: any questions, concerns? Is there interest in this?
Q
Phillip Hallam-Baker. Yeah, I'm very interested in this. I don't particularly care if it matches what NIST comes out with in the end, because what I'm concerned about at the moment is not doing PQC cryptography, I can't, because I'm doing threshold, but I would like to be able to set myself up a safety net of shared secrets and roots of trust that are PQC-hardened, so that if there is a quantum computer produced, I can then back out and recover my security context.
Q
You know, then I can do a new release that is secure. But I'm only looking to establish those shared secrets for, you know, a limited application. And the other thing you can do is, once I've got a shared secret, I can create a global mix-in that will PQC-harden all my other stuff, like Russ Housley did in his draft.
A
Okay, any other questions in the room?
J
Hi, I'm Flo from the UK NCSC. I can say it's really, really good to see this happening, kind of following on from NIST's announcement. I guess, as a contrast to the earlier comment, I think it's really important that eventually we do match the NIST specifications.
A
Okay, thanks. And I want to remind folks that the TLS meeting is later in the day; I don't know if the TLS chairs have anything to share about opinions with respect to early definition of PQ algorithms for key exchange in CFRG, whether that's helpful or not.
A
Okay, well, we are ending a little bit early despite the delay in the start; folks did kind of push through these presentations pretty quickly. So thanks, everyone, for attending, and thank you, Rich, for taking notes. We will have the notes up sometime soon. Thanks, all, for coming.
C
Yeah, I'll just quickly try to catch up.