From YouTube: IETF99-CFRG-20170718-1550
Description
CFRG meeting session at IETF99
2017/07/18 1550
https://datatracker.ietf.org/meeting/99/proceedings/
D
The other thing the chairs would like to say is: we started the crypto review panel last September, and we worked slowly, getting up to speed and getting the panel reviewing documents, but we had lots of very good reviews. Recently, GCM-SIV got three or four reviews. I think even Kenny and I were surprised how much interest there was in this, so I think things are working well. We're looking forward to using the crypto panel more in the future.
B
Okay, so first up we have Stanislav, who is going to update us on rekeying. Stanislav, I think we allocated ten minutes in the agenda for your talk, so please try to stay within that. This is for all speakers: please try to stay inside the pink box; that's where the video will find you. And just ask for the next slide each time. Okay, thank you.
E
We need some limits to be taken into account. First of all, there are the general internal properties of the ciphers, estimates of whether they have some potential vulnerabilities, and, most important of all, side channels and the newest methods of applying them. So, for example, there was a paper recently about TEMPEST attacks on AES: the attack took only 50 seconds to recover an AES key from the OpenSSL implementation.
E
So, taking into account these problems, we have to limit the key usage, the key lifetime, and when our limits are reached, we have to change keys. Of course, we can just do a new handshake, a new key establishment, but it can be quite expensive. So in a lot of situations we prefer to update the keys, that is, to do rekeying.
E
A good example here is the rekeying procedure in TLS 1.3, the KeyUpdate procedure, and I think this is a good example of how, when, and in which situations rekeying must be done. Please, next slide.
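The KeyUpdate-style ratchet mentioned here can be sketched as a one-way derivation on the traffic secret. This is a minimal illustration only: the label and the single-block HKDF-Expand below are simplified stand-ins for the real HkdfLabel encoding that TLS 1.3 uses.

```python
import hmac
import hashlib

def hkdf_expand(secret: bytes, info: bytes, length: int = 32) -> bytes:
    # Single-block HKDF-Expand (RFC 5869 style); enough for one
    # 32-byte output with SHA-256.
    return hmac.new(secret, info + b"\x01", hashlib.sha256).digest()[:length]

def key_update(traffic_secret: bytes) -> bytes:
    # Derive the next-generation secret one-way from the current one,
    # in the spirit of the TLS 1.3 KeyUpdate ratchet. The label
    # "traffic upd" is a simplified stand-in, not the wire format.
    return hkdf_expand(traffic_secret, b"traffic upd")

s0 = bytes(32)          # current application traffic secret (dummy value)
s1 = key_update(s0)     # secret after the first KeyUpdate
s2 = key_update(s1)     # secret after the second KeyUpdate
```

Because each step is one-way, a later secret reveals nothing about the earlier ones, which is exactly the kind of cheap key change (compared to a full handshake) the talk is describing.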
So the main objective is to prepare a document with a menu of choices for the developers of protocols: one that contains a set of secure and efficient procedures that solve rekeying tasks in the most relevant cases.
E
Of course, this document shouldn't be redundant: it should contain general recommendations and the principles for choosing one or another rekeying mechanism, along with some specific parameters. We consider both external and internal rekeying methods: parallel or serial, hash-based or cipher-based, with or without master keys. For the vast majority of them, the security increase is roughly quadratic, so the key lifetime can be increased approximately quadratically in the most relevant models. Next slide, please. So, the work started after the Seoul meeting.
E
There was a proposal from the CFRG chairs to create such a document. There was a talk on rekeying at the Seoul meeting, we had a discussion, and in the end work started: a -00 draft, then a -01 draft appeared before the Chicago meeting, and in Chicago we had a one-hour side meeting on rekeying, where we made some important decisions about the document. Please, next slide. Next one, I think.
E
We added a remark to the draft, also shown on the slide, that this must not be used as a method to prolong the life of broken ciphers. It's just a security margin, a safety margin against possible future attacks. So it mustn't be used to prolong the life of something already vulnerable.
E
Some words about the post-quantum issues that were added: no post-quantum issues can be solved by rekeying, because if we talk about Grover's algorithm, two or four blocks are already sufficient to mount the attack, so rekeying couldn't help. And some words about the reasons that remain even for ciphers with large block sizes: side-channel resistance, forward security, and again the safety margin. Next slide, about the recommendation guidelines.
E
We
rearranged
the
document
so
that,
as
a
recommendation,
part
in
the
maturation
part
is
before
as
a
mechanism
ourselves
and
this
there
was
decision
not
to
consider
any
related
questions
for
skip
soidiers,
because
all
models
are
quite
different.
This
next
item
and
all
the
mechanisms
themselves,
there
was
a
number
of
minor
iterations
about
constants
in
the
internal
wiki
in
for
CTR
mode.
The
most
important
direction
was
about
the
CCM
mode.
There
was
proposal
to
consider
also
SSA
mode,
Weiser,
King
and
I'll
return
to
this
question
a
bit
later.
Please
next
slide.
E
So after the Chicago meeting we had four more versions; the current version is -05. We had two major revisions after lists of suggestions from reviewers, and of course we will be happy to have any reviews and suggestions for this version. Next slide. We hope that the structure, the principles, and the main recommendations in this version are quite close to being ready, but we still have a lot of important questions to be solved.
E
First
of
all,
as
I
mentioned
before
internal
reinforce
the
same
now,
we
have
proposal
for
mechanisms
for
internal
routine
for
cesium,
it's
added
to
the
draft,
but
there's
a
problem
with
it
that
now
this
is
the
only
mechanism
that
doesn't
have
security
proof
for
all
other
mechanisms.
We
either
have
food
security
proofs
or
they
can
be
obtained
by
some.
E
slight modifications of existing ones, because the principles are the same as in the textbooks. But for SIV, because of the authenticity properties, we can't get security proofs using the ones we have now, so it must be done from scratch. So we need help here, and if someone would like to help, that would be great. For now we think that if we are unable to obtain the security proofs for SIV with rekeying, we will need to exclude the mechanism, because we don't want something without a security proof, explicit or implicit, to be in the document.
E
Of course, we will also need to add test vectors. And I would ask for suggestions and concerns about the current version: first of all, opinions about whether all the concerns from the Chicago meeting have been properly addressed, and any new comments, especially about the recommendations and use cases for protocols, for instance from the IoT field. I think it will be very important to have as much refined information about the recommendations from the developers of protocols as we can. Next slide. So, thank you very much. Questions?
B
E
I don't think it's in the scope of the draft. In my opinion, the document is about ciphers that don't have even any theoretical problems, so I don't think that this must be in the scope, and I don't think that anything must be strengthened with rekeying if it has some theoretical weaknesses. So in my opinion this must be out of scope, but that's just my opinion; maybe we could add some such thoughts to the document itself.
E
So, are there any ciphers in use that are not broken and have 32-bit or 64-bit blocks? My opinion is that if the real security of a cipher is, for now, the same as its a priori security, then everything is okay with the cipher. So if we have, for example, a 32-bit cipher, of course we have some a priori limitations on its usage, but if we don't have any attacks that can lower its theoretical security, then we can say that the cipher is still unbroken, and it can be considered.
E
G
So I mean, actually going out of your way to apply this mechanism so that you can use 3DES seems like a questionable life decision, but nonetheless I think it is almost a canonical example of what this mechanism can do. It can take a cipher where the best attacks are generic attacks, but the generic bounds are not great, and make them more comfortable.
E
B
Maybe we could ask for a quick show of hands from the audience; not putting you guys on the spot. Who has read the draft? Oh, that's fantastic, great. Okay, you can get your hands down first. Are there any volunteers to do a thorough review of the draft on behalf of CFRG, now that it's reaching a fairly mature state? Anyone?
B
Can you put your hand up high, and then we can take a photograph? That would be really helpful. Thank you so much. Anybody else?
B
H
Okay, so I presented this draft at SAAG at the last IETF, and we were asked to bring it here. It's substantially updated. So what I'm going to do, for people who didn't see my SAAG presentation, is go over what exactly a verifiable random function is and why we should standardize them. I'm going to talk about the applications: since that meeting I have more information about what the applications of this primitive are; people are using it in ways that I didn't know before. And then I'll talk about
H
what's new in this draft. Okay, so to get an idea of what a verifiable random function is: we all know what a hash function is. It doesn't have a key; you take an input, it gives you an output, and to verify that the hash was done correctly, you just recompute the hash yourself. Okay, so there's a verification mechanism for this hash function. Next slide. A PRF has a symmetric key. The holder of the key can compute the hash, but if you don't hold the key, you cannot verify that the hash was computed correctly, right?
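The hash-versus-PRF contrast just described can be shown in a few lines, using HMAC-SHA-256 as the PRF; the key and message below are arbitrary placeholders:

```python
import hmac
import hashlib

msg = b"some input"

# Unkeyed hash: anyone can recompute the output, so anyone can verify it.
h1 = hashlib.sha256(msg).digest()
h2 = hashlib.sha256(msg).digest()
assert h1 == h2

# Keyed PRF (HMAC): only the key holder can recompute the output, so a
# party without the key has no way to check it was computed correctly.
key = b"\x00" * 32
tag = hmac.new(key, msg, hashlib.sha256).digest()
```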
H
So that means that there's no one-to-one relationship between the input and the output with a pseudorandom function. A VRF is a keyed hash function too. Next slide. It's a keyed hash function, but this time we have an asymmetric key. So the way to think about this is as sort of the public-key version of a keyed hash function, and what happens is, in order to compute the hash, you use the secret key. So that means only the hasher can compute the hash, but anyone can verify that the hash was computed correctly.
H
So, as we're going to see, that gives you a one-to-one relationship between inputs and outputs if you know the public key. That's what we want from a VRF. Cool, okay, great. So how does this work? I want to go over the API of how a VRF works. So what happens is: the verifier holds the public key, and the hasher holds the secret key. The verifier sends his input over to the hasher, and the hasher computes a value called the proof.
H
So the way that's computed is by taking the secret key and the input and computing some value called the proof. Okay, so only the hasher can do this. Now, the verifier needs to verify that this hash is correct, so it's going to run the verify function using the public key, the input, and the proof.
H
If that comes out valid, then he can get the actual VRF hash output, which is derived directly from the proof: you can think of it as just taking the proof and hashing it in some way, and that gives you the VRF hash output. Okay, so the way you verify is with the proof, but the actual hash output is computed by applying this proof-to-hash function to the proof itself. Notice that the proof-to-hash function is not keyed; the keying is for the proving function.
H
The public key is for the verify function. Okay, so what are these things useful for? The way we came to this is through NSEC5, which solves the zone enumeration problem in DNSSEC. It's also used for key transparency in the CONIKS project and all sorts of derivatives of these projects. In both cases, the VRF is used to prevent dictionary attacks. Okay, we're going to see exactly why this works.
H
Since then, I also found that it's being used by a cryptocurrency called Algorand, and in this use case it's a little bit different: what they're doing is randomized selection that can be verified. I'm not going to talk about that application in this presentation, but I'm happy to take questions about it afterwards. After we started talking about VRFs in various venues, we started to find various slightly different implementations of the same underlying idea. Okay, and it comes with a Chaum-Pedersen-style proof for that.
H
So now let me show you the uniqueness property, which is one of the key properties of a VRF. What uniqueness says is that there's a one-to-one relationship between the input and the hash, as long as you know the public key. So if you think about SHA-256: if I hash x, I'm always going to get y; it's not going to come out to a different value.
H
Every
time
I
do
the
hashing
a
vrf
will
give
you
the
same
property
that
every
input
maps
to
a
unique
output
given
the
public
key
okay,
and
that
that's
why
you
can
use
a
vrf.
The
same,
you
might
use
a
hash
function.
You
know
that
there's
a
one-to-one
relationship
between
the
input
and
the
output
more
formally,
what
this
says
is
even
an
adversary.
That
knows,
the
secret
key
cannot
find
two.
H
two outputs that map to the same input. Okay, so there can't be two outputs that map to the same input: we know that there's a one-to-one relationship between the input and the output, even if you know the secret key, and that's the same as with SHA-256, right? We have this type of relationship there. Similarly, we want to have collision resistance even in the face of an adversary that knows the secret key. In this case the public key is fixed; even an adversary
H
that knows the secret key can't find two inputs that go to the same output. So these two properties give us a nice one-to-one relationship that allows us to use hash functions and hash-based data structures: basically, you can substitute your hash function with a VRF. Okay, but why would you want to use a VRF? That comes down to this last property, which is pseudorandomness, and what it says is: if someone gives me a VRF hash output, I have no idea what input it corresponds to.
H
So if I just get the hash value... Notice what's happening here: the hasher is computing the proof, then computing the hash by taking proof-to-hash of the proof, and he gives me the VRF hash output. I, as the adversary, have no idea what input this hash corresponds to. I cannot do a dictionary attack where I test all the inputs in my dictionary, hash them, and see if they match this hash value that was given to me. I cannot do this, because I don't have the secret key.
H
So that's why a VRF is actually useful in a lot of these use cases: because it prevents dictionary attacks. Okay, so if this isn't clear, please ask me questions in the question-and-answer session, but this is the really important thing: it acts like a hash function that stops dictionary attacks. And this is just the formalization of what I just said. So these two use cases, which are where we started with this, are about preventing dictionary attacks.
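The dictionary attack being prevented is easy to demonstrate against an ordinary unkeyed hash; the names and the "leaked" digest below are made up for illustration:

```python
import hashlib

def dictionary_attack(target: bytes, candidates):
    # Offline enumeration: hash every candidate and compare.
    # This works against any hash the attacker can compute himself.
    for word in candidates:
        if hashlib.sha256(word).digest() == target:
            return word
    return None

names = [b"alice", b"bob", b"carol"]      # hypothetical dictionary
leaked = hashlib.sha256(b"bob").digest()  # a hash observed on the wire
recovered = dictionary_attack(leaked, names)
```

With a VRF output in place of `sha256`, the attacker cannot evaluate the hash on candidate inputs at all (that needs the secret key), so the loop above has nothing to compare against.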
H
I just want to briefly show you what the idea is here, forgetting about VRFs for a second. And I'm sorry, there's a public key and a secret key on this slide; they shouldn't be there. In the normal case, where we have a hash-based data structure and we have the root of the data structure, the querier will send a query to the hasher, and the hasher will give him some sort of information from the data structure. In this case I'm drawing a Merkle tree and getting the Merkle path; anyway, it doesn't matter.
H
He gets some information about the data structure, and he can verify that his input is in the data structure. But if he does this a bunch of times, he starts to learn a lot about the data structure, right? For the second input he gets another branch, and you can imagine he keeps doing this and can get the whole Merkle tree, and so on. Then he can start to do things like dictionary attacks, to try to learn what exactly is present or absent from this data structure.
H
Okay, so this is where VRFs come in. If, instead of using a regular hash function, we use VRF hashes, then now, when I want to get the answer to a query, I have to ask the hasher to answer the query for me. I'll still get the part of the data structure that I'm querying about, but I'll also get this proof that only the hasher can compute, okay, and then I'll use that proof to verify.
H
First of all, that the hasher answered me correctly: I'm going to take the proof, the input, and the public key and run the verify function. Then I'm going to get the hash value and check that the hash is in the data structure. So it's just like what you would normally do, except that you first need to run the verify function, and reject if the verify fails. Okay, so this property ensures that the hasher cannot lie to you about the output, and the VRF
H
pseudorandomness property ensures that, because you can't compute hashes on your own, you can't start reversing hashes and trying to enumerate everything in this data structure. That's what we're using it for in NSEC5, and that's what's being used in CONIKS and key transparency and all these other applications. Okay, so what's new in this draft? There are two VRFs specified in this draft. The more efficient one is the elliptic curve VRF. So when we were designing this, we wanted it to be fast.
H
We wanted proofs to be very short, because the proofs are the things that go on the wire. So, as you can see here, the thing that's traveling around on the wire all the time is the proof, so we really don't want that to be excessively long; for various applications this is important. In order to optimize all this stuff, we did very careful security proofs. You can see at the bottom of the slide I have a link to a paper that we have on the Cryptology ePrint Archive.
H
You can see there the security proofs for all of the VRFs that are specified in this draft; everything we specify comes with a security proof that you can find there. In terms of the elliptic curve VRF, the way we wrote it is: we wrote the general VRF, and then we have cipher suites. So, depending on what curve you want to use and what hash function you want to use, those are specified in the cipher suites for this particular draft.
H
We were very careful to deal with curves with cofactor greater than 1; the 25519 curve has a cofactor greater than 1, and we were careful to make sure that we did everything correctly with the cofactor. Our security proofs were also updated to include the cofactor. So that's the one major change that we made here. The other thing that we did is that we added a key validation function.
H
So, if you saw my presentation at SAAG: what we were saying was that if the public key was given to you by a trusted entity, then the VRF was secure. Okay, so if you trust the guy who gave you the VRF public key, then the VRF was secure. With this draft, we found a way to get around that requirement. So now you can validate that the public key that was given to you for the VRF is actually a good public key for a VRF. And why would you care about this, right?
H
Because the adversary is the one who is potentially choosing the public key in some applications. So we want to have a way of verifying that the public key is secure. Okay, so I'm going to wrap up really quickly with the elliptic curve VRF, just to show you what it is at a very, very high level. Okay, so our secret key is x; our public key is g to the x. What do we do to hash an input? We take the input and we hash it to a point on the curve.
H
That's H. Then we raise H to x, which is the secret key; okay, so that's gamma. And then, if you look at the corner of my screen, the screen over here, you can see that the actual hash output is just the hash of this gamma. So we take H to the power of x and compute the hash of that; that's the VRF output. Okay, now I need to tell you what the VRF proof is. The VRF proof includes this value gamma (sorry, I said alpha, but I meant gamma).
H
This value gamma is part of the proof, and then there are these two values, c and s. What are these things? They are a zero-knowledge proof that gamma and the public key have the same discrete log, so they're raised to the same exponent. Okay, so we attach the zero-knowledge proof that this is the case, and that proves to you that gamma is a valid hash of the input, right? Because gamma is H to the x, you verify that our gamma really is H raised to the power of x.
H
You know H; you don't know x, but you use this zero-knowledge proof to verify that. Okay, so that's it. This, by the way, is the pattern that you see in all the different implementations that we found in different places; the details vary in exactly how you specify the cofactor handling and so on, which we've done in this draft. Okay, so, with that.
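The construction just described (gamma = H^x, output = hash(gamma), plus a Chaum-Pedersen equality-of-discrete-log proof for c and s) can be sketched in a toy multiplicative group modulo a small prime. This is purely illustrative and insecure: the parameters are tiny and made up, and the hash-to-group map is a crude stand-in for the draft's careful hash-to-curve and cofactor handling; a real deployment would use the draft's ECVRF ciphersuites.

```python
import hashlib
import secrets

# Toy Schnorr group: quadratic residues mod p, with p = 2q + 1 (both prime).
P = 2039   # safe prime (toy-sized)
Q = 1019   # order of the subgroup of squares
G = 4      # generator of that subgroup

def _challenge(*parts: int) -> int:
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def hash_to_group(alpha: bytes) -> int:
    # Crude stand-in for hash-to-curve: hash, then square to land
    # in the order-Q subgroup.
    v = int.from_bytes(hashlib.sha256(alpha).digest(), "big") % P
    return pow(max(v, 2), 2, P)

def keygen():
    x = secrets.randbelow(Q - 1) + 1       # secret key
    return x, pow(G, x, P)                 # (x, public key g^x)

def prove(x: int, alpha: bytes):
    h = hash_to_group(alpha)
    gamma = pow(h, x, P)                   # the unique part: h^x
    k = secrets.randbelow(Q - 1) + 1       # Chaum-Pedersen nonce
    c = _challenge(h, pow(G, x, P), gamma, pow(G, k, P), pow(h, k, P))
    s = (k - c * x) % Q
    return gamma, c, s                     # the VRF proof

def verify(pk: int, alpha: bytes, proof) -> bool:
    gamma, c, s = proof
    h = hash_to_group(alpha)
    u = pow(G, s, P) * pow(pk, c, P) % P     # reconstructs g^k
    v = pow(h, s, P) * pow(gamma, c, P) % P  # reconstructs h^k
    return c == _challenge(h, pk, gamma, u, v)

def proof_to_hash(proof) -> bytes:
    # The VRF output is an unkeyed hash of gamma, as in the talk.
    return hashlib.sha256(str(proof[0]).encode()).digest()
```

Note how uniqueness shows up: gamma depends only on the input and the secret key, so two valid proofs for the same input always yield the same output, while the (c, s) pair is freshly randomized each time.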
B
J
Bryan Ford, EPFL. First, I just wanted to support this general effort; I think VRFs are indeed extremely useful. And actually, another application case I'm familiar with: some very nice distributed password-protection protocols and algorithms by Gregory Neven and some others have used VRFs that I know of, so I think that's yet another interesting application area you can add to the list. Just a question on the trusted versus untrusted public key and the verification function.
H
J
Okay, I understood Decaf was mostly a mechanism to make sure that you don't get burned by the cofactor in doing the arithmetic, so that you canonicalize all points that are the same modulo the cofactor. But I'm not an expert on Decaf, so you could be exactly right. So, yeah.
L
H
So, I don't know the numbers for how fast the operations are; I didn't look them up before this meeting. But in terms of the proof: there's an elliptic curve point, gamma. That's one elliptic curve point, so if you do point compression, which is what's in the draft, that's 256 bits if you use Ed25519. And c is half of a SHA-256 output, so it's 128 bits; by getting this down to 128 bits we cut it in half.
H
B
K
B
We like to do hums in the IETF, which I believe we are, so I'm going to ask two questions: one, "we'd like to adopt this," and you hum; and then, "we should not adopt this," and then you hum. Okay. So let's start: please hum if you think we should adopt this draft as a CFRG document. And for the opposite: if you think we should not adopt this as a CFRG document, please hum now or forever hold your hum. Okay, I think we're...
B
J
So this is based on something we've been working on for quite a while now, in various forms. The new development is basically that we finally wrote the first draft of an actual attempt at a spec for a simple collective signing algorithm. I won't get deep into the technical details; for that, read the draft if you haven't already. I mainly want to motivate it and talk about applications, and why
J
this is interesting. Because at this stage we're just trying to see if there's support for working on standardizing this. Now, this is one of the many types of cryptographic algorithms motivated by splitting trust and avoiding single points of failure: by building constructions that split trust more widely, so that no single compromised node or participant can compromise the whole system. And collective signing, or aggregate signing, or multi-signatures:
J
there are a lot of different variations and several different names. The draft is one of a kind of constellation of possible ways to design collective or multi-signatures, but the basic idea is that you have multiple independent parties who want to collaborate to validate and sign some message or statement of some kind. Often this is a threshold group of some kind, t of n; it doesn't have to be, and there can be more complex predicates, like t1 of n1.
J
It's about ensuring that many witnesses have seen something, and also eliminating single points of failure. And a nice property of the Schnorr signatures of the type that RFC 8032 just standardized is that they support this kind of multi-signature very, very easily and efficiently. Again, without getting too much into the technical details, this is just an illustration of the basic way this works: any Schnorr signature is formally a challenge-response sigma protocol, and it basically operates in three stages.
J
The signer creates a Schnorr commit: he picks a random value, little v, and computes g to the v to produce the public commit, big V; he uses a hash function to turn that into a challenge, and then computes a response which is mathematically related to the private key, which is little k in this case. And the classic Schnorr signature is the challenge and the response. Now, RFC 8032 uses the slightly different R-S signature format, which is equivalent, but formally the difference doesn't really matter.
J
It's just a representation thing. So the fundamental modification to support multi-signatures is quite simple. At least in the basic form, you now have multiple signers, and you can have any number. Each signer has to create their own Schnorr commit, so each signer picks their own little v1, little v2, computes the corresponding public commits, big V1, big V2, and sends them to somebody; let's call it the leader, but it can be anyone. No one has to be especially trusted here.
J
Someone basically combines all the commits together and computes a challenge. I should mention that everybody needs to verify the challenge; there was a bug in the first draft, and it's being fixed. That's important: everyone can compute, or recompute, and verify the challenge. And then everybody computes a response corresponding to their part of the commit, and when you combine them together in the right way, out comes basically a single standard Schnorr signature, and this can be in either challenge-response form or R-S form.
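The three stages just described (individual commits, one shared challenge, summed responses) can be sketched in a toy Schnorr group; the parameters below are made up and tiny, the draft itself works over Ed25519, and the commit-verification step and related-key defenses discussed later are deliberately omitted here:

```python
import hashlib
import secrets

P, Q, G = 2039, 1019, 4   # toy Schnorr group (illustration only)

def _challenge(V: int, msg: bytes) -> int:
    return int.from_bytes(
        hashlib.sha256(str(V).encode() + msg).digest(), "big")

def keygen():
    x = secrets.randbelow(Q - 1) + 1
    return x, pow(G, x, P)                 # (secret key, public key)

def cosign(keys, msg: bytes):
    # Stage 1: each signer picks v_i and announces the commit V_i = g^v_i;
    # the leader multiplies the commits into the aggregate V.
    vs = [secrets.randbelow(Q - 1) + 1 for _ in keys]
    V = 1
    for v in vs:
        V = V * pow(G, v, P) % P
    # Stage 2: one shared challenge, derived from the aggregate commit.
    c = _challenge(V, msg)
    # Stage 3: each signer responds with r_i = v_i - c * x_i; the
    # responses simply add up.
    r = sum(v - c * x for v, (x, _pk) in zip(vs, keys)) % Q
    return c, r                            # one ordinary Schnorr signature

def verify(pks, msg: bytes, sig) -> bool:
    # Verify against the combined public key: a single check, regardless
    # of how many signers participated.
    c, r = sig
    agg = 1
    for pk in pks:
        agg = agg * pk % P
    V = pow(G, r, P) * pow(agg, c, P) % P  # recovers the aggregate commit
    return c == _challenge(V, msg)
```

This is why verification costs the same as for one signature: the verifier only ever sees one aggregate commit, one challenge, and one combined response.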
J
The draft is written with the R-S form, to be consistent with RFC 8032, and so it kind of just works. And so basically you can then verify one of these collective signatures against the combined public key of all the signers, all at once, with basically just the cost of a single signature verification. So this is nowhere near new; there's tons of theoretical background on this and on different variants of this scheme.
J
I won't go through that; I'm just going through some applications. So actually, a year or two ago, I can't remember how long ago, I presented this paper where we were experimenting with making this really scale and applying it as a transparency mechanism. Basically, the idea is you can take any security-critical authority service, maybe the DNSSEC root zone or a CA, or any protocol
J
you like. If you want to ensure that, if it does something wrong, a lot of people will know, that the complete collection of everything the authority signs is visible, so the authority can't be coerced into secretly signing some backdoored, evil something-or-other without anybody knowing, then you attach this group of witnesses, so that the witnesses have to sign too and make sure everything that gets signed gets logged, for better or worse, right. So, without getting into the details again:
J
Basically, we demonstrated that with the right protocols you can make this extremely scalable: thousands or tens of thousands of participants can work together to collectively sign messages in a matter of a few seconds. I don't think that any immediate application is going to really need that scalability.
J
But it's nice to know we can get it when we want it. Verification cost in terms of CPU time goes down by orders of magnitude, depending on the number of verifiers, as you would expect, and of course you also get significant signature-size savings. Now, one caveat: the reason the collective version is not absolutely constant-size, as you might expect, and there's a simple way to explain it, is that with a realistic collective signing scheme,
J
you've got to have a way to tolerate participants that have failed or are not online at the time you want to sign, and so you have to be able to adjust the group of signers down to some set of signers that still satisfies a threshold. And so, in the design proposed in the draft, we basically have a standard Ed25519 signature, plus a bitmask of the actual participants, and the bitmask is part of the signature.
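The one-bit-per-participant cost described here can be made concrete with a tiny (hypothetical) encoder for the bitmask; the indices and group size are arbitrary:

```python
def participants_mask(present, n_signers: int) -> bytes:
    # One bit per potential signer; bit i set means signer i took part.
    # In the draft's design the mask is covered by the collective
    # signature, so it cannot be altered without invalidating it.
    mask = 0
    for i in present:
        mask |= 1 << i
    return mask.to_bytes((n_signers + 7) // 8, "little")

# Signers 0 and 2 of a 5-member group participated: 5 bits fit in 1 byte.
mask = participants_mask([0, 2], 5)
```

So the space overhead grows linearly in the group size, but at one bit per member rather than one full signature per member.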
J
It's fully checkable, in that any attempt to change the bitmask will invalidate the signature. But we basically need one bit per participant, so it's still linear space, though way, way smaller. And although it's not really the emphasis of the draft, we can make this extremely scalable with pre-structured communication; this is optional, and it's probably not necessary for
J
Most,
you
know
small-scale
applications,
so
we'll
get
into
that
other
use
cases
so
so
cryptocurrency,
the
cryptocurrency
community,
has
been
expressed
quite
a
bit
of
interest
in
this
for
compressing
the
size
of
transactions
by
aggregating.
You
know
a
bunch
of
transaction
signatures
into
into
smaller
space.
I
again,
I
won't
get
into
the
details.
J
Actually, our latest work in this space, just coming out at USENIX Security next month, extends this to use these collective signatures coming out of a blockchain to create a basically cryptographically verifiable and traversable blockchain, a sort of time travel: any two parties that have two reference points anywhere on the blockchain's timeline can prove to each other that any transaction they care about is indeed there, with respect to the other reference point, either forward or backward in time.
J
So for the details, see the paper. I'm just laying out some of the applications we think are interesting, and I'm sure there are others; I would welcome suggestions, so if you know of other applications that I missed, please let us know. Anyway, so again, I won't get into a lot of detail on the draft itself.
J
We're not wedded to any particular format. Now, there are plenty of open questions, so if the group decides that this is a worthwhile direction to try to standardize, there are of course some issues to be discussed and decided. This is certainly an incomplete list, but among the questions that we already know about: one is that there's a trade-off around getting absolutely strict non-malleability of the signatures, which would certainly be desirable for various reasons.
J
Malleable signatures have burned various people in the past, so yes, we would like non-malleability. On the other hand, we can't get strict non-malleability as well as protection against a certain type of internal DoS attack. So, if somebody goes offline at the wrong time during the signing process, do you have to restart from scratch, or can you continue? If you want to be able to continue, you need some limited degree of malleability. So that's a trade-off, potentially, to be discussed.
J
Another important topic is the way to defend against the well-known related-key attacks. So if I just use the simple algorithm I mentioned and take any collection of public keys, then I'm hosed; it's not secure, because an adversary can compute a malicious public key related to the honest nodes' public keys in order to cancel them out. That's not good. The standard way to handle
J
this is just to make sure that everybody who has a public key in one of these groups has proved knowledge of the corresponding private key. This is kind of standard cryptographic practice anyway; if you're using PGP keys to feed this thing, you're all automatically covered. But it's an important caveat. There's another, alternative way of doing this.
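A minimal sketch of the rogue-key cancellation described above, and of why proof of possession blocks it. As an illustrative assumption, an additive group of integers mod a prime stands in for the elliptic-curve group; this is not the draft's actual Ed25519 arithmetic.

```python
# Toy model: public keys are group elements A_i = a_i * G in an additive
# group. Integers mod a prime stand in for curve points.
q = 2**31 - 1  # toy group order, stand-in for the Ed25519 group order

def pubkey(secret):
    return secret % q  # stand-in for scalar multiplication a*G

honest = [pubkey(s) for s in (1234, 5678, 9012)]

# Rogue-key attack: the adversary wants the aggregate to equal a key it
# fully controls, so it publishes a key that "subtracts out" the honest ones.
target = pubkey(4242)                 # key whose secret the adversary knows
rogue = (target - sum(honest)) % q    # published without proving knowledge
aggregate = (sum(honest) + rogue) % q
assert aggregate == target            # honest keys cancelled out

# Defense from the talk: require each participant to prove knowledge of
# the private key (e.g. a self-signature) before the key enters the list.
# The adversary can't do that for `rogue`: it never learns its discrete log.
```

In a real group the discrete log of `rogue` is infeasible to compute, which is exactly why the proof-of-possession requirement stops the attack.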
J
If we expect that list to get very big, and we care about the amount of state that the verifier has to hold in its root of trust, then we have ways of changing that, again at some cost in complexity and such. And there are probably going to be more issues. So I wanted to leave it at that: at the moment, the main priority here is to get feedback.
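One standard way to shrink the verifier's root of trust, of the kind alluded to above, is to commit to the whole key list with a Merkle root so the verifier stores only one hash. The construction below is an illustrative assumption, not something the draft specifies:

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves):
    """Commit to a list of public keys with a single 32-byte root."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

keys = [b"pubkey-%d" % i for i in range(5)]
root = merkle_root(keys)              # the verifier stores only this
assert len(root) == 32
```

Individual keys are then delivered with logarithmic-size membership proofs against the stored root, trading verifier state for proof complexity.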
J
B
M
J
N
Phillip Hallam-Baker. Yeah, I'm very interested in this; I haven't managed to get my head around the crypto yet. What I would like, and what I could really use, is this: at the moment, for encryption, I can split a Diffie-Hellman key into two. I can put a private key in a trusted platform module inside the phone, one that is unique to the device and never leaves it, and then have a second encryption going on at the application level, so that I don't need to trust what the manufacturer put into the device when it was made.
J
N
J
N
J
That's a subtlety, whether you need a different verifier; a subtlety that maybe should be added to the issues list. It could be designed so that the Ed25519 verifier core, the core 64-byte Ed25519 signature, is exactly compatible with classic Ed25519. That would have certain downsides.
J
It would prevent us from messing with what goes into the hash and such; we wouldn't get to do that. It would exclude the possibility of adding the hash of the bitmask in there, for strict malleability resistance and certain things like that. But that is an option that could be considered, and I'm very open to that. Yeah.
N
I mean, if you can do that, and I can do it that way, I will take any of the other compromises. Because at the moment, the only way that I can create a signature like that is basically as soon as the secret that was used to create it is known, and well, then you blow up the whole system. So I'd be very interested if you could. Okay.
J
O
Richard Barnes. Right, it seems like we have a combination here in this draft of some cryptographic constructions and some protocol considerations, and a lot of these issues are kind of at the intersection of those things, and there might be some disjoint pools of expertise in terms of the people we need to review this. Do you think that there is... I thought that's what.
K
J
J
The cryptographic part is basically already done: defined, formalized, and proven in the collection of papers I cited on an earlier slide. We really haven't added anything to that. The only subtlety is the bitmask and how it interacts with the standard cryptographic construction there.
J
It would be worth looking at that a little bit more closely than we have, but I'm pretty sure that it doesn't fundamentally change anything on the crypto side. So this is mostly intended to be about defining a standard, usable format in a concrete rather than abstract form, so that I can actually build a signer and verifier for it, and an actual functional protocol for producing these things.
P
Time. Russ Housley. So I'm very interested in something that can be validated in the same way that Phillip just talked about, but maybe I misunderstood: I thought you said the verifier needed all of the public keys for the pool of potential signers, so that doesn't look like a single public key on the verifier side. Yes.
J
F
J
Careful, right. The need for the list of public keys is partly related to this: if you're trusting this group of public keys, at some point you, or someone you trust, needs to verify that each listed public key also comes with a self-signature, or a proof that the owner of that key really knows the private key corresponding to it. And so, at that point, you or somebody you trust needs the whole list.
K
P
You know, the Boyd split thing, where you have several dealers, and then of course you've got to make sure the dealer forgets. Here you're saying there's a combiner, or there needs to be somebody who, when each of them makes up their key pair, makes sure that none of them are related, or says pick again. So.
K
J
They don't need to be generated specially for this; they can be absolutely bog-standard Ed25519 or Ed448 public/private key pairs that you use all the time for standard RFC 8032 signatures. Once you have formed a list of these public keys, and verified a self-signature of any kind for each of
K
J
them, which can be generated in a totally standard way, then all you do for the actual verification of a collective signature is basically just add together, on the elliptic curve, with curve point addition, the public keys that actually participated in this particular collective signature, according to the bitmask in the signature. Now, the bitmask is only there so that you can tolerate failure. So if you want, say, five of ten nodes, and three of the group aren't currently available,
J
somebody can still form a signature without them and just transparently declare: okay, seven nodes were there, but those three were missing. So the verifier has to take the full list of public keys, subtract out the three that weren't there, and then, after forming the correctly adjusted aggregate public key, it's a completely standard Ed25519 signature verification again. In other words, you first adjust the aggregate public key correctly, and then it's completely bog-standard Ed25519 or Ed448.
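The verification flow just described can be sketched in a few lines. As an illustrative assumption, integers mod a prime stand in for Ed25519 curve points, and the final signature check is elided, since after aggregation it is ordinary RFC 8032 verification:

```python
q = 2**31 - 1  # toy group order; curve points become integers mod q

def aggregate_key(all_pubkeys, bitmask):
    """Sum (toy stand-in for curve point addition) the keys the bitmask flags as present."""
    agg = 0
    for key, present in zip(all_pubkeys, bitmask):
        if present:
            agg = (agg + key) % q
    return agg

pubkeys = [11, 22, 33, 44, 55]   # the verifier's full trusted list
bitmask = [1, 1, 0, 1, 0]        # signers 0, 1 and 3 participated
agg = aggregate_key(pubkeys, bitmask)
assert agg == (11 + 22 + 44) % q
# `agg` would then feed a completely standard Ed25519/Ed448 verify.
```

The point is that the only non-standard step is forming `agg`; everything after that is the verification path implementations already have.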
J
P
P
What's in my mind is trading off some trusted setup, where the pool has to all be there and cooperate to do something; but then everything else, from the signatures that are produced to the public key that needs to be certified, looks just like ones that don't require collaboration. That would be a big gain.
J
So for that purpose, that combination of functionality, you probably really want Shamir threshold signatures, which are very useful as well, and I'd be very open and supportive to standardizing that too, if people like Shamir better than this. So there are some trade-offs.
J
I totally agree, yes, and I'm not sure where. So, just a couple of the relevant trade-offs: Shamir sharing has a much more complicated setup, but then you always get the same signature for any given set of signers. So if you want privacy of the signers, Shamir is
K
J
better; if you want transparency and accountability, then this is better, because with Shamir you never know which particular threshold actually signed, and so if a malicious, colluding threshold signed something bad, they can all deny responsibility: "I didn't sign that." So there's a somewhat subtle but important trade-off there. Okay.
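A minimal sketch of the Shamir sharing mentioned above: secret splitting and Lagrange reconstruction over a small prime field. The parameters are illustrative toys, and real threshold signatures involve considerably more than bare secret sharing:

```python
# 2-of-3 Shamir secret sharing over GF(p), with toy parameters.
p = 2**13 - 1          # small prime field, illustrative only
secret = 1234
a1 = 777               # random coefficient; fixed here for reproducibility
poly = lambda x: (secret + a1 * x) % p

shares = {x: poly(x) for x in (1, 2, 3)}   # one share per participant

def reconstruct(pts):
    """Lagrange interpolation at x = 0 recovers the secret."""
    total = 0
    for xi, yi in pts.items():
        num, den = 1, 1
        for xj in pts:
            if xj != xi:
                num = (num * -xj) % p
                den = (den * (xi - xj)) % p
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

two = {x: shares[x] for x in (1, 3)}       # any 2 of the 3 suffice
assert reconstruct(two) == secret
```

Note the accountability point from the talk: any qualifying subset reconstructs the same value, so the output carries no record of which subset participated.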
B
K
Q
CMS multi-signature? So, so.
J
The simple way to actually run the protocol, if we're scaling to tens or even hundreds of signers and not thousands, is that you can just have the co-signers connect directly to the leader in whatever way you want. You probably want those connections to be secured for DoS-attack resistance and such,
J
but if they aren't secure, it's not going to produce bad signatures; at worst it will just make the protocol fail to produce the signature you wanted. For scalability, once you get into the hundreds, and certainly the thousands range, you start to not want all of the co-signers to have to connect to the leader directly, and then tree structures or other topologies become useful. But I don't expect that's going to be the first use case anyway.
J
Our experiments tried to explore that space, how hard you can push it, and it looks like you can push it pretty far, fairly efficiently. But I don't think you necessarily need to go to the full level of complexity in a lot of applications. Thank you.
J
D
Q
D
B
I'm going to channel Paul Hoffman to the best of my ability; that's quite a tall order, actually. Paul has a draft, draft-hoffman-c2pq, which has been around for a while, and the motivation for this talk is to try to reanimate that draft and get people interested in it. So why should CFRG care? As he says on these slides, there's a lot of good discussion going on about what post-quantum algorithms we should be using and thinking about, but there's not enough discussion going on around how we deploy these algorithms:
B
What is the likely timeline for the arrival of large-scale quantum computers? What will the costs of deploying those algorithms be when we need to switch over? And we know that changing algorithms, especially signing algorithms, is expensive and error-prone; it takes a lot of time as well, and we have lots of experience of that in the IETF. So, coming to his draft: it's manifestly not about post-quantum algorithms themselves. It's about how we handle the timing issues around the transition to post-quantum algorithms.
B
The draft is intended to address a number of different audiences: execs, C-level people I suppose, who want to understand the transition and what needs to happen when we get there; security experts who want more information about quantum computers, their capabilities, how they will scale, what the cost will be, how fast they can break things; and cryptographers and physicists who want something readable that they can point people to. So: a standing document that people can be directed to when they have questions about why we should be worried about quantum computers.
B
Paul has admitted that the draft is quite incomplete. It needs work; there are a lot of sections that are missing material and references, and it's really very much an early draft. There is not enough information there about the wide range of guesses that people have about when this might happen, or whether it will happen at all; and indeed it may be too early to give useful guesses for that kind of thing, but at least we could document that fact and be honest about the uncertainty surrounding the advent of large-scale quantum computers.
B
N
There isn't a lot that I can go on. So, from my point of view, looking at protocols, what I'm interested in is the size of the pieces of data, and rough estimates of how much computing power is needed, and whether it's going to be practical to do something that is post-quantum in public key, or whether we have to back out into doing Kerberos on a large scale, which is certainly possible but might be challenging.
B
B
R
Hi, David McGrew, Cisco. So I think it would not be a good idea if this draft discouraged people from using 256-bit keys. I believe I understand Paul's goal: to make people think about the costs of a transition, not rush into one, and not make decisions too early, and I totally respect that and agree with that. But at the same time, I have many customers who use AES-256 because they can, as a configuration option. They say
R
M
It could be emphasized more strongly.

Hi, Stephen Farrell. I have no real opinion about whether this should be in the research group or somewhere else, but I second what Phill was asking: I think IETF people could do with input about the sizes and performance, as known today, of the possible options. I guess the sizes are probably more important in terms of future protocol changes, because you can maybe assume performance improves, whereas sizes could probably be more precisely specified, or a range given. Now, I'm not sure. I think.
B
We can certainly do that for hash-based signatures; we indeed have two drafts working their way through CFRG, at different stages, where there are very concrete parameters. I think for things like key exchange and public-key encryption, it's still very much a matter of development; that's up in the air.
B
It's happening right now. I think the NIST competition, for which submissions are due November 30th this year, is going to flush out a lot of concrete proposals with parameters, and then, as a group, we might be in a position to get a sense of the kind of ballpark of the numbers we may be looking at, without making concrete decisions about those numbers. So we should be able to get at least the order of magnitude right. Yes.
M
M
S
Hi, I'm Don Piper. I guess, to me, this sort of falls in the realm of fear, uncertainty and doubt. There's a lot of guessing here, and not a lot of hard science under it, and I'm not sure what the point is of blessing a draft that kind of guesses and doesn't suggest any sort of real, concrete way forward.
B
D
Seven, eight hands; now, that's not bad. If people think it's a waste of time and we shouldn't be doing it here, show hands.
F
D
R
So, what's new? Well, we did a security tweak, and that's the major difference between version 7 and version 6. This tweak is motivated by an analysis that Scott did, which is posted up on ePrint, and I'll talk about that; it's the most interesting part of my talk today. We also published... I shouldn't say we: Panos Kampanakis, one of my colleagues, and Scott Fluhrer published a comparison, also on the ePrint archive, which compares this draft,
R
this specification, with XMSS. And Scott also posted on GitHub a full-featured C implementation, and I won't be talking about that much here. But Scott did a good job of considering the different optimization strategies needed, especially for a signer, which is a little non-trivial, so it's nice to see a complete implementation of something.
R
So, as far as security goes, Scott built on Jonathan Katz's previous analysis, which was called "Analysis of a proposed hash-based signature standard", and Scott's is further analysis. Again, this is in the random-oracle model, and it assumes a 128-bit security level: it demonstrates a 128-bit security level for the SHA-256-based hash-based signature scheme. The important difference is described by the following: SHA-256 is a Merkle-Damgård-style hash function, and there are different security assumptions that you can make about a hash function in this model.
R
So, just as a refresher: the Merkle-Damgård hash breaks up the message, X, the vector at the top, into blocks, feeds these as inputs into the compression function, denoted F here, with chaining of the compression function, and then there's the padding that happens at the end; then you might truncate the output. So the different security assumptions you could make are these: you could assume the entire hash function is a random oracle.
R
This is what Leighton and Micali did in the 90s, and what Jonathan Katz's analysis of the more complex and more complete scheme recently did: you assume the entire hash function is a random oracle. Now, this is good in a lot of ways, but we also know that there are ways in which hash functions are not random oracles, because of things like length-extension attacks.
R
Now, the previous draft was not vulnerable to a length-extension attack, but we did analyze it, and I should say Scott is the one who analyzed it, in this other model, assuming that the compression function is a random oracle. This assumption is much closer to actual cryptanalysis: if you look at cryptanalysis of reduced-round SHA-256, this is much closer to that assumption, and it was used in analyzing version six of the draft.
R
In the comparison publication, the most interesting thing is probably the performance of the two different schemes, and essentially the LMS scheme, the Leighton-Micali scheme, is over three times faster. These performance figures come from the published XMSS code being compared to the published implementation that Scott did of the LMS-based scheme, and actually he got something like four to five times faster, nearly; the different operations vary, and the different parameter sizes vary.
R
In theory, you would expect something like three times faster, so I think Scott spent more time optimizing his code and his implementation strategy; but at any rate, it's a very appealing performance number. So, for next steps, I'd like to invite your input on draft seven. We especially encourage you to look at the security analysis, and please also do look at the comparison document if you're interested in
R
understanding the differences, either in security or performance. And if I could: I think a number of people have given us private feedback on the draft, so I encourage everybody to copy the CFRG list, because we would like to ask for this document, which is an official RG document, to go into a last-call process so it can proceed forward, and then you never have to hear me talk about hash-based signatures again. You might have to hear Scott talk about it again.
R
R
Technology-wise, we did change some of the ordering of the elements and so forth, and we've moved to a selection of functions to make it post-quantum secure. But our goal was to minimize the amount of innovation that we were doing, so that implementers could claim the benefit of implementing an expired patent, or a technique where most of the description comes from an expired patent. So we think this is interesting, and we welcome your feedback. Thanks for your attention; I'll be happy to take any questions.
L
This is Jeffrey Gaskin. As mentioned in a couple of the comments on the previous paper, it would be nice to have the sizes of signatures and public keys and private keys documented in the specification of the algorithms. It's implicit in there, but it'd be nice to see it explicitly stated.
B
I have a quick question: could you go back to the slide with the performance figures? Can you explain what's happening in the second half of the table, where you seem to have, if I read it properly, a thousand-fold improvement for public-key generation? Or is that my misunderstanding? I'm sorry, say again? The second half of the table: PK gen for XMSS is something like 3,000 seconds, and you have four point six four.
R
B
M
Well, yeah. So I think, in addition to size information, and again it might not have to be in an RFC anywhere, it would also be useful to know the different trade-offs and implementation choices that people have faced. Whether that would be just a presentation at CFRG at some point, maybe that might be enough; but there seem to be a lot of them. At least from having read the XMSS and related drafts, there seem to be a lot of different possible optimizations you could choose to do, I assume.
R
So the published code is pretty decent in terms of that, and perhaps Scott could describe his implementation in the future. I think that's a very long topic, and if it's of interest to people, then absolutely, yeah, it would make sense to document some of that better.
M
B
Q
T
T
T
The hash function's number of rounds was reduced, to about half, to make it faster, and they believe it is still very secure. It's a function built like the XOF, the variable-output function SHAKE128, and it's not just a plain hash function but a segmented hash function, a parallel version, which looks very good today with modern machines, where you have multiple cores and multiple processors; the parallelism is really good.
T
It scales with the size of the message, and they chose only one segmentation size, which is good for a hash function that works everywhere: you don't have to choose different parameters and then end up with different functions. So this is a very nice function. They got a security proof in 2008, and there are further extensions and other papers proving its security in different modes; the references are listed right there. And the nice thing about the security analysis for a sponge function built on the Keccak permutation is that
T
you just increase or reduce rounds to trade off between security and performance, and there have been no tweaks at all since it was designed almost 10 years ago. The purpose of doing that was to make the analysis easier, so that people could understand the security better, and also so you didn't have to do any tweaks that change everything, which happens to a lot of different ciphers: when you do the tweaks, then your reference changes. But this reference didn't change, except for the round constants.
T
So this really excellent design helped it to get a lot of independent analysis, and to make people feel more comfortable and more confident about its security, as opposed to other designs, even including AES in the past, where if you change something, it becomes a completely different cipher.
T
Even within the claimed security level, the attack work was really impressive, being able to distinguish and predict the next output of the function; so it was a really good analysis paper, excellent. As for the performance numbers, for short messages and long messages: for long messages, as I said, you have excellent performance, around one point something cycles per byte, and it can even go down to 0.7 cycles per byte, give or take a bit.
T
So it's very impressive on modern machines. And again, like SHAKE128, the function can be used as a normal hash function: it is a parallel function, but it is a normal hash function. You can replace any cryptographic hash function with this function, and if you do, you will gain a lot of performance.
T
So, as you all know, Keccak was a public design; it has a lot of analysis and a security proof in the original submission, and it was the result of the open international competition that was run a few years back. It has been very actively analyzed by the crypto community, and until now it has been very actively analyzed by different cryptographers around the world, and it has proven very secure. The idea here is to speed it up without wasting any of that cryptanalysis.
T
Basically, we are just reducing rounds now, and any analysis you had in the past remains valid, because we did not change the permutation; it's just different constants for the rounds. That was it; everything else is the same. And it's scalable, too: if your machine has more cores, more processors, then hashing will be a lot faster, and it's not limited to anything. With that, I would be happy to try to answer your questions.
B
U
T
T
U
T
There is a neat customization string; I believe it is optional. I'm not an author, but I believe it is optional. So, basically, you take a message and you hash it out; but SHAKE is an XOF function, so you need to specify the length you want to use for your application.
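For instance, Python's standard library exposes SHAKE128 as an XOF where the caller chooses the output length. KangarooTwelve itself is not in the standard library; SHAKE128 here just illustrates the XOF interface being described:

```python
import hashlib

xof = hashlib.shake_128(b"some message")
short = xof.hexdigest(16)    # caller picks the length: 16 bytes of output
longer = xof.hexdigest(64)   # same input, 64 bytes of output

# For an XOF, a shorter output is a prefix of a longer one.
assert longer.startswith(short)
assert len(short) == 32 and len(longer) == 128   # hex chars = 2 * bytes
```

This is why the transcript stresses specifying the length per application: unlike a fixed-output hash, the XOF's output size is a parameter of the call.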
T
T
T
And we are in consideration of adopting the two new curves; we are considering doing that. My personal expectation is that there's a very good chance the curves will probably be endorsed or allowed by NIST, but that's my personal expectation. The only thing you can say now is that NIST is considering adopting those curves; so that bears on how things are now.
T
Thank you. And the thing is, because of how the KangarooTwelve function is built, the security margin is still comparable to the security margin of SHA-3, or even slightly better. So the internal security is solid. In terms of performance, it is two times faster than SHAKE128 for short messages, and for long messages it's a lot faster, and one size fits all.
B
Fantastic. Let's see if we can get the technology to work for us; he needs to step on up, I think. I did try to contact him by email to see whether he could actually hear and participate. So, Dan, if you can hear us, it's your turn to do what you need to do remotely and take the mic; I'll get your slides up.
V
V
V
B
B
V
V
V
V
V
There are basically these features, and they have these benefits. The main feature is that it only requires six symbols for the Galois field size; it's prime, and that's good because it avoids trapdoors, backdoors, and other security concerns; and it's not too big and not too small, and some other well-known things. So let's go to the next slide.
V
V
This slide obviously has too much information to present all of it, but basically it's saying there are many other possible fields, and some of them are better in some aspects, but all of them are worse in at least some minor aspects; so none of them is better in all respects than this field. So, next slide.
V
V
That heuristic is still somewhat reasonable. So the two points to highlight here: when I used this heuristic for the field, I only came up with two secure and fast field sizes; that's in the bullet under the term "lucky". And I only realized as I was preparing this talk that the one I prefer of those two shares the same digits as the birth year of ECC, so I thought that was a happy coincidence. So, next slide.
V
So for this proposal, I'm proposing to use a very special curve, because basically it matches the proposal to use a special field. It's very compact, and it has many of the desirable features that we want; it's listed here, and you can read it offline at your convenience. So, next slide.
V
V
K
V
V
So this slide is reminding us that Miller invented ECC in 1985, co-invented it with Koblitz, and he used very similar equations to the ones that I'm proposing to use. And he raised the issue: he asked whether it was prudent to use such special equations. And indeed, yes, he was quite correct about that: some of the special equations were attacked, the famous MOV attack, but in other cases, the cases that I'm proposing and others, there are no published attacks.
V
B
B
V
Okay, so this is a slide reviewing Gordon's attack and the current countermeasures, and the main point is that the risk of a backdoor in Diffie-Hellman is actually quite real.
V
V
V
So there are two main benefits of using this prime for Diffie-Hellman. One is that it has a more compact description; that's one, perhaps informal but quantifiable, way of measuring the possibility of there being a trapdoor. And the second security benefit is something quite unrelated: it has to do with den Boer's proof relating the difficulty of the Diffie-Hellman problem to the discrete logarithm problem, and if you use this prime, then what happens is you basically nearly optimize that proof.
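As context for Diffie-Hellman primes like those in RFC 7919, here is a quick sketch of checking the "safe prime" shape p = 2q + 1 that such groups use. The numbers are toys, and whether a given prime helps or hurts the den Boer reduction depends on the factorization of p - 1, which this check does not analyze:

```python
def is_prime(n: int) -> bool:
    """Trial division: fine for the toy sizes used here."""
    if n < 2:
        return False
    f = 2
    while f * f <= n:
        if n % f == 0:
            return False
        f += 1
    return True

def is_safe_prime(p: int) -> bool:
    # p = 2q + 1 with q also prime: the shape RFC 7919 groups use
    return is_prime(p) and is_prime((p - 1) // 2)

assert is_safe_prime(23)        # 23 = 2*11 + 1, and 11 is prime
assert not is_safe_prime(17)    # 17 = 2*8 + 1, and 8 is not prime
```

Real parameter sizes would of course need a probabilistic primality test such as Miller-Rabin rather than trial division.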
V
V
V
And this slide is just these highly speculative heuristics, which are nice, but they shouldn't figure too much into a very serious analysis.
B
K
V
V
V
And the improvements here: well, the first improvement is that, by being able to represent this prime in fewer symbols, there's less room, less possibility, for a trapdoor; admittedly, that's an informal argument. The second improvement has to do with the den Boer proof. If you took RFC 7919, which I haven't done yet, and you tried to reconstruct the den Boer proof, then, as far as I know, the expected result would be that the proof would break down.
B
B
B
I think we're at the end of our formal agenda now. So if there is any other business that anybody would like to raise, bring to our attention, or tell us about, please come grab the mic and tell us what's on your mind. Are there things that CFRG should be doing that we're not doing? Are there things that we are doing that we shouldn't be doing?
B
We could go on... okay, finishing on time. Okay, in that case, thank you everyone for your inputs, thanks everyone for the great presentations and for getting slides to us on time. It's been really helpful, and we'll see you in Singapore, we hope. Thanks a lot; have a great time at the social this evening.