From YouTube: IETF101-CFRG-20180319-1550
Description: CFRG meeting session at IETF 101, 2018/03/19 15:50
https://datatracker.ietf.org/meeting/101/proceedings/
A
Yes, actually, yes.
D
All right, thank you. My name is Karthikeyan Bhargavan, and I'm going to be talking about a new initiative called hacspec, which stands for high-assurance cryptographic specifications, and for which I need your help. So I'm going to describe a little bit of this. This is an initiative that came out of a group of researchers who met on the sidelines of Real World Crypto this year; this is a problem that we identified, and we have a proposed solution for which we want your help.
D
So most of us will agree that implementing crypto software, which means implementations of crypto primitives and crypto constructions, correctly is hard. Okay. If you go look at the last two or three years of OpenSSL CVEs, you'll see about half of them are memory-safety bugs like buffer overflows, about a quarter are side-channel bugs like branching on secrets, and about a quarter are functional-correctness bugs like carry-propagation bugs. The thing about these kinds of bugs is that they're low-probability, which means if you test and test and test the code...
D
...it's still unlikely that you'll hit the case where the bug is triggered, but an attacker who knows that the bug exists might be able to drive the implementation towards it. So testing doesn't work. What works better is formal verification, but formal verification requires a lot of effort and expertise. So the group came up with this idea that maybe we can try to lower the effort required for verifying modern crypto primitives.
D
The good news is that a whole bunch of research groups are right now able to use existing research tools to verify fairly sophisticated cryptographic implementations. There are a bunch of tools that will verify C implementations of your primitives, including all the ones that you can imagine; there are also a few tools that will verify optimized assembly implementations. And these tools have now reached a level of maturity that they're being included in production software like BoringSSL, NSS, s2n and so on, which are actually using formal verification techniques.
D
The properties could be functional correctness, memory safety, some specific kinds of side-channel resistance that you need, or even your cryptographic security goals; and then you write your implementation and prove that this implementation meets the spec that you set out first. Okay, so both of these steps actually require work. If you go look at the various tools that are available right now, most of us agree on what the implementation language should be.
D
You usually see C or assembly. But there is a wide variety of specification languages: some people write specifications in general-purpose logical languages like Coq and F*; other people write them in domain-specific languages like EasyCrypt and Cryptol. The reason we write them in these languages is that these specification languages are particularly geared towards the specific verification method that these tools are using.
D
However, what it means is that the first step any of us has to do when verifying code is to take the beautiful pseudocode or text written in your RFCs and encode it in the specific specification language that you have. This step is quite error-prone and quite painful, but it is actually needed, because this is what we need in order to get the verification tool to work. On the other hand, the negative is that the resulting spec actually looks quite different from the RFC, so it's difficult to understand what exactly we are proving, right?
D
So this is the kind of problem that came up, and a whole bunch of us met with crypto developers in a series of workshops called HACS, which is held on the side of Real World Crypto, and the developers essentially said: well, you guys have lots and lots of proofs, but we don't understand what you're proving, because the proofs are relying on specifications written in some obscure language that we don't understand and we don't have the time to learn.
D
So this is where the idea for hacspec came from: the idea that we need a language which has a well-understood syntax and semantics, that both developers and crypto designers and people who are writing specs, like the CFRG and so on, can understand, read, and write, and which can also be used as a basis for formal verification.
D
So the design goals for hacspec, which is a new high-assurance crypto specification language, are the following. We want the specs that we write to be really succinct and readable, so that the people in this room can actually read and write these specs, and so that they can be integrated as pseudocode into RFCs. We want the specs to be executable, so they can actually be treated as reference implementations and will pass test vectors and so on. And we want them to have a clean syntax and a compact formal semantics...
D
...so we can all understand them and they can be used as a basis for formal verification by a variety of tools. Okay, so that's the goal we're going for. We have a first design, a very preliminary design, and we're all looking for feedback on this. We went around and looked at various RFCs that have been standardized recently, and we found that most symmetric-crypto RFCs seem to be using pseudocode plus C reference implementations, but anything that uses field arithmetic seems to be using Python reference implementations.
D
So we said: okay, maybe Python is the language to start with. It seems to be used by quite a few developers and designers for prototyping and testing. So we picked a subset of Python 3.6, extended with type annotations. What is the subset? Well, it's a really, really minimal subset: we have machine integers, we have big numbers, and we have arrays, and not much else. Okay, we might add things as we need them, but we really want to keep this minimal, because we want this to be a really compact and minimal domain-specific language.
D
The types are useful in two ways. They allow us to catch very simple and silly errors, like writing into an array out of bounds, or forgetting to add one or subtract one, and so on. But they also help us to do very precise translations into various other formal languages, which are usually typed. In particular, we have compilers right now to F*, where one is already working; one to EasyCrypt that is on the way; and other compilers that we are developing for languages like Cryptol.
D
So if you write one spec in hacspec, which looks like Python with a few more type annotations, you will be able to get, for free, formal specifications in all of these different languages. We are also building a library of common constructions and specifications. We have written quite a few examples, but we need help to write more, because we want to basically exercise this language quite a bit more. What does it look like? Well, here's an example.
D
This is the arithmetic of Poly1305. It's a very small fragment of the spec, and as you can expect, it's Python; there's nothing special happening here. There's a prime, and there's addition, subtraction, and multiplication on this field, which is modulo the prime. But the interesting thing there is that we're defining a type called felem, which is a refinement type: it says this is the type of field elements that are between zero and the prime minus one.
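The refinement-typed field arithmetic described here can be sketched in plain Python. This is an illustrative approximation, not the actual hacspec library API: the names (`felem`, `fadd`, `fmul`) are ours, and the range constraint that a verifier would check statically is enforced here with a runtime assertion.

```python
# Illustrative sketch of a Poly1305-style field-element spec in plain Python.
# In hacspec the range constraint is a refinement type discharged by a
# verification tool; here a runtime assertion stands in for it.

p130m5 = (2 ** 130) - 5  # the Poly1305 prime

def felem(x: int) -> int:
    """A field element: an integer in the range [0, p130m5 - 1]."""
    assert 0 <= x < p130m5, "value outside the field"
    return x

def fadd(a: int, b: int) -> int:
    return felem((a + b) % p130m5)

def fmul(a: int, b: int) -> int:
    return felem((a * b) % p130m5)
```

Every operation returns through `felem`, so any value that escapes the field range fails immediately, which is the runtime analog of the implicit proof obligation described next.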
D
This means that everywhere in the spec where we use the type felem, we will be implicitly asking the verification tool to prove that at this point the value we are dealing with is in the field, that is, between zero and p minus one. And we can put many other such constraints. Here's a fragment of the ChaCha20 spec that we wrote. Again, it looks just like Python; it's nothing very surprising, except it has a few more type annotations.
D
In particular, we are asking that all the array indexes that you use are between 0 and 15, which means that you will never accidentally access the state array outside these bounds. Okay, again, this is just a sanity check, but it's useful to put this in, and we need it for our formal models later on anyway.
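The index constraint just described can be illustrated in plain Python. This is a sketch, not hacspec itself: the checked `index` helper is our stand-in for the static type, and the quarter-round follows RFC 7539/8439.

```python
# Sketch of the ChaCha20 state-index constraint: the 16-word state may only
# be accessed with indices 0..15. hacspec would express this as a type; here
# a checked accessor approximates it. Quarter-round as in RFC 7539/8439.

def index(i: int) -> int:
    assert 0 <= i <= 15, "state index out of bounds"
    return i

def rotl32(x: int, n: int) -> int:
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

def quarter_round(state: list, a: int, b: int, c: int, d: int) -> None:
    a, b, c, d = index(a), index(b), index(c), index(d)
    state[a] = (state[a] + state[b]) & 0xFFFFFFFF; state[d] = rotl32(state[d] ^ state[a], 16)
    state[c] = (state[c] + state[d]) & 0xFFFFFFFF; state[b] = rotl32(state[b] ^ state[c], 12)
    state[a] = (state[a] + state[b]) & 0xFFFFFFFF; state[d] = rotl32(state[d] ^ state[a], 8)
    state[c] = (state[c] + state[d]) & 0xFFFFFFFF; state[b] = rotl32(state[b] ^ state[c], 7)
```

Calling `quarter_round(state, 0, 1, 2, 16)` would trip the assertion, which is exactly the class of out-of-bounds mistake the type annotation rules out statically.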
So once you write specs in this language, here is what you get for free.
D
You get compilation into models like this one, which is a model in F* of ChaCha20, and then you'll be able to prove that C code that looks like this implements the spec that we started off with. So you just write the spec, and then there will be various tools that will be able to prove that assembly, and C, and Java, and LLVM, or whatever, meets this high-level spec. And yes, we can verify code that uses highly optimized instructions, including vectorization and so on.
D
There are tools that will do that for you. So, coming to the end of my talk: why I am here is that we need some help. We want your help in promoting high-assurance implementations for the standards that are coming out of this body. In particular, if any of you is writing a new crypto primitive, or has just recently standardized something, maybe you want to write your pseudocode in hacspec.
D
Maybe, in addition to your pseudocode, you want to write a reference implementation in hacspec. So come look at our specs; read them, comment on them, add more specs if you can. Give us feedback on the language: what are the kinds of features that you might need to write specs for your new crypto primitive in this language? We are happy to evolve our design depending on what people say. And help us test out our various compilers to various languages, and our various specs.
D
E
Hello, I'm Daniel Kahn Gillmor. Thank you for this; this is really great and I'm pleased to see it. One of the things that maybe you don't have an example of in your slides is how to express the formal cryptographic properties that you want from the code, right? The examples that you showed made it really clear how to express numeric constraints and things like that. Can you just explain a little bit about how you'd use hacspec to express those things?
D
So right now, the different tools have different ways of writing crypto security guarantees, and we haven't really resolved how we would do that in a language like hacspec. A classic way to do it is to define two specs: one which represents the ideal functionality and one which represents the concrete functionality, and then show that the concrete functionality achieves the ideal functionality we want. The ideal functionality would not actually be executable.
D
So, for example, for our ChaCha20 spec, and for any symmetric spec like that, what we do is: because we have a type system, we can restrict the uses of all the machine integers in this code to be secret-independent, in the sense that you cannot, for example, compare two machine integers. You can only compare array indices and so on, but you cannot compare the data represented by a uint8: you cannot compare it, you cannot branch on it...
D
...and you cannot use it as an index into an array. You can write these kinds of restrictions, but it'll make things like AES with an S-box impossible to write in the language, so you have to relax it in some places. So you might say: okay, for the S-box I'll relax it; and for the bignum arithmetic, because you're using bignums, there is no side-channel guarantee there. But once you exit that world and you come down into the world where you actually have concrete bytes and concrete machine integers...
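The secret-independence discipline described here can be mimicked in plain Python with a wrapper class that permits arithmetic but refuses comparison, branching, and indexing. This is an illustrative sketch of the idea, not the hacspec type system, which enforces it statically; `declassify` is our name for the explicit escape hatch (e.g. for a deliberately relaxed S-box lookup).

```python
# A Secret integer supports arithmetic but raises on any operation that
# would leak it through control flow or memory access: comparison,
# branching, or use as an array index.

class Secret:
    def __init__(self, v: int):
        self._v = v
    def __add__(self, other):
        return Secret(self._v + (other._v if isinstance(other, Secret) else other))
    def __xor__(self, other):
        return Secret(self._v ^ (other._v if isinstance(other, Secret) else other))
    def __eq__(self, other):
        raise TypeError("cannot compare secret values")
    def __bool__(self):
        raise TypeError("cannot branch on secret values")
    def __index__(self):
        raise TypeError("cannot index with secret values")
    def declassify(self) -> int:
        """Explicit escape hatch: the only way back to a public int."""
        return self._v
```

A runtime check like this catches a secret-dependent branch only when it executes; the point of doing it in the type system is that the mistake is rejected before the code ever runs.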
G
Phillip Hallam-Baker. Yeah, I like this a lot. You know, we've used Python in a couple of specs, and I've spent a lot of time trying to convert them into C# or C. It'd be nice if you had compilers to executable languages; if you can get C and C# and maybe Java, that basically covers all the things you need to cover, because everything else can hook into those. -- That's a good point, thanks.
J
Alright, hello everyone. My name is Chris Wood, from Apple, here to talk about some work we're doing with Cas Cremers, Luke Garratt, Stanislav -- I'm not going to try to pronounce your last name, so I apologize -- and Nick from Cloudflare. This is something we talked about at secdispatch at the last IETF, and we were encouraged to come over here to present it to you. So, without further ado.
J
The first one being the Debian bug, where a developer tried to silence some warnings that were brought up by Valgrind and mistakenly removed a critical seeding step in the random number generation, which basically, you know, hosed everyone and made the output sort of predictable, which is not so great. And then there's the Dual EC bug, or design flaw, or whatever you want to call it, which could have been actively exploited by a TLS server.
J
Also, if you have sort of a, you know, systemic kind of implementation flaw or deployment flaw in some pseudorandom number generator, this technique ensures that the failures themselves are kind of localized to each individual secret-key instance. So, for example, the randomness that you might get from one particular server would be different from the randomness that you would get from a different server, even if they happen to be running the exact same RNG with the same state. So not everyone is hosed, just a small fraction.
J
However, in today's deployments, private keys are not readily accessible. You know, if you're a server, you typically store them in some HSM, for good reason, and if you're a client you might keep them in an enclave; and depending on what APIs you're using to actually interface with the private keys, they might not be readily available to you. The keys themselves might be of varying types -- you could have RSA or elliptic curve or whatever -- meaning that the output size of whatever's fed into this mixing step varies.
J
So you would sign some fixed tag, tag1, with your secret key -- which you can ask your HSM, or whatever API happens to manage your private key access for you, to do -- and hash the output. Then, any time you wanted to extract some randomness, you would actually call out to the PRNG, get the randomness, append it to the hash of the signature, extract a key from that in the KDF step, and then expand, using the PRF...
J
...as much randomness as you need. I guess we use KDF and PRF notation here, but really it's just an extract-and-expand kind of design; you could probably swap in whatever expander you want, of course subject to analysis and everything, to be sure that's safe. So, some details about the particular parameters. Tag1 is intended to be a fixed string that's used across, you know, a particular deployment of this technique.
J
So a server that starts up and starts, you know, pumping out randomness would only compute one signature with the secret key over the fixed tag, and it can just cache that value, or cache the hash of that value, and then never do anything with it ever again, except append it to whatever randomness is generated. Tag2 is a dynamic string: it changes on every single invocation, so that you don't have colliding randomness outputs; that's critically important and fairly simple to implement as well. One comment: the signature absolutely must not be exposed.
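The construction described above (sign a fixed tag once, hash and cache the signature, then mix it into an extract-then-expand step on every call) can be sketched as follows. This is a minimal illustration, not the draft's normative algorithm: it assumes HMAC-SHA256 for both HKDF steps, and `sig_over_tag1` stands in for the cached HSM signature.

```python
# Minimal sketch of the randomness wrapper: mix the hash of a one-time
# signature over tag1 into every extract/expand call, with a per-call tag2.
import hashlib, hmac, os

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    out, t, i = b"", b"", 1
    while len(out) < length:
        t = hmac.new(prk, t + info + bytes([i]), hashlib.sha256).digest()
        out += t
        i += 1
    return out[:length]

def wrapped_randomness(sig_over_tag1: bytes, tag2: bytes, length: int) -> bytes:
    fresh = os.urandom(32)                           # output of the local CSPRNG
    cached = hashlib.sha256(sig_over_tag1).digest()  # computed once, then cached
    prk = hkdf_extract(cached, fresh)                # extract step
    return hkdf_expand(prk, tag2, length)            # expand with per-call tag2
```

Even if `os.urandom` were broken or seeded identically on two servers, their outputs would still differ as long as their signing keys differ, which is the localization property described earlier.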
J
There was, you know, a flurry of comments on Twitter, the first one being: why bother with this at all? You know, RNGs are very easy to get right. I'd just point back to the previous slide: yes, in principle they should be easy to get right; however, things do go wrong, and this gives us an extra insurance policy to kind of protect against those particular instances.
J
The second major one is: why would you ever use your private key for something that's, you know, not its intended use? My response to that is simply that here it's not an unintended use of the private key: you're still computing a signature with a signing key. It's just not used in the same way that you would use it in the protocol originally; so, for example, you're not signing the key share as in TLS 1.3 or whatever. And I'm sure other people will come up with other criticisms.
J
I'd be very happy to hear them, here or on the list, however you like. So, some open issues in the draft right now. The randomness wrapper as presented is not an easy drop-in replacement for something like getrandom, where you just expand however much randomness you want, so we're looking at various other extraction alternatives to make it so that it's very easy to do.
J
We have a very trivial implementation of this technique; if you're interested, go check it out. And just a comment that we could probably experiment with this technique in popular TLS libraries like OpenSSL and NSS, among others, that have user-space PRNG implementations that you could easily amend or wrap to make this happen. So that's it, a very simple idea. I'd love to hear any comments and criticisms you have.
L
That leads me to the conclusion that perhaps this leads to a signing oracle, and I'm not sure about that. So perhaps -- I was thinking this would be a fixed tag based on, you know: this is, say, for HTTP server authentication, so I just pick a string that is constant across that particular application, and then I deal with the problem of making tag2 unique across all of the uses of the same key, yeah.
H
The short answer is: as many as you like, if you have a secure PRF and you model the hash function as a random oracle -- I mean, there's really no bound. -- Okay, but you don't have forward security, though; I mean, once X is compromised. -- Yes, but we just haven't specified things like the tightness of the bound. Thank you.
H
A couple of questions too. You used Dual EC as part of your motivation for this, and I'm not sure that motivation really works, because it's known by now that a backdoored generator of that kind is actually equivalent to public-key encryption, which means you can spot it in source code. If you have access to the source code, it stands out a mile, because it's basically a public-key encryption scheme.
H
On the other hand, if you don't have access to the source code, you can't see it and you can't detect it. -- Okay, fair point; so that might be something you want to look at in, you know, section zero of your draft or something, for the motivation. -- The other question I had is: going back to the equation, suppose we use this process to generate randomness for a signature scheme which is already the signature scheme being used to generate the randomness. What can you say about security?
N
Hello, Smyshlyaev, CryptoPro. One comment for Kenny's question: yes, this is a very important question. That's why, in the new version of the draft, at least a SHOULD should appear about deterministic signature algorithms, and I think we should discuss whether this SHOULD can be switched to a MUST in the future.
N
Usage of a non-deterministic signature algorithm can be allowed only for this exact usage and not for any other purpose. And about Martin's question about a signing oracle: of course we should continue improving the draft regarding the forms of tag1 and tag2, the possible forms such that no obvious attacks -- I mean, no oracle -- can be possible. So these two concerns must be taken into account while we improve our draft. Thank you very much. Questions?
H
Excellent, thank you. I think you were sort of getting to this point, but thank you, Martin, for accelerating the process. So maybe we can just take a hum in the room: are people in favor of the research group adopting this as a research group draft, with the intention of it being a starting point? Potentially other techniques will come forward, and other people will want to get involved, and that's how it always is with an internet draft in a research group, yep.
H
Okay, thank you. And if you are against, you know... yeah.
J
Alright, so the next one is about hashing to elliptic curves. This is work that Nick and I put together. There has been some work, especially in relation to the VRF draft that was recently adopted, to, you know, nail down exactly how you might want to hash arbitrary strings to elliptic curves, and there are a variety of different ways to go about doing this.
J
If you just look in the literature -- so we wanted to write down what we thought were reasonable mechanisms for doing so, and exactly how you do them, and hopefully, depending on how the hacspec stuff goes, maybe use that as kind of a vehicle to drive that particular work forward as well; so, Karthik, maybe we can sync up later and talk about it. So, for some background: hashing to curves is not uncommon. It's used in a wide variety of protocols.
J
PAKEs use it quite often, to hash, for example, the private key or the secret password onto the curve itself. BLS signatures use it quite frequently as well. As I mentioned, the VRF draft has a placeholder for the elliptic-curve variant of the VRF that uses this particular technique, and the Privacy Pass work from Cloudflare and others also has this baked into one step of the protocol.
J
So the common approach to going about doing this -- the one that's written down in the VRF draft, and I think the one that's implemented in Privacy Pass -- is the try-and-increment approach, where you simply iterate until you have something on the curve. So you have some string; you hash it up together with some counter, and check to see if the output can be decoded to a point on the curve; if so, return it, otherwise keep going indefinitely.
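The try-and-increment loop just described can be sketched over a toy short-Weierstrass curve. The parameters are illustrative, not a standardized curve; and note that the loop's iteration count depends on the input, which is exactly the non-constant-time behavior discussed next.

```python
# Try-and-increment over the toy curve y^2 = x^3 + 7 mod P. The counter is
# appended to the message, hashed, and reduced to an x-coordinate until that
# x lies on the curve. P % 4 == 3, so a square root is a single pow().
import hashlib

P, A, B = 10007, 0, 7   # illustrative parameters, not a standardized curve

def try_and_increment(msg: bytes):
    ctr = 0
    while True:
        h = hashlib.sha256(msg + ctr.to_bytes(4, "big")).digest()
        x = int.from_bytes(h, "big") % P
        rhs = (x * x * x + A * x + B) % P
        y = pow(rhs, (P + 1) // 4, P)   # candidate square root of rhs
        if (y * y) % P == rhs:          # rhs was a square: x is on the curve
            return (x, y)
        ctr += 1                        # otherwise, try the next counter
```

Roughly half of all x-coordinates lie on the curve, so the loop terminates quickly on average, but how many iterations it takes leaks information about the input.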
J
The problem with this particular approach is that it's obviously not constant-time: the number of iterations that you do will distinctly reveal something about the input. That's not always a problem, but if we can make it constant-time, we might as well. So some of the requirements we set out for this particular draft are: number one, we want it to be constant-time; number two...
J
...Privacy Pass, for example, requires that the mapping not be invertible, so we want to call out -- and it's not yet written down in the draft -- which of these algorithms are invertible and which are not; but it's not a requirement for the ones that we write down right now. So the methods that we chose, and we started with four, are Icart, SWU (I forget the full names), the simplified SWU, and Elligator 2, and each of them has a particular requirement on the curve itself.
J
So here's what we're doing in the draft. We focus on one particular interface for this function: simply a hash function, which I've shortened to H2C here, which takes some input alpha, which is at least nonzero, and maps it to some point on the curve. So alpha can be arbitrary input. We use q to denote the prime order of the base field; u is a point of order 2 -- that's specifically in the case of Elligator 2, which is the one that has this lovely little criterion.
J
f(x) is the curve equation, and then there's a hash that we use to map a string into the prime-order base field, which is distinct from hashing to the curve. So the Icart method is fairly simple: you can, you know, write some Sage code fairly quickly to compute this -- in fact, one of the appendices in the draft has this. It's some modest number of exponentiations, some scalar multiplications, and additions and subtractions; the exact cost we still need to do in the draft.
J
There's a big to-do at the end, just to compare the computational complexity -- you know, the arithmetic complexity of each of the algorithms -- but you can just take a glance at it and see what it's like. Elligator is sort of similar, but a little bit simpler: compute this value d, compute the Legendre symbol, and then, depending on whether the output of that is negative one or not, choose one value or the other. So there's some constant-time business you need to worry about when implementing this particular technique.
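The Elligator-style step just described (compute a value, take its Legendre symbol, then pick one of two candidates) can be sketched like this. The modulus is illustrative, and the branchless select only illustrates the shape of the constant-time concern: Python big integers are not themselves constant-time.

```python
# Legendre symbol via Euler's criterion, plus a branchless two-way select of
# the kind constant-time implementations use instead of an if/else on the
# secret-dependent symbol. P is an illustrative prime, not a curve field.

P = 10007  # illustrative odd prime modulus

def legendre(d: int) -> int:
    """Euler's criterion: 1 if d is a nonzero square mod P, P - 1 if not."""
    return pow(d, (P - 1) // 2, P)

def ct_select(cond_is_one: int, v1: int, v2: int) -> int:
    """Return v1 when cond_is_one == 1, else v2, without a data branch."""
    mask = -(cond_is_one & 1)          # all-ones mask when cond_is_one is odd
    return (v1 & mask) | (v2 & ~mask)
```

A naive `if legendre(d) == 1:` branch is exactly what leaks through timing; the masked select computes both candidates and merges them unconditionally.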
J
So our current recommendations, based on the widely used curves in IETF protocols, are: for P-256, stick with simplified SWU; for P-384, use Icart; and for 25519 or 448, use Elligator 2 -- simply because that particular algorithm was sort of tailored to that, you know, Bernstein-like world, and it seems to work. I say "current" because those are free to change depending on what the actual cost complexity turns out to be, and on whether a new algorithm comes out. So, of course, I'm happy to discuss these particular recommendations going forward. Like I said, there's a lot of open tasks to do.
J
We just wanted to get something written down, something that people could talk about. We need to complete the cost analysis; there's a huge empty section on the SWU algorithm, particularly the details of an implementation, which we'll get to eventually -- we simply put that draft up mighty quick, so next time around we'll get to it.
J
Sharon Goldberg, who we collaborated with on this particular document, pointed out that it would be useful if there were security reductions in this particular work where possible; so, of course, that's something we should consider and integrate. The interface details are sort of skimmed over, in particular how you convert, you know, octet strings to integers and points. So we just need to look at the VRF draft, where she and her co-authors did a great job laying out the foundations, and maybe apply that here, and also kind of work...
J
...alongside her to see if there's something in that draft we can reuse, or if she can point her VRF stuff over to this particular document, so we don't duplicate work. And then, as I was alluding to earlier, it'd be really nice if we could use this as a driver for hacspec, to produce some verifiable implementations and, you know, test out this emerging language. And the last thing: I want to clarify that not all mappings are invertible, and specifically, you know, lay out, potentially in the security considerations section...
J
...what happens if you are using a hash function that is invertible. So, some open issues. In particular, the language around hashing to a prime-order subgroup of the curve leaves much to be desired. Certainly we flushed out that, in particular, many of the techniques that we see always multiply the output by the cofactor of the curve, so that you make sure the result always belongs to the prime-order subgroup.
J
The other point, raised by Mike Hamburg, is that you don't get pure indistinguishability from a random point on the curve for every algorithm. That's not always needed for every single protocol, so we just need to spell out very clearly, you know, with as much technical precision as possible, what the indistinguishability guarantees are, and potentially list some protocols in which this is and is not needed.
J
So that's still sort of an open task, and it'd be interesting to hear what people have to say or think about that. And so, that's it. I encourage you, if you're interested, to go take a gander at the draft: it simply lists the different algorithms, has some implementations, and has some to-dos that we need to fill in, and yep.
Q
So actually I have a question -- Richard Barnes -- a question about variants, and whether this is related to different problems I've got in different contexts. So in the MLS BoF that I'll be discussing on Thursday, we have a need to map from a random string to a curve point, but with a known discrete log -- so, you know, mapping to an EC key pair instead of just a point. Now, that draft has something in it that I just sort of made up, and I think it looks kind of okay.
J
It's also interesting that in the VRF draft the interface is not exactly equivalent to this: they take in a public key and a string, and they use both in hashing to a point on the curve. So the question of what exactly the right interface is here is still, of course, up for discussion. We just chose, you know, one input, the arbitrary input, because, as Kenny said, we kind of want that to be the thing that we hash onto the curve and nothing else; but of course we can talk about it if we need to change it.
J
We don't have a proposal; this is simply a collation of all of the existing techniques and algorithms that have already been published into a single document, in an attempt to kind of nail down exactly how you would do this if you were to do it in an IETF protocol, because it's used quite often -- and, as you point out with the VRF draft, there's one particular use case. I'm sure it's somewhere in, like, for example, the Elligator paper, and in the simplified SWU paper where they presented that particular algorithm.
J
To give you an example of the extreme: Elligator 2 only hashes onto half of the points on the curve, so that's slightly distinguishable. Sorry, we're not more precise than that; we would certainly like to be, though, and if you have suggestions, we'd love to hear them. Thanks.
T
Dan Harkins. Yeah, I think this is a really good idea. It's very important; as you mentioned, it is used in VRFs, it's used in PAKEs. I got beaten up pretty severely for using a really simple and lame way of hashing to an elliptic curve, and had this been around, I wouldn't have gotten beaten up so badly. -- Good. -- I'd probably get beaten up anyway.
T
Hi, I'm Nick Sullivan from Cloudflare. This is sort of the Nick and Chris show right now, but right now I'd like to introduce this draft that we wrote, which introduces a new construction called verifiable oblivious pseudorandom functions. It builds on the intuition from two very well-known constructions. One is VRFs, which are a public-key analog to hashing: you can compute a VRF, and then someone can verify, with your public key, that this was done correctly.
T
You lose some properties, but essentially, if you go through what a VRF or what a PRF is supposed to be: it's a function F such that, given F(x), it's infeasible to compute this function without knowing a secret key, and this F is also supposed to be indistinguishable from random. And the technique that we want to incorporate from VRFs here is that this should also be verifiable: the verifier -- somebody who requests this VOPRF to be computed -- can verify that it was actually computed...
T
...using a specific private key that's associated with a known public key. We would also like this to be oblivious, in that the signer does not learn the input: even if the value Y is revealed to them, they cannot identify which X was actually used, or which computation was actually used, to compute Y. So yes, if X is revealed to the signer, they can't actually identify which signature, or which of these computations, is the one that was used to compute it.
T
So this is very similar to the VRF and the OPRF; the verifiable and oblivious properties are used together, but you lose the ability to have this publicly verifiable: if you do actually expose this as a public value, then the signer can undo the obliviousness. So the way that it kind of fits into a protocol is as such: the verifier will construct a message...
T
M, will blind that message, and send it to the prover, which will then compute a signature S, which is the VOPRF computation. The verifier can then take this and verify that it is associated with the prover's public key, then remove the blind and, if they want, confirm that the output here, this signature, is associated with the original message.
M
send
that
back
to
the
approver,
they
can
verify
the
same
thing
and
this
original
vo
PRF
computation
versus
the
confirmation.
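The blind, evaluate, unblind flow just described can be sketched in a few lines. This is an illustration only: the tiny multiplicative group and hash used here are hypothetical stand-ins, not the draft's actual elliptic-curve instantiation.

```python
import hashlib
import secrets

# Toy parameters: a safe prime p = 2q + 1; the squares mod p form a
# subgroup of prime order q.  Far too small for real use, where an
# elliptic-curve group would be used instead.
q = 1019
p = 2 * q + 1

def hash_to_group(x: bytes) -> int:
    """Hash an input into the order-q subgroup (stand-in for hash-to-curve)."""
    h = int.from_bytes(hashlib.sha256(x).digest(), "big") % (p - 1) + 1
    return pow(h, 2, p)          # squaring lands in the subgroup

# --- prover (server) key ---
k = secrets.randbelow(q - 1) + 1

# --- verifier (client): blind the input ---
x = b"my secret input"
r = secrets.randbelow(q - 1) + 1
M = pow(hash_to_group(x), r, p)   # blinded element sent to the prover

# --- prover: evaluate the (V)OPRF on the blinded element ---
S = pow(M, k, p)                  # S = H(x)^(r*k)

# --- verifier: unblind ---
r_inv = pow(r, -1, q)             # inverse of r modulo the group order
y = pow(S, r_inv, p)              # y = H(x)^k

# Sanity check: direct evaluation with the key gives the same output,
# yet the prover only ever saw the blinded element M.
assert y == pow(hash_to_group(x), k, p)
```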
T
T
Eduroam authentication failed. Great, OK, let's go forward. This instantiation actually came from a paper from a few years ago by Jarecki, Kiayias, and Krawczyk, and this describes an algorithm which lets you compute a discrete logarithm equivalence proof. So if you have Y relative to a point G, and Z relative to a point M, you can show that they have actually been exponentiated with the same scalar k. These are the details of the algorithm and of the protocol; some intuition you can get around this:
T
You take your string, which is your sort of secret value, you hash it into a curve, and you apply a blinding factor, which is just a multiplicative value here. So your message that you're sending to the server, or to the prover in this case, is the hash of the input X onto the curve: you have a curve point and you multiply it by a blinding value r. On the prover side, you take that value and you multiply it by the secret key.
T
In the case of the hashed curve point, this is just another secret scalar, and then you generate a discrete log equivalence proof between this multiplication of the point and a public pair that represents a base point and the base point multiplied, again, by k, which is the secret value of the prover. You can send this back, and the verifier can verify this discrete logarithm proof, divide out its multiplicative factor r by multiplying by the inverse, and you end up with the original point, hashed to the curve, multiplied by the prover's private key.
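The discrete-log equivalence proof just described is a Chaum-Pedersen-style proof, made non-interactive here via a Fiat-Shamir hash. A minimal sketch in the same toy multiplicative group (illustrative parameters only, not the draft's instantiation):

```python
import hashlib
import secrets

# Toy group: p = 2q + 1 safe prime, subgroup of squares has prime order q.
q = 1019
p = 2 * q + 1
G = 4   # 2^2, a generator of the order-q subgroup

def chal(*elems: int) -> int:
    """Fiat-Shamir challenge from the proof transcript."""
    data = b"|".join(str(e).encode() for e in elems)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def dleq_prove(k: int, G: int, Y: int, M: int, Z: int):
    """Prove log_G(Y) == log_M(Z) (= k) without revealing k."""
    t = secrets.randbelow(q - 1) + 1
    A, B = pow(G, t, p), pow(M, t, p)        # commitments
    c = chal(G, Y, M, Z, A, B)               # challenge
    s = (t - c * k) % q                      # response
    return c, s

def dleq_verify(G: int, Y: int, M: int, Z: int, c: int, s: int) -> bool:
    # Reconstruct the commitments: G^s * Y^c = G^(t-ck) * G^(kc) = G^t.
    A = (pow(G, s, p) * pow(Y, c, p)) % p
    B = (pow(M, s, p) * pow(Z, c, p)) % p
    return c == chal(G, Y, M, Z, A, B)

k = secrets.randbelow(q - 1) + 1             # prover's secret key
M = pow(G, secrets.randbelow(q - 1) + 1, p)  # e.g. a blinded hashed input
Y = pow(G, k, p)                             # public key
Z = pow(M, k, p)                             # (V)OPRF evaluation
proof = dleq_prove(k, G, Y, M, Z)
assert dleq_verify(G, Y, M, Z, *proof)
```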
T
T
T
Alternatively, you don't have to send the value Y: you can send a value that can only be computed with the knowledge of Y, and the prover can compute that same Y as well, by multiplying that X hashed to the curve. And then you can validate that both parties share this Y, bound to a specific message.
T
So that's generally how this construction works, and, as was mentioned before, we have an application of this called Privacy Pass, which is in an upcoming PETS paper with myself, Alex Davidson, and a few other folks. This is for reducing the number of CAPTCHAs seen on the Internet, in a privacy-preserving way. The way it would work is that you would compute a token, or a set of tokens, when solving a CAPTCHA.
T
The CAPTCHA server will compute the VOPRF over your blinded tokens and send them back to you, and then, the next time you see a CAPTCHA on a completely different site, on a completely different origin, you can submit the unblinded token, and the server has no way to tell which site was the one that actually issued these tokens. So it allows you to prove that you have solved a CAPTCHA on another site without revealing which origin originally presented it.
T
So this allows a sort of bypass of the same-origin policy, due to the blinding ability of the VOPRF, and this is implemented in a browser extension for Chrome and Firefox, as well as on Cloudflare's servers. Another potential application of this that we've looked into is privacy-preserving password leak checks.
T
So if a service has a lot of passwords that have been leaked, what they can do is hash these to a curve and exponentiate by a private value, and if you want to check whether your password is part of that list, you can submit it to the server to compute a VOPRF, have it come back, and then compare it with the entire database. What this lets you do, as the server that has access to
T
the private keys for all of these passwords, is rate limiting. So it allows you to check for the password you're looking for, and the server doesn't get to know which password you're actually looking for.
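A minimal sketch of that leak-check flow, assuming a toy multiplicative group and a server that pre-evaluates its OPRF key over the leaked set. All names and parameters here are hypothetical; with such a tiny group, spurious collisions are possible, so this only illustrates the data flow.

```python
import hashlib
import secrets

# Toy group as before (illustration only -- real deployments use curves).
q = 1019
p = 2 * q + 1

def hash_to_group(x: bytes) -> int:
    h = int.from_bytes(hashlib.sha256(x).digest(), "big") % (p - 1) + 1
    return pow(h, 2, p)

# Server side: OPRF key, plus the leaked passwords pre-evaluated under it.
k = secrets.randbelow(q - 1) + 1
leaked = [b"123456", b"password", b"qwerty"]
leaked_set = {pow(hash_to_group(pw), k, p) for pw in leaked}

def server_evaluate(blinded: int) -> int:
    # The server raises the blinded element to k; it never sees H(pw).
    return pow(blinded, k, p)

def client_check(pw: bytes) -> bool:
    r = secrets.randbelow(q - 1) + 1
    blinded = pow(hash_to_group(pw), r, p)               # only this leaves the client
    y = pow(server_evaluate(blinded), pow(r, -1, q), p)  # unblind: H(pw)^k
    return y in leaked_set

assert client_check(b"password")   # a leaked password is detected
```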
T
So if you have a unique password, it can't identify you, and it also stops people from being able to do a kind of brute-force crack across all these websites, because it requires this VOPRF computation. The foundations for this are several different papers, and this also relies on the previous presentation, which is about hashing to elliptic curves; this was one of the major motivating factors for writing that document. And that's it. Questions?
H
P
Hi, [unclear] from MSR. I'm kind of struggling to understand what the actual contents of the draft you're proposing are. I didn't read it, so I'm trying to understand, because you used words for several kinds of cryptographic primitives, many of which are kind of fancy, but from my understanding these primitives have rather different security models.
T
So what this draft contains is a general definition of VOPRFs, how they work, and a specific instantiation that you can do given an elliptic curve, as well as a hash-to-curve. So if you have a specific curve that you like, and you have an algorithm to hash to it, that is one way this tells you how to construct a protocol of this sort.
T
T
T
T
T
Okay, do you see them progressing in parallel or in sequence? — I think they can progress in parallel, and I also think that things like Privacy Pass, as a protocol, could be proposed for, say, the HTTP working group or something like this, which would rely on this draft being adopted. — Mm-hmm.
H
E
Hi, Daniel Kahn Gillmor, ACLU. Sorry, that was really loud. I am really grateful to see this work happening. One of the concerns is about how you ensure that the key for Privacy Pass, particularly the public key, remains constant, right, or else the party doing this can track you. How do you keep a single key? I believe you've solved that for the Tor arrangement; do you have suggestions for how we'd solve it for…
E
D
R
So hello, everyone, my name is Guilin Wang, from Huawei. My talk is about PAKE, so PAKE, password authenticated key exchange. In the first part I will introduce background about PAKE, then we will introduce our proposals, I mean our protocols, yeah. Then we will consider whether it's a good candidate proposal for RFC 8125, because that gives some requirements about PAKEs.
R
So here is the terminology, some main concepts about PAKE, in case you are not so familiar with it. PAKE basically means allowing two parties to establish a secure key, in the cryptographic sense, by using a common shared secret between the two parties. Because the secret is assumed to be very short, it is easy to memorize; but on the other hand, it also means it has low entropy.
R
It could be very easy for an attacker to guess the password. So the main thing we need to think about for the security of a PAKE is the dictionary attack. There are actually two versions. One is called the online dictionary attack; this can be prevented quite easily by having the server side do some check, just allowing a user to try a password only a few times in a given period. This can be done very well in practice. So the really difficult one is the offline dictionary attack.
R
This is a bigger problem, yeah, because the password is short, and many people choose a password according to some personal information, like your name or your birthday, something like this. So an attacker can create a dictionary of possible passwords and try them one by one by doing offline calculations; nowadays computers, even fairly ordinary powerful computers, can do this very quickly, so the dictionary size can actually be very big. Yeah, another security requirement is called forward secrecy; it is also important.
R
Basically, it means we would like to have a protocol such that, even if the password is leaked sometime later, even in this case, the session keys the parties agreed by using the password could still be secure. This is an important one. One more thing is that in the original version of PAKE, both parties just share the same secret, the password; but in this case, especially if the server is compromised by an attacker, the attacker can get the passwords of all the users.
R
R
R
Okay, so the main challenge for PAKE is that we need to design a PAKE secure against the offline dictionary attack; that's the challenging part, yeah. So the main limitations of the most obvious existing PAKE solutions are two aspects. The first one is that quite a number of them actually don't support forward secrecy, but this one is actually important. And another one is that some schemes do have provable security, but only in the multiplicative group of a finite field.
R
This basically means it does not work for elliptic curves, and working for ECC is actually important, especially if you need good efficiency, because ECC can use short elements and can get a much tighter security level, yeah. So it is more efficient for both computation and communication. So we have two protocols.
R
They are PAKE protocols which meet forward secrecy and can work in any group, so including ECC. The paper was actually published at AsiaCCS 2017, a nice conference, just one year ago, and a public version is actually available from Professor David Pointcheval's website, so you can get a version as well. The first protocol we call TBPEKE, two-basis password exponential key exchange; "two-basis" is just a term for
R
how we use two bases to construct the protocol. And there is a verifier-based version, called VTBPEKE; this can be used to prevent server corruption, yeah. Both protocols can be proved secure under standard complexity assumptions, so this means we have a cryptographic security guarantee for these protocols. And the most important point is that ECC can be used to implement these protocols.
R
So in this case, both computation and communication can be highly efficient. Okay, so here is a very quick summary of the existing solutions, as there are a lot, actually. Here I just mention the two most famous ones, which are also closely related to our research work. The first protocol is called EKE, encrypted key exchange. This is the first PAKE protocol, yeah. It basically is the Diffie-Hellman key exchange; but the plain Diffie-Hellman exchange is not secure, due to the man-in-the-middle attack. So here is the EKE protocol.
R
We just use the password as a symmetric key to encrypt the temporary public keys for the DH protocol, yeah. In this way, we can get a secure protocol. About the security: actually, it can be proved secure under the random oracle model, but we need to assume that the symmetric cipher we use for the encryption here is an ideal cipher, so one that has no flaws at all, like an ideal one.
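The EKE idea just described, using the password as a symmetric key over the DH ephemerals, can be sketched as follows. The toy group and the SHAKE-based XOR "cipher" are hypothetical stand-ins for the ideal cipher the proof assumes; this is not a secure instantiation.

```python
import hashlib
import secrets

# Toy Diffie-Hellman parameters (illustration only).
q = 1019
p = 2 * q + 1
g = 4                      # generator of the order-q subgroup

def pw_encrypt(pw: bytes, value: int) -> bytes:
    """Toy stand-in for EKE's ideal cipher: XOR with a SHAKE keystream."""
    data = value.to_bytes(2, "big")          # p < 2^16, so 2 bytes suffice
    stream = hashlib.shake_128(pw).digest(len(data))
    return bytes(a ^ b for a, b in zip(data, stream))

def pw_decrypt(pw: bytes, ct: bytes) -> int:
    stream = hashlib.shake_128(pw).digest(len(ct))
    return int.from_bytes(bytes(a ^ b for a, b in zip(ct, stream)), "big")

pw = b"hunter2"

# Each side encrypts its ephemeral DH public key under the password.
a = secrets.randbelow(q - 1) + 1
b = secrets.randbelow(q - 1) + 1
ct_a = pw_encrypt(pw, pow(g, a, p))
ct_b = pw_encrypt(pw, pow(g, b, p))

# Each side decrypts the other's message and completes plain DH.
key_a = pow(pw_decrypt(pw, ct_b), a, p)
key_b = pow(pw_decrypt(pw, ct_a), b, p)
assert key_a == key_b      # both sides derive g^(a*b)
```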
R
R
Next, for SPEKE: in this protocol, the basic idea is to generate the generator for the multiplicative group G from the password, using a hash function. Actually, this is also related to the previous presentation about hashing, about mapping some value to a point in ECC, yeah, it's related; that part of my talk connects there as well. So this basically means the attacker can actually see
R
the temporary public key for the DH, but they don't know what the generator is, because they don't know the password, yeah. So from this point of view it's quite simple, but the proof is actually not so simple. In particular, the available security proof only works for multiplicative subgroups of finite fields, not for elliptic curves. So this means, if you use ECC, you can still use this protocol, but it is risky, because there is no security proof guarantee for the protocol, yeah. And also, for the usual reasons,
R
the real efficiency of this approach is not so good. Ok, so here we introduce our protocol. Our protocol is actually something quite similar to PEKE; for this reason, we call it two-basis PEKE. So in our protocol, we just generate the generator G from two bases: you multiply U by V raised to PW, where PW is derived from the password, and U and V are the other two pieces, the two bases. This is our protocol, yeah. And the next one is to prevent server corruption.
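A minimal sketch of that TBPEKE generator derivation, g = U·V^PW, followed by plain Diffie-Hellman over it. The group and the bases here are hypothetical toy values; the paper itself targets elliptic-curve groups.

```python
import hashlib
import secrets

# Toy safe-prime group (illustration only; the paper targets elliptic curves).
q = 1019
p = 2 * q + 1

# Two fixed public bases in the order-q subgroup (derived here by squaring).
U = pow(11, 2, p)
V = pow(13, 2, p)

def pw_scalar(pw: bytes) -> int:
    """Map the password to a scalar PW (stand-in for the paper's hashing)."""
    return int.from_bytes(hashlib.sha256(pw).digest(), "big") % q

pw = b"hunter2"
g = (U * pow(V, pw_scalar(pw), p)) % p   # password-dependent generator U * V^PW

# Plain Diffie-Hellman over the password-dependent generator.
a = secrets.randbelow(q - 1) + 1
b = secrets.randbelow(q - 1) + 1
A, B = pow(g, a, p), pow(g, b, p)
assert pow(B, a, p) == pow(A, b, p)      # shared key g^(a*b)
```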
R
So we need to do it another way, based on the first protocol. So what we do is that the client side does need to hold the password, but for the server side, the server only needs to store another value, derived using H(s, PW), where s is just a random salt, different for each user, and H here is a hash function, yeah.
R
So this means that on the server side, the server stores data different from the password itself. And in the design of the protocol there are two basic ideas. The first idea is that, yeah, the user needs to prove his knowledge of H(s, PW), the hash value of the password together with the random salt, yeah. He needs to prove this one, and to prove this one, basically, we use a zero-knowledge proof, as the previous slides introduced.
R
The idea here, we don't need to go into all the details: because it is a zero-knowledge proof, there is a commitment, a challenge, and a response; and in this one, we must encrypt the response, otherwise the protocol is not secure against the offline attack. Yeah, so the details are a little bit complex, but that is the basic idea.
C
R
Yeah, so the protocol can be proved secure; we actually give a guarantee in the case of, yeah, forward secrecy, and we also consider the precise bounds, so that we can get a little bit tighter security results. Yeah, there are some examples of it; you can go into the detail. Here is the comparison. So in the second part, the second-to-last line is about our TBPEKE protocol, highly efficient; from here
R
you can see it supports forward secrecy and also works in the ECC group. And the last line corresponds to the verifier-based version of our protocol; it also supports forward secrecy and also works in ECC. Here is our preliminary implementation. These are the recommended parameters, so at different security levels, how long a password we can use, and the corresponding ECC element sizes; and these are our preliminary implementation results.
R
Basically, on a quite normal server, each protocol run, I mean mainly the computation, can be done in 10 milliseconds. These are the requirements from RFC 8125; it is about the requirements for PAKE protocols, so, against the requirements there, we can say our protocol satisfies almost all the requirements. There are some where we need a little bit more work, so we think it could be a candidate proposal for these requirements. I think that's all.
G
Right, do we have any questions, please? — Phillip Hallam-Baker. I would just like the IETF to pick exactly one. I don't care what it is, so long as you pick one and it's patent-unencumbered, which is the reason that we're not doing any of this today: because there were patents and they got in the way.
T
Dan Harkins again. So, related to what Phil said: requirement 8 is that you have to say whether your company has any IPR in this scheme. So, does your company have any IPR in this scheme? I'll wait. … Okay. So, on the comparison, you say that SPAKE2, SPEKE and SAE do not provide forward secrecy, but that's not true: SPAKE2, SPEKE and SAE all provide perfect forward secrecy.
R
T
N
Stanislav Smyshlyaev, CryptoPro. I would like to add a few words to Dan's comment. In fact, SPAKE2 has forward secrecy; moreover, Dan's protocol has it too, and, as I understand, has all the positive features that you enumerated in your slides. So it would be very good if, maybe on the mailing list, we could have a deeper analysis comparing your protocol to all the existing ones, because we have a lot of PAKEs, and I think that at least three of them are quite similar in the sense of security:
N
forward secrecy with support of elliptic curves, and maybe the difference is in the complexity and in the additional security properties. And also I would add that your additional verifier-based variant seems very similar to augmented PAKE, and in fact this doesn't prevent the password from being leaked if the server is compromised.
C
V
Hello everyone, I'm Benoît Viguier, from Radboud University in the Netherlands. I will be presenting the draft for KangarooTwelve. So, first of all, what's KangarooTwelve? Let's have a look at SHAKE128 first. SHAKE128 uses a sponge construction: so, basically, you have your sponge state, which is divided into two parts, the outer part and the inner part.
V
The inner part is what guarantees the security of the scheme, and in this sponge construction you're going to split your message into small blocks that you will absorb, and between each absorption you will apply a round function, a permutation which mixes up the inner part and the outer part. And then, once you've mixed up everything, you want to extract your hash: basically, you extract the outer part, and then you mix again and extract again. The point of this is that you can extract as many bits as you want.
V
So that's why we call it an extendable output function, because the output length is arbitrary, as long as you take it long enough. In the case of SHAKE128, you use the Keccak permutation, as defined in FIPS 202. So, while SHAKE128 is quite nice by design, it does not allow the possibility of parallelism, which makes it a bit slow. So we want to add some parallelism into that, using KangarooTwelve. So, basically, KangarooTwelve is the same idea, but we apply a tree structure on top of the sponge.
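The extendable-output property is easy to see with Python's built-in SHAKE128: the caller picks the output length at squeeze time, and a shorter output is a prefix of a longer one, because the sponge just keeps squeezing more blocks.

```python
import hashlib

# SHAKE128 is an XOF: you choose the output length at extraction time.
msg = b"the quick brown fox"
d16 = hashlib.shake_128(msg).digest(16)   # 16-byte output
d64 = hashlib.shake_128(msg).digest(64)   # 64-byte output, same input

# The shorter output is a prefix of the longer one.
assert d64[:16] == d16
assert len(d64) == 64
```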
V
V
It is also pretty nice in terms of design, and we reduce the number of rounds of Keccak from 24 to 12, giving us already a bit more speed. And, as you can see, the longer the message, the more parallelism you gain. Also, if you have a short message, you don't need to actually hash the other part, so you actually don't have penalties for short messages. So, once you see that, you start thinking: what's the security of it? So, KangarooTwelve is claimed to be as secure as SHAKE128.
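The tree idea can be illustrated with a toy sketch: cut the input into chunks, hash the leaves (which can run in parallel), then absorb the leaf digests into a final node. This mirrors only the structure of KangarooTwelve; the chunk size matches the draft, but the real encoding, padding, and customization string handling are in the draft itself, not here.

```python
import hashlib

CHUNK = 8192  # KangarooTwelve also cuts the input into 8192-byte chunks

def toy_tree_hash(msg: bytes) -> bytes:
    """Simplified tree hash over SHAKE128 leaves.  Leaves are independent,
    so they can be processed in parallel; short inputs skip the tree
    entirely, so they pay no overhead.  Structure only -- NOT real K12."""
    if len(msg) <= CHUNK:
        return hashlib.shake_128(msg).digest(32)        # short input: no tree
    chunks = [msg[i:i + CHUNK] for i in range(0, len(msg), CHUNK)]
    leaves = [hashlib.shake_128(c).digest(32) for c in chunks]  # parallelizable
    return hashlib.shake_128(b"".join(leaves)).digest(32)       # final node

h1 = toy_tree_hash(b"x" * 100_000)
h2 = toy_tree_hash(b"x" * 100_000)
assert h1 == h2 and len(h1) == 32      # deterministic, fixed-length output here
```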
V
In other words, 128 bits of security. Because it uses the sponge construction, it also relies on the generic security of the sponge, which has been proven by the Keccak team. The parallel mode is based on Sakura tree hashing, so it's also proven secure. And on top of it, we have the same round function as defined in FIPS 202, Keccak-p, but with only the 12 last rounds.
V
V
In terms of cryptanalysis, we have collisions over 5 rounds, and we also have collisions over 6 rounds, but with a smaller capacity, because that gives more degrees of freedom; the difficulty of finding a collision over a capacity of 160 bits is estimated, if you take the birthday paradox, at 2^80. We also have state prediction over 8 rounds, but we have 12, so it's fine, with a difficulty up to 2^128 time. It's also possible over 9 rounds, but the difficulty is 2^256 time, so that's already above the security claims.
V
That's very fine, and there has been a lot of third-party cryptanalysis done on Keccak, so that's very nice. So, we've seen that KangarooTwelve is pretty secure; let's see how fast it is. We have half the number of rounds, so it will logically take half the time: it's twice as fast as SHAKE128. And then, once you take longer inputs, on the latest Skylake-X processors, only using AVX instructions, you can get down to 0.55 cycles per byte. At that point, the bottleneck is not the hashing anymore, but more
V
the network connection and so on. So, why I think it's an interesting RFC that we could have: it's because it's on the same basis as the NIST standard; it's a public design, completely open; it has been the result of the SHA-3 international competition; it survived ten years of cryptography and cryptanalysis, and, I hope, will survive still more.
V
It provides a really nice speed-up with respect to SHA-3 without wasting any cryptanalysis resources, because what has been done in the past still applies today to both Keccak and KangarooTwelve. And the parallelism just scales with the implementation, so you don't have to provide parallelism parameters up front, like "I want KangarooTwelve with 8-way parallelism computing at the same time" and so on. So, very nice. For more details, I can invite you to have a look at the current draft.
V
H
H
H
V
H
R
V
I
Nick Johnson (Foundation). It seems like you're combining two different trade-offs here. One is — oh, sorry — two innovations, I guess: one that improves speed with parallelism, with no cost in security, and a second trade-off which gives you more speed at the cost of additional security margin. Why combine these together? Why not just offer a parallel construction that uses the existing number of rounds?
V
I
V
V
I
Able
to
go
back
to
the
slide
with
the
performance
measurements
for
a
moment
that
one
yeah
so
so
it
looks
like
you're
getting
comparing
lis.
The
shorter
looks
longer,
but
you're
getting
at
least
100
percent
speed-up
yeah
from
parallelism
as
well.
Yes,
I,
yeah,
I,
guess
I
I
still
don't
understand
the
justification
for
the
two
changes
together.
It
seems
to
me,
like
it'd,
be
a
very
useful
construction
with
standard
gear
check.
Well,.
V
H
We're kind of over time, actually; we're already eating into people's break, so I think we'll need to leave it there, and we can maybe take further discussion of the draft to the list. Is that okay? Thank you from the chairs for attending this session. Stay here if you want to attend the next session, which is the TLS working group, starting in about 15 minutes. And could we please have the blue sheets back to the front, please? Would somebody bring them down? Thank you. Thank you, everyone.