From YouTube: IETF102-CFRG-20180717-1720
Description
CFRG meeting session at IETF102
2018/07/17 1720
https://datatracker.ietf.org/meeting/102/proceedings/
B: Quickly, document status: we have two RFCs since London, the update to ChaCha20 and Poly1305, and XMSS. No documents in the RFC Editor's queue. There are four, now five, documents waiting for document shepherds. For one of them, Paul Hoffman volunteered to help progress hash-based signatures; thank you to him. So we're off trying to figure out, with others, what the process is going to be for a shepherd who is not a research group chair, but I'm sure we can make the tools work with that.
G: Alright, everyone, my name is Chris. I'll be talking to you about our draft, hashing to elliptic curves. Just a quick update, because we were mostly trying to work through the many outstanding to-dos we had in the first revision that was submitted, and now we're in a good state where we can actually start, you know, iterating on it within the research group. So, some quick background on this.

The main goal of this was to sort of go through and survey all the different algorithms that we could use to hash to elliptic curves, which were spread throughout the literature and used in different particular applications, PAKEs being one of the primary use cases here, and just write them down, so it's easy for implementers to actually get it right and correct, and so on and so forth.

So after London the initial version was adopted, and it contained a couple of issues that were pointed out on the list thereafter. In particular, it conflated some terms like encoding, serializing and hashing; it just said everything was hashing, and that's not the case. So it took some time to decouple these terms and make it more precise. It also came with three detailed algorithms, with accompanying hacspec implementations as an outstanding to-do, and we have filled that to-do in the latest version. So the main updates are as follows.

Like I said, we separated encoding, serialization and the random-oracle instantiation, which is essentially the hash-to-curve functionality, and now there are two sections in the document: one for map_to_curve, which is the deterministic encoding function, and the other for hash_to_curve, which is the random-oracle functionality that may be required depending on your application. We refactored the algorithm recommendation tables such that it's clear, I hope, if you have a particular use case, for a particular application, for a particular curve, which algorithm you should use.

Of course, we're open to recommendations, or, you know, suggestions for improving that particular table and that format, and also the recommendations in general. I think once we get everything nailed down and the algorithms actually fleshed out, the recommendations may likely change. We also took an action last time to work with Karthik and people on the hacspec team to create implementations of each map_to_curve algorithm.

So now the appendix contains these, and we're trying to help drive that effort forward. Because, I don't know how much you recall from the last meeting, hacspec is basically an attempt to create a language to produce formally verified cryptographic implementations, and we felt this was a good candidate to sort of prototype that particular technology, and it's worked out well. We found some bugs in the actual hacspec library, but all in all it compiles and runs, and it's pretty great.

So, just a quick overview of these particular changes. Like I was saying, the map_to_curve functionality, which has its own dedicated section, is, as I said earlier, basically a function that deterministically takes an arbitrary input and maps it to a point on a curve, and the algorithms which we include are the same as last time: Icart, SWU... I will not remember the names, so bear with me.

For hash_to_curve, we are currently considering replacing the hash-and-encode-twice construction with the hash-encode-base-point technique which, as you can tell, requires fewer map_to_curve computations, one instead of two, and we believe it works for all of the curves and algorithms under consideration. In particular, for hashing and encoding twice we're not entirely sure if it works for Elligator2, but it's pretty easy to prove that the hash-encode-base-point function does work for Elligator2, which is something we really want to support.
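The two constructions being compared can be sketched abstractly. This is a hypothetical illustration over a toy group (integers mod a prime under addition standing in for the curve group); `map_to_curve` below is a placeholder stand-in for the draft's real encodings (SWU, Elligator2, etc.), not any concrete algorithm from the document.

```python
import hashlib

# Toy stand-ins: the "curve group" is the integers mod Q under addition,
# and map_to_curve is an arbitrary deterministic map into that group.
Q = 2**255 - 19  # a large prime modulus for the toy group
G = 5            # toy "base point" (any nonzero element)

def H(msg: bytes, ctr: int) -> int:
    """Hash message plus a one-byte counter to an integer (toy hash-to-field)."""
    digest = hashlib.sha256(msg + bytes([ctr])).digest()
    return int.from_bytes(digest, "big") % Q

def map_to_curve(u: int) -> int:
    """Deterministic encoding: field element -> group element (toy placeholder)."""
    return (u * u + 1) % Q

def hash_twice(msg: bytes) -> int:
    # Construction 1: two independent encodings, added together.
    return (map_to_curve(H(msg, 0)) + map_to_curve(H(msg, 1))) % Q

def hash_encode_base_point(msg: bytes) -> int:
    # Construction 2: one encoding plus a hash-derived multiple of the base
    # point -- only one map_to_curve evaluation instead of two.
    return (map_to_curve(H(msg, 0)) + H(msg, 1) * G) % Q
```

Both are deterministic; the second saves one `map_to_curve` evaluation, which is the efficiency point made in the talk.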
G: You can find that particular algorithm in the document. So, as an example, if you wanted to implement a PAKE from the BMP 2000 paper on P-256, which requires a random oracle, we recommend you'd end up using the hash_to_curve functionality with SWU, because that works for any curve, P-256 being one. Like I said, we can potentially change this recommendation later on, depending on what all the cryptographers tell us.

So we are more than welcome to take, are more than happy to take, suggestions on that particular front. And, like I said again, we added hacspec because we want to drive it forward. We think it's good, and the implementations end up being pretty concise and pretty clear, and you can reason about them mathematically. It's essentially just, you know, a precise form of Python code that you can compile to C or whatever else you want, and comments are welcome on that as well.

There are currently a number of outstanding issues noted in the document itself, and I'm sure other people have some issues that may or may not have been raised. The first one in particular is adding support for pairing-friendly curves. There's an open pull request to do this; we just haven't gone through and actually verified that the algorithm specified is correct. It references a paper, and it's in my queue to actually read the paper and get through this.

Another item is writing the cost comparison table, which, you know, breaks down the efficiency, the computational complexity, of each particular algorithm, so that if you are an implementer you can sort of weigh the pros and cons and choose the one that fits your needs, provided that, you know, it meets the particular requirements that are outlined in the tables at the beginning of the document. There's also an open item to write the curve transformation function.

So most of the algorithms assume that the point you're hashing to is represented in affine coordinates, with an x and a y value. But you might want to map to Jacobian coordinates, or whatever else your application wants, and so we just need to sort of write that down, as that's an implementation detail that might matter for a particular application.
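The coordinate transformation mentioned here is mechanical. A minimal sketch, assuming a prime field (the P-256 prime is used below purely as an example modulus): an affine point (x, y) lifts to Jacobian (X, Y, Z) with x = X/Z² and y = Y/Z³.

```python
# Minimal sketch of the affine <-> Jacobian transformation discussed above.
P = 2**256 - 2**224 + 2**192 + 2**96 - 1  # the P-256 field prime (example)

def affine_to_jacobian(x: int, y: int, z: int = 1) -> tuple:
    """Lift (x, y) to Jacobian (X, Y, Z) for any nonzero Z (Z = 1 is simplest)."""
    return (x * pow(z, 2, P) % P, y * pow(z, 3, P) % P, z % P)

def jacobian_to_affine(X: int, Y: int, Z: int) -> tuple:
    """Project back: x = X / Z^2, y = Y / Z^3 (one field inversion mod P)."""
    z_inv = pow(Z, -1, P)
    return (X * z_inv**2 % P, Y * z_inv**3 % P)
```

The round trip is exact for any nonzero Z, which is why a spec only needs to state the convention once.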
G: So there are some open questions that I'm hopeful the research group can help us address, in particular: what generic hash-to-curve construction should we select? I alluded to the two we're considering earlier. The first is hashing and encoding twice, and the one that we're potentially leaning towards is the hash-encode-base-point one, which does work for all the curves specified.

Comments are very welcome. All right, the rationale for including the implementations is that if you wanted to go and just, you know, quickly test it out, here's an implementation you can quickly grab, and if you want to run it where your application is built, in C, you compile it to C and you have it. You don't have to actually spend the time to go through and get it wrong, and it also serves as a very concise reference.

[Question inaudible.]

G: Unclear right now. Ideally we'd have, you know, for each curve, and for each type of requirement, be it a deterministic encoding or hash-to-curve, one variant, and that's what everyone shall use; and whether or not we keep, you know, the other permutations and combinations in the document is unclear to us at this time. I think we're including them all because, as we flesh it out and, you know, move it forward towards publication, we want to make sure we serve the design space as exactly as possible. But yeah.

I: [Name unclear.] Just one bit of feedback, to your third point: so long as it's clearly labeled that, in case of a conflict between the specification and the example, the specification wins, obviously; but as long as you're clear about that, I think actually having the reference implementation there, if it fits, is a good idea.

G: Great, okay, I'm all about reference implementations. So thanks.

J: Good evening, good afternoon. I am Leonid Reyzin, Boston University, going to talk about the updates to the VRF draft, and mostly ask for feedback on various issues that are outstanding. So, just a very, very, very quick intro: what's a verifiable random function? It's like a hash function, but with a public key and a secret key. The hasher has the secret key; the verifiers have the public key. So the verifier provides the input to the hasher, and the hasher, using the secret key, can provide a proof to the verifier. What is this proof?

The proof allows the verifier to do two things. One is to convert this proof to an actual hash output, using a proof-to-hash function, and the other is to check that this was the correct output, using the verify function. Okay. So it's like a hash in that, even if the hasher is adversarial, there's only one possible output: it's unique, and it's collision resistant. But without the whole verification thing, the verifier cannot actually know what the hash will be. So it's like a secret-key hash.
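The prove / proof-to-hash / verify interface just described can be sketched with textbook RSA full-domain hashing, which is in the spirit of the draft's RSA-FDH-VRF ciphersuite; the tiny key below is a hypothetical demo parameter, utterly insecure, chosen only to make the interface runnable.

```python
import hashlib

# Toy VRF interface sketch via textbook RSA-FDH. Only the secret-key holder
# can produce pi; anyone with the public key can verify and derive beta.
N, E, D = 3233, 17, 413   # 3233 = 61 * 53; D = E^-1 mod lcm(60, 52) = 780

def _fdh(alpha: bytes) -> int:
    # Full-domain hash of the VRF input alpha into Z_N.
    return int.from_bytes(hashlib.sha256(alpha).digest(), "big") % N

def vrf_prove(sk: int, alpha: bytes) -> int:
    # The proof pi: requires the secret key.
    return pow(_fdh(alpha), sk, N)

def vrf_proof_to_hash(pi: int) -> bytes:
    # Anyone can turn the proof into the VRF output beta.
    return hashlib.sha256(pi.to_bytes(2, "big")).digest()  # 2 bytes fits N

def vrf_verify(pk: int, alpha: bytes, pi: int) -> bool:
    # Check the proof against the public key before trusting beta.
    return pow(pi, pk, N) == _fdh(alpha)
```

Determinism of `vrf_prove` is what gives the uniqueness property the talk mentions: there is only one valid beta per input.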
J: So if the verifier just gets the hash without the proof, it will not be able to tell if the hash is right or wrong. In fact, it'll look entirely random to the verifier. There's a variety of applications I won't talk about, but that's where we are. Specifically, I want to talk about updates to the elliptic curve VRF in the draft; it's gone through an iteration. So how does it work? The first thing we do is hash to the curve, as the previous talk talked about. So we hash to a curve.

Then, once we have the output of hash-to-curve, we actually construct a proof of three things: this hash value raised to the secret key, and a couple of helper values, c and s, that I'll talk about in a second. What is the output of the whole thing? The proof-to-hash just takes this H-to-the-x value and outputs it. So how do you know that this is the correct hash? You know it because the prover will prove to you that it has the correct exponent, that H was raised to the correct x.

x is the secret key, so you need to prove equality of two discrete logs. How do you do that? There's a bunch of algebra, which I won't go into, but it's essentially the same as Schnorr signatures, or EdDSA, if you're more familiar with that. So you generate a nonce, and we've specified deterministic nonce generation; it should be a pseudorandom nonce. Essentially you compute some hashing and you do some algebra, and then the verifier does some algebra, does the hash-to-curve of the input, and then does the verification. Okay.
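The equality-of-discrete-logs proof described here can be sketched Schnorr-style. This is a toy over a tiny subgroup mod 23 (order 11), not the draft's elliptic-curve arithmetic, but the equations have the same shape: prove that pk = g^x and gamma = h^x share the exponent x.

```python
import hashlib

# Toy Schnorr-style proof of equal discrete logs, over the order-11 subgroup
# of Z_23* with generator 4. Real ECVRF uses an elliptic curve group.
P, Q, G_ = 23, 11, 4

def _challenge(*elems) -> int:
    # Fiat-Shamir challenge from the group elements (all fit in one byte here).
    data = b"".join(e.to_bytes(1, "big") for e in elems)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def prove_equal_dlog(x: int, h: int, k: int):
    """x: secret key; h: hashed-to-group input; k: nonce (the draft derives
    k deterministically, here it is simply passed in)."""
    gamma = pow(h, x, P)                      # gamma = h^x, the VRF "core"
    u, v = pow(G_, k, P), pow(h, k, P)        # commitments g^k and h^k
    c = _challenge(h, gamma, u, v)
    s = (k - c * x) % Q                       # response
    return gamma, c, s

def verify_equal_dlog(pk: int, h: int, proof) -> bool:
    gamma, c, s = proof
    u = pow(G_, s, P) * pow(pk, c, P) % P     # reconstructs g^k
    v = pow(h, s, P) * pow(gamma, c, P) % P   # reconstructs h^k
    return c == _challenge(h, gamma, u, v)
```

The verifier never sees x or k; it only recomputes the commitments and checks the challenge, which is the "some algebra" the talk waves at.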
J: So the takeaways are that we have hash-to-curve and we have nonce generation, and we can talk about those. For this talk, some features: the value H to the x is a curve point, so it's going to be like 256 bits; s is mod the field order, so that's going to be 256 bits; c is short, because collision resistance is not needed, so it's only on the order of 128 bits. So this is useful for applications where space is an issue. We added future-proofing.

So the first thing we hash is the cipher suite ID, so that if cipher suites change, we can update the cipher suite ID and not have weird hash input collisions, where the same input represents two different things. This was a problem in RFC 8032, the EdDSA RFC, where they had to do funny things in order to avoid hash input collisions. What has changed since the last iteration? We added new nonce generation. It's deterministic, and for the P-256 NIST curve it's identical to deterministic ECDSA, RFC 6979.

We have a cipher suite with Curve25519 which looks a lot like EdDSA in many ways: nonce generation is deterministic, based on EdDSA; the hash function used is SHA-512. And then this cipher suite sort of has two variants: one variant where hash-to-curve is just try-and-increment (you try, you hope you get a curve point), and the other one is Elligator (Elligator2), which is a, you know, a time-invariant way to do hash-to-curve.
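Try-and-increment, as described, can be sketched in a few lines. The curve below (y² = x³ + 7 over GF(103)) is a hypothetical toy, not Curve25519: hash the input together with a counter, interpret the digest as an x-coordinate, and retry until a square root exists.

```python
import hashlib

# Toy try-and-increment hash-to-curve on y^2 = x^3 + 7 over GF(103).
P = 103  # P % 4 == 3, so square roots are r^((P+1)/4) when they exist

def try_and_increment(alpha: bytes):
    for ctr in range(256):
        digest = hashlib.sha256(alpha + bytes([ctr])).digest()
        x = int.from_bytes(digest, "big") % P
        rhs = (x**3 + 7) % P
        y = pow(rhs, (P + 1) // 4, P)     # candidate square root
        if y * y % P == rhs:              # roughly half the x values succeed
            return (x, y)
    raise ValueError("no point found")    # astronomically unlikely
```

Note that the number of iterations depends on the input, which is exactly why the talk contrasts this with Elligator2 as the time-invariant alternative.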
J: What we've added is domain separation, to make sure that... you know, hashing is used all over the place in this thing, if you look at the design, and I want to make sure that two inputs to the hash, for different hashes for different purposes, are never the same, because if you have an input hash collision, you get an output hash collision, and that can cause all kinds of problems. The way we're going to do it is two things. One, we prepend the cipher suite everywhere. And two, we just have a byte that tells you which use of the hash this is: 1, 2 or 3.
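The prefixing rule just described is simple to sketch: every hash call gets the cipher suite ID plus a one-byte usage tag prepended, so inputs to hashes serving different purposes can never collide. The suite value below is a placeholder, not the draft's registered ID.

```python
import hashlib

# Sketch of the domain-separation rule: suite ID + one-byte usage tag in
# front of every hash input. SUITE is a hypothetical placeholder value.
SUITE = bytes([0x01])

def vrf_hash(use_tag: int, *parts: bytes) -> bytes:
    assert use_tag in (1, 2, 3)   # e.g. hash-to-curve, challenge, output
    data = SUITE + bytes([use_tag]) + b"".join(parts)
    return hashlib.sha256(data).digest()
```

With this rule, the same `parts` hashed for two different purposes produce unrelated digests.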
J: We do that everywhere except nonce generation. We cannot add it to nonce generation if we want to be the same as other standards, because they don't have it. But nonce generation is not an issue, because it uses a secret that's not used elsewhere, so the adversary can't make it collide with anything else.

A decision that we've made, but if people really disagree we want to hear about it, is that we do not do something called pre-hashing. So RFC 8032, the EdDSA RFC, does pre-hashing: if you have a really long message, you first hash it, and then you do everything on the hash value. The reason they need to do it is because the message participates in the process in more than one place, so if it's a long message, you're going to have to chew through it more than once.

We don't have this problem, because the hash-to-curve output, the H that you see on your left, already acts as a pre-hash. Everything else is based on H, not based on the input; this is the only place where the input goes into the scheme. So once we have that, it doesn't seem to make sense to do a pre-hash on top of that, right? So that's essentially the design. Pre-hash EdDSA does not have a hash-to-curve step, so it doesn't get to take advantage of this; we do.

So, it's not clear that we need two Curve25519 cipher suites, one that has the hash-to-curve which is try-and-increment and the other that has Elligator. We're not sure we need to keep it; we can kill it; we're seeking feedback on that. We're seeking feedback also on nonce generation. So, the nonce in these schemes is very important: if you don't have a random nonce, you're in a lot of trouble, your key can leak. It's been sort of the relative consensus of the community that the way to do it is to have pseudorandom nonce generation.

The question is exactly how. What RFC 8032 does: it has three options. The reason it has three options is because it has pre-hashing and/or contexts, and the options are here on the screen. The second and third are kind of weird, because they're trying to avoid... essentially, to create domain separation.

We did something similar to RFC 8032, but not identical, because they have the input and we have the hash-to-curve of the input in there. Now, switching back to the NIST curve: the way deterministic ECDSA, RFC 6979, does it is by using an HMAC-DRBG that requires ten applications of a hash function to get a nonce. It's not clear that's the best way to do it, and it's subject to a tiny probability of a timing attack, because if you don't get a nonce in the right range, you redo. An alternative is to use SHA-512, but the suite doesn't have SHA-512 in it; it's only based on SHA-256. Do we add that? And so that's a decision to be made.
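The simpler alternative mentioned here, an Ed25519-style derivation, can be sketched in one hash call, versus RFC 6979's HMAC-DRBG loop. This is a hedged illustration of the shape only; the names and the use of the Ed25519 group order below are illustrative, not taken from the draft.

```python
import hashlib

# Sketch of Ed25519-style pseudorandom nonce derivation: one SHA-512 over a
# secret prefix and the (hashed) input, reduced mod the group order. Contrast
# with RFC 6979's HMAC-DRBG, which needs ~10 hash applications.
ORDER = 2**252 + 27742317777372353535851937790883648493  # Ed25519 group order

def ed25519_style_nonce(secret_prefix: bytes, h_point_bytes: bytes) -> int:
    digest = hashlib.sha512(secret_prefix + h_point_bytes).digest()
    return int.from_bytes(digest, "little") % ORDER
```

Because the digest is much longer than the order, the modular reduction never needs to retry, which also removes the small timing-attack surface mentioned for the rejection-sampling approach.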
J: Do we really want the ten applications of SHA-256? We're also seeking feedback on various minor nits. We use exponential notation, H to the x, because that's how groups are written; do you want elliptic curve notation, x times H? What's better? Do people want support for contexts? A context is something like a message, but not a message; RFC 8032 has that, we do not. More minor nits: what's the best terminology for "take a hash and then take a few bytes of it"? First octets, last octets, leftmost, rightmost?

K: Thank you very much. [Applause.] Thank you very much for this work. I really think it's very interesting, and the draft has significantly improved since London; now there is a much clearer section about secrecy of the input against dictionary attacks, a new formulation. And the question I have is: if I understand correctly, the VRF notions of security are in some sense more strict than for deterministic signatures, and for signatures...

J: [Answer inaudible.]

G: Of course. But yeah, thanks for the talk, this is great. Just a quick question regarding the new hash-to-curve stuff that you've added, to Steven's point and the question during my talk: it would be great if we could somehow converge on whatever we happen to choose in this document, because otherwise we'd have two competing hash-to-curve implementations, or algorithms, in two different documents, which would cause much confusion and, I think, would not be advisable. So I don't know how you want to do that.

L: Hi, Robin Wilson. This is probably the least cryptographic intervention you'll hear today. A quick question about "first n octets" versus "last n octets": would it work to specify an offset and the number of octets? As long as you're sure you're not going to overrun the end of the hash value, that would allow you to specify different offsets.

J: Well, sure, right. It's just the question of what is more convenient for implementations. Do you want to take bytes 0 through 32, or 33 through 64? And how do we specify it to make it clear, with all the sort of concerns about little-endian and big-endian, so that people don't get confused about which one it is? It's more of a language issue than anything else.

K: Again, one more question. You spoke about some applications, like the Algorand application of VRFs. Maybe you have any new applications that could be interesting, to demonstrate the technology? Because I think the main questions about VRFs are about applications, because it's a new kind of primitive, and the Algorand algorithm is a good example of an application where it is really applicable. Do we have any examples?

J: VRFs are already being used in key transparency projects, CONIKS, that sort of thing; they're actually implemented and used. There is a Signal implementation of VRFs; the Algorand algorithm, also, possibly, is using it; and also NSEC5, which is a DNSSEC piece, is also using VRFs, or wanting to use VRFs. Thanks.

K: My name is Stanislav Smyshlyaev, CryptoPro, and I'm presenting joint work with Cas Cremers, Luke Garratt, Nick Sullivan and Christopher Wood. The work is randomness improvements for security protocols. This draft was adopted as a CFRG draft at the London meeting; I'll give an update about this document and outline the questions that we have now. So, basically, the motivation is obvious.

So we tried to consider the associated issues, and most of them are outlined in the previous versions of the draft, and we hope to find a solution such that any call for entropy can be replaced with this one. So, the construction, as it is now in the current version of the draft. It has changed slightly: G is the output of some CSPRNG, and in the construction we change it.

We replace it by mixing in a signature, with some secret key, of tag1, which is a unique and environment-specific string. All of this is mixed in with the function Extract, which is basically the one from HKDF, and then expanded using tag2, which basically leads to the output. Tag1 here is a constant string which is specific to the device and protocol or environment, and tag2 is a nonce; actually, it can include a timestamp or counter, but it must be unique.
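The wrapper just described can be sketched as follows. This is a hedged illustration of the Extract-then-Expand shape (HKDF built from HMAC-SHA-256); `sig_of_tag1` stands in for a real signature with the device's long-term key, and the tag roles follow the description above, not the draft's exact encoding.

```python
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int = 32) -> bytes:
    out, block, counter = b"", b"", 1
    while len(out) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        out += block
        counter += 1
    return out[:length]

def improved_random(csprng_output: bytes, sig_of_tag1: bytes,
                    tag2: bytes) -> bytes:
    # G'(n) = Expand(Extract(G(n), Sig(sk, tag1)), tag2, n):
    # mix the CSPRNG output with the signature, then expand with the nonce.
    prk = hkdf_extract(csprng_output, sig_of_tag1)
    return hkdf_expand(prk, tag2, len(csprng_output))
```

Even if `csprng_output` degenerates to a constant, distinct tag2 values still yield distinct outputs, which is the failure mode the talk's third security property targets.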
K: The changes from the previous version are the following. First of all, the construction itself: it has been generalized, so now it has output of length n, the same length as for the initial CSPRNG, and it's based on the HKDF notions of Extract and Expand. Moreover, we are thinking about fixing the construction to the one from the HKDF RFC, and we added some clarifications in the security considerations. And David McGrew, in his recent opinion about the document, mentioned that we should stress something more about tag1 and its use.

So, namely: tag1 is the countermeasure against many possible vectors of attack, including those that follow from cloning in virtual environments, or several machines using one HSM with one secret key, and we'll try to clarify this point in the draft. More specifically, tag2 is a timestamp that ensures that outputs are unique.

We thought that it would be reasonable to define target security properties of our construction. The first two of these properties are properties that guarantee that replacement of any CSPRNG with our construction won't do any harm, so it won't be worse, at least. So, first of all: if the CSPRNG works fine in some security model, the output of this construction will be good in the same security model.

Secondly, we don't harm the private key: we don't increase the probability of compromise of the private key, except maybe by some negligible advantage. And the property we are mainly aiming at in proposing this construction is that, if the CSPRNG is broken, or even controlled by an adversary, or even just degenerates to a constant string, the output of the proposed construction will remain indistinguishable in the corresponding security model. We relaxed the requirements on the signature scheme. They were very strict: they required that the scheme be deterministic, though actually it doesn't need to be deterministic;

it just needs to use its own entropy source. Why we thought this relaxation was needed: because if you use an HSM with its own internal entropy source, then it's okay. The problem is only when you use a signature scheme which is not deterministic and which uses the same entropy source as the one we are trying to improve. So this clarification was added to the document. Some minor notes, based on one of the opinions on the list, were added about the comparison to RFC 6979.

Actually, the construction can look more or less comparable to the construction from RFC 6979, but it has completely different semantics and meaning. For example, if you have an HSM with the private key used inside of it, then the construction of RFC 6979 can be implemented inside the HSM, while our construction uses it from outside, so it uses calls to the HSM as a stated primitive. Actually, I refer to today's message to the list, because it reflects most of our current plans.

So, first of all, we must obtain an up-to-date security assessment, because the one we have is a little outdated, and we should add specifics to the draft: specify our construction and add recommendations; some of them were outlined in the slides, and some of them are the same as were in the McGrew letter. And we think that we should add these clarifications to the document itself, and the security considerations should be stated more explicitly, so that we can cover most of the situations that we wrote about which can lead to wrong usage of this construction.

H: Thank you for the document and the presentation. David McGrew, Cisco. So I have, like, a question, just to informally talk it through. One of the assumptions in the security analysis is that the signature of tag1 isn't obtainable by an adversary; but in the conventional, you know, security model for a digital signature, you assume the attacker can, you know, obtain signatures for chosen messages. So I think you just need to explicitly state that as another case in your case analysis and say: well, you know, if your PRNG is no good and the attacker can do a chosen-message attack, well, your game is up, but it would have been anyway, or something like that. Yes.

K: I agree with you, and yes, we should add this. Moreover, these were issues that we tried to address as well as possible: we had discussions of all sorts about how this tag1 should be chosen in a way that it can't collide, in the real world, with plaintexts that are signed. Of course, it may not be fully formal, but of course we should add words to the security considerations, and we'll do it. Thank you very much for attention to this issue.

M: Good afternoon, I'm Hugo Krawczyk. I'm going to present this protocol we call OPAQUE. It's an aPAKE, a password-authenticated key exchange, and an asymmetric one, in the sense that it's not two friends that share a password: it's a client and a server. The client, or the user, has a password; the server keeps an image of it.

This is based on recent work that we presented at Eurocrypt 2018, and I've written a draft about this, which I will submit officially... right now. A week ago or something like this, I sent a message to the list with a pointer to a rendering of this draft; again, I'll submit it officially for the next deadline. Okay. So, an asymmetric PAKE: the requirements are, first, that it be PKI-free, not based on PKI, not like, for example, the current practice of sending a password over TLS.

Password-over-TLS does one good thing, which is that it uses a secret salt for the hashes, which makes the work of the attacker as hard as possible, meaning that the only point at which the attacker can start running an offline dictionary attack, an exhaustive dictionary attack, is when it breaks into the server. Now, since aPAKE has all these requirements and doesn't need PKI (aPAKE, by the way, is for asymmetric or augmented PAKE; both terminologies are used),

you would assume that aPAKEs are always better than password-over-TLS. Next one. Actually, this is not the case, which is kind of surprising: all aPAKE protocols known to this date, those that are, you know, the practical ones, the impractical ones, the proven ones, the unproven ones...

Next. So you could think that maybe there is something inherent here: that, for example, if you want to do a PKI-free aPAKE, then there is no way you can also do secret salt. And the answer is: this is not the case. Next one. And this is what we actually show with this protocol: it is the first one to be secure against pre-computation, against pre-computation.

So: if you don't use salt, as we all know, then the attacker can build a universal dictionary before it breaks into the server. If you use salt, but the salt is known to the attacker, then at least the attacker can build targeted dictionaries against users, and immediately when it breaks into the server it confirms the password. So OPAQUE will have the property that this is not possible: it is secure against pre-computation, and there is also a formal proof of security, under a strong definition that is cited inside the draft.
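The pre-computation point made here is worth seeing concretely. A tiny illustration (toy passwords, plain SHA-256): with unsalted hashes the attacker builds the dictionary before breaking in; a secret per-user salt makes that precomputed table useless.

```python
import hashlib

# Tiny illustration of precomputation: unsalted vs salted password hashes.
def unsalted(pwd: bytes) -> bytes:
    return hashlib.sha256(pwd).digest()

def salted(salt: bytes, pwd: bytes) -> bytes:
    return hashlib.sha256(salt + pwd).digest()

# The attacker precomputes a dictionary offline, long before any break-in...
dictionary = {unsalted(p): p for p in [b"hunter2", b"letmein", b"123456"]}

# ...and inverts a stolen unsalted hash instantly:
stolen = unsalted(b"hunter2")
assert dictionary[stolen] == b"hunter2"

# The same table misses against a hash under a salt the attacker never saw,
# so all dictionary work must start only after the server is compromised.
assert salted(b"\x9a" * 16, b"hunter2") not in dictionary
```

OPAQUE's claim, as described in the talk, is that it keeps this secret-salt benefit while remaining PKI-free.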
M
But
it's
important
to
note
that
there
were
provable
or
proven
fake
protocols
under
under
crypto
formal
models,
and
even
those
were
not
secure
in
this
sense,
and
that's
because
these
models
actually
allowed
for
pre-computation
attacks,
which
basically
goes
against
the
very
basis
of
very
basic
requirement
that
you
would
like
to
have
in
these
protocols.
Okay,
so
a
opaque
is
build
on
on
the
basis
of
this
primitive
called,
oblivious,
PRF,
oblivious
to
the
random
function.
What's
on
oblivious
pseudo
random
function,
so
a
pseudo
random
function.
M
We
all
know
it's
a
function
that,
if
you
don't
have
the
key
to
you,
don't
know
the
key
to
the
function.
Then
it
it
behaves
like
a
random
function,
indistinguishable
from
a
random
function.
What's
an
oblivious
PRF
on
a
blueish
PRF
is
a
PRF
with
a
protocol
between
a
client
and
a
server.
The
server
has
a
key
to
the
PRF.
The
client
has
an
input
to
the
PRF
at
the
end
of
the
protocol.
The
client
learns
the
the
value
of
the
function
from
this
point
on
the
X,
which
is
the
input
of
the
client.
M
While
the
server
does
not
learn
anything,
it
doesn't
learn
about.
The
input
of
the
user
doesn't
learn
anything
about
the
output,
so
he
generates
the
output,
but
doesn't
learn
anything
about
it.
Okay-
and
this
is
going
to
be
very,
very
useful
in
this
case.
So
again,
your
f
is
a
interactive
PRF
service
that
returns
PRF
results
under
the
key
that
the
server
has
that
the
server
does
not
learn
the
input
or
output
of
the
function
next
one.
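The blinded Diffie-Hellman OPRF that the talk shows next can be sketched over a toy subgroup mod 23 instead of an elliptic curve; the structure is the same. The PRF is F_k(x) = H2(x, H1(x)^k); the blinding factor r hides both x and the output from the server. All names below are illustrative, not the draft's.

```python
import hashlib

# Toy blinded-DH OPRF over the order-11 subgroup of Z_23* with generator 4.
P, Q, G_ = 23, 11, 4

def h1(x: bytes) -> int:
    """Hash to the group (toy: exponentiate the generator by a digest)."""
    e = 1 + int.from_bytes(hashlib.sha256(x).digest(), "big") % (Q - 1)
    return pow(G_, e, P)

def client_blind(x: bytes, r: int) -> int:
    return pow(h1(x), r, P)                 # a = H1(x)^r, sent to the server

def server_evaluate(k: int, a: int) -> int:
    return pow(a, k, P)                     # b = a^k; server learns nothing of x

def client_finalize(x: bytes, r: int, b: int) -> bytes:
    r_inv = pow(r, -1, Q)                   # unblind: b^(1/r) = H1(x)^k
    y = pow(b, r_inv, P)
    return hashlib.sha256(x + bytes([y])).digest()   # H2(x, H1(x)^k)
```

Note the cost profile matches what the talk highlights: one exponentiation for the server, two for the client (blind and unblind), in a single round trip.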
M
So
the
idea
of
OPEC
is
very
natural
once
you
think
about
an
oblivious
PRF.
What
you're
going
to
do
is
that
the
user
will
take
it
the
password,
a
PWD
and
will
exchange
it
for
a
strong
key
produced
by
the
o
PRF.
Now,
since
this,
the
server
can
provide
the
the
value
of
the
PRF
on
the
password
without
learning
the
password
or
the
output,
then
basically
the
user
getting
some
magic
way.
It
replaces
its
password.
M
You
know
it's
low
entropy
password
with
a
strong
key
where
the
entropy
comes
from
the
key
from
the
the
OPF
key
of
the
server,
but
again
without
the
server
learning
anything
not
about
the
password
and
not
about
the
key
that
was
returned
by
the
PRF.
So
now
the
user
has
a
strong
T,
and
now
everything
is
simple.
What
the
user
will
do?
M
Here,
I'm
showing
one
implementation
of
the
of
a
PRF
these
just
blinded
the
diffie-hellman,
a
very,
very
simple
transform
I
will
not
get
into
the
details
of
the
the
function.
Again.
It's
simple,
but
I
don't
have
time.
I
only
I
want
to
highlight
the
performance
issue
here.
The
server
apply
requires
one
exponentiation
for
the
server
for
the
client
is
one
fixed
base,
exponentiation
and
one
variable
based
exponentiation,
and
this
is
just
a
single
round
message
from
client
to
server
and
from
server
to
client.
That
is
also
I
should
have
I'd
like
to.
M
M
Change
these
two
messages.
Actually,
if
you
have
an
implicitly
authenticated
key
exchange
like
QV
you
can
at
this,
is
the
whole
protocol.
What
you
see
what
you
see
there
just
one
one
message:
each
direction:
if
you
want
explicit
authentication
from
the
client,
then
you
add
one
more
one
more
message:
next,
one
against
this
is
very
fast
and
just
to
give
you
an
idea,
of
course,
the
details
need
to
be
read
more
carefully,
but
it
is
just
to
give
you
the
general
idea.
M
If
you
use
opaque
with
hmq
V,
which
is
a
particular,
is
a
very
efficient
implicitly
authenticated,
take
change,
then
you
basically
get
the
full
protocol,
the
full
opaque
in
two
and
a
half
X
pronunciations
for
each
party.
This
is
a
more
or
less
similar
to
speak
to
class,
which
is
one
protocol
that
has
been
or
is
being
proposed
exactly
for
this.
For
this
functionality,
the
difference
is
that
spec,
2
plus
is
insecure
against
pre-computation
attacks
against,
because
again,
because
it
cannot
use
a
sacred
salt
and
also
speak
to
Plus
does
not
have
a
proof.
M
The
draft
but
define
spec
2
plus
says
that
there
is
a
proof,
but
no
there
is
no
no
proof.
It
was
proposed
in
a
paper
by
David
cash
on
victor
Shoop
and
John
Kemeny
sh,
but
I
check
with
the
with
the
author's.
The
paper
doesn't
have
a
proof
and
they
don't
have
a
proof.
There
is
not
even
a
model
in
that
paper,
but
maybe
more
more
essential
is
the
fact
that
that
protocol
cannot
accommodate
secret
salt
and
they
opaque
can,
by
the
way
I'm
a
music
agent.
Give
us
an
example.
M
We
have
to
say
that
since
H
MTV
has
a
patent,
my
name
is
even
though
it
doesn't
belong
to
me.
But
but
what
I
do
want
to
say
is
that
if
there
will
be
interest
at
some
point
of
interest
and
arising
up
a
quiz
TV
the
most
probably,
that
will
not
be
obstructed
and
cannot
promise
but
I
think
that's
what
will
happen
now.
What
I
think
is
most
important
in
these
symmetric
pegs
is
to
find
ways
of
integrating
it
in
TLS.
M
Because of this fact, that OPAQUE is actually a compiler from any key exchange to a PAKE, it is very easy to integrate it with TLS. All you are going to do is use the OPRF to exchange the user's password for a signature key, and now you can use that signature key for client authentication in TLS.
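The blind/evaluate/unblind mechanics of an exponent-based OPRF (in the spirit of 2HashDH) can be sketched as follows. This is a toy illustration only: the parameters p and q are small and insecure, the hash-to-group mapping is simplified, and all function names are hypothetical; a real deployment would use a standardized prime-order elliptic-curve group and a proper hash-to-curve method.

```python
import hashlib
import secrets

# Toy parameters (NOT secure): safe prime p = 2q + 1, prime-order-q subgroup.
p, q = 2879, 1439

def hash_to_group(x: bytes) -> int:
    # Simplified hash-to-group: squaring puts the value in the order-q
    # subgroup. Illustrative only; not how a real OPRF maps inputs.
    e = int.from_bytes(hashlib.sha256(x).digest(), "big") % p
    return pow(e, 2, p)

def blind(pw: bytes):
    # Client: hide the password element under a random exponent r.
    r = secrets.randbelow(q - 1) + 1
    return r, pow(hash_to_group(pw), r, p)

def evaluate(k: int, blinded: int) -> int:
    # Server: apply its OPRF key k without learning the password.
    return pow(blinded, k, p)

def unblind(r: int, evaluated: int) -> int:
    # Client: strip the blinding, leaving H(pw)^k.
    return pow(evaluated, pow(r, -1, q), p)

def oprf_output(pw: bytes, element: int) -> bytes:
    # Final hash of input and group element; this value could seed a
    # deterministic client signature key.
    return hashlib.sha256(pw + element.to_bytes(2, "big")).digest()
```

Two runs with different blinding factors yield the same output for the same password and server key, which is what lets the client re-derive the same signature key at every login.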
M
If you don't care about encrypting the user ID when you contact the server, then you can do the whole thing in exactly the same three flights as a regular TLS handshake. If you want to encrypt the account information, then you either need to use a resumption, if you have one, or you need to add one round trip to TLS. But again, everything is very easy to integrate because of this general paradigm: take my password, give me a key, and then do with that key whatever I want.
M
OPAQUE security: it has been shown to be secure against pre-computation attacks, and it is PKI-free. It has forward security, and, something I didn't mention, you can make it harder by doing iterated hashing, as we usually do in regular password protocols, and these hash iterations are actually offloaded to the client. The next one.
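The client-side hardening step can be sketched like this; PBKDF2 and the iteration count here are illustrative stand-ins chosen for this sketch, not what the OPAQUE draft specifies.

```python
import hashlib

def harden(oprf_output: bytes, iterations: int = 100_000) -> bytes:
    # Key stretching applied by the client to the OPRF output. The server
    # never pays for these iterations, so the client can set them high.
    return hashlib.pbkdf2_hmac("sha256", oprf_output, b"", iterations)
```

Because the stretching happens after the OPRF, the server's per-login cost stays constant no matter how expensive the client makes this step.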
M
Private salt against pre-computation attacks; hash iterations on the user or client side; it's friendly to TLS integration. And two things I didn't mention: since the user is actually storing its private key with the server, you can use exactly the same mechanism to store other secrets, you know, your Bitcoin key if you want, or any credential or anything that you want to store at a server. And finally, if you want to make your server even more secure, you can turn it into a threshold scheme.
G
All right. Chris Wood, Apple. Thanks for bringing this; this is really great. I definitely want to see this happening in the CFRG. Two quick questions. Number one: the OPRF that you specify, is that the same 2HashDH OPRF that you and Stas use in the password-protected vaults paper, or has that changed?
M
It works with not every key exchange, but with every protocol with the KCI property, okay, which is most protocols; and protocols that don't have KCI security, you shouldn't use them in the first place. Yes, yes, it's a technical issue, but it's significant. There are several technical issues here that show how sensitive this thing is, and the proofs actually show you what you need there, yeah.
K
First of all, thank you very much for bringing this; I really think it's very important work. Just a small clarification, if I understand correctly: we can modify any KCI-secure key agreement into a password-authenticated key agreement by migrating our private key to the server. But if an adversary has full access to the server, occasionally it can mount not only a dictionary attack on the password, but also an attack on the private key itself.
H
Thank you. David McGrew, Cisco. So, Hugo, this is really good stuff; thanks for doing it, thanks for bringing it here, and I'm sure Richard Barnes will be interested to talk to you more about this. I just wanted to encourage you to keep this up, and I want to say that there is,
H
let's say, a way to use passwords to authenticate to servers such that you don't have to trust the server would be really meaningful, because, you know, it's very commonplace nowadays for phishing attacks to steal people's passwords, right? That would be a really meaningful improvement in the security of the internet, if we could eliminate that trust in the server.
O
A picky comment: so there are three drafts presented today, they're all elliptic-curve drafts, we're all taking things to the exponent, hashing to the curve and doing things, and we've all written things a little bit differently. If you look at the way that our draft, the VRF draft (by the way, I'm Sharon Goldberg, one of the co-authors of the VRF draft that was presented before), is written, the way we've written it and the way you've written it are pretty different.
D
Hello everyone, I'm Benoît Viguier from Radboud University in the Netherlands. I'm going to quickly give an update about why we should actually publish KangarooTwelve as an RFC. So KangarooTwelve is an interesting function because, well, it supports arbitrary output length, so you can actually use it to derive entropy from a source with 128 bits of security, and there is no RFC that provides such functionality so far. It also provides parallel hashing, which means that it gets faster as your input length grows.
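KangarooTwelve is not in Python's standard library, but SHAKE128 (also Keccak-based, also targeting 128-bit security) illustrates the same extendable-output interface, where the caller chooses the output length.

```python
import hashlib

seed = b"some entropy source"

# Request as many output bytes as needed from the XOF.
okm_32 = hashlib.shake_128(seed).hexdigest(32)   # 32 bytes of output
okm_64 = hashlib.shake_128(seed).hexdigest(64)   # 64 bytes of output

# For a fixed input, a shorter output is a prefix of a longer one.
assert okm_64.startswith(okm_32)
```

This prefix property is what makes an XOF usable as a general entropy-derivation primitive: one call can produce as much keying material as the application needs.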
D
Also, it's based on the Keccak permutation, so KangarooTwelve is a hash function, or rather an extendable-output function. It's not a construction with generic properties; it's really a hash function that you can use instead of SHAKE, for example, and it's based on SHA-3. From that, we can reuse the code of Keccak, we can reuse the optimized implementations that are currently available (for example the ARMv8 instructions), and also it's at least twice as fast as SHAKE. And because it's based on SHA-3, well, it's a public design and it really inherits the same cryptanalysis as SHA-3.
D
So it's a really, really secure design, and not just something new, because the construction is based on proofs: for the sponge construction and for the Sakura tree-hashing construction. So the parallelism is proven correct, the sponge construction is proven, and then you just rely on the security of SHA-3. If you don't trust SHA-3, then you cannot trust KangarooTwelve, but I don't think that is the point here. Also, the current best attacks are collision attacks on five rounds of SHA-3, and there is a preimage attack over
D
eight rounds, with a workload of 2^128. There is a preimage attack over nine rounds with a workload of 2^256, but that's way above the security requirement that we want, so we only have these two attacks, over five and eight rounds. KangarooTwelve has 12 rounds, so we still have a 100% security margin for collisions and at least a 50% security margin for preimages. So it's still really, really secure, and not just a diminished-security version of SHA-3.
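The security-margin arithmetic being quoted is simply the fraction of unattacked rounds relative to the attacked ones:

```python
ROUNDS = 12  # KangarooTwelve round count

def margin(attacked: int, total: int = ROUNDS) -> float:
    # Extra, unattacked rounds as a fraction of the attacked rounds.
    return (total - attacked) / attacked

collision_margin = margin(5)  # best collision attack reaches 5 rounds
preimage_margin = margin(8)   # best preimage attack reaches 8 rounds
```

With attacks at 5 and 8 of 12 rounds, this gives margins of 140% and 50% respectively, matching the "100% or more" and "at least 50%" figures above.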
D
So, in the end, why is it interesting? Well, as previously said, it's an open design, the result of a worldwide competition. It has long-standing scrutiny from crypto research groups in France and all over the world. It doubles the speed with respect to SHA-3. It has provable security, and the Sakura parallelism is actually scalable and not just limited to 8 cores or 4 cores.
K
Thank you. I have a short question. There is a process in ISO/IEC SC 27 of an amendment on hash functions. Do you have any plans to bring this to ISO? Because they have a process of revision of the hash-function standard now, and maybe it would be a good chance to bring in new constructions.
K
But now there is an amendment being prepared, and when you have an amendment you can discuss it, and I hope that the time is not past the deadline for adding new functions; maybe that's also a good way to go. But of course, in the IETF it's a good way to proceed also. Thank you very much. Thank you for watching.