From YouTube: IETF112-CFRG-20211111-1200
Description
CFRG meeting session at IETF112
2021/11/11 1200
https://datatracker.ietf.org/meeting/112/proceedings/
A: So I suppose we can start. Hello everyone, this is CFRG; Alexey Melnikov, Nick Sullivan and Stanislav Smyshlyaev are the chairs, and we'll start with our traditional presentation. If we have some spare time — and we have quite a lot of time — we'll take any other business. Any last-minute agenda items? One, two, three... okay, then let's discuss our current document status.

A: We have a new RFC — that's Argon2, finally. You can see that it is shown as waiting for document shepherd, but actually earlier today the shepherd's review was uploaded to the datatracker, so now it will be waiting for the IRTF chair. And we have a lot of changes in other drafts.

A: We have an update for VOPRF, and we'll have a status update from Chris Wood later today. The pairing-friendly curves draft is updated, and there are some changes in the author team. We have updates on both PAKE drafts, OPAQUE and CPace, and again we'll have an update on CPace from Björn Haase later today. We have updates on FROST and RSA blind signatures, and we have two drafts in research group last call or collecting feedback after the last call: that's VRF and KangarooTwelve.

A: The crypto review panel is potentially ready to help any other working groups, so it can be used to review documents coming to CFRG or to the security area. Possibly we will ask for reviews from crypto review panel members before research group last calls for several documents, and for the most difficult documents.

A: We would be happy to keep working with the current experts on the panel, because we are sure that the current panel members' work is really excellent and we have a lot of great reviews, but we will also call for nominations to join the crypto review panel. I'll send a message to the CFRG list, maybe in a week or two.

A: Okay then, let's start with our first presentation. This will be from Chris Patton: verifiable distributed aggregation functions. Please, Chris.
B: Can I share the slides? Yep, I requested to share my slides, I'm waiting for it.
B: Okay, so yeah, this is a new document; it's at very, very early stages right now. We're not asking right now for the research group to consider adoption. The purpose of this presentation is mostly to introduce everyone to this concept and solicit feedback on the document, to help shape it and put it in the right direction.

B: This dovetails with the PRIV BoF yesterday. I'm not going to assume anyone here has attended that BoF, and I'm not going to go into much more detail than Richard Barnes did yesterday. As I said, the intention here is to kind of introduce everyone to this idea.

B: So what is a verifiable distributed aggregation function, or VDAF? The purpose of the BoF yesterday was to form a working group that will work on standardizing protocols for privacy-preserving aggregation of user measurements. Today, service providers are taking a bunch of measurements of users for various purposes, and very often they're interested just in the aggregate and not in the individual measurements themselves.

B: The charter right now envisions doing this by distributing the computation among multiple aggregation servers, so that the computation can be carried out basically on secret-shared data. No one actually sees any input in the clear, and one thing we have to make sure we do, because we're working with secret-shared data, is to make sure that inputs are valid. Basically, we have to have a way of checking for invalid inputs submitted by clients.

B: So there's going to be some amount of coordination required among the aggregation servers to ensure correctness of the computation. What we're talking about here is basically multi-party computation, but a very, very specific shape of MPC, not general-purpose MPC, and there's lots of recent work in the literature about this. Folks might have heard of Prio.

B: Prio is a paper from Henry Corrigan-Gibbs and Dan Boneh from 2017, and it basically set out to solve this exact problem, and this is what really started our interest in this: this is a solution to a problem that we have, so let's go and standardize it. What one finds, though, is that there are a lot of papers about this, and one thing becomes clear: there's no one-size-fits-all solution.

B: There's no one protocol in the literature that's going to compute every aggregation function one wants to compute. Prio happens to solve a large class of problems, but it doesn't solve every problem. Another thing: when we look at the literature, what we see is a wide variety of security and operational considerations. Starting with operational considerations: how does the client interact with the servers? How do the servers interact with one another?

B: What is the communication pattern? And then, from a security perspective, what is it that we're after? For a multi-party computation it's pretty well defined, but there are rough edges in many of these papers; there's a certain amount of leakage sometimes. So what we end up with is a lack of a consistent abstraction boundary for standardization efforts like PRIV to build on.

B: So what we want to do in this document is provide this abstraction boundary; that's kind of the main purpose. Our hope is to directly address the operational and security considerations that have come up so far in real-world deployments — a couple of these were talked about in the BoF yesterday. Another thing we want to do is provide basically a design target for cryptographers to go and build new schemes, so they don't have to think about, you know:

B: "What is the shape of this protocol supposed to be?" I'm given the shape, and I try to find a solution that works in this framework. At the same time, we want to standardize a few VDAFs from the literature, in particular those that we've looked at in the PRIV BoF so far.

B: So what is a VDAF? An aggregation function basically means we have a sequence of inputs and we want to compute some function over those inputs; the order of the inputs shouldn't matter. Then there's also this thing we call an aggregation parameter, and the aggregation function takes the measurements and the aggregation parameter and outputs the aggregate result.

B: For example, this could be the arithmetic mean; this could be some estimation of the distribution of inputs; or the function can be a little bit more sophisticated. Say we want to count: p is a string, m_1 to m_n are strings, and we want to count the number of times that p occurs in the set.
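The counting aggregation function just described can be sketched directly; the names below are illustrative, not taken from the draft:

```python
# Toy illustration of the aggregation function described above: count how
# many of the measurements equal the aggregation parameter p. The order of
# the measurements does not matter.

def count_occurrences(agg_param: str, measurements: list[str]) -> int:
    """Aggregation function F(p, m_1..m_n) = |{i : m_i == p}|."""
    return sum(1 for m in measurements if m == agg_param)

result = count_occurrences("cat", ["cat", "dog", "cat", "bird"])  # -> 2
```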
B: So the overall shape that we envision for this is the following.

B: We divide the processing of inputs into discrete steps that are executed by all the parties in the protocol. It begins with the clients mapping their measurement to a sequence of input shares — we call this the sharding step — and then imagine there's this function that takes these.

B: We call this the preparation function, for lack of a better term. It takes the aggregation parameter and an input share and outputs an output share, and this is a function that's run locally by the aggregator. Then, in the next step, the aggregator takes all of its output shares, combines them in some manner, and outputs what we call an aggregate share. And then there's another server, called the collector,

B: who eventually gets all of these aggregate shares and can combine them to get the final aggregate result.

B: The reason this works is: imagine the output shares have some algebraic structure — they might be in a ring or a field or just a group — and we can compute the aggregate result just by adding things together. So we combine the output shares by adding them, and then we combine the aggregate shares by adding them. So yeah, that's the math; this is how we get privacy.
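The shard/prepare/aggregate/collect flow above can be sketched for the simplest case, a sum over a prime field with plain additive secret sharing. This omits the validity-checking MPC discussed later, and the names are illustrative:

```python
# Minimal sketch: each client splits its measurement into additive shares
# mod a prime Q (sharding); each aggregator sums the shares it receives
# (its aggregate share); the collector adds the aggregate shares to recover
# the sum. No single aggregator ever sees a measurement in the clear.
import secrets

Q = 2**61 - 1  # toy prime field modulus (a Mersenne prime)

def shard(measurement: int, num_aggregators: int) -> list[int]:
    """Client step: split a measurement into additive shares mod Q."""
    shares = [secrets.randbelow(Q) for _ in range(num_aggregators - 1)]
    shares.append((measurement - sum(shares)) % Q)
    return shares

measurements = [3, 1, 4, 1, 5]
NUM_AGG = 2

# Each aggregator accumulates the shares sent to it.
agg_shares = [0] * NUM_AGG
for m in measurements:
    for j, s in enumerate(shard(m, NUM_AGG)):
        agg_shares[j] = (agg_shares[j] + s) % Q

# Collector step: adding the aggregate shares recovers the sum.
aggregate_result = sum(agg_shares) % Q  # -> 14
```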
B: As you can imagine here, no matter what any of the servers do in this protocol, they never see output shares in the clear or learn anything about the measurements, as long as one of the aggregators executes the protocol honestly — and that means, basically, "I don't share any of my output shares with the other servers."

B: So what can go wrong here? You have a very sparse input share space. The client maps its measurement to a sequence of input shares, and if you're doing secret sharing over some finite field, it's pretty straightforward for a client to just pick random junk, and when the aggregators process that junk and output aggregate shares to the collector, the collector is just going to combine them and get junk. So we have to avoid this problem, and for systems like Prio

B: this is kind of a first-class consideration, so we're going to try to bake that into the protocol. We want to make that part of the protocol somehow, and in general what we're going to need to do is provide some way for the aggregators to interact with one another. So we're going to replace this local preparation step with a multi-party computation that takes the aggregation parameter

B: and the input shares and maps them to the output shares. The idea here is that no matter what a malicious aggregator — an active attacker — does, as long as the other aggregator is honest, it learns nothing useful about the output shares that it wouldn't learn otherwise. And I want to emphasize again: this isn't meant to be general-purpose MPC.

B: The solution here is going to be tailored to the application. So we've been looking at two basic schemes from the literature that have this shape. One we're calling Prio3. There's the original Prio paper from 2017, and then some of the original authors plus some new people worked on this really, really cool paper on distributed zero-knowledge proofs.

B: Well, okay, I forget the exact name of the paper, but I call it distributed zero-knowledge proof systems — Crypto 2019; I have the reference at the end. The use case here is that we're encoding measurements as vectors of elements of some finite field, so the input shares are just additive shares over that finite field, and the aggregation function is any function that can be defined by just adding up these encoded vectors. And what the

B: distributed prepare MPC is going to do is just check for validity, where validity is defined by an arithmetic circuit that's evaluated over the input. I am running low on time and I want to save time for questions, so I'm going to skip ahead for now and just say that we're working on some implementations.

B: We have the two main schemes we were interested in implementing; they're in various states, but we're very early on here. I can report some basic metrics: for Prio3, the computationally expensive part is proof generation — basically this sharding step — and for the aggregate functions that we're interested in, this is pretty performant.

B: The main thing, I think, is communication cost, but these numbers are kind of preliminary; we can drive them down more with optimizations. And I want to emphasize that this is being built on lots of prior work, and I won't say much more about that because I want to save time for questions. So I will leave it here and take any questions people have.
D: Watson Ladd, Cloudflare. I'm not sure I understand from your presentation what you're asking. You're saying you don't necessarily want adoption — is that just for now, or might this come back in the future? You might ask for adoption later?
B: Yeah, so I think we're going to come back and ask for adoption later. We want to wait until the working group is formed — the hope is that the PRIV BoF yesterday leads us to forming a working group — and we want to wait until that's chartered, so that we have a pretty clear path from the protocol that's being designed there to the thing that we're doing here, basically.

B: But I would love it if people read the document and told me what they think. There are a lot of things that potentially fit in this framework, and I would love to hear about new constructions that potentially don't fit, and consider whether we should change the syntax to account for them.
F: I hear you. So again, I'd like to present an update regarding the CPace draft. First I'll be speaking about the updates regarding the security analysis papers, and then I'll be talking about the major rewrite of the internet draft that we are currently working on.

F: Regarding the security analysis, there are two major updates. First, we have uploaded a new revision of our security analysis paper — that's the work with Julia Hesse and Michel Abdalla — with the proofs regarding CPace. And then there's a second paper, by Edward Eaton and Douglas Stebila, who have been working on the quantum annoying property of CPace.

F: Regarding this quantum annoying property: it had been conjectured that an attacker which is able to calculate some discrete logarithms via an oracle still faces hard work in attacking CPace. In fact, they have formalized the assumption, and they found that the attacker's advantage is dominated by the number of online queries — testing one password online — or, alternatively, using one discrete logarithm oracle query for testing one password offline. So that's the essence.

F: We call the first the initiator-responder protocol, and the second option is the case where we don't have a prescribed ordering, where it's not clear which party sends the first message, and in the proof we now cover both settings.

F: We also worked on improving readability and, most importantly, on clarifying the role of the session id. Beforehand we only had the security proof in the simulation-based UC model, which implied that there needs to be some session id. This part of the analysis is covered in the new appendix B of the security paper, which includes a game-based proof that complements the security analysis that we had beforehand.

F: With this proof we are able to show security of CPace without pre-established session id values. As a result, we still have strong security guarantees, but the guarantees are somewhat different in comparison to the case where we have the simulation-based proof. For details — it's, I think, too complicated for this presentation — it's described in the paper.

F: Here we have a generator function, which maps the password to a generator of the group, and then we have two scalar multiplication functions. We decided to use two functions in order to make it explicit that point verification — checking for low-order points and the identity element — is important; we split it up so that we can mandate the check and make it transparent where it's necessary to avoid attacks. And once we have these four functions, we have an appendix G where we analyze how to best implement these four functions for the different application settings, which we might call ecosystems.

F: So we added one section on how to implement these functions best when using short Weierstrass curves, in order to maintain best compatibility with the same use cases that we have today for the NIST curves and the Brainpool curves. Then we have a second ecosystem, where we focus on constrained devices, which would like to work on Montgomery curves and use X25519 or X448, and then CPace for idealized group abstractions such as Ristretto and Decaf.

F: Regarding the session identifier, we came up with a recommendation that it's a good idea, if it's possible in the application, to first agree on a joint session identifier between the two parties. Both users should contribute to the randomness when generating the session id; it's not complicated.

F: More details are given in reference 3, which I've included just in the slides. The main CPace message is secure also without a pre-established session id, but the pre-established session id makes a difference. Specifically, the essential point is that there is less which can go wrong if you take CPace as one sub-step of a larger construction and want to integrate it into a larger application. If you have a session id which is unique for this specific protocol run, you bind the protocol run to this specific session, and there is less that can go wrong. And secondly, there is some impact of the session id uniqueness on the level of the quantum annoyance guarantees. For details on this minor impact, I've added pointers to the chapters where an interested reader may have a look at the ugly details of this property.
F: Regarding the internet draft, we are currently making a major rewrite. The current draft, version 2, is a mixture between a scientific paper and a guidance document for the implementer, and with the new papers I think we have the basis for referring the theoretically inclined reader to the paper, so that we can focus on the implementer in the draft. That's what we are currently working on.

F: The current version is on GitHub; it's not yet uploaded, and we plan to upload a new version once we have finished the test vector generation for the new definitions and the new text that we are preparing. What we have changed in the structure is that we now consider both the parallel-session setting — remember, this is for applications such as Magic Wormhole, which has been discussed on the CFRG list, for instance —

F: so this parallel setting, and the classical initiator-responder setting; that's included now. And we have a structure where we first give a generic definition of the protocol, based on these four functions that I've mentioned before, and then we have a second section where we specify how exactly these four functions shall be implemented for three specific ecosystems: short Weierstrass curves, single-coordinate ladders, and Ristretto and Decaf.

F: Regarding the protocol description, there's one change regarding the protocol messages, and it stems from the game-based proof which we have in the new security paper. Essentially, it turned out that in order to reuse the existing game-based security models — for instance the game-based model which has been used for SPAKE2 and other protocols — we needed the possibility to have the party identifiers as part of the protocol messages.

F: In order to deal with this setting and reuse the security model there, we decided to add an associated data field to the protocol description, so that we not only have the public shares Ya and Yb in the messages, but also an associated data field. Typically this would include any data that the users will want to authenticate, such as party identifiers, and this also allows for applications where we have no unambiguous party identifier encoding available at protocol start. For instance, if you have a device which has several MAC addresses, and the software which is running the CPace protocol does not know on which of the different MAC addresses the communication actually happens, it's not clear which party identifier — which MAC address — to use at protocol start.

F: These associated data fields allow for including this later as part of the protocol. The second change which we have in the current draft on GitHub is that we modified how we are hashing. The hash functions in the security models — both in the Eaton-Stebila model and in our own paper — are modeled as idealized random oracles which calculate a bit string, and we don't consider imperfections which might stem from Merkle-Damgård constructions such as SHA-256.

F: In order to deal with that, we now have a prescription that we prepend the length of each substring in the protocol messages, to all subfields. That's also helpful in itself — for instance, prepending the length of the public share and the length of the associated data field is helpful for parsing the messages by the receivers.

F: The Sage code is largely rewritten; we now have a proof-of-concept implementation in SageMath for all the different ecosystems. We also follow the directory structure and build system that is used for the hash-to-curve and OPRF drafts, and the goal is to automatically generate the markdown sources for the test vectors from the Sage script. Once we finish this sub-step, the plan is to upload a new I-D revision to the IETF server.

F: So basically, that was it. I'd like to express a big thank-you to Christopher Wood for his help with the XML-to-markdown conversion for the internet draft. It makes a big difference; if I had known beforehand that it's not mandatory to use an XML version of the files, I would have spared a lot of time. So thank you very much for that. We'd appreciate feedback and hints on all aspects, but specifically I'd invite feedback regarding the object-style notation that we now have in the GitHub version of the draft, where we compile, for instance, the different functions that we need together in some kind of object-type notation with a class and methods.

F: Then we are discussing a bit in the editor team whether we should be explicitly considering both the initiator-responder version and the parallel version, or whether we should focus on one setting for conciseness, as is, for instance, the case for the SPAKE2 draft. It adds some complexity, and we have not yet decided what the best way is. Currently we have both versions covered and, as a consequence, two versions of the test vectors.

F: Then there's the question of how to best prepend the field length to the octet strings for the prefix-free encoding. Currently we are suggesting to use a UTF-8-like encoding, as it's very compact — in most cases it's just the plain integer, a single byte, for lengths up to 127 bytes. And finally, there's still some issue with the automagical markdown-to-HTML conversion on GitHub, so if there's somebody who has more experience, or who might volunteer to answer some questions, I'd appreciate that.
A: Then maybe one small question from my side, Björn. Could you please give us the outline of your future plans: when do you want us to ask the crypto review panel for another set of reviews, and when do you plan to ask for a research group last call? Could you please give us your current understanding?
F: My plan is to have a first version with test vectors this week. Then my suggestion will be that we invite comments for, say, two or three weeks on the CFRG list, so that the feedback and hints may be incorporated, and we might have a second published version of the draft by the end of the year.

F: I think the security analysis is now very detailed and very explicit, and it might be by the end of the year that we could ask for a larger review of the draft by the review panel. But what comes out of the first discussion round on the CFRG list might also result in some delay.
C: All right, thanks everyone. Good morning, good night I guess, good day, good evening, good afternoon — whatever time it is for you.

C: This is an update on VOPRF — or OPRF, as I should say now. Since the last time we talked about this, the draft has undergone a somewhat significant change internally, the major change being the transition from the sort of classical 2HashDH (two-hash Diffie-Hellman) OPRF to what's referred to as the 3HashDH partially oblivious PRF, or POPRF, and I'll explain the difference between those in just a moment.

C: There have also been some other minor updates with respect to the ciphersuites that are offered in the spec — changing hash functions to better align with the corresponding groups — and then, as usual, updating test vectors and working on the document's editorial clarity and so on. On the POPRF difference, or update rather: a POPRF, unlike an OPRF, is effectively an OPRF but with an additional input t.

C: In the classical case you have an OPRF computing a PRF over a server private key k, which is hidden from the client, and a client's input x, which is hidden from the server. You get some output y, and the interactive protocol takes place between client and server such that the client doesn't learn the value of k, and likewise the server does not learn the value of x, and the client is the only one that learns the output y.
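The blinded Diffie-Hellman exchange underlying such an OPRF can be sketched in a toy multiplicative group. Real instantiations use prime-order elliptic-curve groups and a proper hash-to-group map, and the full 2HashDH output is a hash of this result; this only illustrates why the server never sees x and the client never sees k:

```python
# Toy exponent-blinding DH OPRF core (illustration only; parameters are
# far too small for real use, and hash_to_group is a naive stand-in).
import hashlib
import math
import secrets

P = 2**89 - 1   # toy prime modulus (a Mersenne prime)
ORDER = P - 1   # exponents are taken mod the group order

def hash_to_group(x: bytes) -> int:
    """Illustrative hash-to-group: exponentiate a generator by H(x)."""
    h = int.from_bytes(hashlib.sha256(x).digest(), "big") % ORDER
    return pow(3, h, P)

# --- client: blind the input so the server learns nothing about x ---------
x = b"client input"
while True:
    r = secrets.randbelow(ORDER - 2) + 2
    if math.gcd(r, ORDER) == 1:   # r must be invertible mod the order
        break
blinded = pow(hash_to_group(x), r, P)   # this is all the server sees

# --- server: apply its secret key k to the blinded element ----------------
k = secrets.randbelow(ORDER - 2) + 2
evaluated = pow(blinded, k, P)

# --- client: unblind to obtain y = H(x)^k, without ever learning k --------
r_inv = pow(r, -1, ORDER)
y = pow(evaluated, r_inv, P)
assert y == pow(hash_to_group(x), k, P)  # matches direct evaluation
```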
C: Now, this t value that is new to the POPRF construction is a shared public value that both client and server know prior to execution of the protocol. It's effectively metadata — or you can think of it as metadata — that binds to the output y. You can think of it as a context string, an input string, what have you; the t here, I think, just generally refers to "tweak". The gist is that you have effectively a more generic OPRF construction.

C: And by more generic I mean you can effectively use it as an OPRF with an empty, fixed, constant shared value as the input t. If you run the POPRF protocol with a fixed public input, what you have functionally is an OPRF on the outside, and this is quite useful, because now we have a construction that is suitable for all of the applications for which the OPRF construction was already useful, but also those that would benefit from additional context where appropriate.

C: Key rotation requires sort of complicated machinery to actually rotate the key, publish the key, verify the key externally, and get it down to clients. In contrast, if you were to have the key change less frequently, but perhaps fold the expiration of tokens in as the public metadata, your resulting deployment might be a lot simpler, in that you could effectively just pin a key on a blackboard or chalkboard somewhere and change it very infrequently — perhaps it's in an HSM or some other privileged environment where the risk of compromise is quite low — and still get the same functionality as if you were actually rotating the key.

C: So you can imagine using metadata for that particular purpose. The Facebook private stats paper — or Meta private stats paper, however we'll call it — used an attribute-based VOPRF in a similar way; the attribute there was the expiration timestamp associated with the OPRF output.

C: You can instantiate the exact same system using a POPRF with the expiration encoded as the metadata input. You can also, rather than using time as your public input, use some other value that constrains the PRF output in space. For example, you can imagine, in the context of Privacy Pass again, using information about where the client is coming from —

C: perhaps its country, or its network ASN, or something like that — to ensure that tokens for this particular client are only redeemable in that particular region, so you can't have tokens being collected in one region and then spent in another.

C: In general, there are a variety of use cases that come up where metadata is quite useful for simplifying the sort of ecosystem in which OPRFs exist, and as a result this more general construction seems more applicable to a wider set of use cases without invalidating any existing ones.

C: From a security perspective, the paper upon which this is built demonstrates that, just like the classical two-hash Diffie-Hellman OPRF, it reduces to the classical discrete log problem in the algebraic group model. However, the proof of this reduction is done with a game-based definition of security;

C: it doesn't prove that it satisfies the sort of ideal OPRF functionality that has been used to assess the security of OPAQUE and similar password-based constructions from Stas, Hugo and others.

C: We have also shown that the security parameters for the static Diffie-Hellman attack, or the Cheon attack, are identical to those of the classical OPRF problem as well, meaning that you don't need to use even larger groups than you would for the classical 2HashDH to combat this particular problem.

C: So they're similar in that respect, and I guess the takeaway here is that we have effectively equivalent confidence in the security of both of these constructions. But the lack of a UC proof for 3HashDH does call into question sort of the security analysis for OPAQUE in particular, because OPAQUE is dependent on the VOPRF draft, and the VOPRF draft now uses this new construction.

C: However, we are actively, behind the scenes, working with the authors of all the relevant works to ensure that this analysis is done, and all signs point towards it being feasible. So I'm not at all concerned about demonstrating that this new construction can sufficiently meet the ideal OPRF functionality that is necessary for OPAQUE; it's just work that needs to be done, as is, you know, usual for CFRG.

C: There is one important difference that I tried to note on the list — I didn't get much feedback — and that is that the classical two-hash OPRF is threshold-friendly, meaning you can secret-share the private key using Shamir and then non-interactively evaluate an input in a threshold manner, transparently to the client, which is quite nice if you wanted to do that for your application.
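The non-interactive threshold evaluation just described can be sketched as follows: Shamir-share the key k, have each server exponentiate the (blinded) element by its share, and combine the partial results with Lagrange coefficients in the exponent — no interaction among the servers. A toy sketch in the prime-order subgroup of a small safe prime (parameters far too small for real use):

```python
# Toy t-of-n threshold evaluation of elem^k without ever reassembling k.
import secrets

P = 1019   # safe prime: P = 2*Q + 1 (toy size only)
Q = 509    # prime order of the subgroup of squares mod P
G = 4      # generator of the order-Q subgroup

def shamir_share(secret: int, t: int, n: int) -> list[tuple[int, int]]:
    """t-of-n Shamir shares of `secret` over GF(Q)."""
    coeffs = [secret] + [secrets.randbelow(Q) for _ in range(t - 1)]
    def f(x: int) -> int:
        return sum(c * pow(x, i, Q) for i, c in enumerate(coeffs)) % Q
    return [(i, f(i)) for i in range(1, n + 1)]

def lagrange_at_zero(i: int, ids: list[int]) -> int:
    """Lagrange coefficient lambda_i for interpolation at x = 0, mod Q."""
    num, den = 1, 1
    for j in ids:
        if j != i:
            num = (num * (-j)) % Q
            den = (den * (i - j)) % Q
    return (num * pow(den, -1, Q)) % Q

k = secrets.randbelow(Q)                    # the OPRF key, never reassembled
shares = shamir_share(k, t=2, n=3)
element = pow(G, secrets.randbelow(Q), P)   # stand-in for a blinded input

# Any 2 of the 3 servers each compute element^{k_i} locally; the combiner
# raises each partial to its Lagrange coefficient and multiplies.
subset = shares[:2]
ids = [i for i, _ in subset]
partials = [(i, pow(element, k_i, P)) for i, k_i in subset]
combined = 1
for i, z in partials:
    combined = (combined * pow(z, lagrange_at_zero(i, ids), P)) % P

assert combined == pow(element, k, P)   # equals evaluation with the full key
```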
C
In
contrast,
the
three
hash
debbie
hellman,
because
it
doesn't
require
private
key
augmentation
during
evaluation,
that
the
same
techniques
don't
apply,
which
means
that
making
this
threshold
a
threshold
or
turning
this
into
a
threshold
deployment
would
require
sort
of
an
interactive
protocol
between
the
thing
that's
doing,
aggregation
of
different
shares
and
the
different
holders
of
secret
share
private
keys,
multi-round
here
being
an
interactive
multi-rom
being
just
a
single
round
between
them.
C
We do believe, however, that for the purposes of, you know, a fixed public input — which again is effectively the same case as if you're using 2HashDH — we can use the existing threshold mechanisms for turning 3HashDH into a threshold-friendly protocol in practice. But that's sort of intuition at this point; we need to write the code to verify that it's actually correct.
C
However, I raised this because the original VOPRF spec was not written with threshold friendliness in mind. In particular, it said nothing about how to implement the server side of the existing 2HashDH VOPRF in a threshold-friendly manner — nothing in terms of how to actually do the secret sharing, how to distribute the key amongst all these different participants and so on, and what the aggregation looked like.
C
I guess, from one perspective, you can argue that this is not a functional regression in terms of the set of features supported by the draft, in moving from two-hash to three-hash.
C
So we ask the question: do we actually care about threshold-friendly OPRFs, based on the previous scope of the document and the current scope of the document? If the answer is no, I think we should just press on with what's in there now and only pick one particular construction — specifically the more generic POPRF based on 3HashDH — simply because it's a generalization and it satisfies more use cases.
C
However, do people think there are legitimate needs for threshold-friendly OPRFs? I say needs rather than wants, because I want to distinguish the mere ability to threshold from the actual desire to use this in practice — I don't think it's in our best interest to try to specify functionality that people will not use; that would just further complicate things.
C
If they do, there are questions. The first of which is: do we now do the work to show how to deploy 3HashDH in a threshold-friendly manner? Or do we specify 3HashDH without threshold capabilities and 2HashDH with threshold capabilities — and are these then treated as sort of separate cryptographic objects with distinct APIs?
C
And, I guess importantly in my view: what do we do about distributed key generation, which is perhaps the harder problem in this space? Indeed, the threshold signing draft, FROST, kind of punts on this and assumes, you know, either you have some trusted dealer that's dealing with key generation, or you have some other out-of-band protocol for actually distributing the keys. Pedersen's protocol does not generalize to this — or, there is no—
C
Anyways, that's it for the update. At this point I think we need to hear from, you know, implementers who care about use cases of OPRFs and care about what's in the specification, to know whether or not this is heading in the right direction — I believe it is — and to clarify what the current direction is: it is simply the partially oblivious PRF construction, without any threshold stuff in the spec. But I think we need to hear from others.
A
Thank you, Chris. Yes, Björn?
F
In order to join forces, maybe — because we considered working on a proof which considers the properties of the map. In the current analysis you are treating the map as an ideal randomized random oracle, and we are aiming at writing a proof regarding the actual properties of the map, just as we've been doing for CPace, so that might enter the storyline.

F
But if you're working on this, we might discuss, so that we join forces and have less effort together on that.
C
Yeah
I'll
follow
up
offline
and
we
can
chat
thanks.
Chris.
B
Chris, I have one comment, but I want to start with a kind of clarifying question. So the new construction requires the— okay, so the current analysis for the new construction is in the generic group model — algebraic group model? Yeah, okay. So my question is: is the analysis for the 2HashDH construction in the same model?
C
They're not in the same model: 2HashDH is in this UC model, and 3HashDH is in this algebraic group model, and we don't quite know how to compare them.
B
Okay. I mean, I think that for that reason it might be worth keeping them both around, just because if it turns out the algebraic group model is overly optimistic, it's nice to have a backup. But I'm also trying not to make your life too complicated — I mean, I'm fine—
C
To
write
text,
I'm
less
fine
with
specifying
things
that
have
basically
equivalent
security.
B
But
I
think
you
can
argue
that
they
don't
have
equivalent
security
if
one,
if
one
isn't
provably
secure
in
this
in
the
model
of
the
other.
So
I
mean
we
can
take
that
question
offline.
I
guess
sure.
B
Okay,
the
other
thing,
the
other
thing
I
wanted
to
say
the
the
thresholding
bit
makes
things
very
complicated
and
I
would
say,
without
an
explicit
use
case
that
someone
cares
about.
I
think
it
shouldn't
be
in
the
draft
just
to
keep
things
simpler.
C
Right
and
I
think
that's
my
intuition
as
well,
which
means
you're
effectively
left
with
the
question.
You
know,
if
you
don't
care
about
thresholding,
do
you
just
have
one
construction
or
do
you
have
both
constructions
and
based
on
this
slide,
I
from
my
implementers
and
a
use
case
perspective.
I
see
no
reason
to
have
both
okay.
Thank
you.
E
Okay, cool. In which case maybe we should just stick with the two — unless it's harder to prove the 2DH. Like, if we can just say this has huge potential for abuse and there's nothing we can do about it, then just don't do it.
C
So I'm not sure I agree — in particular because you can imagine the client just, like, telling the server, "hey, here's my IP address," rather than accidentally mixing it into the OPRF. There are lots of ways in which you can reveal the same sensitive information. So yeah, I think it's incumbent on applications using the POPRF to ensure that, you know, the width of the metadata and the type of metadata are not revealing in any particular way. I don't think that's, you know, anything new.
H
Hi, Sofía Celi from Cloudflare. Just to clarify — and also just to leave it there for whatever it's worth — what I was saying is that in the 2HashDH construction there's also a way in which a kind of linkability attack can happen, because there's no way you can actually check that the server is not imposing a key rotation window that is too small, so as to, for example, indeed create a linkability attack, just as with the metadata constructions.
H
At least there you are assured that that is not happening; but it is the decision of the application to define, in a correct way, which kind of metadata you are passing. So it's not a problem of the construction itself as presented here in CFRG, but rather of how it's going to be implemented in the application. But just to note that in the other construction, the 2HashDH, you can also have these problems: if you rotate the key too often, then you can also be diminishing unlinkability.
A
So now Bob Moskowitz on short hashes and KMAC as a KDF. Please, Bob.
G
Okay. So I'm sitting here as — as I call myself — a crypto plumber: I'm the guy who takes your work and tries to put it to some use, and recently I have been dealing with small hashes and working a lot with KMAC. I'd like to share some of my experiences and some of my challenges here with you, and ask for guidelines coming out of CFRG.
G
Let's see, how does this work? How do I get into my slides? Go this way, just past my slides — there we go. So, first of all, small hashes. It's hopefully a design compromise why somebody's using a small hash, and hopefully they looked at acceptable risks within a constrained environment. There are two cases that I've looked at in dealing with small hashes: one is just a hash over clear text, and the other case is where it's a keyed hash. And I'd point out that small hashes have existed for a long time.
G
But the reason — as was pointed out to me on the CFRG list — is that modern hashing hardware has changed the game. We no longer say, "oh gee, it's hard to generate a million, six million, ten million hashes to try to force a collision, to try to do an attack." That's no longer the case, so the math of collision probability alone is no longer sufficient. We have to look at the problem differently than we have in the past, and that poses a problem for me in trying to figure out: what can I do?
G
So I need understandable guidelines for us developers on how to measure the risks of hash compromise and what the exposure to attack is — some good guidelines for this. I've seen some really bad things out there. I pay attention; I work here; I come to you people for guidelines. Other people don't — they just read whatever literature is readily available. Look at MAVLink 2, which is the command-and-control protocol for unmanned aircraft: they put in an authentication field a 48-bit (6-byte) keyed hash for message authentication.
G
In this whole area of small hashes there are no real good guidelines. You have to be willing to plow through the literature, talk with people, and find out what things are happening and what you can do within a constrained environment — where you only have so many bytes to work with, you only have so much time to work with, and you only have so much memory and processing power to work with. So what guidelines can we have?
G
What can we put forward about making hashes which are eight bytes, six bytes — or "don't go that way" — and about the risk to keys and what that means? There is nothing that somebody can point me to and say "read this through" to get a good understanding.
G
I'm a reviewer — I can help author it, but in no way do I have the knowledge to be the principal on such a document. But working out there, with people talking to me, particularly in this area, I find that they say, "well, Bob, what should we do?" And my answer is: I really don't know. So that's what I have to say about small hashes. You'll see it in my work in DRIP, for the unmanned aircraft.
G
There I have a 64-bit hash, and we talk about attacks against that; and I just mentioned the 6-byte keyed hash that's in MAVLink. So there are some guidelines needed here.
G
I know this raises the question: why did I choose SHA-3? There's not really a processing advantage over SHA-256 — about the same amount of resources to get it done — except that KMAC is one Keccak function, whereas HMAC is two SHA invocations. And again, I'm working with constrained environments — and I wonder about big, heavily-hit servers, which may have the same issue — where halving the processing cost is something of importance.
G
Another thing which I have had to deal with over the decades: "well, I only need so many bytes for my hash, I'm truncating it." I need a short hash — how do I truncate it? Which bytes do I take to get my hash? And so, KMAC.
G
With KMAC, you tell it how many bytes you want. There's no designer thought process about what to do: they say "I need eight bytes," and they get eight bytes out. And of course, we have recently been having a discussion that in FIPS 202, SHAKE — which KMAC is built on — is an XOF, and there are questions: is there a difference between a hash and an XOF? Me, I don't see the difference; I only see that it's a question of the security-strength problem.
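The truncation question can at least be pinned down mechanically. A hedged, stdlib-only sketch — assuming the common convention of keeping the leftmost bytes (as NIST SP 800-107 describes for truncated hashes), and using SHAKE-128 as a stand-in XOF, since KMAC itself isn't in Python's standard library:

```python
import hashlib

msg = b"unmanned aircraft frame"

# Conventional truncation of a fixed-length hash keeps the leftmost bytes.
full = hashlib.sha256(msg).digest()
trunc8 = full[:8]

# An XOF sidesteps the question: it produces exactly the requested length.
xof8 = hashlib.shake_128(msg).digest(8)

assert len(trunc8) == len(xof8) == 8
```

Which of the two is appropriate at a given security strength is exactly the guidance gap being described.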
G
Maybe we need some more clarification on that, because it was recently brought up to me — I didn't see it as a problem, but apparently some do. So KMAC as a keyed hash I look at as a winner in terms of design, and in terms of working on these constrained devices: I save that processing, I save that time. But I take it one step further: I want to use it as a KDF. The particular interest here is with ECDH.
G
The
problem
is
or
the
challenges.
If
you
go
to
856
c
release
one,
it
does
not
recommend
kmac
as
a
two-step
kdf
until
108
gets
revised,
but
how
long?
And
there
is
no,
I
can't
find
any
draft
of
revision
for
108
what's
happening
there.
Why
do
they
have?
Why
do
we
have
to
revision
for
108
before
we
can
seriously
look
at
using
kmac
as
a
kdf,
because
when
you
look
at
hkdf
and
you
look
at
kmac,
you
ask
the
question:
what
is
the
difference?
G
I have some guidance from Team Keccak on this, leading me to believe that KMAC is a valid KDF for a Diffie-Hellman key extraction function. And the big point about this is that whereas KMAC compared to HMAC is a two-to-one saving, here it's a four-to-one — at least four-to-one — because HKDF does two HMAC operations where here we're doing one KMAC function. So again: I need to derive a key.
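The four-to-one claim is easy to see in code: a minimal single-block HKDF-SHA256 (RFC 5869) visibly runs two HMAC stages, and each HMAC invocation internally runs the hash twice. This sketch only handles outputs up to one hash block and is for illustration, not a production HKDF:

```python
import hmac
import hashlib

def hkdf_sha256(ikm, salt, info, length=32):
    """Minimal RFC 5869 HKDF for length <= 32: two HMAC-SHA256 stages."""
    # Stage 1: Extract — HMAC(salt, IKM) -> PRK.
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()
    # Stage 2: Expand — HMAC(PRK, info || 0x01) -> first output block.
    okm = hmac.new(prk, info + b"\x01", hashlib.sha256).digest()
    return okm[:length]

key = hkdf_sha256(b"ecdh shared secret", b"salt", b"context", 16)
assert len(key) == 16
```

A KMAC-based derivation would replace both stages with a single keyed-Keccak call, which is where the claimed saving comes from.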
G
No, you can't get around it — plus there's also the special processing, so it still becomes a problem of where I get the processing power when I have to do key generation or anything like that. So looking at savings here is important to me in the design. And then there comes the question: I need multiple shared secrets — how do I do it? I need two 128-bit keys.
G
Can I run KMAC to put out 256 bits and then split it in half to yield two unique keys of 128-bit strength? I can't find anything in the literature which gives me clear direction on this. Does breaking one of those keys break the other? Again, I have not found any guidelines on this and what to do. It would still be cheaper to run KMAC twice than to run HKDF.
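The split-in-half question can be sketched, again with SHAKE-128 standing in for KMAC (which isn't in the stdlib). Whether the two halves are mutually independent in the sense asked about is exactly the guidance gap, so this shows mechanics only, and the labels are invented for the demo:

```python
import hashlib

secret = b"ecdh shared secret"

# Option 1: one XOF call producing 32 bytes, sliced into two 16-byte keys.
stream = hashlib.shake_128(secret).digest(32)
k1, k2 = stream[:16], stream[16:]

# Option 2: two calls with distinct labels (mimicking KMAC's customization
# string) — the more conservative way to separate the derived keys.
k1_labeled = hashlib.shake_128(secret + b"|key1").digest(16)
k2_labeled = hashlib.shake_128(secret + b"|key2").digest(16)

assert k1 != k2 and k1_labeled != k2_labeled
```

Option 2 costs one extra permutation call but gives each key its own domain separation.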
G
That's if I need just two keys — and then there's the question of how to do a key hierarchy: is there an efficient way to do a key hierarchy using KMAC, versus the way that we have done key hierarchies in the past?
G
And along this line: CFRG led on EdDSA. We still don't have FIPS 186-5 finalized for using EdDSA — yet we are using EdDSA.
G
So can CFRG lead with broader KMAC use? Particularly note that once NIST finishes the lightweight crypto competition, there are KMAC equivalents in many of the proposed algorithms — particularly Xoodyak, which is very much a Keccak-style function. So in the IoT world, in constrained systems, we'd be looking very heavily at using one of these lightweight crypto algorithms, to be able to use a KMAC construction for both keyed hash and KDF.
G
And if we were to do this, we're less likely to see bad designs elsewhere. Look again at what MAVLink 2 did — the fact that they just do a SHA over a shared secret concatenated with parts of the message and a timestamp. And I was there in '96 when Hugo did his first presentation on why we don't do this and why we need HMAC.
G
So what we have here is a case of a bad design — at least when you look at the literature on why we did HMAC — and yet this is really recent; it's only five years old. And so it's a problem: where is our guidance for these sorts of things? I think that is where CFRG comes in. I think this is a role that CFRG needs to take on. You are the people.
G
This is the place where the experts on this stuff live, where the knowledge lives, and this is where this sort of guideline should come out. I'd be very happy to work to produce these guidelines, but I can't lead it; I have to turn to you people for guidance. So that is my material. I'm doing small hashes — I need to do small hashes. I have payloads, and I am so highly constrained on how much data I can put out that this needs to become an intelligent balance, an intelligent risk mitigation.
D
Watson Ladd. At the start of your presentation I was a little confused when you were talking about 48-bit hashes. Are these message authentication codes — that is, you are authenticating a message and you can only have 48 bits for that — or are they trying to be used as a hash function?
G
In that particular case, MAVLink, it is message authentication. And if you look at what we're doing in the IETF DRIP draft, we're arguing internally whether it should be an 8- or a 12-byte hash — can we get by with an 8-byte hash? — where it's a hash of one message, authenticated in a later message, because we can't add an authenticated hash into the original message.
G
So we follow it later with another message, where we take the hash and we authenticate it. And so, how small can we get by with the hash size there, such that it's just a hash of a message which we are including in an authenticated frame? So I'm using these in a bunch of different ways. But what is my attack? What's my risk?
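The DRIP-style pattern described here — an unauthenticated frame whose truncated hash is carried inside a later, authenticated frame — might be sketched like this; the key, frame contents, and 8-byte truncation are all illustrative assumptions:

```python
import hmac
import hashlib

key = b"session mac key"  # hypothetical shared authentication key

# Frame 1 has no room for a MAC, so it goes out unauthenticated.
msg1 = b"broadcast frame 1 (no room for a MAC)"
digest1 = hashlib.sha256(msg1).digest()[:8]  # truncated 8-byte hash

# A later frame carries the earlier hash and is itself authenticated.
msg2 = b"frame 2|" + digest1
tag2 = hmac.new(key, msg2, hashlib.sha256).digest()[:8]

# Receiver: verify frame 2's MAC, then check frame 1 against the
# hash it carried — retroactively authenticating frame 1.
assert hmac.compare_digest(
    tag2, hmac.new(key, msg2, hashlib.sha256).digest()[:8])
assert hashlib.sha256(msg1).digest()[:8] == msg2[-8:]
```

The open question in the transcript is how short `digest1` can safely be, which depends on whether second-preimage or collision resistance is what matters for the frame.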
D
It's very different depending on what exactly you're doing. For a MAC, that's okay, because the only way to attack the MAC, if it has a key, is just to send 2^48 messages and see which one works — you know, try every one. But for the other things you're doing, it might be more or less of a problem; it's very, very difficult to analyze.
G
Yeah, but again, if it's a keyed hash, can an attack against it reveal the key? Then you can slip your own data in — in particular when this is command and control for an unmanned aircraft. If I can steal the key, I can now tell that aircraft to do other things — like what happened with the unauthenticated messages to the drone over Iran some years ago, and how they managed to get that drone to land there.
J
Okay — again, I would highlight Watson's question: there's a bunch of different use cases here. The six-byte thing is a MAC, not a hash. And perhaps we should publicize this: there are some MACs where a single forgery doesn't actually allow you to do anything else, and other MACs where it really does. We need to publicize that difference.
J
In
addition,
in
terms
of
hashes,
you
also
need
to
distinguish
between
hash
hashes,
where
you
only
need
pre-image
or
second
frames,
resistance
and
hashes,
where
you
really
need
collision
resistance,
and
how
do
we
actually
give
useful
advice
to
a
non-crypto
person
to
able
to
make
that
distinction?
I
have
no
idea.
J
Okay, on another topic: you seem to advocate KMAC for small devices. One problem with KMAC is that it requires a sizable 200-byte state to compute; that may not be feasible on very small devices.
G
Well, that's true, and that's why I'm looking to the lightweight crypto. So right now I'm working with KMAC where I can, but I'm looking forward to — maybe next month — the lightweight crypto competition being finished, and then we'll have guidelines on alternatives that will be using the KMAC-type construction.
G
I've been looking at Xoodyak, and Xoodyak really — I've asked people about it — other than some of the size difference, it works the same way. It's still a sponge function.
J
Okay. John, I think you can continue asking questions.
F
I think a guidance document would be very helpful for the community. There are a lot of different options — you have HKDF, you have HMAC, you have KMAC, and then you have, like, one or two iterations — and a document just guiding the reader through this would be helpful.
F
You gave an example of people using a hash to create their own MAC, but there are also a lot of cases where people use HKDF without maybe needing HKDF — maybe a simple HMAC or KMAC would be enough, as you say. So I think the guidance document would be good.
G
Okay, John, I'll reach out to you next month to start working on such a document, and we can maybe start getting some people together. I'll supply use cases, and we can start framing out all the different cases in the whole thing — because I've been struggling here for the past two years.
K
Yeah, hi Bob. I like the work, I support it. I'd just like to flag that there are some other reasons for it too.
K
When
I
was
trying
to
use
hmac,
I
discovered
how
you
know
it's
proven
correct,
but
that
doesn't
necessarily
mean
that
it
works
the
way
that
you
would
expect
it
in
that
there
are
certain
cases
where,
if
you
have
a
null
input
to
the
algorithm
that
has
the
same
effect
as
all
zeros
into
the
algorithm,
and
you
know
back
when
it
was
written
that
was
considered
acceptable
and
the
author
insists
that
that
is
still
acceptable.
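The pitfall described here is directly checkable with Python's standard library, taking the "null input" to be the HMAC key: HMAC zero-pads its key to the hash block size, so an empty key and a key of all zero bytes produce identical tags.

```python
import hmac
import hashlib

msg = b"message"

t_empty = hmac.new(b"", msg, hashlib.sha256).digest()
t_zeros = hmac.new(b"\x00" * 32, msg, hashlib.sha256).digest()

# The keys differ, yet the tags collide: both keys are zero-padded to
# the 64-byte SHA-256 block before use, yielding the same padded key.
assert t_empty == t_zeros
```

By the formal HMAC definition this is correct behavior, which is exactly the point: "proven correct" and "booby-trap free" are not the same property.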
K
I think that stinks, and I think that it is an error that should, at the very least, be called out as a security concern. And so I think it is worth going back and looking at some of these things, because, you know, it shouldn't take a very vigilant implementer to get crypto right — particularly when we're presenting something as a construction that we, the experts — not that I think there are any in this field—
K
—yeah, it's a bad thing to think of yourself that way. I think that when we do that, we've got to make sure that it really is as safe as possible and as booby-trap-free as possible.
G
And that brings up another one of my concerns: how long does a string fed into the hash have to be, to be safe against attack? Or other things, like: I want to get out an eight-byte hash and I only have 24 bytes that I'm hashing — is that okay, or should I be adding some number of bytes of a known random string? And do I add that known random string at the beginning or at the end?
B
I was just going to say: describing the use cases would be a good start. I mean, like what's been said before — if you need collision resistance, you shouldn't ever truncate; but if you have, you know, certain other cases — yeah, I would like to see the use cases laid out. That's all.
G
To start doing that, I can put together a use-case draft as a starting point. And unfortunately, Chris, sometimes I only have so many bytes available in my payload. So I need to say: this is the size that I can carry — what can I put in here that is meaningful?
G
So — you know, it was indicated that, in terms of whether somebody cares about collision resistance, pre-image, or second pre-image: what exactly are you dealing with? Granted, Phillip raised something else as well.
G
Okay, Stanislav — I think that's all the questions, then; nobody else is in the queue.
G
Yeah, I will take it to the list. I'll start a discussion going on both these subjects, and I'll start framing out the use cases leading up to a draft — and others, hopefully, will give me the expert assistance on it. I thank you for your time. Okay.
A
Thank you. Thank you, Bob — thanks a lot, bye-bye. And the last item on the agenda: private access tokens, Chris Wood.
C
All right. So thanks, Stanislav and others, for letting us talk about this. This is an individual draft, not a CFRG document, so we're not explicitly asking for adoption or anything. The purpose of this presentation is more to get wider review of the cryptographic techniques that are used in this Private Access Tokens document, which you can find by just punching that title into your favorite search engine.
C
It was sufficiently different that we felt it definitely needed wider review from experts in this particular community and beyond. I should also note that I'm going to try to describe the general problem somewhat out of context from Private Access Tokens, to best focus on the core cryptographic concepts.
C
But if there are questions where the wider use case might help, you know, feel free to jump in the queue and ask. So we're seeking clarity here, as well as, you know, whether or not the thing is correct.
C
Okay. In Private Access Tokens we have the arrangement shown on the screen: you have a number of clients, each distinct, connected to — or interacting with — what we call a mediator, and the mediator interacts with what we call an issuer.
C
The mediator has a fixed commitment for each individual client. So every single client has its own secret value x, and the mediator holds the corresponding public commitment to that value. In this arrangement, our goal is to compute a deterministic function — a value y — over the client's private input x and the issuer's private input k, under the following conditions, the first of which is that the mediator only learns this output y.
C
...and only if the client specifically engages with the protocol using its secret value. That is to say, the mediator cannot act as a client and interact with the issuer to compute y without having x. We also desire that the client not be able to engage with the protocol for secret values that it does not own — so, an x
C
prime — different from its own x. The space of x's is sufficiently large that this should be infeasible, but we have additional measures in place to prevent it. And, I guess importantly, at the end as well: we also don't want the issuer to learn x, and moreover, we don't want the issuer to learn when two requests that come into the issuer correspond to the same x.
C
The issuer is, you know, evaluating inputs with its secret value k and then responding to the mediator. Okay — to instantiate this particular solution, there are a number of things that we build upon. The first is just a prime-order group: we treat all of the secret values as scalars in the underlying field, and then, effectively, the commitments as the public values corresponding to these secret scalars.
C
We also make use of a non-interactive Schnorr proof of knowledge, to prove knowledge of the discrete log of a particular value. The syntax here is for convenience, but effectively, when we write this, we say that the proof proves knowledge of the discrete log without actually revealing the discrete log, and the corresponding verification part only returns true if the proof checks out.
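The prove/verify shape this syntax abstracts might be sketched as follows — a toy Fiat-Shamir Schnorr proof over a small subgroup of Z_p*, with made-up parameters, purely for illustration:

```python
import hashlib
import random

# Toy order-q subgroup of Z_p* (p = 2q + 1); illustration only.
p, q, g = 1019, 509, 4

def challenge(*elts):
    """Fiat-Shamir challenge: hash the transcript to a scalar mod q."""
    h = hashlib.sha256(b"|".join(str(e).encode() for e in elts)).digest()
    return int.from_bytes(h, "big") % q

def prove(x):
    """Non-interactive Schnorr proof of knowledge of x, with X = g^x."""
    X = pow(g, x, p)
    r = random.randrange(1, q)
    R = pow(g, r, p)           # commitment
    c = challenge(g, X, R)     # non-interactive challenge
    z = (r + c * x) % q        # response; reveals nothing about x alone
    return X, (R, z)

def verify(X, proof):
    R, z = proof
    c = challenge(g, X, R)
    return pow(g, z, p) == R * pow(X, c, p) % p  # g^z ?= R * X^c

x = random.randrange(1, q)
X, pi = prove(x)
assert verify(X, pi)
assert not verify(X, (pi[0], (pi[1] + 1) % q))  # tampered response fails
```

The verification equation holds because g^z = g^(r + c·x) = R · X^c; only someone who knows x can produce a valid z for a fresh challenge.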
C
So, as a first step leading up to the solution, you can imagine doing something like what's shown on the screen here — you can maybe view this interaction as, say, blinded Diffie-Hellman, in a way. The very first thing the client does is generate a random scalar; it then blinds its corresponding public value X and sends both over the wire to the mediator.
C
The issuer does the evaluation, as it would, over that particular blinded value and then sends the result back to the mediator, who removes the blind provided by the client — getting, as desired, a deterministic function over the client's input x and the issuer's input k. You'd insert a suitable hash function or key derivation function here to make sure that y is sufficiently indistinguishable from random, but this is the gist.
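The blind/evaluate/unblind round trip can be sketched the same toy way — a small subgroup of Z_p* standing in for a prime-order elliptic-curve group, with all parameters illustrative:

```python
import random

p, q, g = 1019, 509, 4   # toy order-q subgroup of Z_p*; illustration only

x = random.randrange(1, q)   # client's secret
k = random.randrange(1, q)   # issuer's secret
X = pow(g, x, p)             # public commitment held by the mediator

# Client: pick a fresh blind r and blind the commitment.
r = random.randrange(1, q)
blinded = pow(X, r, p)

# Issuer: evaluate blindly with k (it sees neither x nor X).
evaluated = pow(blinded, k, p)

# Mediator: remove the blind (the client shared r with it).
r_inv = pow(r, -1, q)
y = pow(evaluated, r_inv, p)

assert y == pow(X, k, p)     # deterministic in (x, k), as desired
```

Unblinding works because ((X^r)^k)^(1/r) = X^k in a group of prime order q; the fresh r is what keeps repeated requests unlinkable at the issuer.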
C
The problem with this is that if you assume a malicious mediator who wants to compute y without the client's engagement, it can just run the protocol as if it were the client. In particular, it does the exact same thing the client does: it generates a random blind, blinds the client's public value — which it has — and then interacts with the issuer to compute the output.
C
So we need some way to protect against this particular problem, and this is where the zero-knowledge proofs come into play. They're basically appended to the interaction between client and issuer, such that when the client generates its blind and its blinded public value—
C
—it proves, at the same time, that it knows the corresponding secret value x, and presents that proof to both the mediator and the issuer. This is important because it allows the issuer to check that the request it's about to evaluate was generated by an entity which knows the corresponding discrete log. The issuer does not learn the discrete log; it just learns that only an entity in possession of x could have generated the request.
C
In
this
case,
the
the
proof
itself
will
be
different
for
every
single
for
for
a
fixed
client,
the
proof
will
be
different
because
the
blind
is
different.
The
blinded
version
of
the
the
generator
that's
sent
across
between
from
client
to
issuer
is
also
different,
and
the
blinded
point
p
is
different
each
time.
So
we
still
maintain
the
the
sort
of
ideal
property
that
the
issuer
learns.
Nothing
about.
C
You
know
repeated
values
of
acts
upon
you
know,
subsequent
engagement
with
a
particular
client
and
the
rest
of
the
protocol
is
the
same.
The
you
still
have
this
divi
helmet
that
takes
place
after
the
issuer
checks,
the
proof
and
assuming
it
checks
out,
returns
the
result
to
the
the
mediator.
C
That's effectively it. So I guess the questions for the group are: does this sort of security model make sense? And, I guess more importantly, is the problem statement somewhat clear?
C
Does this back-of-the-envelope sketch of a protocol sort of intuitively meet these goals? And I guess, at the end of the day, we're hoping that this is effectively a PRF. Of course, I've elided some of the particulars that would be necessary to make this a PRF, but does it intuitively meet that goal? So I will pause here at the end and take any questions you may have.
C
No questions is fine. It would be useful to know, though — in chat or elsewhere — if the problem statement was clear. I should also note that we're in no particular way bound or tied to this particular solution.
C
The general desire to have a deterministic function over some secrets that the client has and the issuer has — I think that's the essence of the problem.
C
So if there is a simpler way to solve this particular problem — perhaps a simpler protocol — we would love to use that; this is just the current proposed solution.
C
On Martin's question: no, certainly not — but these are separable problems, or questions, I think. What I'll likely do is try to condense the essence of this problem and the proposed solution to the list and follow up there.
C
I hope it's safe to assume that those comments imply that the problem statement is clear. And yeah, there is some overlap with the OPRF — on paper it looks similar to an OPRF. I think the difference, as you note, is that there's no hash-to-group step computed by the client on the first step, so it's arguably simpler.
C
Okay, all right — I'll take it to the list. Thanks.
A
Yes, yes — thanks, Chris. So, any other business?
K
So if people are interested in seeing what thresholding can be applied to — real-world problems like managing SSH keys — they may be interested in that. One thing that I need, though: I'm using threshold encryption, and nobody seems to be very interested in looking at the drafts on how I do that.
K
I
think
that
that
might
be
a
very
useful
common
application.
I
did
present
at
this
at
cfrg
and
I
was
told
that
there
would
be
a
a
review
on
the
list
deciding
whether
we
were
going
to
adopt
it.
That
was
over
two
years
ago.
I've
asked
periodically
since-
and
nothing
has
ever
happened.
B
I clicked the button — I didn't mean to put myself in the queue.