From YouTube: CFRG WG Interim Meeting, 2020-07-15
A: If you want to say something in jabber, please use these commands. The minutes and blue sheets will be available via this link in CodiMD; it is used instead of the Etherpad platform. Please set your name one more time, and this is a link to the log. This is nothing new for anyone, I think.
A: So, let's start with the document status. We haven't obtained any new errata since our virtual meeting in April, and we don't have any new RFCs in the RFC Editor queue, but we have a good update on the randomness improvements document. I believe that all IRSG comments have been addressed, so now it's in IESG conflict review, etc. We hope that it might be ready before Bangkok, or the virtual meeting, in November.
A: We have an update on Argon2: the authors addressed all the concerns that were expressed, and now we're waiting for our reviewers to confirm that all their concerns have been addressed; then the IRSG review will continue. Now, about active CFRG drafts: first of all, SPAKE2, for which we'll have a presentation from Watson.
A
Just
after
this
presentation
we
have
new
shepherd
of
the
document.
It's
me
and
I
think
that
we'll
proceed
with
this
document.
We
have
a
lot
of
good
reviews
of
it
because
it
was
involved
in
the
pack
selection
process.
So
a
lot
of
commands
a
lot
of
pros
and
cones
so
for
solutions
have
been
expressed.
So
I
think
that
it's
in
a
good
shape
and
after
a
few
updates.
In
my
opinion,
the
document
should
be
ready
for
further
steps.
A: We still have some discussions about the draft by John Mattsson on deterministic signatures, so that process is going on. We have a new related item on AEAD limits; Chris Wood will have 10 minutes to discuss this document today. And we have two documents about PAKE protocols that were selected during the PAKE selection process: these are CPace and OPAQUE, the draft by Hugo Krawczyk.
A: A version prepared with the help of Julia Hesse is available; it has been recently updated, and in the near future it will be adopted as a CFRG item. After the next version, the draft-irtf-cfrg name will be obtained. The same applies to the draft about CPace. He has a lot of very important comments about security, and we believe that his involvement will also help to make the draft as good as possible. We also have some expired drafts; one of them, in fact, needs to be updated, and we think that it's ready for further steps.
A: And about the Crypto Review Panel: everybody knows the history, that it was formed in 2016. We have a wiki page, and now we have a lot of documents going through this panel. A lot of good reviews have been done, and extremely good reviews were done during the PAKE selection process.
Many thanks again to all the reviewers. Also, currently we try to ask for reviews from the crypto panel for every document that is ready, or that we think is ready, for a research group last call, and our current members are shown on the slide.
A: So thanks to everyone on the panel for their involvement; the reviews are really good, and we think it's really helpful for CFRG to have good opinions. So, any other business? Please say your name and say what you want, whether you want to bash the agenda or raise any other matter.
C: Today I'll be discussing SPAKE2, so next slide, please. There will be a bunch of next slides. So this is the draft: it defines a PAKE with no hash-to-curve requirements (you can use hash-to-curve if you want), and it's a pre-existing work item from before the PAKE bake-off. If you look on the Datatracker you'll see a long history; it supports work in the Kitten working group, and there are also deployments: Magic Wormhole, among others.
C: It's a way forward; it recently regained a shepherd.
C: Little nits: there's a typo here, a typo there. If you sent one of those emails, then it's entirely possible that it's not reflected in the revision, just because it got lost in the shuffle. So, next, please. I can't claim completeness: if you sent me an email about a typo, it's entirely possible it's not addressed there.
C: So that concludes what I have to say, unless there are any questions.
A: So, Watson, if you don't mind, one question from myself. As we discussed before the start of the meeting, as far as I understand, you will double-check that all the concerns that were expressed during the selection process, and that are reflected on the web page in the GitHub repository of the selection process, have been addressed. And after that you will send us a note that the research group can read the draft, knowing that all previous comments have been addressed, or at least answered. Right?
C: Yeah. My recollection, which I've refreshed, is that the review process in the bake-off really just sort of said: okay, the protocol has the properties it has. I don't think there were specific concerns about the document, but I will double-check those reviews carefully. That's partly an artifact of the process: it was very much a bake-off, and it wasn't a document review aimed at finding issues with documents per se.
D: Hey, I have a quick question, and that is: why are there different types of curves involved in the suites that you picked? So the NIST curves, and I think there was Curve25519.
D: They all have different representations. Why wouldn't we just pick one single curve type?
C: So, if I understand your question correctly, it's: why don't you limit the draft to say only use curves in short Weierstrass form, or only use curves in Edwards form? The answer is that this is based on application demand. You have users that only use curves that have library support handling those curves in the usual format, and they would like to expose them on the wire; or they'd like to expose Edwards points on the wire. So that's why we did that.
D: Yeah, but as an application you mentioned Magic Wormhole, which I know has been a project in the EU for about three years. You didn't mention any other application, right? So, since this is a CFRG document, maybe you can just do it with only one systematic representation, and then other people can always do the calculations on the corresponding Edwards curve, right?
C: So you're saying we should remove the other encodings. There's nothing in the protocol that is specific to the encoding of the curve, and if the issue is the test vectors, well, they're test vectors sitting in the draft. But there are applications: I mentioned the Kitten working group, so there are Kerberos applications, and that is where those vectors came from. Magic Wormhole is not the only application.
D: Okay, I'm just a little bit confused, because now we get lots more representation types than we would ideally need. We would now also have to maintain the Edwards material, and basically the problem I see long term is that in the IETF we get a bunch of islands with different representations, all propagated into different areas.
D: Why not just use one systematic way of doing things? It seems to be far easier to maintain, right?
C: What came up is that the burden of using a different representation is really not very much, and your core arithmetic is not affected by this. You can take the elliptic curve and put it in Weierstrass form through a very simple transformation, no matter what representation it is, so it's much less of a burden in practice.
D: Yeah, I do appreciate that the core functionality is not impacted, because you have an easy mapping. But why not then do the easy mapping there? It seems to be more or less marketing whitewashing to consider Edwards curves to be completely different from any other short Weierstrass curve, right?
A: John says that it might be worth adding a pointer to the recent paper with a security analysis of SPAKE2, as Björn says; maybe Björn will comment on it himself.
E: So... do you hear me? Yes? Okay. There's a recent paper (I don't know all the authors; I think it's also by Michel Abdalla and Manuel, among others) presenting a very long security analysis of SPAKE2. It might be worth adding the link to the reference list. And, regarding the question of how to handle different curves: I do see advantages in using different representations, or specific optimizations, for some applications.
E: So I think it's worth considering having Weierstrass curves in addition to other point representations; that might strongly depend on the application type that you have and on how you implement it. If you enforce some single representation, you might as a result have an increased risk of implementation pitfalls on the other side. So fixing one representation in the draft might not serve all of the applications perfectly.
E: This is a question I would also have asked Stanislav, and which we will have to discuss for CPace and maybe OPAQUE as well: for instance, the question of whether we want to add test vectors for Ristretto to drafts such as SPAKE2, CPace, or OPAQUE at some point, or whether we want to stick simply to short Weierstrass representations. And I'd be of the opinion...
E: ...that it's better to add these options, because they might prevent implementation pitfalls in some applications. But there's no right or wrong on this topic, I think. That's what I'd have to add.
B: Just one more comment: this is Ben Kaduk, who I guess on paper is an editor for the draft, but I haven't really been doing much, so maybe you should take me off. But I can speak a little bit to the Kerberos use case, and I think I see Greg Hudson is on the call as well. For Kerberos we actually have an implementation with the edwards25519 curve that we can use.
A: Thanks for the comments. If there are no more comments, then many thanks to Watson. Before we go to the presentation from Chris Wood, we have a minor agenda update: Jonathan Hoyland said in the chat that Chris Brzuska wants five minutes to ask for reviews from CFRG of his NPRF design. So after all the talks we'll have, I think, enough time to hear Chris. And now we have Chris Wood and his presentation about AEAD limits.
G: Great, okay. So this is a document that was born out of work with Felix Günther, Martin Thomson, Kenny Paterson, and many others in the QUIC and TLS working groups, as we were trying to figure out how limits on AEAD algorithms apply to QUIC, drawing on experience from TLS and on recent research that came out in the past year. So next slide, please.
G: You can't, for example, take an AEAD algorithm and a single key and just encrypt data indefinitely; doing so may allow an adversary to learn information about the particular algorithm in use. Or, let me take a step back: AEADs basically have limits, and two of the limits that are immediately and critically important are those that pertain to confidentiality, the secrecy of the data that you're trying to protect by encryption, and to authenticity, the integrity of the data you're trying to authenticate. The confidentiality limit of a particular AEAD algorithm effectively corresponds to how much data you can encrypt before you give an adversary some non-negligible advantage in distinguishing the AEAD from a random permutation.
G: So in the normal CPA game, where an adversary is interacting with either a random permutation or a particular instance of an AEAD algorithm keyed with a randomly generated key, given enough queries to this encryption oracle the adversary could effectively determine which of these primitives it is interacting with, and the goal is to make it so that the adversary can't do that, by limiting how much data can actually be encrypted. On the integrity side, we're more or less concerned with limiting how many decryption attempts can be made before an adversary can successfully forge an AEAD ciphertext, in a similar game where the adversary is given access to a decryption oracle.
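The distinguishing game described here can be illustrated with a toy experiment. This is a hedged sketch, not anything from the draft itself: it only demonstrates the birthday-bound intuition that ciphertext blocks produced through a block cipher behave like a random permutation and never collide, while truly random blocks start colliding after roughly 2^(n/2) samples. All names and parameters below are illustrative.

```python
import secrets

def sample_outputs(n_bits: int, queries: int, permutation: bool) -> list:
    """Model q distinct-input queries to either a random permutation
    (how an ideal block cipher behaves) or a random function."""
    domain = 2 ** n_bits
    seen, outputs = set(), []
    for _ in range(queries):
        y = secrets.randbelow(domain)
        if permutation:
            while y in seen:  # a permutation never repeats outputs
                y = secrets.randbelow(domain)
            seen.add(y)
        outputs.append(y)
    return outputs

def distinguish(outputs: list) -> str:
    # Any repeated output rules out a permutation; with well over
    # 2^(n/2) samples, a random function almost surely repeats.
    if len(set(outputs)) == len(outputs):
        return "permutation"
    return "random function"

# With toy 16-bit "blocks", 2^11 samples are far past the 2^8 birthday bound.
print(distinguish(sample_outputs(16, 2**11, permutation=True)))
print(distinguish(sample_outputs(16, 2**11, permutation=False)))
```

This collision behaviour is one reason confidentiality limits for block-cipher modes scale with the square root of the block space.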
G: One of the important things to know is that these limits influence the protocols which make use of these AEAD algorithms. So, for example, TLS 1.3 has limits on how much data can be encrypted on a given connection, and the limits are specific to each of the cipher suites supported by the protocol.
G: You can check out RFC 8446, Section 5.5, for more details. Importantly, TLS only focuses on the confidentiality limit, and that's because TLS traditionally runs over TCP, which ensures in-order arrival: an adversary can really only try to forge a single packet before effectively tearing down the connection or causing a connection failure. So in practice the integrity limit is not really important for TLS. But for QUIC... if you could advance to the next slide, please. Oh, shoot; I'll get to that in a minute.
G: But for QUIC it is a problem. Another quick thing to note: the limits that exist in TLS pertain to what we call single-user security, which is effectively the security of that specific instance. There's this other related notion called multi-user security, which considers all instances of an AEAD algorithm executing in parallel with different independent keys, with the adversary, of course, being able to see all of the ciphertexts that are encrypted in transit and in use. Unlike in the single-user security case, where the adversary's goal is to break either confidentiality or integrity of a single specific AEAD instance...
G: ...here the goal is to break the confidentiality or security of any one of the instances, chosen effectively at random. So, depending on the application and the threat model you're concerned about, single-user or multi-user security bounds may be important for you, and we try to cover them both in the draft. Next slide, please.
G: Going back to why this draft even exists in the first place: as I was saying at the beginning, there's some recent research that was done specifically for QUIC, which looked at the effect of QUIC's transport mechanisms and how they impact the usage of AEAD algorithms inside of QUIC.
G: As I was saying earlier, TLS doesn't really consider the integrity limit to be a real problem in practice because, again, any single forgery attempt, say an adversary injecting a packet that doesn't decrypt successfully into a connection, results in connection failure: the keys are thrown away and everything's done.
G: QUIC, in contrast, allows multiple forgery attempts, because the protocol is specifically designed to allow out-of-order delivery of packets, so endpoints need to be able to process what are either out-of-order packets or potential forgery attempts from an adversary.
G: So, as a result of this work that came out, we had to go back and look at all of the different cipher suites in use in the protocol and see whether the limits needed to be adjusted. Several things came out of that re-analysis, the first of which was that AES-CCM didn't really have any analysis that could be readily and easily used.
G: At first we were considering just abandoning AES-CCM altogether inside of QUIC, which would leave just GCM and ChaCha20-Poly1305 as the only AEADs left. But working with Martin and Felix and some others, we think we got that squared away: we went and looked at the relevant results, extracted the confidentiality and integrity limits from them, or rather a simplified version of those limits, and baked that into the latest QUIC specification.
G: So there's a proposed change to the QUIC specification up right now, which effectively transitions from limits previously considered in a single-user security model over to a multi-user security model. The reason we did this is that, if you think about it, a single connection potentially performs multiple key updates. Each key update, which is the process by which endpoints hash forward...
G: ...their traffic keys and derive new encryption keys, new nonces, and so on, effectively introduces a new, independent key into the scope of that connection. That aligns very nicely with the notion of multi-user security, where you have these multiple independent AEAD instances and the adversary's goal is to break any single one of them, not necessarily a specific one.
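The hash-forward key update mentioned here can be sketched roughly as follows. This is a hedged illustration, not the actual QUIC or TLS key schedule: the labels, key sizes, and helper names are all made up, and real deployments use HKDF with protocol-defined labels. The point is only that each generation's secret is derived one-way from the previous one, giving a fresh independent key per generation.

```python
import hashlib
import hmac

def update_secret(secret: bytes) -> bytes:
    # Hash the secret forward one generation; the old secret can then
    # be discarded (one-way, so old keys are unrecoverable).
    return hmac.new(secret, b"key update", hashlib.sha256).digest()

def traffic_key(secret: bytes) -> bytes:
    # Derive the per-generation encryption key from the current secret.
    return hmac.new(secret, b"key", hashlib.sha256).digest()[:16]

secret = b"\x00" * 32       # placeholder initial traffic secret
keys = []
for _ in range(3):          # three key-update generations
    keys.append(traffic_key(secret))
    secret = update_secret(secret)

assert len(set(keys)) == 3  # each generation gets a distinct key
```

Each element of `keys` then corresponds to one "user" in the multi-user security model discussed above.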
G: So basically we started to question whether we had the right limits in place for all the cipher suites that we needed to consider, and whether those limits were taking into account the right adversarial threat model. To make matters harder, all of the limits that were known or published for these cipher suites were scattered across different papers using different notation, which more often than not was inconsistent.
G: Some of the results were incomplete. For example, we don't really have any separate confidentiality or integrity limits for ChaCha20-Poly1305: the only known analysis of that particular AEAD combines the two together. So instead of having a confidentiality limit and an integrity limit, you have a single AEAD limit: how many packets can you encrypt and how many packets can you decrypt, for example. Or really it's blocks: how many plaintext blocks...
G: ...can you process before the adversary gains a non-negligible advantage in the usual AEAD-style game? And some results are incomplete in another way: we don't have multi-user security bounds for ChaCha20-Poly1305 either. So, given this, we thought (if you could advance to the next slide, please) it would be very useful if we could extract all this information...
G: ...that's scattered around these different papers and documents into a single place that covers all of the known AEAD algorithms that we are using for QUIC and TLS, and that presents the formulations for the limits in a very simple, easy-to-digest manner, specifically targeted at protocol designers who need to bake these limits into protocol mechanisms like QUIC, for example. And we try to present the limits in such a way that they are easily enforceable in practice.
G: So, for example, that means basing the limits on the number of message blocks that are processed in the context of a given key, and deciding what success probability is reasonable for a given adversary in a given use case. For QUIC specifically, we just borrowed the same success probability that was used in TLS, but in the draft we keep it as a variable, in case people want to raise or lower it depending on how paranoid they are. Next slide, please.
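As a hedged illustration of what an enforceable limit looks like, the sketch below turns an advantage target into a record count. It uses a deliberately simplified AES-GCM-style single-key confidentiality bound of roughly (q * l)^2 / 2^n, for q records of l blocks each with n-bit blocks, together with the 2^-57 target used for TLS; the function name and default parameters are illustrative, and the draft's actual formulas carry additional lower-order terms.

```python
import math

def max_records(adv_log2: float = -57,
                blocks_per_record: int = 2**10,
                block_bits: int = 128) -> int:
    """Largest record count q satisfying (q * l)^2 / 2^n <= 2^adv_log2."""
    # Rearranged: q <= 2^((adv_log2 + n) / 2) / l
    max_blocks = 2 ** ((adv_log2 + block_bits) / 2)
    return int(max_blocks / blocks_per_record)

# Full-size 16 KiB records are 2^10 AES blocks; with a 2^-57 advantage
# target this simplified bound lands in the same ballpark as the
# ~2^24.5 record limit quoted in RFC 8446 for AES-GCM.
q = max_records()
print(f"records ~ 2^{math.log2(q):.1f}")
```

Keeping the advantage target as a parameter mirrors the draft's choice to leave the success probability as a variable that deployments can tighten or relax.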
G: So the scope of the document currently includes all AEADs that are used for QUIC and TLS. It doesn't yet have any of the SIV AEAD algorithms, like AES-GCM-SIV, but could potentially include those in the future, and it does not include any unauthenticated block cipher modes, like counter mode or CBC mode. The editor's copy includes both single-user and multi-user security limits.
G: The draft that's published on the Datatracker currently does not have the multi-user security limits. We included both specifically because we wanted not to encourage or recommend any particular threat model for a given application, but to make it so that application and protocol designers could choose whichever limit most readily applied to their given circumstance.
G: Next slide, please. Right, so, for future work: I would like to expand the guidance that we have in there for protocol designers and developers. We have some text which basically describes how to take these limits and how to apply and enforce them in the context of a protocol.
G: But surely there could be more words there to make it a bit easier for people coming at this from more of a transport perspective. It would also be great if, at the end, we could actually compute some of these limits and bounds for common parameters.
G: I think the AES-GCM-SIV RFC has such bounds pre-computed already in an appendix, and the idea would be to do something similar to that. There is a link here (if you go to the slides and download them) to a page on which we let people play with certain parameters to compute these bounds dynamically and interactively, and the idea would be to include that information in an appendix so that people don't have to do it themselves. And that's actually...
G: ...what the change that we made to the QUIC document does: very much exactly that. While we did some of the derivation work in the document, just to show that our work was correct and to convince people of that, at the end we just state the derived limits directly and don't leave them as an exercise to the reader. Next slide, please.
G: So, some questions for the group. Felix, Martin, and I felt that, at the very least, we wanted to codify the knowledge that we gained in doing this analysis for QUIC and TLS; at the very least it was useful to us. We're wondering whether this is useful to the CFRG, and potentially to other IETF protocols, so it'd be helpful to know whether people think this is something we should keep doing and pursuing.
G: If so, it'd be great to know if there are people who are willing to review it and provide feedback. I will say: obviously we are not perfect; we have made arithmetic mistakes along the way, and we may have made mistakes in translating results from papers. It's really possible, so the more eyes we have on this, the better.
G: Assuming all those are good, I'm curious to know whether the RG is interested in adopting this and moving it forward. And that is it; let's see if there are some questions.
H: Yes, I wonder why SIV modes are excluded, especially since at least GCM-SIV already comes with a pre-computed bound, so your work would be even easier.
G: Yeah, as I said, that's why there's an "add" note on the slide. They're only excluded because we haven't typed the words into the keyboard yet; there's no reason they should be excluded going forward, and we don't have a very compelling reason to exclude them. Obviously this is a particular AEAD that people are using in practice, and we would like to cover it here, because we don't want to end up in a situation where, yet again...
G
There
are
multiple
documents
you
have
to
consult
in
order
to
identify
what
the
bounds
are.
So
that's
the
reason
we
just
haven't
gotten
to
it.
Yet.
A: Thanks. Yuri, then, if possible. And a comment from me with my chair's hat off: I really support this document. I believe this is really important, and I'm really happy that this is a second document on the topic of the limits of usage of block ciphers in various modes. We recently had...
A: ...RFC 8645 on re-keying, and now this draft on usage limits for AEAD algorithms. I believe that this is really important work, important for all of the IETF, and I believe that this is one of the things that CFRG must be working on: providing guidance to the IETF on how to use crypto. So, in my personal opinion, it's really very important work, and I would be happy to support it and to review it if needed. I think it's really great work; thank you. And then we have comments in the chat from Scott.
I: Specifically, with the GCM cipher suites in TLS (I don't know about QUIC), they deliberately include 32 bits of KDF data inside the nonce actually used. This was designed specifically to increase the security against multi-user attacks. Does your document address that sort of thing? I'm just looking at your document now; I haven't seen it before.
G: Yeah, so that's a good question. The usages of the AEADs that we cover are those specifically used inside of TLS. So yes, the limits are derived based on that particular usage of AES-GCM.
J: Hello. I'm very fond of this document and I strongly support its adoption, but I'm also interested in adding the Multilinear Galois Mode (MGM) to the comparison.
J: Just to understand its place in the AEAD world. Thank you.
G: We specifically started with those that are included in TLS and QUIC. I'm not personally opposed to it, but I would like to hear from other people as to whether it makes sense to include it; I'm not aware of any large-scale uses of that particular AEAD.
G: Yep, thank you. There's a repository for the draft; I'll try to include a link when I'm done. If you'd like to file an issue suggesting to include it, that would be very useful.
C: I'm wondering, because these two theorems for ChaCha and AES-GCM are virtually identical: is there some way we could, more formally, derive the right result once, plug in the block size and the fact that one's a PRF and the other's a PRP, and get out the right result? Or is that just not going to work?
K: Yeah, hi. I'm not sure I entirely understood your question, Watson: in which way do you suggest making better use of the paper bounds?
C: My suggestion is this: you want to check that this formula is correct, and right now the way to check is to just look at the papers. There's some huge, detailed proof for AES-GCM, and there's a very similar, very detailed proof for ChaCha, but both of these proofs are essentially the same.
K: I think (Chris, correct me) our perspective would be to give some background information on how to translate from the paper results, the paper bounds, to the bounds we give in the document, so that if you're just interested in how to apply the bounds you wouldn't need the reference, but you'd have a way to track down where the different numbers come from. Along the way, as Chris already mentioned, we will have to take care, and we'll need to explain in some detail, that the different published papers do make quite varied assumptions, so we're trying to unify them.
K: But this process of unifying introduces assumptions of its own in the translation, and I guess we hope to spell those out as explicitly as possible. But I guess that's still further work on the document.
L: Yeah, so I obviously support the document, and plus one to Yuri on SIV. And then, I'm not even sure if it's a good question, but I'm very interested in analysis of these modes as they apply to a post-quantum threat model; obviously confidentiality and not integrity.
G: We haven't at all used the words "post-quantum" in discussing this document yet, so we haven't really thought very much about it.
M: Yeah, thanks. I'm sorry, I haven't read your draft yet; I just pulled it up and I'm scanning through it. Do you talk at all about situations where, let's say, the AAD is significantly greater than the amount of plaintext? Are there any limits on AAD that can affect the confidentiality and integrity bounds?
G: Yeah, so we assume (and this is probably something we need to make very clear in the document) that the AAD is much less than any of the plaintext that's being encrypted. If you think of QUIC and TLS, for example, it's just the record headers or the packet headers. That said, there are certain cipher suites, particularly AES-CCM, for which a large amount of AAD would affect the limits, and the simplifications that we've made would not be accurate in those cases.
G
So
I
don't
know
how
we
could.
G: We have not discussed how to deal with the assumption we've made, that the AAD is much, much smaller than the plaintext and typically of a fixed size. But it's a very good point to raise, and I'll file an issue to track it so we can have that discussion.
A: Okay, thanks for commenting. So, if we don't have any more comments, my proposal would be to announce a call for adoption on the mailing list. I believe that a lot of people will support this, but maybe some new comments will occur. So, if no one objects, we'll have an adoption call on the CFRG mailing list. Okay, it seems that we don't have any new comments, so the last presentation is about VOPRF. Alex, please.

N: Hi, can you hear me?
N: Okay, yes, hi. So yeah, I'm Alex, and I'm going to be talking about one of the CFRG's drafts that's currently moving forward, on oblivious pseudorandom functions. This talk is going to focus on the latest updates that we've had in the -04 edition, and then we're hopefully going to canvass opinion on the state of the draft and what the next steps will be. So, next slide...
N: ...please. All right, next slide, please. So, just as a recap, in case you're not familiar with the functionality: an oblivious pseudorandom function protocol has a client and a server, and the client is trying to learn the evaluation of a pseudorandom function under the server's key on some chosen input x. The security behind the oblivious pseudorandom function is that this input x is blinded and is not revealed to the server during the protocol. So generally, what happens is this:
N: The client sends a blinded form of the input x, in such a way that the server can still evaluate the pseudorandom function even with the input blinded; the server then sends this blinded evaluation back, and the client can output the PRF evaluation by unblinding it. And, obviously, there's a second security property which we're assuming here, which is that the client does not learn anything about the server's key.
N: So, next slide, please. Then there's a second mode of the oblivious pseudorandom function, whereby everything's the same except that the server sends back a zero-knowledge proof that it used the key k in evaluating the pseudorandom function. So here the client has an extra step where they verify the proof and then unblind. This just proves to the client that the server has actually evaluated the pseudorandom function, and this is typically useful because pseudorandom function evaluations, by their very nature, look random.
N: So typically the client wouldn't be able to tell if the server just sent back some garbled bytes; this proof attests to the fact that the server has actually evaluated the function. So, next slide, please.
N: So in this draft we specifically talk about constructing oblivious pseudorandom functions in prime-order groups, and this slide provides a little bit of an overview of how this works. So again, the client and server have their inputs: the client's input x and the server's key k. In this setting, x is some string of bytes, and the server's key k is a scalar in the Galois field associated with some prime order, and there's also this public key, which is kG.
N: So G is the generator of the group, and the public key is an optional extra if you want verifiability, which I'll talk about in a moment. The first step is that the client computes this hash-to-group function, H2G, of x.
N
So
this
converts
x,
which
is
string
bytes
into
a
group
element
and
then
multiplies
it
by
some
random
scalar
r,
which
is
known
as
the
blind,
so
it
can
and
this
computes
a
group
element
p,
which
it
sends
to
the
server
and
for
the
server
to
evaluate
the
blinded
c
random
function.
N
If
it
multiplies
this
group
element
p
by
the
key
k-
and
this
gives
us
q-
which
it
sends
back
to
the
client
and
in
the
in
the
verifiable
mode,
so
in
this
video
prf
mode,
the
server
will
also
provide
a
proof
that
the
discrete
log
of
the
public
key,
which
is
k
is
equivalent
to
the
discrete
log
of
q,
which
is
also
k
and
the
client
on
receiving
q.
In
this
option.
N
It
can
it's
not
it's
not
too
important
here
how
this
pof
evaluation
is
computed,
but
it's
based
on
a
paper
of
jaraki,
krav,
shik
and
kiyash
from
2014
and
essentially
h
is
computes
like
a
random
oracle
evaluation
over
x,
and
then
this
unblinding
of
q,
which
is
like
you,
take
the
inverse
of
r
and
you
multiply
it
by
q,
and
this
underlines
it
because
you
remove
the
r
and
the
verifying
the
proof
is,
is
a
simple
like
non-entitled
verification
of
a
proof.
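One hedged illustration of what such a non-interactive discrete-log-equality proof can look like is a Chaum–Pedersen-style DLEQ proof over the same kind of toy multiplicative group. The draft specifies its proof over its elliptic-curve ciphersuites; the function names and the tiny group here are ours, for illustration only.

```python
import hashlib
import secrets

P_MOD, Q_ORD, G = 1019, 509, 4  # toy group: order-509 subgroup of Z_1019^*

def challenge(*elems: int) -> int:
    # Fiat-Shamir challenge over all public elements (2-byte encodings suffice
    # for this toy modulus).
    data = b"".join(e.to_bytes(2, "big") for e in elems)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q_ORD

def dleq_prove(k: int, p_elem: int, q_elem: int):
    # Prove log_G(pub) == log_P(Q) == k without revealing k.
    pub = pow(G, k, P_MOD)
    t = secrets.randbelow(Q_ORD - 1) + 1
    a1, a2 = pow(G, t, P_MOD), pow(p_elem, t, P_MOD)
    c = challenge(G, pub, p_elem, q_elem, a1, a2)
    return c, (t + c * k) % Q_ORD

def dleq_verify(pub: int, p_elem: int, q_elem: int, proof) -> bool:
    c, s = proof
    # Recompute the commitments: a1 = G^s / pub^c, a2 = P^s / Q^c.
    a1 = pow(G, s, P_MOD) * pow(pub, Q_ORD - c, P_MOD) % P_MOD
    a2 = pow(p_elem, s, P_MOD) * pow(q_elem, Q_ORD - c, P_MOD) % P_MOD
    return c == challenge(G, pub, p_elem, q_elem, a1, a2)

k = secrets.randbelow(Q_ORD - 1) + 1
p_elem = pow(G, 123, P_MOD)            # some blinded element from the client
q_elem = pow(p_elem, k, P_MOD)         # the server's evaluation
proof = dleq_prove(k, p_elem, q_elem)
assert dleq_verify(pow(G, k, P_MOD), p_elem, q_elem, proof)
```

The client checks the proof against the server's published public key, which is what ties the evaluation to the advertised key.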
N
So
this
is
how
the
vop
rf
works
in
the
final
degree
setting
so
next
slide,
please
so
the
reason.
The
reason
these
things
are
important
is
that
there's
a
number
of
applications
which
have
come
out
quite
recently,
so,
firstly,
there's
this
private
class
protocol,
which
has
recently
been
made
into
a
working
group
with
the
ietf
which
uses
of
voprfs
extensively
in
constructing
anonymity
deserving
authorization
protocols
on
the
internet.
N
There's
the
opaque
draft,
which
constructs
a
password
authenticated
key
exchange
based
on
oblivious
student
functions
and
there's
also
a
practical
instantiation
of
privacy.
Past
related
a
privacy
class
related
api
in
the
chrome
browser
known
as
the
trust
token
api,
and
this
uses
functionality
related
to
these
obviously
running
functions
as
well.
So
next
slide
please
and
next
slide.
So
in
this
version,
four
of
the
draft
there's
some
major
api
changes
which
were
made
so.
N
Firstly,
the
client
and
server
now
control
these
global
contacts,
which
which
essentially
dictate
whether
they
are
operating
in
the
oblivious
uranus,
which
mode
or
in
the
verifiable
literacy
development
function
mode,
and
this
just
makes
the
function
instantiations
easier
to
write.
So
we
can
now
coalesce
around
these
six
functions
and
two
of
these
are
new,
so
the
server
now
has
a
keygen
function,
which
was
required
for
the
opaque
application
and
there's
also
a
verify
finalize
operation.
N
So
the
verify
so
essentially,
these
functions
correspond
to
what
the
client
server
does
in
the
protocol.
So,
firstly,
the
client
runs
blind.
It
sends
the
output
of
the
server
and
the
server
turns
the
output.
It
evaluates
the
client
who
online
and
then
finalizes
to
return
the
rf
value,
and
this
verify
finalizing
is
an
optional
function.
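The six-function shape described here can be sketched end to end. The function names are modeled on the draft's API, but the group is the same insecure toy group as before and the Finalize label is our own, so treat this as a shape sketch, not the specified construction.

```python
import hashlib
import secrets

P_MOD, Q_ORD, G = 1019, 509, 4  # toy group standing in for the draft's ciphersuites

def h2g(x: bytes) -> int:
    # Toy hash-to-group (insecure; illustration only).
    return pow(G, int.from_bytes(hashlib.sha256(x).digest(), "big") % Q_ORD + 1, P_MOD)

def key_gen():
    k = secrets.randbelow(Q_ORD - 1) + 1
    return k, pow(G, k, P_MOD)                 # (private key, public key)

def blind(x: bytes):
    r = secrets.randbelow(Q_ORD - 1) + 1
    return r, pow(h2g(x), r, P_MOD)

def evaluate(k: int, elem: int) -> int:
    return pow(elem, k, P_MOD)

def unblind(r: int, elem: int) -> int:
    return pow(elem, pow(r, -1, Q_ORD), P_MOD)

def finalize(x: bytes, n: int) -> bytes:
    # Bind the input and the unblinded element into the final PRF output.
    return hashlib.sha512(b"OPRF-Finalize" + x + n.to_bytes(2, "big")).digest()

def verify_finalize(k: int, x: bytes, output: bytes) -> bool:
    # Server-side recomputation: the unblinded element is just k*H(x).
    return finalize(x, evaluate(k, h2g(x))) == output

k, _pub = key_gen()
r, blinded = blind(b"some input")
out = finalize(b"some input", unblind(r, evaluate(k, blinded)))
assert verify_finalize(k, b"some input", out)
```

VerifyFinalize needs the server's key, which is why it is a server-application helper rather than a protocol step.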
N
That's
not
used
by
the
protocol
that
is
used
by
system
applications
for
verifying
that
the
client
has
performed
the
finalizer
appropriately,
and
there
was
a
previous
efficiency
improvement
known
as
batching,
whereby
the
client
could
evaluate
could
supply
multiple
inputs
to
the
server
and
receive
multiple
evaluations
with
a
single
proof
object
which
attested
to
the
to
the
fact
that
the
server
evaluated
all
these
tokens
correctly
and
and
we've
essentially
removed
this
as
as
like
as
default
functionality,
because
it
was,
it
was
complicating
matters,
and
so
we
now
present
this
batching
optimization
as
an
efficiency
improvement
which
you
can
make
optionally.
N
In
terms
of
cypher
suites,
we
previously
didn't
use
life
suites
associated
with
128
security
groups,
and
this
was
because
essentially
there's
this
a
way
of
converting
an
oblivious
uranus
function
into
an
oracle
for
creating
static,
huge
strong
diffie-hellman
samples
which
can
lower
the
security
of
the
group.
But
we've
re
we've
reinstated.
These
we've
reinstated
cyber
speech
associated
with
these
groups,
because
there
are
settings
such
as
an
opaque
where
this
cue,
this
oracle
can't
be
constructed,
and
so
you
don't
lower
the
security
load
group.
N
So
we've
presented
these
substitutes
back
into
the
document,
with
the
caveat
that
we've
still
got
the
security
analysis
associated
with
the
key,
strong
different
attacks,
and
if-
and
you
should
bear
this
in
mind
when
you're
implementing
these
side
sweeps
and
we
used
to
have
a
host
of
site
hash
functions
which
did
various
different
things.
But
now
we
can
we've
removed
all
of
these
and
they
also
used
to
be
hmac
and
hkdf,
and
we've
turned
this
all
into
a
single
sha.
512
hash
function.
N
So
in
terms
of
what
side
sweeps
we
now
actually
support,
so
we
have
cyber
streets
based
around
all
so
curve2519
and
q448,
and
then
we
have
all
this
curves
and
we
provide
advice
based
on
each
different
curve
on
how
to
instantiate
a
prime
order
group
from
any
of
these
curves,
and
that's
also
a
new
feature
of
this
draft
and
essentially
yeah.
N
The
only
other
thing
is
this
hash
function
so
currently,
as
I
said,
we
use
we
use
code25519
and
curve
448
directly,
but
in
a
few
in
well,
hopefully
in
a
future
version
of
draft
we're
looking
to
replace
these
with
either
the
strato
or
d
decaf,
because
they
give
them
a
much
better
interface
for
implementing
a
primary
group.
So
I
think
I'll
speak
about
this
later,
but
I
think,
hopefully
that's
going
to
be
something
that
we
do
in
the
future.
So
next
slide.
Please.
N
And
so
yeah
in
terms
of
the
primal,
the
group
api
we've
we've
added
now
like
a
concrete
api
for
what
we
expect
from
a
primordial
group.
We
used
to
expose
group
elements
in
the
inputs
and
outputs
of
the
function
and
we've
removed
that
and
we've
group
elements
are
now
only
referred
to
internally
in
each
of
the
api
functions.
So
now
they
just
invite
us
imports
and
return
bytes
and
outputs
and,
as
I
said,
we've
been,
we've
included
instructions
for
implementing
primal
degrees
for
all
the
ecc
science
suites.
N
I
think
in
the
nist
case.
It's
not
there's
not
too
many
considerations,
but
given
the
cofactor
cofactors
that
you
have
with
five
five
one,
nine
and
four
four
eight
there's
specific
things
that
you
have
to
watch
out
for
so
next
slide.
Please
so
there's
some
other
small
updates.
N
So
there's
now
a
stage
implementation,
which
is
for
as
a
proof
of
concept
of
the
entire
functionality,
and
there
was
a
lot
of
like
dst
usage,
which
wasn't
that
so
domain
separation
tag
usage
which
wasn't
consistent,
so
we've
improved
that
based
on
what
was
based
on
the
standard
setting
hash
to
curve
draft,
and
this
also
allows
us
to
remove
like
more
complicated
random
oracle
functionality
and
we
can
replace
it
with
just
sharp
flight
12
with
like
a
prefix
in
a
prefix
free
setting
and
we've
also
like
pulled
in
the
latest
updates
from
hashtag
for
instantiating
the
primary
group.
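One common way to make domain separation tags unambiguous, following the hash-to-curve draft's convention of appending the tag plus its one-byte length, can be sketched as follows (the function name and tag strings here are ours, not the draft's):

```python
import hashlib

def hash_with_dst(msg: bytes, dst: bytes) -> bytes:
    # Appending DST || len(DST) makes the (msg, dst) encoding prefix-free:
    # no single byte string decodes as two different (msg, dst) pairs.
    assert 0 < len(dst) < 256
    return hashlib.sha512(msg + dst + len(dst).to_bytes(1, "big")).digest()

# Distinct tags separate the hash domains...
assert hash_with_dst(b"inputA", b"VOPRF-tag1") != hash_with_dst(b"inputA", b"VOPRF-tag2")
# ...and shifting bytes between message and tag also changes the digest,
# even though msg || dst is identical in both calls.
assert hash_with_dst(b"inputAV", b"OPRF-tag1") != hash_with_dst(b"inputA", b"VOPRF-tag1")
```

Without the length byte, the last two calls would collide, which is exactly the ambiguity the prefix-free encoding removes.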
N
So
next
slide,
please,
I
don't
know
so
in
terms
of
to-do's.
There
are
some
proof
of
concepts
in
the
implementations
which
are
separate
and
they're
written
in,
go
rust
and
boring
ssl.
So
we
want
to
update
them
to
be
compliant
with
the
new
draft
and
it'd
be
good
to
get
some
more
opinions
on
implementing
restreto
and
decaf
sites.
N
Obviously,
there's
a
there's
an
existing
draft,
but
with
decaf.
I
think
mike
hambo
posted
sort
of
a
candidate
document
to
the
mailing
list
and
it'd
be
good
to
hear
whether
that
may
be
turned
into
either
a
future
document
or
incorporated
with
a
strata
in
the
future,
because
that's
something
we'd
be
interested
in
implementing
and
also
we
need
to
put
in
some
quite
stable
test
vectors
into
the
draft
as
well
in
the
next
slide,
please
and
so
yeah.
N
Finally,
so
I
think
some
core
questions
that
would
be
helpful
for
us
to
answer
is
we
think
the
api
should
be
stable
now
and
and
if
anyone
here
has
any
thoughts
on
the
state
of
the
document
on
where
it
could
be
improved
on
anything
that
we
might
have
missed,
it
might
be
useful
and
also,
if
there's
any
sort
of
what
what
people
would
like
to
see
in
terms
of
getting
this
document
to,
like
you,
know,
research
group
later
in
the
in
the
near
future.
A
Many
thanks
alex,
so
we
have
two
questions
in
the
chat.
First
of
all
the
question
from
yuri,
you
really
would
like
to
say
it.
H
It
would
appear
that
three
would
serve
you
better
in
both
kdf
role
and
macro.
N
It's mainly that we use it to generate random field elements, and so, I mean, if you have any opinion — if you think that SHA-3 might serve us better in that case, that's something we're willing to consider. What sort of characteristics does it have?
H
SHA-3 has being a PRF as an explicit design requirement, and this was included in its analysis, unlike SHA-1 and SHA-2.
H
In
my
opinion,
that
would
be
a
good,
a
reason
to
consider
rolling
it
in
now,
especially
at
the
state
where
your
draft
seems
to
seems
to
be
able
to
accommodate
accommodated
that
there
is
no
background
backwards
compatibility
yet
that
would
slow
you
down
and
prevent
from
changing.
A
Many
thanks,
then
there
is
a
question
from
bjorn
haase.
E
…version of the map for generating the point, and this implies that you force the implementer to use the Edwards form, because you need to add two arbitrary points. Is this actually mandatory, or would a simple mapping not be sufficient? Because in that case, you would be free to use, for instance, the X25519 ladder implementation.
N
So, for me at least, the benefit of using ristretto or decaf is that they provide a unified encoding for essentially mapping these curves to prime order groups, which is the situation we need for the protocol. So, given that ristretto is a current CFRG research group item, I think we would benefit from using that unified encoding, rather than going down a potentially divergent route for instantiating the prime order group.
P
I
can
add
a
quick
comment
here
too,
in
supporting
curve25519
and
curve448
in
the
current
draft
for
prf.
There
is
an
expensive
check
that
involves
making
sure
that
a
potentially
maliciously
controlled
point
is
not
on
the
prime
order.
Subgroup
using
ristretto
or
decaf
eliminates
the
cost
of
that
check.
A
Right,
can
I
really
quick
if
I
understand
there
was
a
comment
from
your
house.
Another
question
from
your
house
about
chateau.
E
Just
a
question,
but
just
an
observation
that
many
implementations
which
use
crypto
5
f19
also
have
a
come
with
an
implementation
of
fla
512
because
of
the
use
in
ad
2519
signatures.
So
that's
a
good
reason
in
our
opinion
to
to
pair
these
two
algorithms
together-
and
this
is
not
so
common
with
with
shah
3.
So
one
would
might
have
more
difficulty
in
finding
implementations.
G
I
just
like
the
second
that
this
is
chris,
I
mean
shot.
Three
is
great:
it's
not
really
widespread
enough
that
it
would
make
the
implementation
story
for
protocols
dependent
on
oprf,
more
difficult,
like
opaque,
for
example,
requires
the
opr
and
if
we
were
to
integrate
opaque
into
tls,
as
we
were
sort
of
working
to
do,
that
would
introduce
this
new
shelter
dependency
on
like
a
tls
tank,
which
isn't
not
all
tls
decks
have
right
now.
So
I
think,
sticking
with
shopify
12
makes
sense.
A
Questions:
okay!
So
then
thanks
alex,
and
so
we
have
the
last
item
it
was
from
the
oh
sorry.
A
comment
from
yuri
yuri
would
like
to
disagree
about
sha
3.
H
Yuri
just
to
emphasize
the
point
that
it
is
relatively
new,
but,
unlike
a
previous
versions,
the
this
one
is
a
a
result
of
an
international
competition.
H
It
comes
with
formal
proofs
and
much
much
better
explicit
design
requirements,
and
since
the
protocols
and
functions
we
are
talking
about
are
also
new,
I
do
not
see
a
concern
with
appearing
something
that
hasn't
been
used
much
before
with
something
that
hasn't
been
used.
Much
before
the
argument
that
atls,
for
example,
does
not
normally
currently
uses
sha-3.
H
G
I
that,
if
we
need
it
later
on,
we
could
just
add
a
new
cipher
suite
to
introduce
south
three
like
for
right.
Now,
I
think
sticking
with
512
is
fine.
I
I
definitely
agree
with
the
points
that
you've
made,
but
the
entire
reason
we
have
cypher
suites
is
to
allow
the
success
ability
into
the
future.
So.
Q
Thank
you
for
including
me
on
short
notice,
so
empress,
and
so
we
have
a
cryptographically
relevant
change.
Suggestion
to
the
itf
standard
for
messaging
layer,
security
and
john
turner
suggested
to
me
yesterday
to
present
the
suggestion
here
to
get
your
attention
for
cryptographic
validation.
So
let
me
can
share.
Q
Could you please visit the link that I sent in an email today?
Q
So
the
in
the
very
good,
the
lower
diagram
is
the
one
that
I
would
like
to
talk
about.
Q
So
this
is
part
of
the
key
schedule
of
the
current
draft
of
the
messaging
layer
security
protocol
and
at
the
top
you
see
there
is
an
init
secret
that
goes
into
the
key
derivation
and
then
from
the
left
comes
the
psk
and
then
a
little
bit
later
there
comes
a
squid
secret
and
what
these
kind
of
extract
expand
extract
calls
do
is
that
they
that
they
combine
these
three
keys
into
the
epoch
secret,
which
should
be
see
the
random
of
either
of
the
three
keys
of
zero
random
and
then
the
last
expand
function
derives
a
bunch
of
several
several
keys
from
from
from
the
epoxy
grid.
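Schematically, the sequential chain described here can be sketched with HKDF-style extract and expand built from HMAC. The labels and argument order below are illustrative, not the exact MLS ones.

```python
import hashlib
import hmac

def extract(salt: bytes, ikm: bytes) -> bytes:
    # HKDF-Extract: HMAC keyed by the salt over the input key material.
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def expand(prk: bytes, info: bytes) -> bytes:
    # Single-block HKDF-Expand, enough for one 32-byte output.
    return hmac.new(prk, info + b"\x01", hashlib.sha256).digest()

def epoch_secret_sequential(init_secret, psk, commit_secret, context):
    s = extract(init_secret, psk)          # mix in the PSK
    s = expand(s, b"derived" + context)
    s = extract(s, commit_secret)          # mix in the commit secret
    return expand(s, b"epoch" + context)

epoch = epoch_secret_sequential(b"\x00" * 32, b"\x01" * 32, b"\x02" * 32, b"ctx")
# The last expand then derives the individual keys from the epoch secret.
sender_secret = expand(epoch, b"sender")
exporter_secret = expand(epoch, b"exporter")
assert sender_secret != exporter_secret
```

The chaining is the point: each secret enters the derivation only after the previous extract has finished, which is what makes the construction sequential.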
Q
So
let
me
first
say
what
is
the
suggestion:
how
to
change
this
and
then
afterwards
explain
why
white
was
suggested
to
change
it.
So
the
suggestion
to
change
it
is
on
the
next
page.
Q
Yeah
very
good
thanks,
yeah,
perfect,
so
so
the
so.
The
idea
is
to
replace
this
very
sequential
construction
by
a
parallel
construction,
where
one
runs
each
of
these
three
secrets
for
an
expand
function
and
then
express
the
result
and
then
feeds
the
result
into
this
expand
function
too
and
to
derive
the
different
secrets
so
okay.
So
this
is
a
suggestion
for
the
change
and
why
was
this
change
suggested?
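The proposed parallel combiner can be sketched under the same illustrative conventions: each secret keys its own expand call over the shared context, the results are XORed, and a final expand derives the outputs. Labels are again ours, not the proposal's.

```python
import hashlib
import hmac

def expand(prk: bytes, info: bytes) -> bytes:
    # Single-block HKDF-Expand (32-byte output).
    return hmac.new(prk, info + b"\x01", hashlib.sha256).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def epoch_secret_parallel(init_secret, psk, commit_secret, context):
    # Each secret keys its own expand call; a per-input label plus the shared
    # unique context gives domain separation before the XOR.
    parts = [
        expand(init_secret, b"init" + context),
        expand(psk, b"psk" + context),
        expand(commit_secret, b"commit" + context),
    ]
    mixed = parts[0]
    for p in parts[1:]:
        mixed = xor(mixed, p)
    return expand(mixed, b"epoch" + context)

e1 = epoch_secret_parallel(b"\x00" * 32, b"\x01" * 32, b"\x02" * 32, b"ctx")
e2 = epoch_secret_parallel(b"\x03" * 32, b"\x01" * 32, b"\x02" * 32, b"ctx")
assert e1 != e2  # changing any one input changes the epoch secret
```

The three expand calls are independent, so they can run in parallel, and each secret is only ever used as an HMAC key, which is the property the dual-PRF discussion below turns on.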
Q
So
the
first
observation
is
that
this
new
function
is
more
parallel
and
maybe
also
a
little
bit
simpler
construction.
But
the
actual
issue
is
that
the
extract
function
in
the
current
graph
needs
to
be
a
dual
prf.
Could
you
go
back
to
the
previous.
Q
Page
so
so
this
this
extract
function,
the
top
extract
function.
It
needs
to
behave
as
a
prf
when
it's
heat
by
the
psk
and
it
needs
to
be
behave.
Q
It
needs
to
behave
as
fpf
when
it's
keyed,
with
the
with
the
with
the
inner
secret
and
the
the
issue
is
if,
if
either
of
the
two
inputs
is
is
malleable,
then
one
might
apply
and
therefore
has
a
dual
prf
functionality
for
this
extract
function,
and
so
so
the
original
suggestion
for
this
extract
function
had
this
idea
that
for
one
of
the
the
direction,
the
key
material
is
only
used
once
and
so
in
this
case,
one
relies
on.
Q
What one needs to observe is that XOR is, of course, an extremely malleable operation: it's not a collision-resistant operation. So what is used in these three expand calls is this context value, which needs to be a unique value, plus a label to have domain separation, and because this context value is the same in all three expands, the resulting values always change simultaneously. So this unique context value is used once for the separation, for the pseudorandomness, and then later, in the lower expand call —
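The observation that XOR on its own is malleable and not collision resistant can be demonstrated in a few lines: flipping the same bits in both inputs leaves the combined value unchanged, so distinct input pairs collide.

```python
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

a, b = b"\x00\x01", b"\xf0\x0f"
delta = b"\xaa\xaa"
a2, b2 = xor(a, delta), xor(b, delta)  # flip the same bits in both inputs
assert (a2, b2) != (a, b)
assert xor(a2, b2) == xor(a, b)        # the delta cancels: a collision
```

This is why the construction leans on the unique context inside the expand calls, and on the final expand, rather than on the XOR itself, for collision resistance.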
Q
You
see
it's
also
used,
and
here
it's
it's,
ensuring
that
you
get
that
you
get
unique
outputs,
so
so
the
collision
resistance
is
provided
by
this
last
expand
call
and
having
a
different
context:
value
in
yeah
in.
Q
In
each
of
these
calls
to
to
this
big
alternative,
and
so
the
atf
working
group
for
mls,
like
this
idea
that
not
many
cryptographers
have
looked
at,
and
so
we
would
like
to
get
us
some
feedback,
and
so
I'm
sorry
for
not
having
kind
of
prepared
a
formal
presentation
of
this
proposal.
That
means
mainly
wanted
to
raise
your
attention.
There
is
this
paper
that
you
can,
if
you're
interested
or
you
can
also
contact
me
and
discuss
the
proof
or
give
me
any
feedback.
R
Hi
christopher
good
question:
can
we
include
your
your
paper
as
a
slides
in
our
slide
deck
for
cfg?
Yes,
thanks
that
that
would
be
just
useful
for
people
going
back
to
us
thanks.
S
Hi
yeah,
so
this
is
really
interesting
idea.
Can
you
talk
about
your
proofs
and
how
maybe
your
proofs
of
security
operate
within
the
larger
mls
sort
of
context,
and
if
there's
any
kind
of
gaps
that
you
see
between,
like
where
your
proofs
of
security
are
and
like,
where
the
larger
protocol
is
for
mls.
Q
Okay,
all
right
so
kind
of
the
so
the
and
sorry
well.
Yes,
I
just
spent
four
years
working
on
the
tls
key
schedule,
so
so
the
kind
of
the
the
ml
s
proof
as
a
kind
of
complete
proof
with
all
details
as
a
computational
proof,
at
least
is
not
it's
not
there,
but
this
is
kind
of
modular
change
which
you
could
also
adapt
in
the
same
way
for
for
for
tls.
So
this
only
changes.
Q
S
Okay,
so
you
don't
see
like
just
out
looking
at
this
at
a
high
level,
you
don't
see
any
gaps
or
assumptions
that
you're
making
between
how
to
find
the
tls
context
in
here.
Q
No,
I
so
so
I
think
the
like
from
my
perspective
and
the
most
important
part,
is
getting
a
hold
of
this
unique
context,
value
and
from
the
discussion
in
the
working
group.
This
seemed
not
to
be
a
problem,
but
yeah.
I
don't
see.
A
Then
there
is
a
question
for
our
comment
from
watson.
What's
in
place,.
C
So
my
question
is:
why
not
just
run
everything
through
shot?
Three,
I
mean
it's
supposed
to
have
the
with
appropriate
differentiators
on
each
input.
It's
supposed
to
have
the
right
properties
to
do
the
sort
of
randomness.
Q
So
so
so
you
mean
you
mean
to
replace
the
x
or
with
char
three
I
mean
to
repaint
everything
with
sha-3
yeah.
C
Q
Okay,
excellent
question,
so
so
the
the
the
issue
is
that
the
the
the
thing
that
I
would
like
to
would
like
to
to
to
prevent
is
that
key
material
is
used
twice
when
some
other
key
material
changes.
Q
So
so
so
say
say
you
have
you
have
a
you,
have
a
you
have
big
group
and,
and
something
goes
wrong
and
and
each
party
uses
like
some
of
these
secrets,
and
so
you
have
multiple
definitions.
Three
people
use
the
same,
commit
secrets.
Usually
people
use
the
same
psk.
Three
people
use
the
same
inner
secret
each
time
with
two
different
other
values.
Q
So
then
you
don't
have
any
entropy
left
and
you
need
some
sort
of
kind
of
multi-core
security
that
you
can
rely
on
from
your
primitive
and
I
don't
think
that
char
3
has
approved
for
those
type
of.
T
You
hear
me:
yes,
okay,
yeah.
I
think
the
other
key
thing
is
that
sha3
can't
be
parallelized.
So
if
you
have
a
hundred
thousand
inputs,
then
you
would
have
to
do
all
a
then
b,
then
c,
rather
than
doing
it
in
sets
of
trees
and
vectors.
T
But
but
well
moscow.
It
says
it
is
parallel.
Q
Yeah,
I
don't.
I
don't
think
this.
This
question
is
so
so
within
the
expand
functions,
expand
function
is
usually
implemented.
Using
an
implementation
of
an
hmac
hmac
uses
a
hash
function.
I
don't
know
what's
currently
what's
currently
supposed
to
be
that
hash
function
but
like
supposing
it's
char
3
then
I
think
it
might
be
used
anyway,
but
I
I'm
not
sure
which
hash
function
is
currently
supposed
to
be
used
in
the
house.
Q
Yeah,
but
but
anyway,
this
type
of
this
type
of
multi-core
security
is
not
is,
is
not
something
that
I
think
sha-3
has
approved,
for
it
would
be
an
extremely
exotic
property
to
provide.
A
Okay
thanks.
Maybe
if
there
is
some
discussion
about
k-mec
and
h-mac
and
c-shake,
it
can
be
moved
to
the
melodies.
In
fact,
we
have
three
minutes
left,
so
thanks
chris
will
have
to
upload
your
pdf
to
the
meeting
materials.
So
maybe
any
concluding
comments
from
anyone
any
other
business.
A
Very
short
one,
two,
three:
okay
thanks!
So
since
everyone
hope
that
will
have
some
chance
to
meet
in
person,
so
thank
you
very
much
very.