From YouTube: IETF113-CFRG-20220324-1330
Description: CFRG meeting session at IETF 113
2022/03/24 13:30
https://datatracker.ietf.org/meeting/113/proceedings/
B: So we do actually have quite a packed agenda, so I'll try to keep on top of it.
B: So IRTF operates under the Note Well. You should be familiar with this already; if not, this is the slide with the relevant RFCs. But one of the main points is: always be professional, be nice to each other, be respectful.
B: Cool. So I will do a quick update on the document status — we have quite a few documents in flight. Actually, no: any agenda bashing first?
E: Yeah, this is Chris Wood. I notice the topic on pseudocode and the sort-of-specification discussion is first. I'd like to recommend, given the highly technical nature of the agenda, that that be moved to the end, so we reserve time for those more important things. If we need to, in the interest of time, I'm happy to just send an update to the mailing list to kick off discussion — I don't want to take away time from the more technical topics that could use airtime.
B: Okay, that's fine. So you will want to start with key blinding then? Yeah, great, okay, fine — that sounds good.
B: Right. So my apologies for the small font; we're trying to fit in all the documents along with the expired ones. But the gist of it is: we have one RFC published, which is HPKE, RFC 9180.
B: We have quite a few active documents, and pretty much everything was updated. The VRF draft is waiting for the document shepherd. KangarooTwelve is also waiting for the chairs to review the result of the second research group last call — we got very few responses to it. So if you haven't reviewed the document, or confirmed whether you still think it's a good idea, please send us a message or reply on the mailing list.
B: As for the CFRG pairing-friendly curves draft: while it's expired, there is another co-editor for the document and it's being worked on, so hopefully it will be refreshed soon.
E: What's the status of the ristretto draft? The last time — I do know it was updated to take into account the feedback from the crypto review panel review, so I'm wondering what the next steps are. There are a number of documents building up dependencies on it, and it'd be good to move it forward. So I'm just curious what the next steps are.
B: So the other thing — this is our regular slide about the crypto review panel, which serves three purposes: it helps CFRG itself, the security area, and the independent stream editor with reviews.
B: We announced a new round of membership at this last IETF, with discussion about our past performance as well as the availability of different people. We basically kept everybody and added a couple of names: Veranda Kumar and Ludwig Pereira.
B: I just realized I forgot to have a slide on this — in case you haven't noticed, we now have a secretary. Thank you, Chris Wood, for being the secretary, helping organize us, and helping us with the website and GitHub in the background. And now, let's get started.
G: Good afternoon. I have a question on the crypto review panel. I noticed that sometimes a request is made and then it takes an extremely long time to get the review done, even though people really promised to do it in about a month — and these are all people on the crypto panel who volunteered for this position.
G: I'm not myself still waiting for a review, but I think it's really — yeah, it's really hard to accept a four-month delay from people who promised to do something in one month, right?
B: Right, that's fair enough, yeah. I think the lesson here is that the chairs really should be on top of this and make sure that, if a reviewer cannot fulfill that, we are able to find another reviewer.
B: Are you on the crypto review panel, or —
H: I'm curious how many requests for reviews — sorry about that, I misattributed; I looked at the name in the queue, I'll fix that in the notes. I wonder how many requests for review the crypto review panel has at any given time. I'm just curious.
G: Yes, Rene here — so sorry, just one more question. I looked at some of the — as we all know, the CFRG mailing list is only sporadically used for technical discussion, and I noticed that some documents apparently were ready to proceed based on one crypto review panel review, with an absolutely dead silence on the mailing list. And I always thought that the crypto review panel was not a substitute for the normal operation of the CFRG.
B: No, it's absolutely not a substitute; they are basically working the same way, with the director. We should take offline any specific cases where you have concerns.
B: Right. With this — Chris Wood, key blinding for signature schemes — I'm going to—
E: Can you accept the request? Oh — are you driving?
B: No, hold on, I can pass it to you; that'd probably be easier. Hold on, let me cancel the request and then I'll—
E: Okay — oh, awesome, I didn't know you could do that. All right, yeah, thanks everyone. This is a presentation of a new draft on the subject we call key blinding for signature schemes, joint work with Frank Denis, Edward Eaton, and myself. So, for some context.
E: The signature is unforgeable, meaning that if you're given a tuple of message, public key, and signature, the verifier will only conclude that the signature is correct if it was produced by someone who actually has access to the corresponding secret key, with overwhelming probability.
E: This is just a basic digital signature scheme, but there are scenarios wherein you might not want the verifier to learn information about the prover that produced the signature — or, more specifically, you might not want the verifier to learn information about the signer's (the prover's) long-term public key, through either the public key used to verify the message or the signature itself.
E: This particular property has a number of applications in practice right now. For example, Tor's hidden service identity blinding protocol uses this sort of construction for signing things in such a way that you can't link them back to the original service provider. It's also used in cryptocurrencies for private airdrops.
E: So it does seem to be a common piece of functionality, and the purpose of this draft is to basically standardize and specify it, so that we can have interoperable implementations of it — and to explain what I mean by "not revealing anything about the prover":
E: Let's move on to an alternate setting where you now have multiple provers. On the left-hand side you have two provers, prover zero and prover one; each has their own unique key pair, and each is given the same input message to sign.
E: Both provers are going to sign the message with their respective secret keys and send their triples over to the mediator, who is going to choose a random bit and then decide, based on that particular bit, to forward on the triple from either prover zero or prover one to the verifier. The verifier will run the verification algorithm and try to guess the bit. So we ask two questions.
E: Is this unforgeable in the traditional sense, like I described earlier — if you're given this triple of message, public key, and signature, can you verify it, or can you try to cheat the verifier? The answer is yes, in particular because this is just a basic digital signature.
E: But then, when you look at the unlinkability property, that does not hold in this particular case — in particular because the verifier, you'll notice on the right-hand side, has as input the public keys of both prover zero and prover one. So it's pretty trivial to check whether prover zero or prover one generated this particular signature, just by looking at the public key, or by trying to verify the message and signature under either one of the keys.
E: So what we want is for the verifier to not be able to determine the bit b with probability non-negligibly larger than one half — no better than a random guess — and this is where the functional requirements come into play.
E: Moreover, we want the signatures themselves to not leak any information about the long-term signing keys. Jointly, this captures the intuitive notion of unlinkability that I was describing earlier, and our proposed solution is a signature scheme with key blinding.
E: So, given a public key and a blinding key — which, in the syntax of the object, is just another private key — you can produce a blinded representation of that public key, and then you can of course unblind it, recovering the original unblinded public key. And we have this separate functionality, which is called BlindKeySign.
E: We're open to name changes if that helps clarity, but the gist is that if you run BlindKeySign with a long-term signing key and a blinding key, the output signature will be valid under the correspondingly blinded public key, using the same blind. So in this relation at the bottom, we have that running the verification algorithm with the blinded public key, over the message, using a signature output from BlindKeySign, succeeds — that is, it equals one.
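This correctness relation can be sketched in code. The following is a minimal, hedged illustration over a toy Schnorr-style group with tiny, insecure parameters; the function names `blind_public_key` and `blind_key_sign` echo the draft's terminology, but the multiplicative blinding and all parameters here are illustrative assumptions, not the draft's actual construction.

```python
import hashlib

# Toy Schnorr group: p = 2q + 1 with q prime; g generates the order-q subgroup.
# Tiny and insecure -- for illustration only.
p, q, g = 179, 89, 4

def h(*parts):
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def sign(sk, pk, msg):
    # Deterministic nonce so the sketch is reproducible.
    r = (h("nonce", sk, msg) % (q - 1)) + 1
    big_r = pow(g, r, p)
    c = h(big_r, pk, msg) % q
    return big_r, (r + c * sk) % q

def verify(pk, msg, sig):
    big_r, s = sig
    c = h(big_r, pk, msg) % q
    return pow(g, s, p) == (big_r * pow(pk, c, p)) % p

def blind_public_key(pk, bk):
    return pow(pk, bk, p)      # multiplicative blinding (an assumption here)

def blind_key_sign(sk, bk, msg):
    sk_b = (sk * bk) % q       # blinded signing key
    pk_b = pow(g, sk_b, p)
    return sign(sk_b, pk_b, msg)

sk, bk = 5, 7                  # fixed example secrets
pk = pow(g, sk, p)
sig = blind_key_sign(sk, bk, "hello")
print(verify(blind_public_key(pk, bk), "hello", sig))  # True
```

The point of the sketch is exactly the relation on the slide: a signature from `blind_key_sign` verifies under the blinded public key, while the blinded key itself is a different group element from the long-term one.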
E: If you then plug this particular scheme back into the multiple-prover scenario from before, you'll notice that the provers on the left are slightly different now. They have the same inputs as before — the same long-term signing key pairs and the same input message — but they differ in how they actually produce a signature.
E: The mediator will choose a bit just as it would before, and it will use that bit to determine which of the blinded public-key triples to send to the verifier.
E: So, looking at the two properties we were targeting before: for unforgeability, as a signature scheme it still has the property that the signature itself must have been produced by someone with the long-term private key, with overwhelming probability; and, by virtue of satisfying the functional requirements—
E: —it also now has the unlinkability property that we want. In particular, the blinded public key BK is independent from both of the provers' long-term public keys, PK0 and PK1, and similarly the signature itself is independent from the long-term signing keys, so that the triple reveals nothing about either prover zero or prover one. So, intuitively, the verifier cannot guess the bit with probability better than one half.
E: That's effectively it for the functionality — it's very simple. The EdDSA variant in the draft builds off of RFC 8032; it only covers pure EdDSA, none of the context or prehashing variants. That one is very straightforward — it's often used as an offhand demonstration of the concept in the academic literature. There is also an ECDSA variant in there as well; this is a bit different, in the sense that it's not doing the obvious thing—
E: —the thing, I guess, that you would do for key blinding. I can elaborate if people would like, but it's different enough that we're doing security analysis to determine whether or not it is indeed safe; the intuition, though, is that it is correct.
E: It's a fairly simple, straightforward draft to implement, and there are test vectors available for those who would like to implement it. As I said, the security analysis is underway; we hope to make those results public as soon as they are done, to gain confidence in the constructions. But, as I said, the EdDSA one is fairly straightforward and intuitive — it's the ECDSA one—
E: —that is the more interesting variant. And with that, I guess I have two questions for the group. First: do folks find this to be a compelling use case, and a problem worth working on and trying to solve with this particular construction — signature schemes with key blinding? And secondly, are folks interested in adopting this as an initial draft to do exactly that? I will pause here for questions, and I'll read the chat.
B: We have three minutes for quick questions, if there are any. I think the question of adoption we will probably discuss between the chairs, and we might take it to the mailing list.
E: Works for me. If folks are interested in more clarity on anything, the draft has a link to the repository where it's being developed, and of course we welcome any and all of your feedback and contributions.
H: [inaudible question about the ECDSA variant]

E: Right. So for the ECDSA variant, the intuitive thing you might do is: given a private ECDSA key and a blinding key, which is also a private key, you might just multiply the two together; similarly, you might multiply the public key by the corresponding blinding key, and ergo have a blinded representation of both the private key and the public key.
E: Unfortunately, there's a body of work on related-key attacks and how they can be used to produce forgeries for Schnorr-like signatures as well as ECDSA, and in particular this naive way of blinding an ECDSA public and private key does lend itself to forgeries. I can point people to the relevant references if they're interested. So the tweak we made was, rather than maintain the algebraic relationship between the blinding key and its impact on the — I'm using the word "blind"—
E: Okay — the trick was, rather than maintain the algebraic relationship between the input blinding key and the output blinded public key, to hash the input blinding key to a scalar, to sort of blow away this algebraic relationship and make it so that, basically, you need to produce a collision in this hash in order to produce a forgery. So the construction does kind of depend on security in the random oracle model, intuitively, but that's effectively the gist. I wish I had it written down in slides — perhaps that vocal description was not overly clear — but Chris, I can send you pointers to the diff where it was included, if that would be helpful.
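A hedged sketch of the tweak being described: instead of using the blinding key directly as a scalar (the naive, related-key-attack-prone pattern), the blinding key is first hashed to a scalar. The names, domain-separation string, and tiny modulus below are illustrative assumptions, not the draft's actual parameters.

```python
import hashlib

q = 89  # toy subgroup order, insecure -- illustration only

def hash_to_scalar(bk: bytes) -> int:
    # Hashing severs the algebraic relationship between the input blinding
    # key and the scalar actually applied to the signing key.
    digest = hashlib.sha256(b"ecdsa-key-blind:" + bk).digest()
    return (int.from_bytes(digest, "big") % (q - 1)) + 1  # nonzero scalar

def naive_blind(sk: int, bk_scalar: int) -> int:
    # Vulnerable pattern: sk' = sk * bk keeps an attacker-visible relation.
    return (sk * bk_scalar) % q

def hashed_blind(sk: int, bk: bytes) -> int:
    # The draft-style tweak: sk' = sk * H(bk); the relation to bk is
    # hidden behind the hash.
    return (sk * hash_to_scalar(bk)) % q
```

With the hashed variant, exploiting a chosen related key would require finding useful structure (e.g. a collision) in the hash, which is the random-oracle-style argument mentioned above.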
K: Okay — next slide. Hold on.
K: Okay, great. So I see there were like a hundred people in the session, so I hope to get some feedback on this, even if it's just a sense of what people think. The LAKE working group is defining a protocol called EDHOC for authenticated key establishment for small devices. You can think of it a bit like DTLS.
K: It does the same kind of thing, but it's really tailored towards small packet sizes — few octets to be emitted. If you care about the requirements, you can see them there. So again, like TLS, there's a concept of cipher suites, and in those cipher suites there's a signature algorithm represented, for authenticating one of the parties.
K: There are suites defined that have both ECDSA and EdDSA, and the LAKE working group is trying to figure out which signature algorithms should be part of a mandatory-to-implement set of cipher suites — and, as always, there's lots of argument in IETF working groups about that. But that's not what we want to talk about here. So, next slide.
K: So in this particular context there's an additional argument beyond the usual MTI cipher suite selection issues: essentially, where an adversary controls a provisioned device and can mount—
K: —fault injection attacks, you know, and extract a signing key. For the kind of applications the EDHOC protocol is designed for, that seems much more likely than for TLS as used in data centers or whatever. Thanks to Rene for bringing this up — there's a link there to the thread from the LAKE working group. The context is small, relatively inexpensive commercial devices: the provisioned private key is probably not that well protected, so if you open the device, it won't be zeroed.
K: So I know almost nothing about fault injection, but I did find a few papers that were interesting. The first one I thought was actually pretty good for me, because it explains how you can do fault injection via a kind of low-voltage attack. It's a little bit old, but it's a really nice explanation of how these attacks can be mounted in a realistic environment, and it includes an example — not a novel attack — on RSA signing in that paper.
K: It shows how such a fault injection attack on reading from memory can leak an RSA private key. There's also another one about ECDSA from a similar time frame — the same kind of attack, where you've got a fault injection on memory reads, although the second one is a simulation as opposed to an actual demonstration. And then there's another one on EdDSA.
K: That one is a power analysis attack rather than fault injection. So those are useful papers, and I think what they seem to show is that, at some level, all of the possible sensible options we have to choose from might be vulnerable to these kinds of attacks, in this kind of context, for these kinds of devices. Next slide.
K: So the ask — I think it boils down to two questions. We seem to have three kinds of signature algorithms. Assuming the working group wants to pick a mandatory-to-implement cipher suite with a signature algorithm: is there really much difference between them in this specific attack context?
K: So some help would be useful, if that's the case — or assertions from knowledgeable people that it's more or less the same, or that it's different for different ones. Or could CFRG develop something that's usefully better in this attack context? For context, the LAKE working group, in picking their MTI choices, has kind of oscillated over the last number of months; I think they've currently landed on just saying "just P-256."
K: They probably won't want to wait for an answer, so it's not a question where we need the answer right now and are waiting on it, or on CFRG to do something. But I think it's a generic problem, particularly for this class of device. And again, there was a thread on the CFRG list that didn't really reach anything I thought was a clear answer.
K: Hence this presentation. So, that's the slides. Again, just to summarize: the attacker here controls the provisioned device, is able to mount fault injection attacks, and can, let's say, extract a signing key.
G: Quick remark. So indeed I raised this topic at the LAKE working group. I think the issue is usually the determinism in the nonce generation: as long as one can fix that, EdDSA may be fine. If one cannot fix it — well, ECDSA already doesn't have this determinism.
K: So — the audio chopped a bit there, but I think you were basically saying ECDSA is better, which then leaves me wondering about those other two papers that I referenced.
G: Actually, Steve, I didn't say ECDSA is better. ECDSA is only better because the nonce generation is non-deterministic. If EdDSA were specified in a non-deterministic way, then the problem goes away, as I identified.
G: Well, the whole side-channel attack arena is quite large — I have more than a thousand papers on this topic — so obviously there are attacks on all the different variants. But there are attacks that can easily be prevented that take advantage of deterministic behavior. Sure, yeah.
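The concern about deterministic nonces can be made concrete with the classic nonce-reuse key recovery: if a fault (or a bug) makes a Schnorr-style signer reuse a nonce under two different challenges, the long-term key falls out by linear algebra. This is a minimal sketch with toy numbers; the challenge constants stand in for hashes of two different messages.

```python
# Toy Schnorr-style nonce-reuse recovery (tiny, insecure parameters).
q = 89          # subgroup order
x = 13          # victim's long-term secret key
k = 27          # nonce, reused across two signatures (the induced fault)

c1, c2 = 5, 42  # distinct challenges, standing in for H(R, m1) and H(R, m2)
s1 = (k + c1 * x) % q
s2 = (k + c2 * x) % q

# From the two signature values alone, the attacker solves for x:
#   s1 - s2 = (c1 - c2) * x  (mod q)
recovered = ((s1 - s2) * pow(c1 - c2, -1, q)) % q
print(recovered)  # 13
```

Deterministic schemes like standard EdDSA make nonce reuse impossible in normal operation, but a fault that skips or corrupts the nonce derivation can induce exactly this situation, which is why adding randomness ("hedged" signing) is the mitigation under discussion.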
K: If we're talking about a mandatory-to-implement cipher suite, we're asking everybody to implement it, and if there are known attacks against all the relevant signature schemes — they happen to be different attacks, but they're known in this context — then it's not clear to me that one is better than the other.
G: I think the problem is that lots of attacks are also recycled — for example, against OpenSSL versions that are known to be broken — and you still need to be slightly competent in terms of implementing signature schemes.
M: I think we can do better. And — maybe more a question to the chairs, to be discussed either here or on the list later — how do we progress with the draft-mattsson documents? There was quite large support for adopting them during the adoption call, but there was a discussion about the potential alleged IPR on it. Should we have a second call for adoption?
M: I think the lawyer in court would say that anyway — I agree with you in principle, yeah. That sounds great for the second adoption call.
N: I think it still makes sense to require people to blind the input — blind the random number — in some deterministic way and then apply the non-deterministic input, so we get as much robustness as possible. I think we need that anyway. What I'm a little bit nonplussed about—
N: I've never understood why people want to put signatures into key exchange protocols. If you're doing a key exchange protocol, Diffie-Hellman does a key exchange, and you can build robust systems on top of Diffie-Hellman, as the Signal folk have proved — and they've got a proof of correctness of that.
K: Bart, you need to hit the unmute button as well, I guess — or request audio permission. There you go.
O: These kinds of attacks — fault attacks — aren't just limited to the crypto, right? They allow you, in my understanding anyway, to skip arbitrary instructions that your target is executing. So in the case of signature verification, they might also just skip over the entire verify call altogether.
B: All right — my apologies, I'm probably not going to pronounce it correctly. Willing?
J: So actually I just have a simple question. What does it mean — you call it an MTI signature? What is MTI?
K: Oh sure, apologies for not explaining it. When IETF working groups are developing protocols, typically they would like to specify a mandatory-to-implement — that's what MTI stands for — set of options, so that we increase interoperability, because we hope most or all implementations will include them. So the question here is: which of these signature schemes would we like everybody to implement?
B: Okay — I suggest interested parties are welcome to continue on the CFRG mailing list.

K: I can certainly try again, okay.

B: Well, I think now that you've kind of raised the awareness of this, maybe there might be more uptake — hopefully your presentation may have helped. So I'll—
B: Sounds good. Next is Chris Patton — hold on, just, yeah, let me—
H: It's fun to be on the big screen once again. Okay, all right. I wanted to give a quick update on an individual draft that I've been working on with Richard Barnes and Phillipp Schoppmann, called Verifiable Distributed Aggregation Functions (VDAF), and I'm going to give a quick overview of it. So if you missed IETF 112, don't worry about it — you should get all the context that you need. Speaking of context, here's what this is really about.
H: We've recently formed a new IETF working group called PPM, which stands for Privacy Preserving Measurement. That group is meeting tomorrow, by the way — for anyone who's interested, please attend. The goal of this working group is to standardize cryptographic techniques for what we call privacy-preserving measurement, and right now we're mostly thinking in terms of multi-party computation.
H: So what PPM kind of means is: you have a bunch of users, and you're interested in aggregate statistics about these users, but you don't want to see their individual measurements in the clear, and so you're going to run some sort—
H: —you want to run some sort of multi-party computation to make sure that you don't. So right now we're working on what we're calling our first protocol, and we're hoping that it'll be adopted by the group as the first document. What it does is specify the end-to-end verification and aggregation of measurements over HTTPS.
H: The VDAF document is kind of the core cryptographic component of that PPM protocol. So my objective for this talk is to explain to you what this draft is about, and I also want to ask the CFRG if it is ready for adoption by the group.
H: Okay, a quick overview of VDAF. The main cryptographic technique we're going to use is just simple secret sharing. The way our architecture works is: you have a bunch of clients, and they're going to send secret shares of their measurements to two or more aggregation servers, I should say; and then aggregates are collected by another party, called the collector, who assembles the final result.
H: So in the first step — the starting step — each client is going to split its measurement into input shares, as we call them, and send one input share to each aggregator. Then, in the next step, for each set of input shares the aggregators are going to engage in a multi-party computation, which has basically two goals.
H: One is to verify that the input shares they're getting correspond to a valid measurement, and the other is to basically prepare the input for aggregation — I'll get into that in a little bit. This preparation step is kind of the core meat of the protocol. Then, in the last step, all they have to do after that is combine—
H: —their output shares into aggregate shares, locally, and then send the aggregate shares to the collector; the collector combines these aggregate shares to get the final result. I'm going to give a couple of example protocols, so hopefully this will be clear in a moment. The goal of the spec right now is to actually specify two instantiations of a VDAF, and the first one is Prio, which many people might be familiar with.
H: It's very simple: a client is going to encode its measurement as a vector over some finite field, split that vector into secret shares, and send one vector to each of the aggregators. Then, to prepare these input shares for aggregation, the aggregators are going to just make sure that the vectors they have sum up to a valid input — without actually learning what the input is. All they have to do in the aggregation step is sum up their vectors, and then all the collector has to do is sum up the aggregate shares.
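The additive secret sharing at the heart of Prio can be sketched in a few lines. This is a hedged illustration only: the field modulus and the one-hot vector encoding are arbitrary choices, and Prio's validity-proof machinery is omitted.

```python
import secrets

P = 2**31 - 1  # stand-in field modulus; not a parameter from the draft

def split(vec, n=2):
    # Additive secret sharing: n vectors that look random individually
    # but sum, element-wise mod P, to the original measurement.
    others = [[secrets.randbelow(P) for _ in vec] for _ in range(n - 1)]
    last = [(v - sum(o[i] for o in others)) % P for i, v in enumerate(vec)]
    return others + [last]

def vec_add(a, b):
    return [(x + y) % P for x, y in zip(a, b)]

# Two clients report one-hot choices among three options.
m1, m2 = [1, 0, 0], [0, 0, 1]
s1, s2 = split(m1), split(m2)

# Each aggregator sums the shares it received; neither sees a measurement.
agg0 = vec_add(s1[0], s2[0])
agg1 = vec_add(s1[1], s2[1])

# The collector adds the aggregate shares and recovers only the totals.
print(vec_add(agg0, agg1))  # [1, 0, 1]
```

Note how the collector learns the per-option totals but nothing about which client chose which option, which is the whole point of the architecture described above.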
H: Now, the other protocol that we want to specify is called Poplar. This is another paper from the same group of folks. The problem it solves is the heavy hitters problem: each client's measurement is an n-bit string, and you want to know which of these strings occur at least some number of times — t times.
H: So in our example here, we have three-bit strings, and only two of those strings occur in the set at least two times. The solution for this problem — I won't go too much into the detail — is called an incremental distributed point function (IDPF), and what this allows you to do—
H: —so, clients are going to compute what are called IDPF shares from their input, and what an IDPF share allows you to do is kind of query the input on a candidate prefix. So if your input, for example, is 011—
H: —you can ask: is 0 a prefix of the string? It is. Is 1 a prefix of the string? It is not. Et cetera. The result of the query on your IDPF share is a share of that answer — yes, this is a prefix; no, this is not a prefix — and what you can do with that is aggregate these into shares of hit counts.
H: So basically, your aggregate share is a secret share of the number of times a candidate prefix occurred in the set. This turns out to fit the VDAF shape quite nicely, with a few minor tweaks. In the sharding step, the client generates its IDPF shares from its input string; and now, in the preparation phase, we're going to do something a little bit more complicated.
H: We're going to evaluate our IDPF shares on a set of candidate prefixes, but then we need to verify that the output is well formed: basically, each aggregator should only have a share of a vector in which only one candidate prefix matches a given input.
H: I think I'm running a little bit low on time, so I won't get too much into the weeds here. Okay — so there's the IDPF share evaluation, then verification that the output share you get is well formed; and then, in the aggregation phase, all you have to do is sum your output shares into a share of the hit counts, and you hand those to the collector. The way this helps you find the heavy hitters is that the collector is going to run this protocol — basically shard, prepare, aggregate, unshard — several times, with several different sets of candidate prefixes, until it finds the set of heavy hitters.
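The collector-driven search can be illustrated without any cryptography: extend surviving prefixes one bit at a time, keeping only those whose hit counts reach the threshold (in the real protocol those counts come back as secret-shared IDPF evaluations). A plaintext sketch of that outer loop:

```python
def heavy_hitters(strings, t, nbits):
    # Poplar's outer loop, minus the crypto: candidate prefixes grow one
    # bit per round, pruned against the threshold t.
    prefixes = [""]
    for _ in range(nbits):
        candidates = [p + b for p in prefixes for b in "01"]
        prefixes = [c for c in candidates
                    if sum(s.startswith(c) for s in strings) >= t]
    return prefixes

# Matches the talk's example: 3-bit strings, threshold 2.
reports = ["011", "011", "101", "110", "101"]
print(heavy_hitters(reports, t=2, nbits=3))  # ['011', '101']
```

Because whole subtrees of candidate prefixes are pruned early, the number of secure evaluations stays far below the 2^n possible strings.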
H: Okay, so, very quickly: we've made a lot of progress since IETF 112. What we have today is basically a complete specification of Prio, including a reference implementation that generates test vectors, and there's at least one implementation of this that's pretty fast. So the next chunk of work is to complete the spec for Poplar.
H: There are a couple of implementations of pieces of this — it's a little bit more complicated, and none of them interoperate yet, which is why we want to have a spec in the CFRG, especially because people want to implement it. The other things: we're working on security analysis, and we'll need to flesh out the security considerations. And of course—
H: —the hope is that this drives cryptography research towards the design of VDAFs that solve different private data aggregation problems. There are a few other open issues, but in my view — based on what I've seen; I'm kind of new to this — this document is mature enough to start working on in the group. I'm curious to know if people are interested, and what the next step would be.
E: Yeah, thanks Chris for the presentation. I'm obviously supportive of the group adopting this as a research group item — it's well within the purview of CFRG to work on bringing something like this to the rest of the industry by standardizing it. As Chris said, we have a number of implementations already; Prio especially has a lot of experience as running code. So this seems like kind of an obvious candidate for adoption, and I would like to see that taken up.
B: Okay, cheers — we'll talk to you, Christopher. I think tentatively we're happy to do an adoption call, but we'll just double-check between ourselves. Thank you.
B
Shall I share? I'll share and pass controls to you again. Great, thank you. All right.
B
Oh, Christopher, you need to stop sharing.
R
Q
Oh great, I have the slide controls, great. So yes, thank you. This is about a rather applied topic: an exploit of the AES-GCM authentication tag for hidden communications.
Q
So, next slide. First, about hidden communication, just very briefly: this is about hidden malware communication in critical infrastructures.
Q
In the example below you have Alice at the left and Bob at the right; they just exchange some benign IP datagrams, and Mallory, who somehow got access to a node on the path, is communicating with Eve just by exploiting some of the unused or less-used fields, like TTL et cetera, that are not that apparent. So misuse of existing communications can happen as a covert channel: IP TTL, flags, options.
Q
Now, if we have this critical infrastructure and we want to harden it, we need to assume that any system is vulnerable. That is, the question is not when or if a zero-day is discovered, but who discovers this vulnerability and when. What is typically done in industry now to protect the key material is to use so-called CKMDs, crypto key management devices: trusted platform modules or smart cards that are assumed uncompromisable, so they protect the key material physically and offer some well-known API.
Q
This key material cannot be leaked because it is physically shielded, meaning there are just some well-known APIs to encrypt or decrypt using these keys, and this can be either a hardware module or it can be networked, as you see at the right. The question that we ask ourselves is: can malware exploit cryptography for hidden communication in such hardened systems? And the answer is obviously yes. So, AES-GCM: I'm not going into details, I just want to outline it. We have some input parameters to the algorithm; we have an initialization vector.
Q
We have some plaintext message blocks, and the key material, which is protected, so we don't believe we can change this or find any exploit related to it, and we have output parameters shown in red: the ciphertext blocks and the authentication tag. Now, concerning the initialization vector, there have already been some discussions on the lists during the definition and deployment of AES-GCM.
Q
There was a discussion whether initialization vectors should be chosen randomly, and the answer was no, a clear recommendation not to do so, in particular because of this ability for hidden communication. So it is recommended to use deterministic, counting initialization vectors. When using such a CKMD, state keeping is difficult: these are typically stateless devices, meaning they just map a specific ID and a specific message to a specific key, so deterministic initialization vectors will likely be managed by the requesting device.
Q
If we now consider legitimate device communication in this context that we have analyzed, we have at the left a sender application that uses a kind of security proxy. That is, a sender application, a benign one, just wants to send some plaintext P to a receiver application at the right, and on its behalf it uses a security proxy consisting of a sender and a sender CKMD.
Q
So even if the sender is compromised, the sender CKMD nevertheless protects the key material. What it does: if a legitimate message arrives at the sender, it computes a new initialization vector and sends an encrypt request with the plain message and the initialization vector to the CKMD, which encrypts the message and gives back a ciphertext and an authentication tag, which are then transmitted to the receiver security proxy. That proxy decrypts all of this information, so it passes ciphertext, initialization vector and authentication tag to the receiver CKMD.
Q
However, please note that this receiver CKMD will typically refuse the decryption if one of these three parameters is faulty, meaning if the authentication tag fails, it will not decrypt. As a response we get the plaintext P back, and the receiver proxy then forwards this message to the receiver application.
Q
What can we now do? Two observations. Firstly, GCM works similarly to stream ciphers; that is, we can decrypt by encrypting with the same initialization vector, and we can decrypt while circumventing the authenticity verification. That's a problem common to other counter encryption modes. Mapping this generic sequence diagram to an IoT infrastructure, think of a smart metering infrastructure, for instance: you have IoT devices that are potentially compromisable, since there is physical access to these devices.
Q
The traffic from these devices is concentrated and forwarded to an edge server that is reachable from the internet and therefore might also be compromisable. An IDS supervises the communication to this edge server, and it is worth noting that passing any keys to this IDS is not advisable. We know about the SolarWinds Sunburst/Orion exploits, so giving keys and administrative power to an IDS is not that good a design decision.
Q
What we want to do is implement some subliminal communication between a smart meter and an edge server without this information exchange being visible to the outside. So we want to exploit, in particular, the authentication tag, which has shown to be available for this, and we want to circumvent the CKMD authenticity verification in the receiver.
Q
But please note that this communication is bidirectional, so we can use exactly the same approach in both directions, and the solution is quite straightforward. The subliminal sender and the subliminal receiver, so these security proxies, have been compromised. They cannot leak the key material, but nevertheless they can act whenever a sender application sends a legitimate, benign message that is to be encrypted.
Q
The subliminal sender chooses an initialization vector, encrypts this information, and what it does is simply replace part of the authentication tag with the subliminal message to be sent to the receiver, and it sends this message together with the modified, so the subliminal, authentication tag to the subliminal receiver. The subliminal receiver uses the stream cipher property.
Q
It simply takes a random string of length corresponding to the ciphertext, encrypts this data using the same initialization vector, and gets CR, a ciphertext of the random text, and a tag that it doesn't use. Now, by XORing the random text and its ciphertext we obtain the keystream, and by XORing the keystream with the ciphertext we get the plaintext. So the subliminal receiver, using only this encrypt operation, has access to the plaintext, and from the authentication tag it obtains the subliminal message.
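The stream-cipher property being exploited here is easy to demonstrate. The sketch below uses a hash-based counter-mode keystream as a stand-in for the AES-CTR keystream inside GCM (so the toy cipher and all names are illustrative, not AES-GCM itself): a party with only an encrypt oracle and a reused IV recovers the keystream, and with it the plaintext.

```python
import hashlib

def ctr_keystream(key: bytes, iv: bytes, length: int) -> bytes:
    # Toy counter-mode keystream; a stand-in for AES-CTR inside GCM.
    out, ctr = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + iv + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:length]

def encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    ks = ctr_keystream(key, iv, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

key, iv = b"locked-in-the-CKMD", b"fixed-iv"
c = encrypt(key, iv, b"meter reading: 42 kWh")  # legitimate ciphertext

# Subliminal receiver: encrypt a known string R under the SAME IV...
r = bytes(21)                              # any known input
cr = encrypt(key, iv, r)
ks = bytes(a ^ b for a, b in zip(cr, r))   # keystream = CR xor R
# ...then XOR the keystream onto the intercepted ciphertext.
recovered = bytes(a ^ b for a, b in zip(c, ks))
print(recovered)  # b'meter reading: 42 kWh'
```

Note that no decrypt API is ever called, which is exactly why the receiver CKMD's refusal to decrypt on a bad tag does not help.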
Q
The only issue is that this subliminal information needs to be somehow hidden by the subliminal sender. Can we do better? Yes, of course. We can keep exactly the same process, but the hidden information is XORed into the authentication tag; that is, instead of replacing, we XOR it into the authentication tag and get encryption of our hidden information for free. The process at the receiver is absolutely the same one, and we get the plaintext. But wait.
Q
L
Q
Moreover, the sender of this hidden information can be located anywhere on the path where it can access the authentication tag, so it need not necessarily be collocated with the device. I'm getting short of time, so, yes, mitigations. Thank you. You have already standardized GCM-SIV, the synthetic initialization vector mode, which solves exactly this problem. The issue is that we have initialization vectors that can be repeatedly used; it's clear that this should not happen, but it happens in this setup.
Q
We can generate IVs on the CKMD, but this can lead to CKMD-originated subliminal channels, and we have a deployed mass of systems that need to be maintained. Alternatively, we could use distinct keys for each direction, forward and reverse, or we could use a segmented set of initialization vectors for each direction. I'm almost done: so, combining CKMDs with GCM encryption can show security shortcomings, and yes, obviously GCM-SIV is recommended, or some other remedies.
Q
But it's a general architectural problem, and yes, one should consider this opportunity. You have a link over here that leads to the paper. If you don't have access, please drop me a note and I'm happy to send you a pre-published version. On the last slide you have some information on the project that we're doing; this was a result of this project. So yeah, happy to take questions, and thank you for the opportunity to present.
B
Okay, we are slightly behind, I'm sorry. I locked the queue, so two questions: Jonathan first, Jonathan Hoyland.
L
Jonathan Hoyland, Cloudflare. One way you could potentially detect this is in use is surely by having some middlebox munge the authentication tag, right? They just mess it up, and the receiver should then reject the message, but it won't. Yes?
Q
Sure, there are many. For instance, you could also intercept the network communication here: by seeing that you have two encrypt requests instead of one, you could identify that something is happening, or by logging at this receiver CKMD, to see that you have a mass of encrypt requests instead of decrypts; you should have symmetry there by default. So we have several countermeasures; they are mentioned in the paper. But thank you, yes, obviously this is there.
R
Scott, okay. Yes, if I understand, you are assuming the sender and receiver crypto engines are both compromised. If you assume that, can't they replace the GCM algorithms with anything they find convenient and use that as an exploit? Why is it specific to GCM?
Q
They could, in theory, replace anything. The topic was derived from a rather applied project where we have an existing system, and there we have AES-GCM. And yes, in theory you could also replace some messages, you could introduce some additional information, but we tried to stay with the minimum of exploit that is needed to transfer information, and in this case just the authentication tag is sufficient.
B
All right, thank you. Can you please stop sharing slides?
Q
Yes, okay, thank you. Right, thank you very much.
B
Oh okay, fine, hold on. Did you... okay.
S
Okay, all right. Can everyone hear me and see the slides?
S
Yes? Yeah, all right. So hi, my name is Nimrod Aviram, and I would like to present a dual PRF construction. This is joint work with Benjamin Dowling, Ilan Komargodski, Kenny Paterson and Eyal Ronen.
S
In modern protocols we usually have the client and server agree on a shared cryptographic secret, along with other protocol parameters, and then we feed the shared secret and the protocol transcript into a KDF, a key derivation function, to arrive at the per-session shared secret, and from that shared secret we derive symmetric keys.
S
That key derivation function should be a PRF; that is, the output should be indistinguishable from random when the key is uniformly distributed, even for an attacker that fully controls the transcript. So we can use HMAC, which is provably a PRF under very mild assumptions, and it would appear everything is well and good.
S
However, in some cases we have more than one key, and now we're asking what we should do. For example, in TLS 1.3, where we use both a Diffie-Hellman key exchange and a pre-shared key, and this happens a lot, for example in resumption, which is very widely used; in hybrid key exchange, where we have both a classical key exchange algorithm and a post-quantum one; and in the Signal double ratchet protocol, where we combine an existing shared secret state with new keying material that is the output of Diffie-Hellman.
S
The general approach for doing this is to combine the two keys into a single, unified key using a key combiner function, and then compute HMAC with that key over, say, the protocol transcript, and have that be the output of the entire KDF.
S
This is largely done both in existing constructions and in our proposal, and when we say that the key combiner function takes two keys, we mean it should be a dual PRF; that is, the output should be indistinguishable from random when one key is uniformly distributed and the other key might be controlled by the attacker. And this scenario is actually realistic in protocols.
S
So it's then natural to ask: can we use HMAC as that key combiner function? And the answer seems to be no, because HMAC is generally not a dual PRF. To be fair, it was never claimed or designed to be a dual PRF under any assumption, and it's definitely not one if the underlying hash function is not collision resistant.
S
That's where our proposed construction comes in. We have a construction for a dual PRF. The construction uses an underlying hash function as a basic building block; the hash function can be, say, SHA-256 or any other standard hash function. It doesn't even have to be collision resistant for the construction to be secure.
S
The whole construction is fully practical. It only uses symmetric cryptography and is overall cheap to compute, and we'll get to that in a minute. It's especially cheap when we compare it to asymmetric cryptography.
S
We take the output, prepend some common reference string to it, and use that as the data input for HMAC. Then we do the same thing again with the key roles swapped.
S
We then take the two HMAC outputs, XOR them, and run that through the hash function one last time, and this is the output of the whole construction. Note how everything here is symmetric and standardized cryptography, except for the F function, which I will now describe.
S
How do we compute the expanding, injective, one-way function F? We have a message M, and we split it into blocks with the same size as the hash function block size. Then each message block is run through the hash function several times, each time while first processing an input block of an appropriate index.
S
So we first prepend a full input block of zeros, then the message block, and we run that through the hash function and get the start of the output. Then we do the same thing again, but with a whole block of ones at the start: run that through the hash function and get the continuation of the output. We concatenate the hash function results of all of these computations, and this is the expanding function F. Note that it is also symmetric and cheap to compute.
S
In this diagram we process each block twice, but this is only for simplicity; in practice we propose processing each block three times. The number of times we process each block is called the expansion factor, and this is the parameter of the construction. Why do we choose an expansion factor of three? What this expansion factor does is help the function be injective: the more we expand the input, the longer the output and the higher the chance that the function is injective.
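As an illustration only, here is one way the pieces described above could fit together in Python. The encodings, the index-block format, the common reference string, and the ordering of key roles are guesses reconstructed from the talk, not the paper's specification; consult the eprint paper for the real construction.

```python
import hashlib
import hmac

BLOCK = 64  # SHA-256 block size in bytes

def expand_F(key: bytes, factor: int = 3) -> bytes:
    """Illustrative expanding injective one-way function F: each input
    block is hashed `factor` times, each time prefixed with a distinct
    full-width index block."""
    blocks = [key[i:i + BLOCK] for i in range(0, len(key), BLOCK)] or [b""]
    out = b""
    for b in blocks:
        for idx in range(factor):
            prefix = bytes([idx]) * BLOCK  # index block: 0x00.., 0x01.., ...
            out += hashlib.sha256(prefix + b).digest()
    return out

def dual_prf(k1: bytes, k2: bytes, crs: bytes = b"example-CRS") -> bytes:
    """Combine two keys: HMAC(k, CRS || F(other key)) both ways, XOR the
    results, hash once more."""
    a = hmac.new(k1, crs + expand_F(k2), hashlib.sha256).digest()
    b = hmac.new(k2, crs + expand_F(k1), hashlib.sha256).digest()
    x = bytes(p ^ q for p, q in zip(a, b))
    return hashlib.sha256(x).digest()

out = dual_prf(b"classical-share", b"post-quantum-share")
print(out.hex())
```

Everything here is symmetric cryptography, which is the point the speaker makes about the construction's cost.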
S
It would be very hard to upgrade, and we're still dealing with these situations now, so we want to be conservative here. So we assumed that whatever hash function we use will eventually be as broken as MD5 is broken today. Even with MD5 and current cryptanalysis, an expansion factor of two is unbreakable to our knowledge, and taking an expansion factor of three seems like a good security margin.
S
We said the construction is fast; it's much cheaper than asymmetric cryptography. It takes roughly seven microseconds to combine two keys. Comparing that with, say, HKDF using concatenation: HKDF takes roughly one microsecond, so the overhead is roughly six microseconds, and this is very cheap compared to asymmetric cryptography.
S
Even just doing Diffie-Hellman over X25519, with two exponentiations per connection, each exponentiation takes roughly 45 microseconds, so just the Diffie-Hellman is about 90 microseconds. Signing with ECDSA takes either 80 or 25 microseconds, and if we want to also use NTRU, that would add some more. So the largest overhead we came across is when doing only two Diffie-Hellman exponentiations and a signature, and even then the overhead is only five percent.
S
This would imply we think of HMAC as a dual PRF, even though it is generally not one, in hybrid key exchange, where we combine our classical and post-quantum key exchanges, both in the IETF work and in similar proposals from ETSI and NIST.
S
We actually concatenate the two keys and then feed them to the KDF as usual. And even with only a single key in TLS 1.3, where we only use Diffie-Hellman, that Diffie-Hellman output is actually passed through the message input of HMAC, which would again imply we treat HMAC as a dual PRF.
S
So we think standardizing a dual PRF would help make protocols more robust, both with multiple keys and also with a single key. We would like to write an internet draft, and we are asking: are people interested? Thank you, I'm happy to take questions.
S
Yeah, Chris Wood, please go ahead.
E
Yeah, thanks. I have one question and then more of a comment, the first of which is on the claim that HMAC, or HKDF-Extract, is not a dual PRF. I'm not a cryptographer, so I defer to you experts to determine whether or not that is true. But my concern is that many of the proofs that I've seen for TLS 1.3 do assume that HKDF-Extract is indeed a dual PRF.
E
A second comment is on the idea as to whether or not to write a draft. I think having something that has rigorously proven dual PRF properties would be nice; however, for applications like MLS, which require more than one key as input, it might be nice if an n-PRF was the construction output. On the other end, I know Chris, forgetting his last name, proposed an n-PRF construction specifically for MLS.
E
S
E
Well, I just pulled up a paper from, who is it, sorry, scrolling, about that.
S
I'm sorry, yeah, I think we're just running out of time, so I would say that we have a good, extensive discussion of this in our paper on eprint, and we can take it to the list, if that's okay.
E
Sure, I just wanted to comment briefly that in various proofs they do use this assumption to make certain bounds on adversarial advantage in their hops amongst various games. So it does seem relevant, and I'll take a look at the paper; we can try to find it.
H
My question is, I was curious about your benchmarks. Is this comparable to what HKDF does? What's the value of comparing it to asymmetric crypto?
S
So we're trying to argue that if we add this to a protocol, the overhead is minimal.
H
Okay, okay. So I guess the one apples-to-apples comparison is HKDF, right? So if it's comparably fast to HKDF, we're in good shape.
S
Yeah, so it's slower than HKDF, but we think it shouldn't matter that much, because we only do it once. Yeah.
H
B
Yes, I closed the queue after Philip, so Philip, quickly. Yeah.
N
I like this a lot. There are some other things, if we're going to revisit HKDF: there were a few landmines that I came across when I was trying to use it in the Mesh.
N
Some of the assumptions that are reasonable to have as a user about what certain inputs will do are not actually true, and that just hit me a few times in a way that was a surprise that should be eliminated.
N
The other point I'd make is, you're looking at HMAC. I think that's the right thing to use as far as timing is concerned, but if I was going to move to a different key derivation function at this point, I would not want to use HMAC. I'd want to use KMAC, because HMAC is a hack trying to make a MAC out of a digest function that wasn't designed to be one.
N
S
All right, thanks. I'm not familiar enough with KMAC to comment on this one.
S
All right. That is not always advantageous, but I think we should discuss it further on the list; if you'd like, I'd be happy to.
S
B
All right, thank you. If you can stop sharing slides, that would be great.
B
Oh actually, that might help. All right, fine, okay. The next one is the AEGIS family of authenticated encryption algorithms. Okay, let's do that.
B
C
Okay, so I'll introduce the AEGIS fast authenticated encryption family.
C
A few things about the design; I'll be very brief and not go too much into boring details. The design was actually inspired by Pelican MAC, which is a MAC designed by the AES designers to be faster than AES, because for a MAC you can be faster, and we then turned this into a stream cipher, which is actually also a lot faster than AES.
C
We have a 128-bit key and IV, and all the words here are 128 bits, but there is a large state of 8 times 128 bits, 1024 bits, in AEGIS-128L. The scheme is very modular and easy to analyze: you take your key and your IV, you do 10 steps of the update operation, and then in every next step you absorb two message words, you produce two keystream words which you add to the plaintext, and you output the ciphertext.
C
At the end you add some length information, then you do seven more steps, and you get the 128-bit MAC value out. So it's a stream cipher with built-in MAC generation. Inside the AEGIS update operation there are eight blocks, and they are all updated using one round of AES, and they update, as you can see, in parallel.
C
Properties: we propose two variants. The 128L variant has 128-bit security against confidentiality attacks and for authentication.
C
There is also a 256-bit version that actually has 256-bit security against key search, and we decided to keep 128-bit tags, so we have 2^128 against forgery. For the smaller version we allow use of 2^48 messages per key, which, I want to point out, is much larger than what you can get for the same security level with, for example, GCM or OCB. For the large version:
C
There are no restrictions on the number of messages in practice, but we want to point out that if you could do 2^128 attempts, you could do an online forgery attack; you would need a huge number of interactions with the verifier, and then you could shortcut key search. But we decided this was not really a problem. Security properties: unlike GCM, it's key committing, and this actually stops key search and key partitioning attacks. I don't have time to go into detail.
C
GCM also has some issues when you use very short nonces; you may get some security problems. This is not a problem here: we recommend maximum-size nonces, but they can be shortened. What we don't achieve is resistance to nonce reuse. We also don't achieve resistance to release of unverified plaintext, so you could be in trouble if you start releasing plaintext before you check the MAC. And we also didn't design the scheme to be compactly committing, which could be used, for example, for franking. So these last three properties are by design.
C
There has been quite some independent security evaluation. The cipher was designed nine years ago and was submitted to the open CAESAR competition; there were close to 60 schemes, and in the end a handful of those were kept for the final portfolio, including AEGIS-128. There has been some independent analysis by other teams; there has been some correlation analysis on AEGIS-256.
C
We are not concerned about these attacks. Of course, theoretically it's below 2^256, but they require between 2^150 and 2^160 ciphertext blocks, so we're not concerned, and I spoke to the people who did this attack; they don't see how to improve it. They have made some attempts in the last year, but they didn't get much further than a small factor of two or four. Also very interesting: very recently, this month actually, this week at FSE, two attacks have been published on AEGIS.
C
On AEGIS-128, which is actually not the version in the draft, because in the draft we have 128L and 256, and they break about five of the ten rounds with a reasonable complexity.
C
This is interesting because it shows independent analysis. This is an attack in which you vary the IV, with chosen-IV attacks, but we also want to point out that if you take four or five rounds out of ten of AES, there are also quite efficient attacks. So we think it is normal that in an AES-based scheme, if you have only four or five rounds, there is an efficient attack; this is not a reason at all for concern. It will be very hard to scale up those attacks,
C
as for AES. And finally, performance: the scheme is highly parallelizable, online for encryption, and can make optimal use of the AES instructions. I'll just show you some benchmark numbers on the next slide. So, Intel Skylake: as you see on the bottom, the speed goes up a lot. This is cycles per byte, so the smaller the faster; it goes below one above 1K messages. I zoomed in on the top, and there you see that it's about twice as fast as GCM. It goes down to the number
C
I promised, around 0.25 cycles per byte. We have similar results for ARM, which I will skip, and I'll conclude here, saying that AEGIS is a very simple design, with variants for 128-bit and 256-bit security. It's also easy to analyze, easy to implement, and it offers a very high security level. The design is targeted at platforms with AES support, but can also be implemented efficiently elsewhere.
C
H
No, no question really. I just wanted to say I think AEGIS is a beautiful construction and I would love to see it standardized. So I absolutely
P
support the CFRG working on a draft.
C
Well, AEGIS is not exactly lightweight; it's a high-speed design. There is a NIST lightweight competition going on, and we didn't submit it there because we didn't think it fitted there; it's really more like a high-performance design. NIST has announced that they will complete the competition soon, but there are no results yet. So I think in general it's definitely one of the faster designs.
C
E
C
So AEGIS is vulnerable to nonce reuse; by design it's better than GCM. In GCM, with one nonce reuse you lose your authentication key and you have a big problem. In AEGIS, it's clear: there is a paper, and this was actually in our written design document, though the details were not worked out, but the paper has shown that it needed 15 reuses of one nonce, and then you can recover the state. Now, this is a design decision.
C
We believe that if you want to be resistant against this, you need two passes or you need lower performance; we don't think you can achieve this performance with a nonce-misuse-resistant scheme. So we warn implementers about this. It's also clearly pointed out in the draft that you have to be careful, just as for GCM, that you don't reuse your nonces, and luckily the consequences are not as bad as for GCM.
B
C
B
T
I will just drive, right? Yeah, please. Okay, thank you. There we go, rock and roll. I'm Dan Harkins, and this is a proposal for making some changes to, or additions to, HPKE. Next slide, please. It was recently published as RFC 9180, and I think it's a good time to make this ask again. The issue is that HPKE is a really nice construct and I like it a lot, but it doesn't work for some use cases.
T
For instance, for devices operating in a constrained environment, the serialization and deserialization for the NIST keys is over twice as long as it needs to be, and so I'm proposing that we could address that by using the RFC 6090 compact output serialization.
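The size argument is straightforward: an uncompressed P-256 point is 65 octets, while RFC 6090's compact representation keeps only the 32-octet x-coordinate. A sketch using the well-known P-256 generator coordinates:

```python
# P-256 generator coordinates (from the published NIST curve parameters).
gx = 0x6B17D1F2E12C4247F8BCE6E563A440F277037D812DEB33A0F4A13945D898C296
gy = 0x4FE342E2FE1A7F9B8EE7EB4A7C0F9E162BCE33576B315ECECBB6406837BF51F5

# SEC1 uncompressed encoding: 0x04 || X || Y -> 65 octets.
uncompressed = b"\x04" + gx.to_bytes(32, "big") + gy.to_bytes(32, "big")

# RFC 6090 compact representation: X only -> 32 octets.
compact = gx.to_bytes(32, "big")

print(len(uncompressed), len(compact))  # 65 32
```

Since x-only ECDH never needs the y-coordinate of the result, dropping it halves the encoding without losing anything the KEM actually uses.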
T
Another issue that I have with HPKE is that it doesn't work on lossy networks. It assumes that there is guaranteed, in-order delivery of all the packets that are being used, and that's because the sequence counter that's being used with the AEAD cipher is completely inside of the context. The user has no ability to know what the sequence number is or to have any control over it.
T
So if there's any packet loss or packet reordering, the sender and receiver get out of sync and everything just falls apart, and since the internet does not provide guaranteed, in-order delivery of packets, I think this is a problem we should address. For the first one, it's pretty easy: we can solve that by just assigning some new KEMs that do a compact serialization. But for the second problem, next slide please, I have a couple of proposals.
T
One of them is, we can use a deterministic AEAD and not worry about the nonce. If the nonce is causing problems, then let's just use something that doesn't care about a nonce, like SIV. Now, of course, SIV has problems regarding some of the security of the deterministic mode, but you can achieve semantic security with SIV if the plaintext carries enough unpredictability, which the nonce normally would have provided in a normal AEAD scheme.
T
If not, what we could do is use a rolling replay window analogous to what was done with IPsec in RFC 2401, where you've got a bitmap of received packets. That allows you to receive them out of order, and some can get dropped; packets that are way too old will get thrown on the floor, and the window advances as you verify received packets.
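The window logic being described is small. Here is a sketch of an RFC 2401-style anti-replay check; the window size and method names are illustrative, not taken from any draft.

```python
class ReplayWindow:
    """RFC 2401-style sliding anti-replay window (illustrative sketch)."""

    def __init__(self, size: int = 64):
        self.size = size
        self.top = 0      # highest sequence number seen so far
        self.bitmap = 0   # bit i set => sequence (top - i) was received

    def check_and_update(self, seq: int) -> bool:
        """Return True if `seq` is fresh, marking it as received."""
        if seq > self.top:                  # new high: slide the window
            shift = seq - self.top
            self.bitmap = ((self.bitmap << shift) | 1) & ((1 << self.size) - 1)
            self.top = seq
            return True
        offset = self.top - seq
        if offset >= self.size:             # older than the window: drop
            return False
        if self.bitmap & (1 << offset):     # already seen: replay, drop
            return False
        self.bitmap |= 1 << offset          # out of order but fresh
        return True

w = ReplayWindow()
print([w.check_and_update(s) for s in (3, 5, 4, 5, 1)])
# [True, True, True, False, True]
```

Out-of-order but fresh packets (4 and 1 above) are accepted; the repeated 5 is rejected as a replay, and anything older than the window would be dropped outright.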
T
This shouldn't be a problem, because that information would already be known to any sort of attacker who can count, and we're not going to be exposing the secret nonce; we're just going to be exposing the four-octet sequence number that gets XORed onto it. So, next slide please.
T
So what I'm asking for is three things. I want to add new KEMs for the NIST curves that do compact serialization. I'd like to add support for AES-SIV as a deterministic cipher mode in HPKE. And, for the situations where AES-SIV would not be appropriate on lossy networks, I'd like to have a defined way to use an RFC 2401-style replay window to deal with packet loss and reordering. Next slide, please. I do have a draft; it's in the -01 version.
T
Please take a look, and I also have running code. If you want to take a look at my GitHub, there's an HPKE wrap; it's fully compliant with RFC 9180.
T
It handles all of the test vectors, but I also added additional test vectors to do compact representation with the new KEMs and deterministic authenticated encryption with SIV. I just basically stole a couple of values from the reserved number space, but hopefully that won't be a problem. So please take a look. Next slide, please. We have running code, and I hope we can get rough consensus to adopt this as a work item.
K
All right, a couple of comments. One is, with 9180, when I combine all the KEMs and KDFs and so on, I have 480 different combinations of algorithms, so there's already a lot. However, these seem like reasonable changes.
K
The other comment I have is, maybe HPKE should be thought of as moving from CFRG to an IETF context, because perhaps the part where we needed CFRG expertise is in the past, and these kinds of changes seem like more engineering changes than CFRG-like changes. I wouldn't be too focused on it, but just for things like, you know, changing the point representations.
K
No, I'm not recommending that; it just occurs to me that it might start to make sense, because maybe the cryptographic bits of this are mostly done and it's more about engineering at this stage. Fair comment. And then I actually have one question: does your code now include X25519?
K
No
okay,
so
it's
it's
just
the
nitch
curves
for
now
still
yeah,
okay
cool
well.
Otherwise
I
I'd
be
supportive
of
this
happening.
I
I'd
probably
be
a
little
bit
happier
if
it
didn't
happen
immediately.
I
don't
know
I
don't
know
I
have
to
think
about
it.
E
Yeah, thanks, Dan. I also think that these are perfectly fine additions. Well, the KEM with the different compressed format is a fine addition; I'm a little bit concerned about the deterministic AEAD and the implications it has for how you use it, so I'm less supportive of that. But I guess, as a meta comment, here is a question.
E
The registry policy for adding anything new, any new KEM, AEAD, or KDF, is Specification Required, which doesn't require this to be published or adopted or anything; it just requires a document to exist, plus approval from one of the experts. So, do we need to establish designated experts to review these requests, so that we can get Dan's algorithms into the registry?
B
Well, if there is enough support to do this, it seems like a relatively simple and smallish document, so we should just get it done, like, you know, the extra LMS extensions, for example. I think so, though I say this without consulting my co-chairs.
B
If there is enough interest, let's do that, Christopher.
E
So, less work overall for everyone. But I will email the chairs and see if we can get the designated experts set up. Okay, thank you.
T
We had another Christopher in there, didn't we? He just took himself off.
B
All right, great. We have three minutes left, and Chris sacrificed one of his presentations, which basically used up all of this time. So I don't think there is a point, unless you want to say a few words very quickly.
E
Yeah, thanks. Yeah, there's definitely not time; I'll try to give just a brief summary of what I was going to talk about.
E
It was mostly going to be a reflection on, you know, the documents that the CFRG is producing right now and has produced in the past, and their importance to the community, in particular the IETF community, and how they use CFRG specifications for specifying protocols, conducting security analysis, and whatnot; trying to point out that there are probably a number of ways in which we can improve, as a group, the output of the documents in terms of clarity, consistency, and correctness. And it's kind of a plea for, you know, people, for volunteers who are interested in sort of the editorial production of these documents, to brainstorm things that can be done to improve document quality.
E
Be
it
with
respect
to
the
pseudo
code.
That's
produced
the
terminology,
that's
used
in
the
documents,
you
name
it
and
up
won't,
go
in
the
slides
or
anything
but
I'll.
Try
to
summarize
and
send
a
message
to
the
list,
because
I
think
there's
there's
something
we
can
be
done.
That
can
be
done
here
and
that's.
B
All
right,
and
with
this
we
have
one
minute
left,
which
I
give
back
to
you.
Thank
you
for
coming
and
see
you
online
and
hopefully
again
next
time.