From YouTube: IETF111-CFRG-20210730-2300
Description
CFRG meeting session at IETF111
2021/07/30 2300
https://datatracker.ietf.org/meeting/111/proceedings/
B
Great, so welcome, everyone. This has been a long IETF, and welcome to the last session of the whole event. If folks aren't aware, this session is being recorded. There is a minute taker in CodiMD; thank you, Rich Salz, for doing so. As for the Jabber relay, the Jabber should be connected directly through to the chat, so it should be all integrated. If there's something in Jabber that doesn't end up there, please raise it in the chat.
B
Here's the participant guide. Blue sheets are automatically generated, so if you're logged in and you're in here, you're in the blue sheets; there is nothing manual to do. This should all be familiar; this is the last meeting. Here's the Note Well: these are the IETF IPR disclosure rules, the same across all of the IETF and IRTF. Every meeting you've attended should have had the same one, so everyone should be familiar with this. If there is IPR, please raise it, and the details are here.
B
We also have a code of conduct in the Note Well that everyone should be familiar with, and we have a great ombuds team to deal with anything that comes up.
B
CFRG may seem to straddle the line there, but it's very much on the IRTF side: CFRG produces informational documents that are very useful for the IETF. If you want to know more about the differences, follow this RFC. And with that, we will jump into the overview. So here is the agenda.
B
Does
anybody
have
any
agenda
bashing?
This
agenda
should
be
published
on
online
and
everyone
should
have
seen
it
before
going
once
going
twice.
Yeah,
we'll
we'll
step
through
this.
All
the
way
through
we've
got
a
very
packed
agenda.
So
please
presenters,
try
to
stay
on
time,
we'll
be
using
the
clock,
meat,
echo
mechanism
to
to
give
you
guys
everyone
here,
who's
presenting
a
counter.
B
So first, research group document status. We have quite a few active, ongoing documents here in the group, and they are in various states. First, Argon2 is in the RFC Editor's queue. In IESG review, nothing. In IRSG review we have HPKE, which we'll give a quick update on at the very end of this presentation. SPAKE2 is on draft 20; it has been updated, and the research group last call on it is done. As for active drafts, we'll be going over quite a few of these in different presentations.
B
But
here
are
the
active
ones
hash
to
curve
vrf.
We
had
our
first
research
group
last
call
for
vrf,
not
enough
responses
to
to
get
that
going.
B
So take a look on the list: if you're interested in VRF, there should be something coming on the list soon about potentially a second research group last call there. KangarooTwelve has been updated after its second research group last call. VOPRF: updated. AEAD limits: updated. You'll see an update on both of those. OPAQUE: updated. CPace: we are on draft one here. All three of those will see updates in this meeting. FROST is unchanged. And then RSA blind signatures:
B
It has recently been adopted, and we'll see a presentation about that. Some of the adopted documents that are newly expired and need an update here include pairing-friendly curves, which has been updated but is currently expired, BLS signatures, and ristretto255/decaf448. The expired drafts shown here are the ones in the list that are not newly expired, but were already expired last time.
B
We've had several errata that were opened over the last couple of months, three of which have been verified since the previous update, and a number of new errata have been opened, so these are in the queue for the chairs to validate; they are mostly against RFC 8032 and RFC 8439. And with that, we will continue with the rest of the presentations here. Here is the agenda again; I want to open it up again for any other agenda bashes. Otherwise we will hand it over to Dan Harkins. Dan?
B
There hasn't been any update on the list, and the document itself hasn't been moved over to the CFRG GitHub page, so I'm not aware of any blockers or issues.
C
"The object cannot be found", it says here. Tell me if you're seeing slides.
C
Okay, all right, thanks. So, next slide, please. So I really like HPKE; I think it's great. It's very well defined and it's complete, but, like most effusive statements these days, it's qualified. One of my qualifications is that it has some bloat: when using the NIST curves, it uses the SEC uncompressed form of serialization, which makes the public key more than twice as large as it needs to be. That could be a problem for constrained environments where every bit counts, or for people who want to use the HPKE API in the one-shot manner with a short message, because the message is going to be dwarfed by the overhead necessary to make the call.
C
So
what
I
want
to
do
is
you
know
if
we
we
can
observe
that
hpke
uses?
What's
what
rfc
1690
calls
compact
output,
where
the
diffie-hellman
shared
secret
is
the
x-coordinate
of
the
the
elliptic
curve
point
that
gets
derived
as
a
result
of
the
ecdh
function?
And
if
the
sign
doesn't
matter
for
the
secret,
then
it
doesn't
matter
for
the
public
keys
that
went
in
to
create
the
secret.
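The sign argument can be checked numerically. Below is a toy sketch, not from the draft or from RFC 6090: the curve parameters, helper names, and search logic are all mine, and the sizes are far too small to be cryptographic. It shows that k*P and k*(-P) share the same x-coordinate, which is why an x-only "compact output" loses nothing when the sign of y is dropped.

```python
# Toy check (NOT cryptographic): on an elliptic curve, k*P and k*(-P) are
# negatives of each other, so their x-coordinates agree.
p, a, b = 97, 2, 3  # tiny made-up curve: y^2 = x^3 + 2x + 3 over GF(97)

def inv(n):
    return pow(n, p - 2, p)  # modular inverse via Fermat's little theorem

def add(P, Q):  # affine point addition; None represents the point at infinity
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        m = (3 * x1 * x1 + a) * inv(2 * y1) % p
    else:
        m = (y2 - y1) * inv(x2 - x1) % p
    x3 = (m * m - x1 - x2) % p
    return (x3, (m * (x1 - x3) - y1) % p)

def mul(k, P):  # double-and-add scalar multiplication
    R = None
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

# pick any affine point on the curve, and a scalar whose multiple is affine
P = next((x, y) for x in range(p) for y in range(1, p)
         if (y * y - (x ** 3 + a * x + b)) % p == 0)
negP = (P[0], (-P[1]) % p)
k = next(k for k in (3, 5, 7, 11, 13) if mul(k, P) is not None)

assert mul(k, P)[0] == mul(k, negP)[0]  # same x: the sign of y never mattered
```

The same holds for the public keys that feed into ECDH, which is the observation the compact representation relies on.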
C
So my other qualifying statement for my effusive statement is that HPKE assumes guaranteed in-order delivery of packets, and that's because the AEAD ciphers that it defines all take nonces, and the nonce management is entirely inside of the HPKE context. That means that if you receive a packet, you have no idea what nonce was used to encrypt it, which wouldn't be a problem with guaranteed in-order delivery. But if there is loss or reordering, then things are going to stop working pretty suddenly, and there's no way to resynchronize the context if things do get out of order. So it's basically a tragic occurrence, which again won't be a problem if you have guaranteed in-order delivery, but that's not the internet. So, next slide, please.
C
So
I
don't
want,
I
want
to
fix
this,
but
I
don't
want
to
change
the
hpke
apis.
I
think
it's
good
that
users
don't
pass
nonces
through
the
apis.
I
think
it's
good
that
they
don't
have
to
care
about
the
nonsense.
I
just
want
to
make
sure
that,
in
the
event
of
loss,
we're
reordering
that
things
don't
don't
become
tragic
and
the
way
to
do
that
is
to
add
cipher
modes
that
support
deterministic,
authenticated
encryption
so
with
them.
C
With those you get what you want without going outside of HPKE, and the reason I don't want to go outside is because I like HPKE. As I said at the start, I think it's really cool and I want to use it, and I don't want to reduce it down to just a static-ephemeral Diffie-Hellman key exchange. If I wanted one of those, I would of course use one, but I want to use HPKE. Next slide, please.
C
So
I
want
to
do
I
want
to
add
some
dae
ciphers
and
there
is
a
security
there's,
a
paper
that
talks
about
deterministic
authenticated
encryption.
It's
this
robo
wins
shrimpton
paper
referenced
here
from
eurocrypt
in
2006..
It
has
the
security
proof.
If
you're
into
that
sort
of
thing,
it
talks
about
deterministic,
authenticity
and
deterministic
privacy,
and
then
it
has
a
combined
all-in-one
notion
of
deterministic
security.
So
you
know
briefly,
the
the
notion
of
authenticity
is
that
the
adversary
is
unable
to
produce
a
non-trivial
forgery
and
for
privacy.
C
It's
that
the
adversary
is
unable
to
determine
whether
the
output
from
the
oracle
is
the
message
he
sent
or
it's
random
bits
and
he's
barred
from
from
doing
the
trivial
way
of
figuring.
That
out.
The
trivial
way,
of
course,
is
to
ask
the
same
query
twice
because,
as
the
name
implies,
this
is
deterministic,
so
we
can't
conceal
whether
a
given
plaintext
aad
combination
was
encrypted
twice
in
a
sequence
of
of
ciphertext.
C
So
this
is
a
different
security
guarantee
than
the
existing
hpke
authentication
or
aed
modes
provide.
So
there
are
some
security
considerations
to
think
about
for
some
use
cases.
This
won't
matter.
If
the
messages
they're
encrypting
are
item
potent
or
they
can
figure
out
how
to
handle
replay-
or
you
know,
they
just
want
authenticated
encryption
in
their
with
their
lossy
medium.
This
should
be
fine
for
other
cases.
C
If
they're,
if
they're
able
to
add
something
new
to
the
message,
they
can
get
around
this
issue
so
as
a
for
instance,
if
you're
some
sensor
and
you're
sending
out
some
environmental
recording
every
minute
or
something
like
that
in
your
lossy
network,
you
could,
for
instance,
put
the
time
since
epoch
in
the
aad
or
if
you
wanted
to
be
fancy
and
get
privacy.
C
You could put the time since epoch as a tweak into the plaintext. This is because of the notion that, if any bit in the AAD or the plaintext is new, then the synthetic initialization vector that gets created as part of this cipher mode will look random, and therefore the output from the encryption will likewise look random.
C
So,
of
course,
if,
if
these
security
considerations
are
not
applicable
for
you
and
you
still
have
velocity
network,
you
could
of
course
use
the
hpke
apis
in
the
one-shot
mode.
But
that's
that
would
be
very
wasteful
and
kind
of
a
pain
in
the
ass.
So
I
think
this
is
a
reasonable
approach
to
obtain
resistance
to
packet
loss
and
reordering
next
slide.
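The SIV idea being described can be sketched in a few lines. This is an illustrative toy under my own naming and construction choices: it is not AES-SIV, not the draft's cipher modes, and not safe for real use (it reuses one key for both tagging and the hash-based keystream). It only demonstrates the behavior that matters here: the "synthetic IV" is a PRF over the AAD and plaintext, there is no nonce anywhere, and any ciphertext can be decrypted regardless of delivery order.

```python
import hashlib, hmac

# Toy deterministic authenticated encryption (SIV-style). NOT a real cipher.

def _keystream(key: bytes, siv: bytes, n: int) -> bytes:
    # expand (key, siv) into n bytes by hashing with a counter
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + siv + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def dae_seal(key: bytes, aad: bytes, pt: bytes) -> bytes:
    # synthetic IV: a PRF over (aad, plaintext); it doubles as the auth tag
    siv = hmac.new(key, b"siv" + aad + pt, hashlib.sha256).digest()[:16]
    ct = bytes(a ^ b for a, b in zip(pt, _keystream(key, siv, len(pt))))
    return siv + ct  # output depends only on (key, aad, pt): fully deterministic

def dae_open(key: bytes, aad: bytes, blob: bytes) -> bytes:
    siv, ct = blob[:16], blob[16:]
    pt = bytes(a ^ b for a, b in zip(ct, _keystream(key, siv, len(ct))))
    expect = hmac.new(key, b"siv" + aad + pt, hashlib.sha256).digest()[:16]
    if not hmac.compare_digest(siv, expect):
        raise ValueError("authentication failed")
    return pt
```

Sealing the same (key, AAD, plaintext) twice yields byte-identical output, which is exactly the leakage discussed above, and exactly why putting something fresh like the time since epoch into the AAD restores distinctness when that matters.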
C
In the deterministic authenticated encryption mode, excuse me, there's no nonce used and there's no nonce generated. Next slide, please. So I did write this up as a draft, and this is the draft here; please take a look. In the interest of running code, I did implement it: I added it to my implementation of HPKE, which is compliant with version 10, the latest and greatest. So my code can pass all the test vectors for HPKE, in addition to test vectors that I've created for the compact representation and the DAE ciphers. So that's the running code. So, in the interest of rough consensus, I'm wondering if there is interest in adopting this draft as a CFRG work item.
E
Yes, I think this is a really interesting idea. I really worry about different code points providing different security properties for the user. So the generalization of DAE is misuse-resistant authenticated encryption, and there, as long as one of the inputs varies across encryptions, so if your nonce stays the same but your AAD changes, or the message you're encrypting changes, you get standard IND-CCA2 security. So I feel like this is just going to end up pushing the task of making sure inputs are unique onto the AAD in the HPKE interface.
C
Yeah, so that's what I'm trying to address. If that is a concern, then you would probably have to try to either add it as AAD, or add it as a tweak into the plaintext, but not all uses would need to, for instance. But if you do, then I'm just trying to give a way to use HPKE on a lossy network, which we don't have today.
F
Or one more slide back, sorry; I'm going to try to raise my volume here. So this second bullet in that implications section, which is "make sure that you're including something new": I think this is similar to what the previous speaker was concerned about, but this smells to me like sort of the rough equivalent of passing the nonces around with the packets, which the slide before this seemed to say you didn't want to do. So it seems to me like I'm not getting the rationale here. Either we say this is for folks who really do have idempotent messages, that top bullet, right?
F
Then, you know, you can try the most recent epoch in that 256 cycle, and you're probably going to hit it, unless your packets are way, way out of order, in which case you try the next one, 256 up or down from there. But it doesn't seem like it's a huge thing: either we're asking people to effectively make up their own nonces and ensure that they're accurate and have all these properties that we expect, or we just give them a system that says "here's how you do it", and I'm not sure I understand why. If you want to argue that this is good for idempotent messaging, then I would probably agree with you on that.
C
So, right, you're adding something new, but the packets can still be reordered or lost, and you're still able to decrypt a packet that arrives, which you're not able to do with HPKE now. And the only way to get around that, if we wanted to require people to add nonces in the AAD, would be to change the HPKE APIs, and that's another thing that I really don't want to do.
C
I
kind
of
like
the
way
that
they
are
that
you
don't
have
to
care.
You
just
go.
You
know
if
you
care,
then
you
know,
I
guess
you
know,
I'm
we're.
I'm
selling
a
gun
and
if
you
shoot
yourself
in
the
foot,
then
you
shouldn't
have
pointed
it
at
your
foot
and
pulled
the
trigger.
H
That may be the least appropriate analogy here. I think I agree with DKG on this. I think that there's probably a case here for modifying the HPKE interface so that you could say "skip", or take the messages out of order; I think you've made the case for that reasonably well, and that would be the model that DKG is talking about. I have fewer opinions about the idempotent deterministic encryption thing, but this sort of middle ground where you use deterministic encryption doesn't seem sensible to me; it seems like it's more dangerous than it's worth.
A
I
Hopefully you can hear my audio now. Hello, good evening. Does this look okay to everyone? I'll go with yes.
I
Okay, hello. I'd like to talk about a duck test for end-to-end secure messaging: draft-muffett-end-to-end-secure-messaging-03. What is this internet draft about? It's about end-to-end secure messaging, including end-to-end encrypted messaging.
I
This is something which I'm sure we all already understand, but my thesis with this internet draft is that our intuitive and slightly differing understandings are no longer politically fit: they do not help us in technical policy debates, and so I would like to address this specific problem. Well, there's a saying in business that if you want to change something, you first must be able to measure it.
I
Therefore, if you want something to not change, you must also be able to measure it, in order to determine that it has not changed. And there are a lot of people who want to change end-to-end security. Three examples, for instance, include UNICEF, who in a recent paper were discussing the potential benefits of detecting abusive content being shared over end-to-end messengers, by putting filters on people's devices, on people's clients.
I
There's GCHQ, who a couple of years ago posted the "ghost" proposal, where they would like to have an invisible participant spliced into all end-to-end encrypted conversations. And there is the Indian government, which wants, in this case, WhatsApp to keep a hash of the plaintext of all messages that pass through it, in order that they can determine who it was who first shared some piece of content that they find offensive.
I
So there are a lot of proposals, and uniformly all of them say they do not break end-to-end security; they are just adding extra protection. How may we test the assertion that they do not break end-to-end security?
I
One possibility is that we could try to define end-to-end security and end-to-end encryption by saying: what algorithms do we expect it to deploy? What features do we expect it to have? Who are the actors, the people who will be using it, and what will the expectations of those people be? And this is very much the method being employed by draft-knodel-e2ee-definition. It's another internet draft, with potentially complementary goals to this one, and it specifically is looking at end-to-end encryption rather than something more narrow.
I
For instance, if we look at section 4.3, it makes commentary about private communications which are intended to be sent to intended recipients, without making explicit the fact that the recipients are the ones who define the intention, as opposed to, under some other interpretation, law enforcement deciding that they should be "intended" as well. It also talks about formally interfering with channel confidentiality as being some character of the management of the data as it passes through the algorithms and the systems.
I
I don't actually know what that means. I posted it to Twitter, and I wound up with five or six different interpretations, two of which interpreted it as a means to enable legal access, or legally permitted access. And then there's also the matter of it being reliant upon expectations, and violation of these expectations, without actually saying what the consequences of violating these expectations are.
I
So I'd like to propose an alternative, which is: rather than whether it looks like end-to-end secure messaging, does it quack like end-to-end secure messaging? And so we have to develop a test for end-to-end secure messaging, and fortunately there are three really big hints in the name. Firstly, it's got to be end-to-end secure messaging. This implies that there are ends, and we should respect them. For a first cut, I will just say that ends, we will assume, are participants, which might mean a sender or a recipient.
I
Secondly, it's messaging, and by messaging what I mean is a system where each individual message the sender composes and dispatches is, at that point, frozen, with a complete and immutable set of recipients by whom the message is readable or accessible. This is distinct from, for instance, forum-type software, where new joiners in a group would be able to look at previously sent content retrospectively.
I
What I'm getting at here specifically is forward secrecy, that sort of thing. But, as a surprise plot twist, we then define recipients, not participants, as the critical notion. What we do is we say: a recipient is any entity that can determine one bit of plaintext with more than 50% certainty. And the reason for doing it like this is that one can then say that, if any recipient was not known to the sender at the point of message composition or sending, the system does not implement end-to-end security. That's it; that's the whole test.
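The whole test can be written down as a one-line predicate. A minimal sketch, where the function and variable names are mine rather than the draft's, and "recipients" follows the draft's definition (any entity able to determine one bit of plaintext with more than 50% certainty):

```python
# Sketch of the duck test as a falsifiable predicate (names are illustrative).

def is_end_to_end_secure(known_at_send: set, recipients: set) -> bool:
    """True only if every recipient was known to the sender at the point
    of message composition or sending."""
    return recipients <= known_at_send

def backdoors(known_at_send: set, recipients: set) -> set:
    """Mechanisms leaking bits to a non-recipient, intentional or not:
    the logical inverse of a legitimate recipient."""
    return recipients - known_at_send

# An overt moderation bot the sender knows about passes; a spliced-in
# "ghost" participant fails:
print(is_end_to_end_secure({"alice", "bob", "modbot"},
                           {"alice", "bob", "modbot"}))        # True
print(is_end_to_end_secure({"alice", "bob"},
                           {"alice", "bob", "ghost"}))         # False
```

The point of phrasing it this way is falsifiability: a single entity in the second set but not the first is enough to refute an end-to-end security claim.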
I
There are some contextual things which I'll get into in a second, but it's very simply a falsifiability test. This also provides us an obvious definition of "backdoor": a backdoor is any mechanism which leaks bits to a non-recipient, irrespective of whether it is intentional or unintentional. It's basically the logical inverse of a recipient, a legitimate one.
I
It also means that we can do surveillance or management or moderation or helper bots or compliance, or something like that, so long as they are overt participants. Rather than the GCHQ ghost scheme with invisible spooks, what we actually wind up with is GCHQ overtly being there "for your own safety", or your local police force, or whoever it might be, the platform safety team of the platform that you're using; just so long as it is overt, it's fine. Nitpicks and edge cases: I'll go through these really quickly.
I
Firstly, there is some metadata which is sensitive and needs to be protected, for instance content lengths; we should protect it. Secondly, there is some metadata which, regrettably, is outside of the scope of protection, or just hasn't been implemented. That's kind of okay, so long as it doesn't leak anything about the content; it's an old problem, thematic metadata rather than descriptive metadata.
I
Thirdly, encryption of the messages is not actually necessary. Some growing set of new messenger solutions don't actually do content encryption; they don't encrypt the messages. This is why we call it secure messaging rather than encrypted messaging: things like Ricochet just pass messages directly, point to point, over admittedly end-to-end encrypted Tor on your network links, but they don't bother with message encryption, which makes it somewhat distinct.
I
Fourthly, the set of recipients varies between centralized and decentralized solutions. This is a technical nit: when you're using something like email or Ricochet, you as the sender are the person who defines who the recipients are, literally at the point of composition. When you're using something like Signal or WhatsApp, there is a shared context of group membership, and that is what gets used to generate the set of recipients.
I
Fifth, groups should not be allowed to have people injected into them; they should be closed, unless they open themselves up for public subscription. Sixth, you can cut and paste and forward and quote and cite old messages. This is an exception to the fact that you can't see retrospective content: if it gets re-upped, reboosted, then yeah, you'll see it. Seventh:
I
This is a technical nit about self-referentiality, which is that if a participant is also a platform, they have to not cheat. If they have a MITM capability to see plaintext without acting as a participant in the protocol, they're cheating, and that's wrong. And then, eighth and finally, this is the big question: "end" doesn't just mean participant, it means trusted computing base.
I
If people are old enough to remember the Orange Book, it's actually the set of systems which the user deems are in their trusted path, the things that they can rely on and depend upon. There's a marvelous paper which goes into this, from 2011, by Clark and Blumenthal, called "The End-to-End Argument and Application Design: The Role of Trust".
I
It's a fantastic paper, and it talks about trust-to-trust, trust zone to trust zone, being the model for end-to-end going forward; I'm surprised that it hasn't been referenced more in other research. This means that when Alice accesses her messenger over RDP, it is not a failure of the end-to-end security. If Bob has a hacked app, it is not a failure. If Carol's phone storage is forensically analyzed at rest, it is not a failure. If Dave's keyboard or grammar app is leaking the keystrokes to local authorities, that is also not a failure.
I
These are all part of the trusted path, or they should be, and we've got a lot of collateral which helps us understand this concept, although the debate about these topics, especially regarding China and Android keyboards and so forth, does continue today.
I
Also from that paper, I am delighted to have rediscovered RFC 2804, which lays out the IETF's position on wiretapping: that the IETF does not consider requirements for wiretapping as part of standards development. And that's great. Even though we are now moving from the world where wiretaps were done centrally to one where they are being placed at the ends, on client devices, it's not something for us to worry our heads about; we should focus instead on delivering standards regarding what is and is not end-to-end secure, and how things should work.
I
Where I would like to go with this test: I would like to go for CFRG adoption, get it accepted, and then refine it until we achieve rough consensus, get it thoroughly battle-tested, and ship it as an RFC. Not "the" RFC, but one falsifiable test for assertions regarding the end-to-end security of a system, and anything that fails to meet that test would just be not compliant with this RFC. There may be other RFCs; there may be other tests.
I
I am content with that. And then the goal of this, overall, is to inform user choice, and to assist clarity in policy discussion and in the design of new systems and so forth. I think that's all of my slides, so thank you very much, and I'll take questions.
J
I
I am delighted to look into issues like that. From my experience working on RFC 7686, where I was very much an author under the tutelage of the inestimable Mark Nottingham, I learned that it was much simpler and much faster to put five RFCs through IETF procedure, each of which does two things, than one RFC that does ten. And so, following that experience, I've sought to narrow the specification of what I'm targeting, and I'm targeting it specifically at messaging systems as I have defined them, attempting to capture PGP, Signal, WhatsApp, Ricochet, Briar, Wickr and things of this nature, and I'm trying to avoid scope creep, so that it's possible to ship this within a reasonable time.
I
This is not to exclude the possibility of taking it forward and expanding it. This was mentioned by, I think, Martin in an email last night, or maybe I've misremembered that, I'm afraid, but I'd like to be able to expand it once we have a foundation stone.
I
It's regrettable that I've had to cut this deck down, because the previous version of the deck that I submitted was a little bit too large, and Stanislav asked me to reduce it a bit. I had a reference to "Reflections on Trusting Trust" in there, for this reason.
A
K
Yeah, hi Alec, it's Matt. So my concern is that you're choosing a new name for something that the rest of the world already associates with end-to-end encryption. A lot of the content you have is really quite good, and I think it provides a much clearer and more concise description, and the ability to make a decision as to whether something is secure.
I
I can see where you're coming from, and it's something which I wrestled with last year, in the early development of what was a Medium post at the time, before it turned into this: whether it should be "end-to-end encrypted" or "end-to-end secure" messaging. Ricochet was the thing which twisted it for me: its messages are not encrypted, so calling them end-to-end encrypted seemed odd, even though you and I both deeply know that this is going over Tor and that there is end-to-end transport encryption.
But if we start looking at transport layers, then we could tie ourselves up in knots by asking whether Signal running over Tor is the same as Signal running over non-Tor, because we have changed the transport.
I
It's a layer-boundary issue, and maybe I'm just being a little bit anal about it, but I think it's clearer this way. It's regrettable that there's diversity of opinion, especially seeing as we are trying to build various standards, but this is just a test to achieve a specific goal, and I have not yet come to the sense of there being actual consensus to land on. Maybe this is part of the discussion we should be having during the development of this as a draft.
I
From my previous existence at Sun Microsystems Professional Services, I have seen systems with extremely secure networks implemented by burying six feet of armored conduit under a major London road, with no encryption at either end, because of latency. It turns out you don't always need encryption to achieve this, although it's terribly helpful.
H
Thanks, Alec. I agree with Matthew about the scope. I do think this applies to Ricochet exactly as you have proposed it; I think it applies to a number of different protocols, exactly as you have proposed it. And using end-to-end encryption as the thing that you're attempting to define would be far more useful than trying to come up with a new term, because that's the term that people use.
I
I'm very sympathetic to this, but it would also paint a large target on this draft and cause expansion of the scope, and that would mean several months, possibly even a year or two, in discussion.
H
Anyway, sorry: that's just how these things work, unfortunately.
I
I'm still scarred by the experience of 7686, and being berated for having taken a very large draft that tried to register many domain names and cut it down to just .onion.
H
Yeah, I think in this case that would be necessary, and I also would like to have this happen in the IETF, rather than the IRTF.
I
That's something I'm open to discussion on; I was taking some advice from Rich and a few other people, and I'm open to discussion about this. I don't know where it should land myself, but, looking at the charter of CFRG, there is the idea of shipping informational RFCs in the tradition of RFC 1321.
F
Thanks, Alec, I appreciate the talk: good, clear thinking, and I like the fact that you made it focus on a test; I think that's useful. As far as Ricochet versus the other messengers, I think what you're pointing out here is just that store-and-forward versus online connectivity is irrelevant to the definition, and I think that's fine; it's a nice property that falls out of the way you've defined it. My concern with the term "end-to-end encryption" is that it doesn't encompass all the other cryptographic properties that we want besides encryption, but I'm also aware that the term...
G
F
I
I am open to this idea. I'd like to see some degree of acceptance, and to turn it into a draft first, rather than faffing with it beforehand, not least because I believe we're time-critical, looking at some of the pressures being brought by various governments around the world.
A
L
Okay, hello, my name is Armando Faz, and I'm going to present the status update of the VOPRF document. So, first of all, this is a brief reminder about what an OPRF is: the oblivious computation of a pseudorandom function. Basically, this is a two-party protocol where the client interacts with a server in order to compute a pseudorandom function under a private key that is held by the server.
L
So
there
are
two
main
properties
for
this
protocol.
We
want
that
this
interaction
be
oblivious
in
the
sense
that
the
client
can
only
learn
the
the
output
of
this
build
of
this
prf
without
learning
anything
about
the
private
key
of
the
server.
L
But
at
the
same
time,
we
want
that
this
server
doesn't
learn
anything
about
the
input
that
the
client
provides
does
doesn't
learn
about
the
output
of
the
of
the
computations,
so
yet
another
property
other
than
the
the
this
basic
mode
is
actually
verifiability
in
which
the
server
in
this
case
in
this
case
needs
to
commit
to
the
key
that
it
has
and
gives
to
the
client
a
proof
that
the
the
actually
the
prf
was
completed
using
this
private
key.
L
So
in
that
case
that
the
client
can,
after
verify
that
this
proof
actually
holds
well
with
this
in
mind.
So
so
what
is
new
from
the
latest
version
of
the
draft
from
the
previous
meeting?
So
basically,
how
does
the
server
compute
this
prf?
There
is
a
blind
mechanism
for
the
client
in
order
to
to
send
the
information
in
order
to
hide
that
information.
So
we
there
are
two
main
methods
for
blinding
this
information.
One
is
multiplicative
blending
or
additive
blending.
L
Multiplicative
refers
to
where
the
client
tries
to
multiply
using
like
a
scalar
multiplication
in
the
group,
and
then
it
can
hide
this
information
or
is
additive.
It
only
adds
another
point.
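The multiplicative-blinding round can be sketched end to end. This toy stands in the multiplicative group modulo a small prime rather than an elliptic curve, with parameter sizes and hash mapping chosen by me purely for illustration; it is not the draft's ciphersuite or wire format. The client blinds H(x) with a random exponent r, the server applies its key k, and the client strips r to recover H(x)^k without the server ever seeing x.

```python
import hashlib

# Toy (non-verifiable) OPRF round with multiplicative blinding. NOT cryptographic.
q = 1019               # prime order of the subgroup (toy size)
p = 2 * q + 1          # p = 2039 is prime, so squares mod p form a group of order q

def hash_to_group(x: bytes) -> int:
    # map the input to a subgroup element by squaring a hashed value
    h = int.from_bytes(hashlib.sha256(x).digest(), "big") % p
    return pow(h, 2, p)

k = 777                # server's private PRF key
x = b"client input"

# Client: blind H(x) with random r (fixed here for reproducibility)
r = 123
blinded = pow(hash_to_group(x), r, p)      # sent to the server

# Server: raise the blinded element to its private key
evaluated = pow(blinded, k, p)             # sent back to the client

# Client: unblind with r^{-1} modulo the group order q
output = pow(evaluated, pow(r, -1, q), p)

# The client holds H(x)^k; the server never saw x or the output
assert output == pow(hash_to_group(x), k, p)
```

Additive blinding would instead add a known point (here, multiply by a known element) before sending, which is the variant whose safety conditions were discussed.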
L
So
in
the
previous
meeting,
where
there
was
raised
at
this
point
whether
whether
it
was
secure
to
use
added
blinding
or
on
on
whether
what
are
the
conditions
in
which
this
can
be
used,
so
we
arrive
to
a
conclusion
so
in
in
which
the
verb
variable
mod
they
can
use
added
to
blind
it,
which
is
it's
still
okay,
to
use
it,
provided
that
this
that
the
key
was
committed
and-
and
this
is
a
secure
way
to
do
it-
this
doesn't
happen
on
the
on
the
basic
mode,
and
in
this
case
we
need
to
use
a
multiplication
multiplicative
blinding
some
other
changes
on
the
or
updates
on
the
draft
was
a
where
more
have
more
explicit
definition
of
the
errors.
L
Another
point
that
has
changed
is
that
we
adopt
the
is
shake
256
for
the
the
cav
for
48
cypher
switch,
because
usually
they
are
these
two
pair
of
algorithms.
Are
it's
common
to
find
a
pair
it
together?
L
We also have a generalization of the zero-knowledge proof of discrete logarithm equality (DLEQ), which can be useful for other protocols. Finally, we updated the test vectors and made some editorial changes. With this in mind, there is one remaining issue related to metadata, and for that I invited Sophie, who will be explaining this new approach to using metadata in the VOPRF. So, Sophie, over to you.
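The DLEQ proof mentioned above is, in its simplest form, a Chaum-Pedersen proof. The following toy sketch uses an assumed tiny subgroup and an illustrative hash-to-challenge encoding, not the draft's serialization:

```python
import hashlib

# Toy Chaum-Pedersen DLEQ proof: show log_g(pk) == log_X(Y) without
# revealing k. Tiny subgroup (p = 23, q = 11); illustrative only.
p, q, g = 23, 11, 4

def H(*vals):
    # Illustrative challenge derivation; the draft defines its own encoding.
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

k = 7
pk = pow(g, k, p)
X = pow(g, 5, p)
Y = pow(X, k, p)             # the server's PRF evaluation

# Prover (server): commit with fresh randomness t, respond to challenge c.
t = 6                        # fixed here only to keep the sketch deterministic
A, B = pow(g, t, p), pow(X, t, p)
c = H(g, pk, X, Y, A, B)
s = (t - c * k) % q

# Verifier (client): recompute both commitments from (c, s).
ok = (A == pow(g, s, p) * pow(pk, c, p) % p and
      B == pow(X, s, p) * pow(Y, c, p) % p)
assert ok
```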
G
Thank you, Armando. Next slide, please.
G
Yes. So right now there is a proposal, a pull request open against the draft in the GitHub repository, on how to add public metadata into the protocol. The reason one would want to add public metadata into the underlying primitive is that certain applications of the VOPRF are trying to use metadata in order to prevent certain attacks, or to solve certain implementation blockers that might exist.
G
So the proposal that actually adds public metadata into the PRF is called a POPRF, and it is defined in a paper that was recently published on eprint; you can read it if you want more information about that construction. It uses the same specific functions for the API as the VOPRF. As you can see here, the only difference is that you have metadata, labeled t in this diagram, which you add to the blinding function.
G
You add it to the blind evaluation function, or Evaluate as it's called in the protocol, and then you do two different operations, because you add the secret key to the hash of the tag and you exponentiate to this new value. The last function, the unblinding Finalize, which is defined as VerifiableFinalize or Finalize in the current state of the draft, also uses a new operation: instead of raising to the r, you raise to one over it. The verification and the zero-knowledge proof used for generating and verifying the proof are not changed.
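The shape of this construction can be sketched in a toy group: the server's effective exponent depends on both its key and the metadata. The parameters and the stand-in hash value below are assumptions for illustration; the actual construction operates over a prime-order elliptic-curve group with defined serialization and hashing.

```python
# Toy sketch of folding public metadata t into the evaluation key:
# the server evaluates with 1/(sk + H(t)) instead of sk.
# Tiny subgroup (p = 23, q = 11); illustrative only, not the draft's spec.
p, q, g = 23, 11, 4

sk = 7
h_t = 3                          # stand-in for a hash of the metadata t
e = (sk + h_t) % q               # per-metadata evaluation exponent

X = pow(g, 5, p)                 # stand-in for the hashed input H(x)
r = 4
blinded = pow(X, r, p)           # client blinds multiplicatively

# Server: evaluate with the inverse of the metadata-dependent exponent,
# producing blinded^(1/(sk + H(t))).
evaluated = pow(blinded, pow(e, -1, q), p)

# Client: unblind with 1/r; the result depends on both the input and t.
out = pow(evaluated, pow(r, -1, q), p)
assert out == pow(X, pow(e, -1, q), p)
```

Because the effective key mixes sk with H(t), the simple "divide out pk^r" trick of additive blinding no longer applies, matching the point made later in the talk.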
G
Thank you. So, from a practical perspective, this is how it looks: the server and the client input their own metadata. This metadata can be of any length, and it can be optionally added by the client and the server. As currently written in the PR for the CFRG draft, it is completely fine even if the client or the server adds no metadata, because there is a mode in which no metadata string is added, and an execution of the protocol then runs without it.
G
So you are not required to input metadata. As you see here, the server and the client are the ones that input the metadata. And just to give you a small consideration, as I already said: the metadata can be of any length, but this should be taken into account by any application that is going to use it.
G
So, yes, it is not bounded to any size limit, but that probably has to be taken into consideration for privacy reasons and also for usability reasons. And, as I said, metadata can be added by either the server or the client, or both, and how the server and the client determine what metadata to add is something that can be done outside of the protocol itself, or it can be sent alongside during the protocol execution. Something to note:
G
As Armando said at the beginning of the presentation, right now we are adding additive blinding into the VOPRF construction. This POPRF, the new construction that adds public metadata, does not work with additive blinding, because it is not only using the public key for the evaluation: it is actually using a generator raised to the secret key plus the hash of the metadata.
G
So unfortunately it doesn't work with additive blinding, but it does work well with batching, and we even have a proof of concept right now staged on the repository, written in Sage, working in both VOPRF mode and OPRF mode. And with that, next slide.
L
A
Okay, that's fine. Let's move on to the OPAQUE update, please, Chris.
D
All right, so this is just a quick update for OPAQUE; it should be significantly less than 15 minutes, I hope. For those who don't know, OPAQUE was the asymmetric PAKE selected in the competition that was held recently, I forget exactly when. At a very, very high level, it basically takes a bunch of different cryptographic protocols and primitives and stitches them together in such a way that an aPAKE falls out at the other end.
D
In particular, the two most important constructions are the OPRF, the protocol that Armando was just describing, and an authenticated key exchange protocol like SIGMA or 3DH or what have you; and then there are also, sprinkled throughout, some other dependencies on things like hash functions, encryption algorithms, and so on.
D
The protocol effectively runs in two phases. There is an offline registration phase, wherein the client basically takes their username-password pair and registers a set of credentials with the server, storing those credentials on the server. Then there is an online phase wherein the client uses its password to effectively recover the credentials, recovering them, decrypting them, re-deriving them, what have you, and then uses those credentials in a mutually authenticated AKE, wherein they effectively demonstrate possession of the password.
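The two phases can be illustrated with a very rough sketch of the internal mode. The OPRF is collapsed here into a keyed hash that the server applies directly, purely an assumption for brevity; in OPAQUE the evaluation is blind and the server never sees the password. The function names and key derivation are illustrative, not the draft's API.

```python
import hashlib, hmac

# Very rough sketch of OPAQUE's two phases (internal mode). Illustrative
# names and derivations only.
def oprf(server_key: bytes, password: bytes) -> bytes:
    # Stand-in for the blind OPRF evaluation; in the real protocol the
    # server never sees `password`.
    return hmac.new(server_key, password, hashlib.sha256).digest()

def derive_keypair(seed: bytes):
    # Stand-in for internal-mode key derivation: nothing secret is stored,
    # the key pair is re-derived from the OPRF output each login.
    sk = hashlib.sha256(seed + b"private").digest()
    pk = hashlib.sha256(sk + b"public").digest()   # placeholder "public" key
    return sk, pk

server_oprf_key = b"per-user-oprf-key"

# Registration (offline): client registers its derived public key.
randomized_pwd = oprf(server_oprf_key, b"hunter2")
_, client_pk = derive_keypair(randomized_pwd)
server_record = {"client_pk": client_pk}

# Login (online): the same password re-derives the same key pair,
# which is then used in the AKE (3DH) to authenticate.
sk2, pk2 = derive_keypair(oprf(server_oprf_key, b"hunter2"))
assert pk2 == server_record["client_pk"]

# A wrong password derives a different key pair, so authentication fails.
_, wrong_pk = derive_keypair(oprf(server_oprf_key, b"letmein"))
assert wrong_pk != server_record["client_pk"]
```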
D
So if you know the username and password, you can run the login flow and authenticate; if you don't, you can't. That's effectively the gist of the document. Because OPAQUE is so generic in terms of its shape and structure, you could really instantiate it in a number of different ways, using different OPRFs and different AKEs.
D
Rather than have a collection of options, we decided to pick exactly one AKE and pin down a couple of algorithms correspondingly to instantiate it. This particular variant we call OPAQUE-3DH, and that is what's specified in the document currently. There is some text in the appendix which describes, roughly speaking, how you would move towards something more like SIGMA, where you're using signatures rather than key shares to authenticate. It also has accommodations for HMQV.
D
Should you want to go down that route, TLS 1.3, being a SIGMA-like construction as well, is also applicable here. Between the last meeting and this meeting, some pretty substantial updates went into the draft. In particular, there is now this notion of an internal and an external mode for the protocol, and this relates to what credentials are actually input to the protocol and then stored on the server.
D
In the external mode, the client supplies its own key pair; in the internal mode, rather, what happens is that the username-password pair is used to derive a public-private key pair during the online flow, which is then used to authenticate. These modes just differ in terms of the assumptions about what the client has, what the interface looks like for implementations, and so on. The internal mode is certainly much easier from an API perspective.
D
Another big change that went in was to add new mitigations for what we call client enumeration attacks.
D
An enumeration attack is where an adversary interacts with the server in the online login flow to effectively try to determine whether or not a particular username has an account on the service. Depending on your use case, that may or may not be a problem. You can imagine, if this were deployed in a web scenario, an adversary trying to learn whether a particular user has an account on example.com; exploiting this particular attack to learn that piece of information might be serious, given the circumstances of what example.com actually is.
D
So the protocol now accommodates that by basically making the server's response during the login flow effectively randomized, such that if you have the right password, you're able to de-randomize the server's response and process it accordingly. If you don't have the right password, all you see are random bits, so the server's response leaks nothing about whether or not the username happened to be registered with that particular server.
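A common way to realize this kind of mitigation, sketched below with illustrative names (`records` and `fake_seed` are not the draft's identifiers): for an unknown username the server answers with a deterministic, random-looking record derived from a long-term secret seed, so the response alone does not reveal whether the account exists.

```python
import hashlib, hmac

# Sketch of an enumeration mitigation: unknown users get a stable,
# random-looking fake record. Illustrative only.
records = {"alice": b"\x01" * 32}      # registered users' (opaque) records
fake_seed = b"long-term-secret-seed"   # server-side secret

def login_response(username: str) -> bytes:
    record = records.get(username)
    if record is None:
        # The same unknown username always yields the same fake record,
        # so repeated probes can't distinguish fake from real-but-static.
        record = hmac.new(fake_seed, username.encode(),
                          hashlib.sha256).digest()
    return record

r1 = login_response("mallory")         # unknown user
r2 = login_response("mallory")
assert r1 == r2 and len(r1) == 32      # stable, same shape as a real record
assert login_response("alice") == b"\x01" * 32
```

As the next remarks note, this only closes the response channel; the computation must also be constant-time across the two branches.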
D
Of course, there are some obvious caveats here: timing side channels, with respect to how long the server takes to actually compute the response, are applicable and relevant, and the draft has some text giving guidance on how to avoid those side channels. So implementations need to be careful with respect to how the server's response is generated, but in general it's not too bad, and to help validate that this particular code path is tested...
D
...there are what we call fake test vectors at the end of the document, such that you can make sure that that particular code path is hit and working correctly. The last big change that we made was in the AKE phase, the 3DH in particular. We used to have slots from the client to the server and from the server to the client for applications to plumb in arbitrary application-specific info; you might think of this like extensions in the world of TLS.
D
This was deemed too complicated and also somewhat annoying, because it meant that the key exchange messages for OPAQUE were no longer a fixed size; they depended on what the application input actually was.
D
So to remedy this, we extracted something from the SPAKE2+ draft, wherein there is now just an agreed-upon context string that the client and server, at the application layer, input to the protocol. It is fed into the transcript and the key schedule used ultimately to derive the AKE output keys.
D
But it's not included in the protocol messages sent over the wire, which has the same effect as allowing the client and server to specify additional information for the AKE, but has the benefit of not inflating or changing anything on the wire. So now everything is a fixed length during the AKE, and it's really quite simple to implement, especially if you use one of the recommended configurations. Along with these major updates...
D
...there are a number of minor updates as well: documenting the trust boundary around receiving potentially untrusted messages on the wire and what validation you should do on those particular messages; alignment with the latest updates to the VOPRF draft to make the test vectors fall into place; and then, of course, general editorial improvements.
D
We think they're improvements; they may not be. As for next steps, I basically think we're at a point now where we consider ourselves to be feature complete. It would be great to have more implementations; we have some in Go and some in Rust, and adding some in C and C++, especially if TLS stack integrations are a future trajectory for this particular protocol, would be useful: built on OpenSSL, BoringSSL, what have you.
D
There have also been requests for Node or TypeScript/JavaScript implementations, so having a pile of implementations that are safe to use would be great. At this point, we think we just need additional review, specifically review from the crypto review panel; I've been discussing with the chairs what the procedure is in terms of making that happen.
D
An action item for us, I guess in parallel with figuring that out, is going through the reviews that came in during the selection competition and just making sure that there are no outstanding issues that we have not addressed, but I don't think that blocks anything. And then, after that, I think we're ready to ship, unless there are any major features or blockers that people think we should add.
A
D
Yeah, sure. So the security proofs assume that all things sent over the wire are valid group elements and so on, and there are particular strange scenarios where an attacker might send a message over the wire that is effectively the identity element, for example, and then cause the server to run the OPRF evaluation on the identity element, which definitionally means the server's evaluation does nothing. That would allow someone who learned the output of the OPRF to run a dictionary attack, because there is no private salt or anything integrated.
D
So the text around this basically says: if you get something sent over the wire that is the identity element, you just drop it on the floor and return an error, because by the spec the client should never send the identity element as its OPRF message. That's what I mean about the trust boundaries; it has no effect on the proofs.
A
Okay, thanks. And another question: do you, or Hugo, or someone, want to add anything to the existing security proofs? Because I understand that the draft is close to being sent to us, the chairs, for the additional crypto panel review and for the last call.
A
But maybe there is some advancement or some new research on the security proofs by Hugo or by someone, or is everything done and nothing left?
D
To my knowledge, everything is done. The only interesting bit is that in the existing UC formalization from the OPAQUE paper, the registration functionality is not interactive: it just assumes that the server creates the client's envelope, the encrypted credentials, honestly, and stores it. Whereas in this particular protocol, of course, we use an interactive protocol between client and server, running the OPRF to encrypt the credentials and store them on the server.
D
However, there is some follow-up work coming that demonstrates that this particular variant is safe. So yes, if you just look at the OPAQUE paper, there's more stuff to be done, but it has been done and it should be coming out any time now.
A
Okay, thanks a lot; I think it's great work. Any questions, any comments? Going once, going twice. So, Chris, now let's proceed with the RSA blind signatures.
D
Just a quick question before we move on: what are the next steps for us for this particular draft? Is there anything we should do in terms of cutting a new version, or will the chairs confer?
A
The next obvious step is another crypto panel review, to check that all the questions that were raised during the PAKE selection process have been addressed, and I think we can start it very soon. So let me discuss this with Alexey and Nick, maybe next week or in two weeks, and then I think we'll start the call for crypto panel reviews. After that we'll have some more questions, and after that we'll be very close to research group last call.
A
D
Right, okay. So this is an update to the blind signature draft that was adopted last time. As a reminder of what the protocol does and what it looks like: this is just your classic blind RSA between client and server. The effective flow of the protocol is this.
D
So the client inputs the server public key and a message, runs a Blind function to produce a blinded message and an inverse, and sends the blinded message over to the server. The server does an evaluation on that message and sends back a blind signature. Then the client finalizes, effectively unblinding, and internally verifies that the signature is correct using the corresponding public key. If all that works out fine, a signature...
D
...pops out at the other end. Pretty simple, pretty straightforward: no state required on the server side, and of course there is state required on the client side between the Blind and Finalize steps.
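The flow above can be sketched with textbook RSA over toy parameters. This shows only the blind/unblind algebra; the draft signs a PSS-encoded message under a real-sized modulus, which this sketch omits.

```python
# Toy textbook-RSA blind signature matching the flow above. Tiny demo
# key (p = 61, q = 53) and no PSS encoding; illustrative only.
n, e, d = 3233, 17, 2753            # toy RSA key pair

msg = 1234                          # stand-in for the encoded message

# Client: Blind(pk, msg) -> blinded message, plus r kept for unblinding.
r = 7                               # blinding factor, coprime to n
blinded = (msg * pow(r, e, n)) % n

# Server: BlindSign(sk, blinded) -- sees only the blinded value.
blind_sig = pow(blinded, d, n)

# Client: Finalize -- unblind, then verify before outputting the signature.
sig = (blind_sig * pow(r, -1, n)) % n
assert pow(sig, e, n) == msg        # ordinary RSA verification passes
```

The output is an ordinary RSA signature, which (in the real protocol, with PSS encoding) is why it verifies with any standard RSA-PSS stack, as noted later in the discussion.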
D
There was a request to be able to support the full-domain-hash (FDH) variant alongside the PSS encoding variant, and our response was to just support PSS but allow the salt length to be zero, which is currently valid for PSS, where the salt length is a parameter. We added a test vector for that particular variant, where the salt length is zero and you still use the PSS encoding function, and we added some API guidance around it.
D
In particular, considerations around when you might want to use the deterministic variant versus the randomized variant. The rationale for the deterministic variant is a scenario where you don't want the PSS salt itself to be a covert channel for messages that the server should not otherwise learn.
D
We went back and discussed with some others off-list what the existing analysis tells us about this particular construction, especially when using PSS as the encoding function, and we have quite high confidence that the analysis that was done back in the context of FDH readily applies to PSS; we've updated the text accordingly to match. We've also done some editorial cleanup and made some small changes to help interop be a bit more smooth.
D
So, for example, a recent change on the server side: when you get a message over the wire that doesn't happen to be a valid representative of a message that can be signed, that is, it's not an integer between zero and n minus 1, where n is the RSA modulus, you just abort, because a client following the protocol should never send such a message.
D
So it seems fine on the server side to just abort. There have also been a number of different implementations added; similar to HPKE, they are listed in the draft repository README. You can also click on the link, if you have access to the slides, to follow and see them there.
D
There are three open issues remaining on the draft. I've ranked these in terms of their contention, I guess in increasing order.
D
So for that one, given its strong connection to the literature, our proposal is to leave it as is. The second open issue is whether or not we should extend this to support partially blind signatures. It is possible to do this with RSA, but it's sort of funky: the basic idea is that you pin the modulus and then from that you try to derive different public-private key pairs.
D
That is, different exponents e and d, under the constraint that each private exponent has a unique prime factor that is not shared with the other private exponents. The process of finding these is, I guess, ad hoc.
D
You pin your modulus and then you try to search the space until you find candidate key pairs, and then you output them and those become the keys. It's problematic in the sense that it's an atypical use of RSA, where you have a single modulus and then multiple different public and private exponents shared with it.
D
But it's also problematic from a performance perspective, because it means, with very high probability, that the public exponent is not going to be three, which is typically chosen for performance reasons. You're likely going to have significantly larger exponents, so actually doing the RSA operations is going to take a much longer amount of time.
D
In issue 29 I did a quick benchmark to demonstrate that this is indeed true, using the search algorithm that I found in the literature that describes finding these various public and private key pairs, and the results are discouraging.
D
So basically, given that this is awkward and not a common practice, that it probably needs additional security analysis in terms of whether or not it's safe, given that the analysis was done in the early 2000s, and that the performance is bad (I can't remember if I said that already), the editors would propose that we just close this as well with no resolution.
D
It's just not worth the complexity and the additional work, we think. The last open issue is whether or not we should extend the scope, which was a question raised during the adoption call: whether we should focus just on RSA blind signatures or try to loop in other signature schemes, like those based on ECDSA or Schnorr.
D
However, given the recent attacks on blind signature schemes based on ECDSA and Schnorr, it seems clear that finding a safe variant of those is still very much an open research problem, and so it seems a bit too soon, or premature, to try to fold them into this particular document.
D
So the conclusion that we have for all three is to just close them out, at which point I think the document will be in very good shape for research group last call, or crypto review, or what have you: basically good to go.
A
Thanks a lot, Chris. I would share my opinion: I understand that the current status of the analysis of RSA blind signatures, from this and the previous works, is quite stable, so this is quite a mature set of results, in contrast to a lot of ECDSA- and Schnorr-based blind signatures, where there are a lot of schemes that have been broken within a couple of years after being created.
A
So, with my chair's hat off, I would support leaving this as only RSA blind signatures, without adding any other stuff. But I'd be happy to see someone from the group provide some preliminary review of the current draft, because it was only adopted this May, or this April, and the draft is quite young, and I would like to have more people in the group seeing it and working with it.
A
Please, Daniel, yes.
D
Yeah, I agree; any feedback at this point would be great. We've had a number of reviews from people who've implemented it, and that's caught a number of editorial and typo issues.
A
F
Are those interoperable implementations? And you also mentioned a few caveats, like not responding to a peer that sends the identity element.
G
D
No, it's okay. For that one, yes. Frank Denis, one of the authors of the implementations, I believe is testing interop against all the ones that are available, using the reference Python implementation that's in the draft and then his implementation.
D
I can certainly confirm with him, but as far as I know, yes, they've all been tested for interop. Because the output is just a signature that's verifiable with any PSS stack, it's really just: given these inputs on the client side, can you produce this expected signature on the output side?
G
F
D
Okay, we have another draft update. This is the draft on specifying usage limits for authenticated encryption algorithms, with Martin and Felix.
D
So since the last version there hasn't been a lot of change, though Martin did a lot of work revisiting some previous papers and adding analysis for AEADs that are used in the context of TLS 1.2, specifically those AES-GCM variants that don't do nonce randomization as is done in TLS 1.3. Previously we were just assuming that the blessed ciphers we use in TLS 1.3 and QUIC are the ones we want to include in the analysis.
D
But I forget where the request came from for the TLS 1.2 variant; regardless, it was a reasonable request and Martin did the work to add it, so thank you. Martin also added some limits to the draft based on reasonable parameter sizes, like reasonable fixed packet sizes and reasonable bounds for the advantage and whatnot.
D
So if you're using this document as a consumer, and you just want to know what constant to plug into your protocol, hopefully that table will help you. Then there was also some various editorial cleanup, and in the next revision, which we hope will come soon, we're going to fix some issues that we both missed in the TLS 1.2 numbers.
D
Felix, our resident expert, did a more careful pass than we did, and he has a PR up that we just need to land. Felix and some other students also did an analysis of ChaCha20-Poly1305 in the multi-user setting and are going to send a PR to the draft adding those limits alongside the GCM ones when it's available, so we're waiting for that to complement the existing GCM limits that we have.
D
There are two open issues that are probably worth some discussion. The first is to account for the length of the additional data in the limits. We previously just made the simplification that the additional data was small enough that it was dominated by the size of the plaintext, and that's also been the pattern taken by various analyses...
D
...in particular the one that we based our limits on, which folds the number of underlying primitive queries together to be basically the sum of the plaintext and additional data.
D
But in speaking with the authors, in particular Stefano, it became clear that the amount of work to separate these two out is not that much, and so we're going to work with him to hopefully split these two things up and get a bit tighter bound, expressed in terms of both the plaintext size and the AAD size.
D
It's unclear how we'll actually spell the PR and update the document: whether the main document will be written in terms of these two parameters, or whether this will just be an appendix, or whatnot. At the end of the day we want to be pragmatic and make this maximally useful for people who don't really understand the technical details, so the fewer numbers and parameters there are, probably the better in terms of actually using it correctly.
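As an illustration of how a consumer turns such a bound into a concrete message limit, here is a sketch using a generic birthday-style bound. The formula's shape and constants here are assumptions for illustration only; the per-algorithm bounds in the draft differ.

```python
import math

# Illustrative only: given a target advantage 2^-p_log2 and a maximum
# message length in 128-bit blocks, solve a birthday-style bound of the
# assumed shape ((l+1) * q)^2 / 2^(n+1) <= 2^-p_log2 for the number of
# protected messages q. Not the draft's exact per-algorithm formula.
def max_messages_log2(p_log2: int, max_blocks: int, n: int = 128) -> int:
    # q <= 2^((n + 1 - p_log2) / 2) / (max_blocks + 1)
    q = 2 ** ((n + 1 - p_log2) / 2) / (max_blocks + 1)
    return int(math.floor(math.log2(q)))

# e.g. target advantage 2^-57 with 2^10-block (16 KiB) records:
limit_log2 = max_messages_log2(p_log2=57, max_blocks=2**10)
assert 0 < limit_log2 < 128   # a usable per-key message budget
```

The point of the draft's tables is to do this arithmetic once, per algorithm, so consumers only read off a constant rather than re-deriving it.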
D
So the other issue is on SIV. There is a request to add it, and basically it's an action item that needs work. Given that it's not currently employed by any of the major IETF protocols, our conclusion would be to close this and punt it to a future draft, should someone feel inclined to do that initial analysis, extract it from the relevant work, and document it in the same sort of context we've done here.
D
Alternatively, we very much welcome someone contributing a PR that actually adds these limits on our behalf; we just haven't had the time to get to it yet. Quickly looking at the chat: thank you, Martin, for filling those in. Yes, dkg was asking about adding OCB or any other modes, and yeah...
D
...if someone feels so inclined, we would greatly appreciate it, but we've just not had the time to get to it yet. So with that: once we fix the existing TLS 1.2 numbers, add the multi-user ChaCha numbers, and then potentially add the variant wherein the AAD is separate from the plaintext parameter...
D
...we will land those changes, close out the open issues, cut a new version of the draft, and I think we're ready to move forward, unless there are any additional features people think we should block on. But I think we're in good shape at that point.
D
B
As with other drafts, we should take advantage of our Crypto Forum participants and get a solid review from them going forward, but I think that most of these things make sense. Is there anybody in the queue who has any comments about AEAD?
B
B
If not: Chris, thank you for the presentations, and we'll move on to Michel for a CPace update.
M
Most of the work has been done in updating the associated paper with the security analysis of CPace. Just to remind you what CPace is: CPace is the candidate that was selected by the CFRG as the balanced PAKE in the competition, a bit more than a year and a half ago.
M
And since then I've been working with Björn and Julia on the security analysis, in which we analyze not only the basic version of CPace but the actual variants that are being implemented. In particular, for the current CPace protocol variants, we've been able to show that they provide strong security guarantees: they are secure in the universal composability model, and they actually achieve adaptive security under a variant of the computational Diffie-Hellman assumption.
M
The assumption is actually known as the simultaneous Diffie-Hellman assumption. In the analysis, as Julia mentioned in the last meeting, instead of modeling the map-to-point function as an ideal, as a random oracle, we separate it into the map-to-point function and the hash of the password: the way we compute the generator is as a map-to-point function applied to the hash of the password.
M
The hash itself is modeled as a random oracle to a finite field, and for the map-to-point function we provide specific security properties, and we've been able to use that to analyze the particular instantiations of CPace.
M
So one thing that we have tried to clarify more recently has been the way we rely on session identifiers. In particular, since we're working in the UC model, we assume that before CPace starts, the application has to provide a unique session identifier to start the session, which is required for composability, for multi-session security. We try to clarify this now in...
M
...the current version of the draft of the paper, which is currently under submission to Asiacrypt. In this new version, as I said, the main change has been clarifying the role of the unique session identifiers, and we also used the opportunity to clarify the proofs, improve readability, and clarify the security definitions.
M
But regarding the session identifier: as I said, they are required for the composability guarantees, and because of that they need to be added as input to all the hash functions that we use in our protocol, and they need to be established before running CPace. This is one of the issues that I think Feng Hao mentioned in his recent paper on eprint.
M
First, if the application is not able to provide this unique session identifier, there are ways we can easily create such an identifier, by first running an agreement between the two parties involved in the protocol.
M
And since this is not private, sometimes there are ways to avoid adding to the round complexity of the protocol, by piggybacking these messages onto the messages sent by the application.
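The nonce-exchange workaround just described can be sketched as follows; the label string and hash choice are illustrative assumptions, not part of the CPace specification.

```python
import hashlib, secrets

# Sketch: when the application cannot supply a unique sid, both parties
# exchange fresh nonces (possibly piggybacked on existing application
# messages) and derive the same identifier from them. Illustrative only.
nonce_a = secrets.token_bytes(16)     # chosen by the initiator
nonce_b = secrets.token_bytes(16)     # chosen by the responder

def derive_sid(na: bytes, nb: bytes) -> bytes:
    # Both sides hash the exchanged nonces in a fixed order.
    return hashlib.sha256(b"sid" + na + nb).digest()

# Both parties arrive at the same unique session identifier,
# which is then fed into all of CPace's hash inputs.
assert derive_sid(nonce_a, nonce_b) == derive_sid(nonce_a, nonce_b)
assert len(derive_sid(nonce_a, nonce_b)) == 32
```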
M
But because this can add to the round complexity, I think this was one of the main complaints of Feng Hao in his paper.
M
One of the next steps that we're going to provide is a game-based security analysis without the session identifiers. The idea is: suppose the session identifiers are not available; will CPace remain secure? Similar to the other CFRG candidates, like SPAKE2 or J-PAKE...
M
We can also provide a game-based proof for CPace which does not rely on the session identifiers, and in terms of guarantees they would be essentially similar to those of the other, non-UC protocols. In particular, as in the proof of SPAKE2, we can provide a proof of weak forward security for CPace if we don't consider the sids.
M
The assumptions used for this would be essentially the same ones used now in the proof of the UC version, and we would get perfect forward security if we consider the addition of a key-confirmation step. This is something that we're currently working on and it's not done yet, but after we finish those parts we're going to incorporate these changes into the RFC.
M
So, as I said, the session ids deserve more attention. Sometimes when we design protocols that are secure in the UC model, we simply assume that these session identifiers are provided by the application and we don't pay too much attention to it. But I think one of the important points of the CPace draft is to try to clarify this situation, and that's why we want to make sure that we have a version of the security analysis both with and without session ids.
N
Thank you very much for your presentation. It seems that there's a broader issue behind the draft here, because during the PAKE competition one of the main things that was brought up was the importance of the UC proofs, and now we learn at the end that you can't actually implement and deploy it as analyzed: we're going to use a game-based proof for the winner, and this depends on hash-to-curve as well.
N
So, you know, do we need to reconsider the competition, realizing that we actually need game-based proofs, that we need to change the number of rounds to incorporate this id if it's being used? Or do we just go on and publish this, understanding that it is what it is? I still support publication, but I think this raises a question about how we run our contests in the future.
M
Regarding how to run the contests in the future, I don't think I'm the most suitable person to answer that question. What I can say is that I've been looking at the security analysis of all the candidates so far.
M
We recently also provided an analysis for J-PAKE: a few years ago in a game-based version and, more recently, in the UC version. The point is, in terms of security, my feeling is that they all achieve a similar level of security.
M
J-PAKE uses a version of the square Diffie-Hellman assumption, but those are all similar assumptions in the underlying group. In terms of session ids: if you want a security analysis in the UC model, all of these protocols will require, in addition, that the application provides this unique session id. It doesn't matter whether we're talking about SPAKE2 or CPace or J-PAKE, they all would need that, so if you consider UC security, you have more or less the same situation for all of them.
E
Yeah, so let me first say I'm a big fan of CPace: it's a cool protocol. I am supportive of the idea of having game-based proofs for both of the winners; I think you brought up some practical reasons why this would be useful.
E
I also wanted to suggest that we need to coordinate on the sid story across both drafts, because I think this is theoretically an issue for both. I guess different camps of people disagree on what the sid means in the UC model: you know, where does it come from? Does it come from the environment, or does it come from something else? But in any case, I think the story is theoretically the same, so I think there needs to be some coordination.
M
I agree with that. Okay, Christopher?
D
Yeah, just one follow-up. Well, I'm not the UC expert, so certainly we'll talk with Hugo and others to see to what extent the sid needs to be accounted for or coordinated with CPace. But on the game-based proof requirement: Chris, were you suggesting that we also block OPAQUE pending that?
E
I wouldn't block on anything. I doubt there's going to be an attack against OPAQUE in a game-based model; it seems really, really unlikely. I just wouldn't block on it. I just think it's a really good idea for both protocols, not just CPace.
D
I see, yeah. I mean, it's interesting that the sid problem has not manifested in the same way in the development of OPAQUE; I'm completely oblivious to it, if it has. So I don't understand why that is the case, but anyway, it's subtle enough that it probably warrants further investigation and discussion with Hugo and Michel and others, so we can take it to the list or offline or whatever.
B
Since we're ahead of schedule, we can take one or two more people in the queue if anyone else has additional questions; if not, we can move on.
B
Thanks, Michel. One note on this last slide here, "incorporating changes into the RFC": it's not an RFC yet, so I would say into the draft. But yeah, thank you so much.
B
Thanks. So the last presentation here, and we are ten minutes ahead of schedule, is an update on HPKE. I know we've had an earlier presentation about it and proposed extensions to HPKE. So, Chris Wood, please take it away.
D
All right, so this should be fairly quick. This is just responding to an issue that Franziskus raised on the list. It's sort of an unfortunate issue. As a reminder, HPKE has in it a number of different dependencies: one of which is a KEM, one of which is the KDF, and the other of which is an AEAD, and there is a registry created and maintained by this document for each different type of algorithm.
D
We did not realize at the time that there already exists a registry for AEAD algorithms, maintained by IANA; that is the first link here, and this was pointed out to us by Franziskus on the list. The registry that is already on IANA and the one that is in the HPKE document are somewhat different, though. For starters, the HPKE registry requires that each algorithm specify the length of the key and the length of the nonce as distinct parameters.
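To make the shape of the HPKE-side registry concrete, here is a small sketch of its AEAD table, with the key length (Nk) and nonce length (Nn) carried as distinct fields. The code points and lengths shown are the ones I believe the HPKE draft lists; treat the exact values as an assumption to be checked against the current draft:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AeadEntry:
    """One row of the HPKE AEAD registry, as sketched here."""
    aead_id: int  # two-byte code point
    name: str
    nk: int       # key length in bytes, a distinct registry parameter
    nn: int       # nonce length in bytes, also a distinct parameter

# Believed code points from the HPKE draft's AEAD table; note these do
# not match the code points in the pre-existing IANA AEAD registry.
HPKE_AEADS = {
    0x0001: AeadEntry(0x0001, "AES-128-GCM", nk=16, nn=12),
    0x0002: AeadEntry(0x0002, "AES-256-GCM", nk=32, nn=12),
    0x0003: AeadEntry(0x0003, "ChaCha20Poly1305", nk=32, nn=12),
}
```

The existing IANA registry, by contrast, does not carry Nk and Nn as separate fields, which is part of why the two registries cannot simply be merged.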
D
Also, importantly, the security analysis of HPKE that was done by folks at INRIA requires, or assumes rather, that the AEAD is IND-CCA2 secure, and the AEAD algorithms specified in the HPKE document all meet that requirement, whereas that's not the case for all the algorithms that are in the existing registry. So we'd like to see if we can just resolve this.
D
These are the two most obvious options, at least. The first is to just continue using the HPKE registry going forward. The second is to switch over to the existing registry and extend it with HPKE-specific parameters, potentially adding fields indicating whether or not the algorithms are recommended, or whether they meet the security bar, the IND-CCA2 requirement, that HPKE requires. This is sort of problematic, in that it would be a breaking change.
D
In particular, the code points for AEAD algorithms that are in that registry do not match those in the HPKE registry. So, considering all these things: it's a breaking change, and you'd have to modify the existing registry and sort of decorate it with what's safe for HPKE specifically and what's not. Also take into consideration that I'm not aware of any uses of this particular existing registry; I guess that's probably not surprising, considering that we just learned about its existence recently, but if people know about its use anywhere, that would be good to hear.
D
That would be good to know. Basically, our proposal, after speaking with the editors, is to go with option one: continue with the way things are, and just sort of ignore that this existing registry is out there. That seems like it would be the simplest thing, though I would like to hear what others think.
H
We have known about this registry for ages. I sort of assumed that the authors knew about this registry as well and had deliberately made a new registry, because it is different, and that's not actually a terrible thing.
H
The AEAD definitions in RFC 5116 allow for a range of nonce sizes, which I think is actually a pretty dumb idea, but they allow for a range of nonce sizes, and we have to have a single one for this, and so on. So I think the no-change option is probably the sensible thing, and I don't just say that because I've shipped code that uses what's in the draft.
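The point about needing a single nonce size can be illustrated by how HPKE derives per-message nonces: each message nonce is, as I understand the draft, the fixed-length base nonce XORed with a big-endian sequence counter, which only works if Nn is pinned per algorithm. A minimal sketch (helper name is mine, not the draft's):

```python
def compute_nonce(base_nonce: bytes, seq: int) -> bytes:
    """XOR the per-message sequence number into the base nonce.

    Mirrors the HPKE style of nonce derivation; it requires a fixed
    nonce length Nn, which is why the registry pins one per AEAD.
    """
    nn = len(base_nonce)
    seq_bytes = seq.to_bytes(nn, "big")  # raises OverflowError if seq too large
    return bytes(b ^ s for b, s in zip(base_nonce, seq_bytes))

# With a zero base nonce, the derived nonce is just the encoded counter.
assert compute_nonce(bytes(12), 1) == (1).to_bytes(12, "big")
```

A registry entry that allowed a range of nonce sizes, as RFC 5116 permits, would leave this derivation underspecified.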
D
Yeah, I wish I could say we knew about this registry, but, oh well. So that sounds good. Does anyone else have any sort of strong opinions one way or the other? Because if not, I think we're just going to make sort of an executive decision and maintain the status quo.
E
Just curious if there's a similar problem for the KDFs or the KEMs. Probably not, right?
D
I did look, but I did not see anything. That does not mean it's not there, but if the registries are similar to the AEAD one that we have here, and if we're going to stick with the HPKE option, I imagine we would stick with the HPKE options for those two algorithms as well, for similar reasons.
B
Okay, thanks. We have 10 minutes left in the meeting, so does anyone else have any other business? If not, we'll let everyone go on with the rest of their evening, morning, or afternoon, whatever time it is.