From YouTube: LAMPS WG Interim Meeting, 2021-01-28
A: I'm just glad it's not just me, sorry guys. Okay, welcome to the meeting. Sorry, we lost ten minutes on that. Welcome to today's virtual interim.
A: So today, for the meeting facilitation: the blue sheets. Please go to the CodiMD page and sign in under the blue sheet section. Minute taker: Rich, thank you for volunteering. Others, since you will be able to see what he's doing in the CodiMD, if he gets something slightly wrong, please adjust it yourself. Jabber scribe: I'm not expecting that we'll be needing one today. And the slides are available at that URL.
A: The goal of today: since we're about to do a recharter to take on end-to-end email guidance, it seemed like a good time to ask whether there are other topics. Some people have already approached the chairs about post-quantum cryptographic algorithms and the transition to them. So this virtual interim is about what, if anything, we want to put into that recharter to address that topic.
A: Somebody give me some feedback. Yeah, I can see, okay. This is a presentation I gave last October at a NIST workshop, and my position hasn't changed, so I didn't update the slides.
A: We have to recognize that upgrading PKIs will take a long time, and we have to recognize that upgrading security protocols will take a long time as well. So with all of those considerations, I have some thoughts to share.
A: The two-certificate approach has one public key and one signature per certificate, as we're accustomed to today, and you would basically use one with traditional algorithms and one with post-quantum algorithms during the transition.
A: Do you need to modify the certificate architecture? I mean, the certificate architecture itself doesn't need to be updated, but the validation needs additional complexity to handle corner cases, like: what do you do in a case where the post-quantum signature is good and the traditional one is not, or vice versa? These half-right cases need to be dealt with, and we also have some pitfalls from experience in the '90s.
A: With those jumbo certificates, which carried a key-agreement key and a signature public key in one certificate for the same user, we had corner cases that were problematic, and I just would like to avoid them. In addition, the certificates become quite large with these multiple keys. With two certificates, the security protocols do need a new field for carrying additional certificates, or at least a way to name which identity certificates apply, if they are to carry a bag. But there's no need to modify the certificate architecture, and validation works exactly as it does today. It avoids the known pitfalls with jumbo, and I admit the two certificates are going to be slightly bigger than one certificate with the two fields in it, because the subject, issuer, and other metadata are repeated in both of them.
A: We're getting an echo. And so the nice thing about the transition ending is that you just stop using the certificate with the traditional algorithm in it. So my recommendation is that we use separate certificates for traditional and post-quantum, begin the security protocol work now for mixing the two, and plan for the day when we only have post-quantum in use.
A: I will observe that when I was the Security Area Director, we had this desire to shift from SHA-1 to SHA-256, and we had a big discussion about it, and we thought that would take five years. We were way wrong; it took closer to ten. So we have to realize we're faced with a transition of that magnitude.
D: Can I make a few comments, or clarifications? One is: you highlighted how, in the half-right case for composite certificates, it was difficult.
D: Actually, if you want to preserve the property that security relies on both, you really need to reject in all cases unless everything verifies. Because if you accept it when only one is good and one is bad, that means the attacker only needs the one that is good. And another comment: in the two-certificate case, you said validation is easy.
D: Actually, it becomes more complex, because you need to modify each and every individual security protocol to actually use two certificates and to sign the appropriate data. I don't know if we really want to trust the more obscure security protocols to get that right.
A: I think I agree with most of what you said. I was just observing that in certification path validation, when you have the case where one signature validates and the other doesn't, you have to deal with that corner case. And you're saying it's not really hard to deal with: if anything is out of whack, just say no.
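The "reject unless everything verifies" rule the speakers converge on here can be sketched in a few lines. This is an editor's illustration, not anything from the meeting: HMAC stands in for the real signature algorithms, and all names are hypothetical.

```python
# Editor's sketch of fail-closed dual-signature validation.
# HMAC is a stand-in for real algorithms (e.g. RSA and a post-quantum scheme).
import hashlib
import hmac

def make_toy_scheme(key: bytes):
    """Return (sign, verify) callables for a toy MAC-based 'signature'."""
    def sign(msg: bytes) -> bytes:
        return hmac.new(key, msg, hashlib.sha256).digest()
    def verify(msg: bytes, sig: bytes) -> bool:
        return hmac.compare_digest(sign(msg), sig)
    return sign, verify

def verify_dual(msg: bytes, sigs, verifiers) -> bool:
    """Accept only if EVERY component signature is present and valid.

    Accepting a 'half-right' pair would let an attacker who breaks just
    one algorithm forge the whole thing, so any failure rejects.
    """
    if not sigs or len(sigs) != len(verifiers):
        return False  # a missing component is a failure, never a skip
    return all(v(msg, s) for v, s in zip(verifiers, sigs))
```

With a "traditional" and a "post-quantum" toy scheme, tampering with either component, or dropping one, makes `verify_dual` reject.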
B: I was going to comment on the validation issue just a little bit as well. I think we need to be clear that there's a difference between the validation that goes into creating the certificate, which is done on the CA side, and the validation that's done on the relying-party side. On the first issue, I think we largely don't care, because it is pretty much unchanged. But for the validation with two certificates on the relying-party side, I am a little bit worried. Okay, you have two certificates that are binding subjects to public keys; but with two certificates you have the additional complexity of making sure that it's the same subject in both certificates, and I think there's going to be a significant amount of fun in preventing people from doing some interesting mix-and-match attacks.
A: I'd like to respond to that one, just very briefly. I think we have the same problem with certificate matching today, when we have separate key-management and digital-signature algorithms, and we have several cases, like in DoD, where they have figured out how to make sure that all works. So I think it's just the same: we already know how to deal with the problem.
E: If I can respond to that: I don't think these problems are unsolvable. I mean, both ways there will be solutions. I think it's a question of trying to minimize the number of foot-guns, the number of places where we're asking protocol designers to deal with tricky things. It's a minimization question rather than an impossibility question.
F: So what I heard from our crypto experts is that we don't yet know which algorithms, and that we need to be agile, to also transition from one PQ algorithm, maybe, to the next one. And therefore: are we really talking about two, a classical and a PQ algorithm, or do we have to be more agile?
A: I don't think that's actually a concern, because we all have that same problem today in trying to decide whether we're going to do Diffie-Hellman or RSA or elliptic curve on the traditional side. We have to be agile on both sides.
A
Okay,
I
actually
know
that
max
is
going
to
talk
about
that.
So
maybe
the
thing
we
should
do
is
go
to
that.
G: Thank you, Russ. So today, as a position, I'm going to try to explain how our industry is planning to use it, and the reasons why we think that this solution might provide a better deployment path for us, for crypto-agility purposes. And for these particular few minutes, I want you to abandon the assumption that you are living in a rapidly evolving environment.
G: Okay, our focus is on devices that have a very long expected life: 20-plus years (plus, not minus, 20 years). And when you think about what I'm talking about, think about how many times you change your cable modem.
G
So
our
task,
okay,
that
we
have
to
solve,
is
transitioning
hundreds
of
millions
of
these
devices
for
the
whole
industry
from
to
into
a
quantum
safe
state
at
some
point
and
be
able
for
them
to
operate
on
our
networks.
It
means
delivering
the
the
security
for
accessing
the
internet
for
hundreds
of
millions
of
people
and
that's
the
reason
why
we
really
need
to
have
a
solution
that
is
cost
efficient,
because
otherwise,
especially
in
secondary
markets,
I
envision
the
new,
the
new
algorithm
to
get
there
so
late
that
we
will
actually
have
failed
our
users.
G: Just to give you a brief introduction: we don't use TLS in our networks. We use some parts of TLS, but the majority of our authentication uses the DOCSIS protocol. The DOCSIS protocol now has two different authentication frameworks, BPI version one and version two, and we have a large deployed PKI. As I said, it's been running for more than 20 years: hundreds of millions of certificates valid at any one time, because of this very large validity period, and a few billion issued certificates overall.
G: The other problem that we need to solve is firmware upgrades. In our industry, because of the choice of the 20-year validity, we also decided that we needed a standardized mechanism to upgrade these devices. However, you can only upgrade the software: the secure elements or the crypto elements might not be upgradeable, and some of these devices might not have enough memory to be upgraded to use the new algorithm, even just for validating other entities' certificates.
G: So I've tried to set up the problem here. Really, we will live in a mixed environment, and the rest of the talk is about the problems that we're trying to address with composite crypto. So it's not about the security; it's about the deployment model that we need in order to address this large number of devices and the whole ecosystem. And it's very particular, and I'm stressing the fact that I don't see this as an alternative, as a "you have to do this", right?
G: No, this is a solution that can be deployed in your environment if you have the same problems that we are facing. And I think the reason we bring it here is that these problems will be faced by many environments where you have devices and you want to extend the lifetime of those devices. You don't want to incur billions of dollars to replace these devices every time you have a crypto failure, so we are trying to build a more resilient internet for everybody. Next slide.
G: So at this point we have two choices. We can do two things. Do nothing from a PKI standpoint, which requires, as Russ said and as other people pointed out, changing every single protocol that we use in our companies: not just DOCSIS, which we use to protect our users, but every single proprietary protocol that you might have deployed in your ecosystem, everywhere around the world.
G: Okay, especially because I think that the deployment of this cryptography will be more difficult in secondary markets and growing economies, and we don't want to artificially inflate the costs just because we have to double everything. That would be reflected in the devices, in the services, and in the ability to access the internet. I don't like this, and I want to solve it in a different way. Okay, so.
G: So this is the situation that we envision in our networks 15 years from now. Not today; 15 years from now. Okay, the majority of the devices will still be classic-only and validation-capable devices, and I'm going to explain what those are. We are going to have a certain number of quantum-safe devices, but I expect that to be smaller in number than the other two categories. So here we have three categories of devices: classic-only, validation-capable, and quantum-safe.
G: The classic-only devices cannot be updated. You can try, but they don't have enough memory, the secure element is RSA-only, and there's nothing you can do about it. The validation-capable devices, instead, have more memory: you can upgrade the crypto library, they can validate the post-quantum algorithms, and maybe you can also add post-quantum KEMs, so you can start transferring shared keys securely as well (and we will look at why that is important). And then the last category is the quantum-safe devices that we envision.
G: The second problem: on the rows here you can see the different operations that we still need to perform under the quantum threat 15 years from now. Okay: signing with classic, signing with quantum-safe, verifying with classic, and verifying with quantum-safe. And if you pay attention to this table, you can see there are two main problems here. Next slide.
G: The first one is that we need something for these devices that are going to be on the network, protecting the internet traffic for everybody in our network, and that cannot perform post-quantum authentications. This is the first problem that we need to solve. The second problem: next slide.
G: ...is that these classic-only devices will not be able to understand the new quantum-safe devices. And I can see this being your cable modem that needs to authenticate the network: the network is quantum-safe, your device is classic. How do you do that in the two-certificate environment without raising the cost a lot, with all the changes that would need to be deployed? Next slide.
G: So this is just a summary of some of the considerations that we are starting to make. Okay, of course the first one is: do you need to change the PKI? With two certificates, no, because you just deploy a new one. So you're not addressing securing that infrastructure; you're just deploying a new one. And if you do want to address securing that infrastructure, you have to do crazy things like cross-signing (signing with a certificate from my infrastructure over to the other, and creating the link), which, as was said earlier, is a problem.
G: Those two separate certificates can be two separate entities. What I want to authenticate is one entity, using the algorithm that I can understand: RSA if I'm classic-only, or post-quantum if I'm a post-quantum device. There's no reason for us not to have this option, because that's exactly what we need to do. We want to be able to authenticate the same entity with different algorithms, not authenticate two different entities; that's just not the same thing. Okay, that's why it's important that we address it this way, not the other way. Next slide.
G: So here I want to provide an example of the type of problems that we are going to face, and many other people are going to face these problems too. Maybe they're not thinking about it now, because it's 15 years away and they don't have that lifetime expectancy, but 10 years from now they will find themselves in this same situation.
G: So what are they going to do? Let's try. Next slide. So classic devices can perform their own authentications, and we developed an internal system to securely deploy symmetric keys early, because right now we don't have any post-quantum mechanism, and we have some proprietary methods to do that.
G: So, next slide. Next one; okay, previous one; there should be another one. Anyway, the next one. So if we have composite crypto, you can think of it this way: instead of having to protect everything by relying only on the classic crypto, our quantum-safe device can now validate the certificate chain with the post-quantum algorithm, and the device can still sign with RSA.
G: In this case, we will still have to combine the shared key with the RSA signatures, and that's what we are actually planning to do. So, next slide.
G: If you think about the authentication on the opposite side, that's where you have the biggest problems. Next slide. You can think about having the same approach: an outer shell that the classic device can understand. But after you remove that, there's nothing you can do; this device cannot tell whether this chain should be trusted or not. Next slide.
G: So what do you do in the two-certificate approach? You have to provide two separate authentication traces, and these devices will have to provide RSA signatures. We don't want that; it is not efficient. It's just not the right way to do it in our environment. Next slide.
G: If we have composite crypto standardized, that will allow the classic device to actually validate the chain, though not the last signature; that's the part we are still working on, and maybe we can combine the PSK in some way. In this case, the security is provided by the PSK usage, which protects the last link, and in this case it also protects the classic algorithm used for chain validation, because the classic device, unfortunately, cannot use the post-quantum algorithms. Next slide.
G: And the other thing that I want to draw your attention to, another big problem, is indirect authentication. This is where the device cannot fetch the information directly but has to rely on someone else on the network to fetch that authentication data, and that fetching can happen before I even know whether the device can understand post-quantum or not. That means I have to fetch both of them all the time, so I have to execute the protocol twice.
G: Why not just execute the same protocol and rely on the algorithm that you trust? And you can change over, exactly as was said: you can just not use the certificate when you don't trust the classic algorithm; you can just not validate the RSA if you can use the post-quantum; and devices that cannot do so can still validate other devices. So, next slide.
G: And a big problem is firmware update, as I mentioned before. Also in this case, you can do it with two certificates, but then we have to require all manufacturers to sign with multiple certificates, and all our members to sign with multiple certificates, because this is something that has to happen even before you know which device the firmware goes into. Okay, and you need to provide the security; so also in this case, for classic-only devices...
G: We will have to use something else. Next slide. But you can see how, with composite crypto, you have better management of these objects. You don't have multiple signatures that suggest multiple entities signing; you have a single signature, where the manufacturer signs with multiple keys and the cosigner (in our case) signs with multiple keys, and you can very easily understand who's the signer and who's the cosigner, instead of having to work out who's the manufacturer and who's the cosigner. So there's a big difference in how you treat your protocol, your security, and the roles in your protocol. Next slide.
G: As I said before, for classic-only devices we will have to use something to protect against the quantum threat, and in this case we can still think about the same approach that we used before. But think of this as applying every time you need to authenticate something that is offline: OCSP responses, secure time delivery, document signing. This is what addresses the problem. And yes, you can do it with two certificates, but then you incur the problem of:
G: do these two certificates represent the same entity, or are these two certificates different entities? Which one? And if I can trust only one of them, because I understand only one of the two algorithms, does that cover the logic in my protocol? So this is the problem. Next slide.
G: And last, this is something that I have a very particular vision on; most of you probably don't agree with me. But the added value in having both signatures in those certificates is that you can differentiate between forged certificates and non-forged certificates, because at this point, to forge that root CA, you would need to factor, or to guess, the post-quantum private key as well. That is a protection that, today, with the two-certificate approach, you cannot have. And I think that this concludes... I'm sorry.
G: I want to say this because, ultimately, what we are trying to deploy here is a solution to a real problem, and in the past I've had problems working in PKIs because people try to block things for political reasons, not for technical reasons. And the fact that we had to have this meeting, where I have to explain why and how we're going to use this protocol, is just not okay.
G: The IETF should be about technical merits, and how we want to use this protocol is our own business. Does this protocol provide a solution that solves a problem? Yes. Does it have good technical merit? Yes. So there's no reason for this group to block the work that we are doing, which is important for hundreds of millions of people. And this is true for every aspect of the work we do in PKIs; I just want to stress this.
G: I'm sorry, I'm very frustrated, because I really want this to move forward. And there are some other issues that we will have to focus on: revocation (OCSP needs to be reworked, because it's too inefficient) and secure time delivery, for which we still have no solution; that's a problem for devices that cannot access the network to fetch that information when they need to validate certificates.
G: This is what I'm talking about. I hope that this gives us the opportunity to think about how we proceed in this working group and what we are trying to achieve. If you're trying to say we shouldn't do this work for a reason with technical merit, I agree with you, and we will talk about it and make it better.
G: But if you just try to stop this for no technical reason, I will tell you: I will call you out, and I will make sure that your objections are really reported up, because that's not okay with me. We are talking about the security and privacy of hundreds of millions of people; there are ethics that need to be applied here. Okay, so with this I conclude my opinion.
A: I... I...

G: What are you referring to? But in general, think about this.
G: Probably we'd need to see the slide, but in general, I think the problem is the link between these two different certificates; that's the problem that we're trying to understand. Having certificates with the same subject can be an indication of the same entity, but that depends on the policy that is followed by the CA. So that depends, right? That depends. Can you have the same...
I: Okay, all right. And the other question was that there are no protocol changes for certificate selection? That is, the client saying, "since I only have one cert to pick, I only pick this one," as opposed to if you have two. Is that what you're trying to deploy?
G: Also on the server side, in the sense that in our protocol, for example, the server side, the CMTS, your cable modem termination system, actually uses a certificate, so it has to sign its own messages. So, yeah.
A: He did present an internet draft at a previous face-to-face IETF meeting, and I think that was part of the frustration: that he's having to justify why that solution is needed. Basically, I asked him to say what problem it was solving for him. "Composite" was in the name; I don't remember it off the top of my head, and I don't know whether it's expired. Max, has it?
D: Actually, I believe it's currently under Mike Ounsworth's name, and it has expired.
E: So, given that Max and I are co-authors on the same draft, our presentations are going to have a very similar slant. I think I can probably, for this audience, blast through the first half of my slides. I do have a lot to say on this topic; I've been focused on it for several years, and hopefully I've condensed it.
E: I think I have a lot of valid and distinct points to make; hopefully I can get through this in a reasonable amount of time. Next. So first, I have a specific LAMPS recharter suggestion, specific recharter text to propose. I'll just blast through that quickly here and then put it on the mailing list for people to read in full.
E: Then I'm going to make a pitch for composite signatures as a standard. And here, note: Russ and Max have been talking about composite certificates. I want to take a bit of a step back and argue in favor of a composite signature algorithm by itself; there's value in just doing that work. And then, given that work, Entrust will go and stick it in certs, but we actually don't need any more standards once there's a signature algorithm.
E: We can just go put it in certs and use it. But I will also make the pitch that composite certs is independently useful from composite signatures, and that composite signatures is actually the thing we should be working on. So this is my talk. Next. I'm going to motivate here (for this audience I'll skip through this) why hybrid and dual cryptography is a good idea.
E: Basically, we're late in the NIST cycle and still coming up with new cryptographic attacks: the lattice stuff, the code-based stuff, the multivariate cryptography stuff. This is really heavy math, right? RSA is sort of high-school-ish, ECC is sort of undergrad-math-ish; this stuff is heavy. There are not enough experts in the world to meaningfully do cryptanalysis, so the timeline of cryptanalysis is much longer. You know, last week a bunch of new attack papers came out against the round-three finalists, and NIST is weighing them.
E: With the existing stuff that we have: sure, the existing stuff is not going to survive quantum computers, but it's been hardened over decades, right? We know what we like about it. Next. So I'm borrowing nomenclature here directly from NIST: I'm using the terms "hybrid key establishment" and "dual signatures".
E: This speaks to Max's presentation; I think he's been over this a bit. This is our take on what the transition period looks like, also borrowing ideas from Professor Michele Mosca. And I want to point out that this hybrid period applies in two different senses. The first bullet at the bottom there was Max's point about 20-year-lifetime devices: you know, when can you turn off your RSA infrastructure?
E: I think there's a second valid timeline to consider here, which is the timeline between the publication of a zero-day CVE and how quickly you need to scramble a patch for your, I don't know, Dilithium implementation. And if we go with the assumption that these lattice algorithms are going to be broken and fixed multiple times, then I think it's in scope to consider mechanisms that reduce the severity of CVEs, that give us more breathing room on any particular bug-fix patch.
E: So here's my LAMPS recharter text proposal. I'm not married to this text, but I want to throw something out here. Basically, the last sentence is the good stuff: "The LAMPS working group will update documents produced by the PKIX and S/MIME working groups to specify hybrid key establishment, encryption, and dual signature mechanisms." I think we should do it.
E: On the list there was a call a bunch of months ago for standardizing nomenclature; here's my proposal to standardize the terms for discussion. For this talk, I propose that we refer to "hybrid key establishment" and "dual signatures" as the general concepts, the ones that match, you know, Alice-and-Bob pictures with arrows. "Composite" is a specific instantiation of a dual signature.
E: It's the instantiation that takes multiple keys and multiple signatures and wraps them up into a single object, as opposed to a multi-key or multi-cert or multi-signature approach, where you've got independently floating things. So "hybrid" and "dual" are abstract concepts, "composite" is a specific instantiation, and then draft-ounsworth is the thing that Max and I co-author, which is a concrete proposal for composite, including both the ASN.1 structures and the generation and verification logic.
E: So, composite signatures; here's my main pitch. I think that this working group should standardize a signature algorithm: the OID, the on-the-wire structures and, most importantly, the generation and verification logic for composite signatures. I think that's the most important thing incumbent on us to get right. Next.
E: I could well imagine a world in which you want to get your backwards compatibility through a type-1 mechanism, and I'm imagining that your green key would be an RSA-only key (that's your backwards compatibility) and then your red key is a composite key (that's your high security). So I don't see this as an either/or, even though we've been trying to pitch it as "which one's better". I don't think it's a which-one's-better; I think both are valid. I think both could even be valid together.
E: So these are the stock arguments for composite: protocol integration simplicity. You know, it's just another public key, just another signature. As Max said, this is the crypto agility built into X.509; let's use it. The second half of this slide was covered by Russ already (you did a good job of summarizing the points): when you're carrying multiple keys and signatures, there are going to be gotchas, there are going to be edge cases, and we're now forwarding them on to every protocol designer, who is going to have to wrestle with the same problem.
E: So, having written this draft, which we wrote with Max at CableLabs, and Cisco, and ISARA, and Entrust, and having spent a year of bi-weekly meetings getting it right: it was a pain in the butt. There were a lot of edge cases to consider, and on the generation and verification algorithms we rolled back and forth through all sorts of variations, and came out with: you really have to keep it simple.
E: Here's a vetted way to do it; there's an OID for that. Next. And this says much the same thing: there's bonus material at the end of this slide deck showing that the base CMS algorithm with multiple SignerInfo structures doesn't accomplish a dual signature, because it doesn't prevent stripping attacks. Stripping attacks are that new thing you have to worry about in a post-quantum world that you didn't have to worry about before.
E: I do acknowledge RFC 5752, which adds multi-signatures, and that's great as an extension mechanism, but it sort of makes the point that getting this right isn't trivial. It's not free: protocols are going to need extension mechanisms like they did in 5752 in order to get this right. Do we really want to force every protocol to add those extensions, or do we want to do it once in a central document?
A: Just a point of history there: 5752 was about RSA with SHA-1 and RSA with SHA-256, two signatures; you didn't want the weaker one stripped. Right, which is exactly...
E: You can imagine, if they have fully compromised, say, they know about a zero-day in the PQ algorithm, they can fully remove any reference to RSA from the certificate and then re-sign it with the compromised key. Whereas if the CA chain is composite all the way to the root, you've got much stronger protection to detect that, hey, there should be a second signature on this document. So we think that putting it in certs all the way to the root is actually necessary to get the security properties you're looking for.

I'll also point out failure modes and end users. I know standards bodies don't love thinking about end-user frustration, but I will point out that now we're going to be asking users to juggle two CSRs and two certificates; this sounds like a usability nightmare. And the last point there is protocol complexity, which has been mentioned already. Next.
E
And then management complexity — Max just did a whole presentation on this. Doubling the number of certs means doubling the amount of headache to administer in large PKIs. Think about Joe Admin, who just wants to load up some S/MIME certs for their end users. They don't know or care how this stuff works.
E
Now
they're
gonna
have
to
juggle
two
csrs
and
make
sure
the
right
private
keys
get
matched
with
the
right
csr
and
like
as
much
as
I'd
be
happy
to
sell
twice
as
many
certificates.
I
just
don't
want
to
go
there
like
it
just
sounds
like
a
customer
support
disaster,
whereas
binding
you
know
single
cert
that
just
does
the
strong
crypto
just
sounds
cleaner
from
a
customer
support
perspective.
E
Next. I have here a couple of the standard arguments that I've seen: having signature primitives that combine arbitrary pairs increases risk, increases complexity, increases footguns. Cool, good argument — I do see that point. And that comes back to whether we need to support explicit pairs, or whether we think we're going to need entities to have three or more key pairs. Say we want to hedge our bets that any particular lattice scheme might get broken.
E
Maybe you want RSA and two different lattice schemes, and now maybe we should be considering things that are fully flexible to arbitrary numbers of keys and signatures. If we want to go pairwise, we are working on an update to the draft that takes explicit pairs, and in the bonus material I've got the ASN.1 structures for that.
E
So you could define a signature algorithm composite-RSA-Dilithium, and then you've explicitly locked in the pair. We're trying to define the ASN.1 information classes so that that just compiles cleanly with our on-the-wire structures. So you don't need to worry about that — you can just go ahead and define a pair and it falls out of our draft.
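[Editorial note: a minimal sketch of the "explicit pair" idea just described. The OIDs below are placeholders, not real assignments, and the structure is a simplification of what an ASN.1 definition would enforce: because the composite algorithm identifier fixes its components at definition time, a missing component fails structurally before any cryptography runs.]

```python
from dataclasses import dataclass

# Hypothetical OIDs, for illustration only
OID_RSA = "1.2.840.113549.1.1.11"
OID_DILITHIUM = "1.3.6.1.4.1.99999.1"            # placeholder arc
OID_COMPOSITE_RSA_DILITHIUM = "1.3.6.1.4.1.99999.2"

# An explicitly named pair locks in its component algorithms:
EXPLICIT_PAIRS = {OID_COMPOSITE_RSA_DILITHIUM: (OID_RSA, OID_DILITHIUM)}

@dataclass
class CompositeSignature:
    alg_oid: str
    components: tuple   # one raw signature per component, in declared order

def check_structure(sig: CompositeSignature) -> bool:
    expected = EXPLICIT_PAIRS.get(sig.alg_oid)
    # Unknown OID or a missing/extra component is a structural failure,
    # detected before any cryptographic verification happens.
    return expected is not None and len(sig.components) == len(expected)

good = CompositeSignature(OID_COMPOSITE_RSA_DILITHIUM, (b"rsa-sig", b"pq-sig"))
stripped = CompositeSignature(OID_COMPOSITE_RSA_DILITHIUM, (b"rsa-sig",))
assert check_structure(good)
assert not check_structure(stripped)
```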
E
First off, I read through that thread, Russ. Almost all the criticism of jumbo certs that I saw related to mixed key usages: your signing key should have non-repudiation properties, but your encryption key should be escrowed, and how do you handle those conflicting requirements in the same cert? Those were the kinds of debates I saw.
E
I didn't see a whole lot in that thread that jumped out as being applicable to composite, with the exception of size — and I do want to address the size argument. We're about to have a presentation, the next presentation I think, on hash-based signatures. Those are big, right? Those are like 20 kilobytes for a signature. For lattices you're looking at five-ish kilobytes for a public key and a signature, and then one more kilobyte for the RSA.
E
But you get one step up in security properties for not a step up in size, using hash-based as my comparison. So I don't see the size argument, and I think, if anything, it works in favor of composite: using two smaller algorithms instead of one big one. And I think that's the end of my slides, apart from the bonus material at the end.
D
Again, this has basically already been designed to scale up to essentially as many signers as you could possibly want.
C
Yeah, it's John Gray. I was just going to add — I'm from Entrust as well — that as an implementer of this spec, or looking at implementing it and doing prototypes with it: the protocol backwards compatibility has been a great feature of the spec. For instance, with 5280 I'm able to do CSRs, I'm able to create certificates and do everything; I don't have to change any of the logic in my code for how 5280 validation works.
C
It just works when you consider it as a signature algorithm, because it's just another signature algorithm. You implement the signature primitive, which just happens to consist of two keys — a post-quantum one and a classical one — and it just works everywhere. It's quite nice having that property. So I just wanted to point that out, having some experience with actually implementing it.
L
Yep, hi everybody. I'm new — this is the first time I've presented something at the IETF.
L
Okay, so I'm a colleague of Hendrik, whom I believe you know quite well already; I'm also working at Siemens. So I probably have a take on the problem from, let's say, a slightly different angle. Next slide, please. In our industry we don't have such a clear view as most of you guys have.
L
We have to deal with many, many products which have, I would say, rather different use cases and rather different life cycles, so sometimes it's a little bit difficult to see the light, so to say. I'm here today mostly to share a little bit of our experience so far, and to try to gather ideas, opinions and, of course, suggestions.
L
Of course, many of the products in our industry also have a very long lifetime — we have products which go to 20-plus years — but we also have different products with different life cycles, so maybe they cannot all be bundled together.
L
So I believe that for us, protection of firmware — having a secure update mechanism in place overall — is in focus, because we believe that would basically be the building block that we need in order to deploy a successful post-quantum migration strategy. This is what we are looking at right now, and the topic with which we are trying to gain some experience. Next slide, please.
L
So we have started looking at stateful hash-based signature schemes, which we think could be quite useful in supporting our use case of protecting firmware. Typically, with industrial products you don't perform many signatures during the lifetime, but you are of course interested in having those signatures secure. And since we had to look at something that is already considered secure today — as a matter of fact, already standardized by NIST — we decided to start with hash-based signatures.
L
For stateful hash-based signatures, the size of the private key and the key generation time don't seem to be a big issue, especially if we intend to generate and use these keys in a centrally controlled infrastructure, and the size of the signature appears to be quite small. So it could be a good fit for embedded devices, also when compared to other algorithms like SPHINCS+ and all the other NIST finalists.
L
The target, especially for this type of use case, would be to use a common security infrastructure which is centrally managed to host the key pair and to perform signatures. The benefit would of course be that we can easily enforce tighter controls, and especially when we speak about stateful schemes this seems rather crucial, because we need quite strong technical means to keep an eye on the state — to make sure the state is not altered. Next slide, please.
L
With this type of use case in mind, we also started getting in touch with some vendors — HSM vendors, as you can imagine — and we noticed that there is quite some work to do there. It looks like there is no cross-vendor integration: there is no common standard, or even a common definition of the algorithms, or of the input and the output, for that matter.
L
So at the moment it's quite difficult even to gain experience with the tooling that is provided by the vendors.
L
Nevertheless, we started trying out something with XMSS — we don't really have a particular preference for it; we simply started there — and, as I was saying, we tried to see how it would play out in an environment where you have a central security infrastructure that you use to perform signatures. But, as I was mentioning before, this cross-vendor compatibility is simply not there. Even in the XMSS algorithm itself it seems there are some shortcomings that still need to be addressed; as a matter of fact, a colleague of mine opened an erratum to address some of these problems. But I believe this is the normal life of algorithms and RFCs, so I guess we'll get that sorted out over time anyway. And then the other point that we noticed — and I already got an answer from Russ, thank you for that — is that, as of today, there is only an RFC to use LMS in CMS, and none for XMSS.
L
We understand that the reason is simply that nobody has volunteered to put some effort into that until now. But when we speak about firmware signing, maybe CMS might not be the only way.
L
Maybe there is a lot of proprietary stuff, or plain signatures, or whatnot; but we think that putting some effort into this kind of standardization activity can in any case help in finding issues with the algorithms, and eventually help us with the integration issues that we otherwise face. Next slide, please.
L
So this basically just leads to my last slide — a very short presentation — with some open questions for this roundtable which we would like to discuss, hopefully getting inputs and feedback. The first: would it make sense to put some effort into drafting an RFC for the usage of XMSS in CMS, or even in other formats that could be relevant — why not even for other algorithms?
L
Then the second bullet point — I already got the answer from Russ; again, thank you. We were not sure why, as of today, there is only an RFC to standardize the use of LMS in CMS, but that's quite clear now; of course we can still discuss it. And then, as I was mentioning, whether we should already start looking at other algorithms.
L
We see that there could also be some interest in, I'd say, mixing these hash-based stateful algorithms with long-lived certificates. I guess we don't have a clear opinion on whether it should be composite or any other format, but there could definitely be some value there, because we might need to deploy trust anchors today that need to be valid for a very long period of time.
L
So, would it be interesting for this working group to look into these approaches as well? That will be it from my side. Thank you for your attention, if you have any questions.
A
Antonio, XMSS is already within scope in the charter, so if somebody is interested in doing that work, just submit a draft and the group can look at it.
A
In terms of SPHINCS+, we're waiting for NIST to decide whether that's going to be a standard or not, and the last point is what the last three presentations have been about. So we'll see how that turns out. Right, right.
K
Okay, yeah, regarding SPHINCS+: right now our opinion is that we will likely standardize some version of SPHINCS+ after round three. That is the current thinking.
K
I cannot make a promise on this, but that is our current thinking, and I think it is likely to happen; but which specific versions of SPHINCS+ we have not decided yet.
K
And for the other signature algorithms, we will try to reach some sort of conclusion, or some sort of official opinion about them, at the end of round three.
M
Yes, thanks, Russ. I'm Tadahiko Ito from SECOM; we are running a certificate authority and also some key management services. Next, please. Well, I'll give today's overview on this single slide: my presentation is all about how to support diverse sizes of data, and current PQC seems okay with authentication and small data.
M
Current PQC also seems okay with signing large data, but only with a pre-hash. It seems we either need to differentiate the two use cases or use the same protocol for both, and either way we have an issue. So I'm trying to share this issue and ask what we should do. Next, please.
M
We have, for example, multi-slice medical CT data, which is about two gigabytes for a single inspection, and we do a kind of document signing over those data with a certificate. We also have CAD data for things like buildings, and PDF files for building CAD data, which can be up to something like 10 gigabytes; we sign those data with document-signing technology. Note that for the multi-slice medical data, the lifetime of the data can be similar to the lifetime of a human, and the lifetime of the CAD data can be like the lifetime of the building — which I'm not quite sure how long that should be. But anyway, for those things we have huge data sizes and long data lifetimes, and I feel our company needs to prepare for PQC already for those use cases. Next, please.
M
We have already implemented some PQC signatures on an HSM, and it was slow. That is partly because it was a software implementation and we don't have hardware acceleration, but it was slow even with a 10-megabyte file, and in our use case we need to sign 10-gigabyte files or so, which is a problem. Next slide, please.
M
So, in current practice we have a data-controller server, and the data-controller server only calculates the hash and sends it off for signing. In that architecture we are happy, because the key management service doesn't need to see the data: one, because the HSM doesn't need to handle large data, and two, because the HSM doesn't need to handle potentially private data. That is good, but it seems we cannot take that approach with some of the current post-quantum cryptography — next slide, please — because it seems we cannot separate the hash and the asymmetric operation.
M
In some PQC algorithms we need some secret information as an input to the hash algorithm. So, to solve this problem, we may be able to introduce one more, external, pre-hash outside the HSM; and inside the HSM we might be able to use secret sharing or something like that to build a public-key cryptographic protocol. That might be one approach, but there is a problem with that approach too. Next slide, please.
M
As I think someone on a panel pointed out, if we use a without-pre-hash approach for small data and pre-hash for large data, the first problem is that I'm not quite sure we can draw a clear line between the upper use case and the lower use case.
M
Next, please. So I'm thinking — and I'm not quite sure whether it belongs at the IETF or at NIST or wherever — that we might build a compiler for PQC signature protocols and hash algorithms: take a PQC algorithm and a pre-hash algorithm as inputs, compile the two, and output a general PQC signature protocol.
M
So the PQC algorithm could be any PQC algorithm, and the pre-hash algorithm could be any pre-hash algorithm, and we could generate a general PQC signature algorithm. Designing such a compiler might need to be done before specification work, but I'm not sure where to discuss these topics yet. Anyway, next slide, please.
M
In PDF there is a signature dictionary — the green part here. Within it we have a PKCS#7 or CAdES signature, or timestamp data, and then there is the end-of-file marker, so that is the PDF document. The pink signature is computed over the blue parts: the signed byte ranges are separated above and below the signature hole, and we concatenate those two ranges, sign them, and put the result into the pink area. The left-hand side shows sequential, serial PDF signatures.
M
So when we update the PDF with a signature version 2, the byte ranges covered by signature 1 become the version-1 part, we sign the separated version-2 byte ranges, and we put the second signature into the signature-2 area; and so on for signature versions 3 and 4 — we put those signatures in sequentially. Next, please.
M
So if we want to support the hybrid, one-certificate-one-signature story — a composite signature with a single certificate for PQC — it's very tough. But it shouldn't be that much of a problem for the separate-signature story, since we have already dealt with the SHA-1 to SHA-2 migration, and that should work for the SHA-2 to PQC story as well.
M
So it could be used for something like RSA signature 1, then RSA signature 2, then a PQC signature, in sequential versions. Anyway, that is the end of my slides.
D
Okay, to start off: I thought you were suggesting basically having a standard way to have an optional pre-hash, at least for Dilithium. I don't know if that is strictly required.
D
Dilithium works, in particular, by hashing the message and then appending some data to complete the hash, and then just using the hash. What you could do is have your off-HSM processor compute the hash and give the HSM the hash state, and then your HSM is able to compute everything without hashing the full message. Would that be a viable alternative? It would be completely compatible with standard Dilithium implementations.
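[Editorial note: a toy sketch of the hash-state handoff just proposed. All names are hypothetical, and passing the Python hash object simulates transferring a serialized hash state across an HSM API; the host absorbs the large message, and the "HSM" continues the same state with secret-dependent data and finalizes, never seeing the full message.]

```python
import hashlib

SECRET = b"hsm-only-suffix"   # key-dependent data the HSM appends

def host_stream(chunks):
    """Outside the HSM: absorb the large message into a hash state."""
    h = hashlib.sha256()
    for c in chunks:
        h.update(c)
    return h  # in a real protocol, a serialized state would cross the API

def hsm_finalize(state) -> bytes:
    """Inside the HSM: continue the same hash with secret-dependent
    data and finalize, without ever handling the full message."""
    h = state.copy()
    h.update(SECRET)
    return h.digest()

msg = b"x" * (1 << 20)  # simulated 1 MiB message
state = host_stream(msg[i:i + 4096] for i in range(0, len(msg), 4096))
out = hsm_finalize(state)

# The split computation matches hashing (message || secret) in one pass
assert out == hashlib.sha256(msg + SECRET).digest()
```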
M
It might be possible to solve the situation that way, but in that case the implementation of the HSM becomes much more complex, and for that I think we need consensus. Also, the data controller would then only need some public value — a salt or something that doesn't have to be secret.
M
That might be okay, but for that we had better have consensus on the data flow. So I'm thinking we may need to reach some consensus on what we should do for these large data sets.
A
Yeah, I think Scott is just proposing an alternate API for passing the hash state, so that the HSM can pick up where the outside left off.
M
Well, even for that alternative API, we had better have consensus about the interface of the API, so that interoperability problems don't happen.
A
Okay. While people are thinking, I want to highlight: please go to the blue sheet and make sure you've signed in — we only have 20 people there, and we have more than 20 people in the Webex.
A
Thank you — I'm not watching my mail client right now, I'm pushing slides forward and such. Okay, so that was the signature approach. Is there anything we want to do about composite key management? Max, I think you had some things in your slides about that.
A
…a shared secret that is produced by a traditional and a…
G
That is something for which we have our own solution, but we are open to looking at a standardized one — let's put it this way.
E
In what I proposed, I included all three — key establishment, encryption, and signatures — in hybrid and dual modes. I personally don't feel comfortable speaking to key exchange and encryption, which is why my talk and my work focus on signatures, but I do think it's work that this working group needs to look at.
J
Does this work require, or would it benefit from, revising 5280 — which I increasingly think has gotten long in the tooth in a lot of places — or pushing it to Internet Standard, or something like that?
B
I think I agree that the charter text would be usefully improved if it were more clear — to make that explicit.
A
We could do that on the mailing list, though, without using valuable face time to make that point. As long as this is the scope that people are interested in — I think we've come a really long way in the time we've put into this today.
A
Hybrid
key
exchange
and
dual
signatures
is
the
point
of
this
text
I
believe
and
then
what
protocols
it
applies
to.
K
And in here we have, roughly, the terms "hybrid key exchange" and "dual signatures" — but we will make the formal definitions for these terms later, right? It's just a concept for now.
A
We can certainly define these terms in the charter as well.
B
And X9 is extremely unlikely to get into the OID-assignment business. They're doing some hybrid stuff in X9F1, but I doubt they'll have OIDs; it's intended to be more along the lines of a transition to hybrid algorithms. So we should keep a close eye on that work, but I don't think it will impact our stuff too much.
A
Okay, so this came to consensus much faster than I expected, but it looks like there's interest in doing this work, and we have a charter discussion already started on the list. Let's bring that to closure; we will add the text for the guidelines that dkg brought to us, and then pass it to our area director for chartering.