From YouTube: WG KMS Bi-Weekly Meeting for 20220301
A
All right, hey everyone. This is the 8th KMS meeting, for March 1st, 2022. I think an issue I wanted to sort of talk about: maybe your learnings and stuff from some PoC work.
B
Yeah, so after the last call, what I started doing was making some of the changes on my branch. I got as far as creating a new API, the proto API for v2alpha1, adding the new UID field and then the KEK version, and I'm actually in the process of using the proto serializer so that we can use that for storing.
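The v2alpha1 shape being described could be sketched roughly as follows; the message and field names here are assumptions reconstructed from the discussion (a UID on the request, a key identifier for the KEK version on the response), not the merged definition:

```proto
syntax = "proto3";

// Hypothetical sketch of the v2alpha1 KMS proto API under discussion;
// names are illustrative only.
message EncryptRequest {
  bytes plaintext = 1;
  string uid = 2;  // the new UID field mentioned above
}

message EncryptResponse {
  bytes ciphertext = 1;
  string key_id = 2;  // identifies the KEK version used
}
```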
B
So I'll share that branch with the group, and then simultaneously I'm making changes on the Azure KMS provider. On the KMS provider side, I'm just trying to do a very basic version where I'm generating the key similar to how we're doing the DEK right now, and then trying to use it for a certain number of operations before regenerating the key in memory. I'm doing that just to mimic what we talked about, and right now, how I'm thinking is:
B
I generate the key, use it for a thousand encrypt operations, and then once I hit 1000, I basically regenerate the key. I think that part is where we would end up replacing it with some library like Tink, where there's a keyset, or we could add that in the reference implementation, depending on how we decide to go. But I'm working on that, and then I'm going to string those two together and test it out this week, and then I'll have the branches shared on the Slack thread. Then we can just go from there.
A
Okay, so just to make sure I understood: the Azure KMS changes are basically the starting point of a key hierarchy implementation within the KMS plugin itself. So presumably you're generating that in-memory key, encrypting it with the real KMS key, I guess, and then returning it as part of your response to the Kubernetes API. And in the next iteration of the proto API, it would be in the version field or something. Okay.
B
Yeah, and I think this is probably a good first step, so that we just validate this works. The second thing is obviously the migration. I'm not focusing on the storage migration part of it yet, but once we have this and we say "yes, this is the approach we want to take," then we can actually go and think about how we want to do the migration, and then we can make changes in my branch, or we can work on it together.
A
When you say migration, are you referring to the act of migrating from an older encryption format to a newer one, or are you talking about in-place rotation, or something else?
B
So actually, that's a good question: both of them. The first one: when I talk about migration, it's the storage migration, from the old format to how we're doing it the new way. So that's the first one, and then, once we have this in place, we also need to see what we want to do with rotation.
B
Is that something that we want to do without any user intervention? Because I think we talked about having another RPC request, from the KMS to the API server, maybe to say "hey, the keys are rotated, so if you have anything old, call me with all the old keys and then we can do the rotation now." I think that's the second piece of it.
A
Okay, so I guess on storage migration I would ask: since we're implementing a new API, a new proto API, wouldn't we be able to sort of cheat on that one and just say that you configure the KMS plugin twice?
A
Yeah, so you start off by having it as a read key, redeploy, turn it into a write key, redeploy, do storage migration, and everything will just be forced rewritten through the new version. So I'm thinking about a migration in the sense of, say, several releases from now all this stuff is stable and ready to go, and people are like, cool.
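The "configure it twice" flow maps onto the EncryptionConfiguration ordering, where the first listed provider encrypts new writes and every listed provider can decrypt reads; a sketch, with placeholder plugin names and socket paths:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources: ["secrets"]
    providers:
      # Second redeploy: the new plugin moved first, so it is the write key.
      - kms:
          name: new-kms
          endpoint: unix:///tmp/new-kms.sock   # placeholder path
      # The first redeploy had this entry first; keeping it listed means
      # old data stays readable until storage migration rewrites everything.
      - kms:
          name: old-kms
          endpoint: unix:///tmp/old-kms.sock   # placeholder path
```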
A
I'm going to upgrade from KMS v1beta1 to v2 stable, and that's just a one-time event. It doesn't have to be automated or beautiful; it just has to be possible and safe to do, and it needs to support rollback, which I think all the existing mechanisms basically do.
A
Yeah, so there will be some pain, one time, sure. But for new clusters you just don't worry about it, you just start with the new API, and for old clusters you have to do a one-time migration. Okay! Is there anything you wanted to share, your screen or anything like that? I did make you co-host, so you can do whatever you want, yeah.

B
No, I'm...
A
All right. Was there anything specific we want to continue talking about in regards to just the overall improvement effort? Because you're working on basically the new API and the reference implementation kind of vaguely together, right, because you're like "I'm going to glue them together and let's see if it actually makes sense and it works." So you're doing an exploration effort, which is appreciated and really helpful.
B
I think one thing is outstanding in performance. I mean, it was grouped under performance, but: the option to choose the encryption protocol. I think we talked very briefly about it, but that's a separate work stream, so I was wondering if that's something we still want to do. Do we default it, or do we allow the user to choose which one they want?
B
Yeah, and then here, if you scroll down: with the hierarchy, we are talking about cache warm-up, DEK generation and all those, but right above observability there is a section called "option to choose encryption protocol," and then a hardware config one. These two are points I think we haven't talked about, and I think we can just discuss them. If it's not something we want to do, that's fine, but I think we should.
B
But there are also two layers of rotation, right? One is basically the DEK, the key-encryption key you're generating in memory, which gets changed after every certain number of operations. The second rotation that can happen is of the one that you actually store in the external secret store. And at least from what I've seen in all the providers today, the KMS providers, when they get deployed, they are deployed with a certain KMS key entry and then a version.
B
And I think there is one problem with this. The thing is, today, when we update the encryption config and restart the API server, if something is wrong with the encryption config, then the API server basically fails to start; it says the keyset failed and all that. And I think that gets interesting with hot reload, if it's watching that file and automatically picks it up.
A
Yeah, so certainly I don't think we would want it to be possible for a running API server to immediately go into a failing state just because you have an error in your config. I could see arguments saying that the encryption configuration is important and sensitive, and you should honor it all the time, explicitly, because that's how you encrypt data at rest. But at the same time, I don't think anybody wants their API server to just stop working because they have a typo somewhere.
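One way to avoid that failure mode is to validate a reloaded config before swapping it in, and keep the last good one otherwise. A simplified Go sketch; the config type here is a stand-in for illustration, not the real EncryptionConfiguration:

```go
package main

import (
	"errors"
	"fmt"
)

// encryptionConfig is a simplified stand-in for the parsed
// EncryptionConfiguration file (hypothetical, for illustration only).
type encryptionConfig struct {
	Providers []string // ordered list; first provider is used for writes
}

// validate rejects configs that could never serve traffic.
func validate(c *encryptionConfig) error {
	if c == nil || len(c.Providers) == 0 {
		return errors.New("encryption config has no providers")
	}
	return nil
}

// reload is the validate-before-swap step: the new config is applied
// only if it passes validation; otherwise the currently active config
// is kept, so a typo in the file cannot take a running server down.
func reload(active, next *encryptionConfig) (*encryptionConfig, error) {
	if err := validate(next); err != nil {
		return active, fmt.Errorf("keeping previous config: %w", err)
	}
	return next, nil
}

func main() {
	active, err := reload(nil, &encryptionConfig{Providers: []string{"kms"}})
	if err != nil {
		panic(err)
	}
	// A broken edit to the file arrives; the old config stays active.
	next, err := reload(active, &encryptionConfig{})
	fmt.Println(next == active, err != nil) // prints: true true
}
```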
A
That's the full process across your entire infrastructure. That's really painful; just saying it out loud makes me sad. So with hot reload, it would be: update the file across the API servers to add the new keys as read keys.
A
You don't have to wait for the writes to be observed yet. Because you know that it was a read key everywhere, you can then drop all the old read keys. You also don't have to wait for that to be observed, ish, as long as you're willing to say you're not going to do these rotations quickly back to back. And you also need a barrier at the end of that, to say that you observed that state.
B
I think the file changes still might be required, but again, an RPC would probably be a good way to do it dynamically. It's almost like, when the new provider pod starts up, it just makes an RPC call to say "hey, I am available, and this is my key," and then the API server goes through the encryption config and says, out of these providers, which is the first one? So it's like: oh, this one is the one that I need to encrypt everything with.
B
So then, when this dynamic registration happens, the API server basically has a map of all the entries, so when it has to decrypt, it loops through all of them and decrypts. I think the key thing is we need to decide: how does it decide which one to call for encrypt? Is it on a rank basis? How does that happen?
B
That's one area that we need to figure out. And then, based on the health check that it's doing: if the health check fails over a period of time (we can have a definition of what that period is), then the API server can basically remove that plugin from the map, so it doesn't have to call it going forward, because there is no mechanism where a dying KMS provider will say "deregister me."
A
So I'm not familiar with CSI enough. Like, what's the mechanism for knowing that a trusted actor is the one that's saying "I can be a CSI driver, just trust me, I'm here"? Is it just by virtue of the fact that it can create a Unix domain socket?
B
So it's a mixture. There is this component in CSI called the node driver registrar; it's a sidecar container from the CSI team. So when you actually implement a CSI driver, you add that sidecar in your driver, and when your driver pod starts up, the node driver registrar talks to your CSI driver to get metadata on what the driver name is and what version it's running. And then there is also a corresponding CSIDriver
B
Kubernetes object. So when the node driver registrar gets this information, there is a well-known socket path on the kubelet. There is a file system path, var/lib/kubelet plugins registry, something like that. So you mount that volume, and then in that particular volume the node driver registrar adds information about this driver. So I think the file system is the trust: who mounts it and who adds to it. And once that is added, the kubelet basically has a file system watcher, so it says: "hey, I see this new driver."
B
That's been added, which means, if anyone references this driver, I know how to make an RPC call to this particular driver. And then, in terms of verification, I think it also goes and verifies that there is a CSIDriver object with this particular name, and then it says "whatever information I have is also valid based on the Kubernetes object," and then it says everything is good and it just trusts it.
A
You know, certainly you can be running an API server on a kubelet; it's a control plane kubelet. So certainly that seems fine, and that's actually the case that I'm trying to reason through, which is kubeadm and those styles of environments. It's just a much easier problem when you have an orchestrator above you helping to do the rollouts.
B
I think we start the health check after 40 seconds, so after 40 seconds, if the KMS is not running, the API server keeps restarting. So if we do decide we want to do dynamic registration, then the KMS is no longer system critical; it's just more like an add-on that gets injected into the path of API server requests. But is there a way we can do this with RBAC permissions, so we can scope it down to who has permission to register dynamically?
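No such registration resource exists today; purely as a sketch of the RBAC scoping suggested here, a hypothetical resource could be guarded like this (the API group and resource names are invented for illustration):

```yaml
# Hypothetical: "kmsproviders" is not a real Kubernetes resource; this
# only sketches how RBAC could scope who may register a plugin dynamically.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kms-provider-registrar
rules:
  - apiGroups: ["encryption.k8s.io"]   # hypothetical group
    resources: ["kmsproviders"]        # hypothetical resource
    verbs: ["create", "update"]
```

Bound only to the plugin's service account, this would let the API server reject registration RPCs from anything else.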
A
Yeah, I guess the failure mode of CSI is much more benign, and, I mean, it's still really bad: your apps can stop working, which is kind of the whole point, so depending on who you ask, that's super bad. I guess I would be kind of wary, right? Like, if you register a KMS provider and it's working fine, but then for some reason it stops working.
C
My thought is that that's like an uber permission, right? If you can get on the host, you can do anything. It's almost like the static pod, where it's a very highly privileged component, and if it dies, like, if you get hacked, then the whole cluster... you would be able to hack anything else.
A
Right, right. Well, I mean, yeah, so that makes the problem even worse. But today I think we hand-wave this and say: well, you'd better make sure your API server configuration is exactly the same across all your API servers, because if it's not, you're just in some undefined state; nothing good will come out of it. Which, I guess, is fine. I'm just trying to think if we can do anything here to make this more practical.
C
Also, how is this different from when you upgrade a Kubernetes cluster? When you upgrade a Kubernetes cluster, the control plane, like the master nodes, are upgrading at different times, and a lot of files on the file system get updated, right? Like the...
A
But maybe that's just sort of the nature of the kubeadm workflow: maybe you would do an upgrade from version A to still version A, but somehow pass in new encryption configuration, so it just kind of changes the one file on disk and nothing else actually changes. I don't actually know how kubeadm upgrades exactly work.
C
Well, another thing is, I mean, again, today the manual process is that we need to make sure both of them are there at the same time for a while, until we know the old one is safely removable. So if we're doing this in an automated way, that is still very much the case, right? I think, as an operator, knowing the old one and the new one are running in parallel means I can easily access the old stuff.
A
You need all of those steps to know that you can safely remove things, and you need a guarantee that the storage migration cannot be confused by any key hierarchy mechanism going on. Now, that one I think is pretty easy: if it's literally separate levels, that's very easily detectable within the way stuff is implemented today.
A
Yeah, so I'm trying to remember. I vaguely remember the API server identity KEP, where each API server had a unique identifier. Did that ever go anywhere? Do we have the notion of an API server in the Kubernetes API that represents each instance, like a place where one could actually put the status of a particular API server?
A
First of all, do we even have a way to query that? Because it's not part of the Kubernetes surface that you can normally address, right? For example, when you do kubectl get nodes on the cloud providers, usually the nodes don't contain the API servers; they only contain the workload nodes, on purpose, and I don't expect that to change. So how...
A
How do you know how many API servers you have, and what state they've reached? I guess, in a sense...
A
I'm trying to be very sympathetic to the individual that has to orchestrate this. If we're willing to accept that it's a file-based API, and thus will always require some level of fancy, per-cluster-specific orchestration based on however you manage files on disks for your API servers; even if you're willing to accept that, and it has hot reloading and all that, the rest of it doesn't necessarily have to be painful and awful. There could be some way to observe this.
B
Yeah, I think, I mean, you talked about kubeadm, how on a self-hosted cluster... and I think the other problem is that different cloud providers have to implement this in a different way. Each one who supports KMS has to implement rotation in their own way, and it's also not deterministic; you have both the providers running simultaneously.
B
But you need to make a decision on when you think all the rotation is complete. So, periodically, you go and rotate every namespace, and then, when you're completely certain, based on some data that you are maintaining, that you have rotated the secrets in every namespace, then you go and update the encryption config. But yeah, there is no Kubernetes way which says "hey, the rotation is done."
A
Right, right, so two things. If you can validate that a particular level has been observed by all of your API servers, so you have a store in your cloud somewhere that says "these are the API servers that belong to this cluster," and you can validate that they reached a particular config, then I believe the storage migrator and API discovery, with the API storage version hash thingy, have enough information in there.
A
Yeah, basically, I'm trying to get this problem down to: the orchestrator is responsible for updating the file, and we're responsible for making it clear, through some nice Kubernetes mechanism, what the state of the system is after that's been done. That way, the rest of the pieces of the system can just be generically automated across all the cloud providers and kubeadm and everybody else; you just have to build the one mechanism for your infrastructure.
C
I think that ensures all the API servers are green, have the same state, but you still don't know when all the rotations are done, right?
A
No, that's what I'm talking about with the storage version hash and the storage migrator. That's why that was all built: to be able to assert that, when you asked for a storage migration for a particular resource at a particular time, when the status of that request says it's complete, okay, it actually happened.
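That assertion is expressed through the kube-storage-version-migrator's API; roughly, a migration request for secrets looks like the following (names taken from that project; treat the exact schema as approximate):

```yaml
apiVersion: migration.k8s.io/v1alpha1
kind: StorageVersionMigration
metadata:
  name: secrets-rewrite
spec:
  resource:
    group: ""        # core API group
    version: v1
    resource: secrets
# status.conditions reports success once every stored object
# has been rewritten at the current storage version.
```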
A
Like the storage version migrator. Yeah, it's still an optional component that you don't have to use; in Kubernetes, everybody gets to do whatever they want.
C
Very often during an upgrade, after the upgrade you test a new API and you're like, "wait, this doesn't work," and it's because something got stuck in the middle, and you have to go back and restart the API server. Having something like this is like: okay, you just know it will work, right? It's kind of interesting. Are you sure Kubernetes is in production?
A
Probably it's not that hard to do, as long as you're careful with your edge cases, and yeah, I think that's probably fine. We have talked about it; who's got opinions on this?
A
I'm a little bit wary of making this configurable, because it pins our choices. It means that, say we tell you you can use AES-CBC or AES-GCM: then basically how the internals of the envelope encryption work today, which are an implementation detail of the system, are effectively codified. You can't change them.
A
Right
today,
there's
no
way
to
influence
the
fact
that
cbc
is
used
for
the
act
of
encrypting.
A
single
set
of
bytes,
representing
one
object
at
one
instance
with
one
particular
data
encryption.
A
Like, if you truly care about this, I'm not sure that just being able to change this particular thing is what you want. The whole idea of an in-process data encryption key generated by a Go system is a little wonky, right? Wouldn't you actually just want your KMS to directly encrypt the data itself, if you truly care?
C
I think it would give them a way to... I mean, whatever we set as the default is one thing, which can be something that is debatable within the community. And then the other thing is: if somebody wants something that is not the default, how would they do it? What's our guidance here?
A
Well, what I'm saying is that this configuration is really basically saying that we want to do envelope encryption using this data encryption key scheme, and we're going to let you tweak a tiny little bit of it. I'm mostly saying I'm not sure that satisfies anyone that would actually care. Like, if I truly wanted my PKCS#11, hardware-based KMS to be the one doing the encryption, I would just want you to pass the actual bytes of the object, the Kubernetes object, to it.
A
Let
it
encrypt
it
hand
you
back
the
encrypted
blob
and
that's
how
the
encryption
works.
There's
no
in-memory
keys!
There's
none
of
this
gobbledygook
going
on
right,
like
my
kms,
is
actually
doing
all
the
encryption
right
and
you're
you're
not
involved
in
this
arbitrary
layering
and
stuff.
Like
that's
what
I
would
actually
want.
A
Like, if you're saying that there are people that have opinions and want to change this: well, first of all, I don't think I've actually seen them. I haven't had anyone ask me for this config. I've had a lot of random asks, but not this one. I don't know if this solves a problem that people actually have.
A
If
it's
a
yes
gcm
and
it's
that's
what
we
just
say
it
is,
and
we
like,
I
do
think
we
should
encode
that
decision
into
the
storage
so
that
it's
observable
in
the
storage.
If
we
decide
to
change
our
mind,
because
today
it's
not
right,
it's
just
implicit.
So
I
think
that's
wrong.
I
think
it
should
be
explicit
and
part
of
the
schema
right
just
as
a
thing
right,
but
I'm
less
convinced
that
we
should
give
people
a
knob
that
I'm
not
really
sure
what
it
changes.
D
You wouldn't notice that, so it could actually be decrypted, whether it makes sense or not; obviously it would be better if it wouldn't make sense, but you could kind of forge the ciphertext, and this is why a lot of people have an issue with that. So yes, with GCM that's not possible. I don't think that anyone would have big issues with AES-GCM; maybe someone would come up with wishes for the nonce details, so that is possible.
D
Maybe they say they want a 128-bit key or 256, so you want to, I don't know, play with the hash, right. But I think CBC was a particular thing. And for countries like Russia and China, I would just assume that they are given access to whatever they have. I don't know.
D
I once read a blog post years ago where I heard that the Russian state, so I don't know if it's still true, but I read that they just have a key for it, right? So they create an additional key, or they have an admin user account with which they can just access it. I don't think that they would have weak encryption that they can kind of brute force, because even on weaker ciphers that uses up a lot of resources.
A
So
I
think
the
last
comment
there
was
basically
for
areas
where
the
government
requires
access
to
this.
You
just
you,
don't
really
give
them
access
to
the
encryption.
You
just
give
them
admin
access
to
the
kubernetes
api
and
if
they
wanted,
they
do
it
like
they.
Just
they
just
skip
all
the
encryption
right.
You
just
ask
for
the
data
if
they
want
it,
which
is
fine,
there's
no
big
deal
yeah.
I
guess
yeah
like
so.
I
remember
you
know
alex
opening
this
pr.
A
You
know
this
was
two
years
ago
right
and
I
was
like
yeah,
that's
cool,
let's,
let's,
if
you
don't
like
it,
let's
just
make
a
gcm
right.
That
was
my
response.
I
was
like.
I
don't
want
to
config
like
there's,
actually
no
reason
to
use
cbc.
This
is
like
the
wrong
choice.
It
should
just
have
been
gcm
like
there
isn't
actually
any
meaningful
reason,
but
if
there's
no
meaningful
reason
to
ever
use
cbc,
then
just
just
use
gcm
and
you're
done
with
it.
A
Secretbox basically fails the canonical check of: can you use it in the government? No, you can't. So I don't know what the point of offering it would be.
D
Well, the advantage of GCM is that when you do CBC in production, what you would actually do is not only use it for encrypting; you also need to hash the ciphertext and send the hash alongside, so you can verify that nothing was changed. And you can do a lot of things wrong there. People do things like, for example, hashing the plaintext instead of the ciphertext, and stuff like this. People can quickly shoot themselves in the foot when they try to do it.
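Go's standard library makes the contrast concrete: AES-GCM authenticates as it encrypts, so tampering is detected by Open itself, with no separate MAC step to get wrong. A small sketch:

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// newGCM builds an AES-256-GCM AEAD with a random key.
func newGCM() (cipher.AEAD, error) {
	key := make([]byte, 32)
	if _, err := rand.Read(key); err != nil {
		return nil, err
	}
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	return cipher.NewGCM(block)
}

// encrypt prepends the random nonce to the sealed ciphertext.
func encrypt(aead cipher.AEAD, plaintext []byte) ([]byte, error) {
	nonce := make([]byte, aead.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	return aead.Seal(nonce, nonce, plaintext, nil), nil
}

// decrypt verifies the built-in authentication tag while decrypting;
// any modified bit in the ciphertext makes Open return an error.
func decrypt(aead cipher.AEAD, ct []byte) ([]byte, error) {
	n := aead.NonceSize()
	return aead.Open(nil, ct[:n], ct[n:], nil)
}

func main() {
	aead, err := newGCM()
	if err != nil {
		panic(err)
	}
	ct, err := encrypt(aead, []byte("secret"))
	if err != nil {
		panic(err)
	}
	pt, err := decrypt(aead, ct)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(pt)) // prints: secret

	ct[len(ct)-1] ^= 1 // flip one ciphertext bit
	_, err = decrypt(aead, ct)
	fmt.Println("tamper detected:", err != nil) // prints: tamper detected: true
}
```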
D
One thing, though: definitely we should be able to quickly change the cipher if, I don't know, someone is able to find a weakness in it and we need to exchange it, right? So it's definitely worth having it in a place where it wouldn't take us a week or something to change it, but more like in the frame of one or two days.
D
So we don't need to invest time in making it configurable until people ask for it, but it should be easily exchangeable in code, in a patch version. And, in the last two minutes, with regards to the reference implementation: I started to look into PKCS#11, which does not have very broad support in Go, but there's actually one wrapper around some C libraries.
A
The wrapper that you found: is that the one that Let's Encrypt uses?
A
Yes? Okay, I just wanted to make sure, because that's the one that I know of. I mean, if it's good enough for them, it's good enough for us.
D
Oh yeah, yeah. There are several bigger projects who use this library, so if they trust it, maybe we can trust it too. But yeah, it doesn't feel like first-class support for the Go language yet.
A
Yeah, I guess somewhere we should record the need to review the wrapper, like, just actually look at it, see what it does, and make sure it's sound C code slash Go code.
A
Yeah, yeah, which is okay, I guess. For the reference implementation, we would just want to make sure that you can use the library parts of it without needing cgo. It needs to be separated out with build tags and stuff, so that it's possible to use the library without cgo, but we should be able to do that.
D
Okay, so I'll just post something in the chat once I have something to show. I mean, I pushed it to the repository, so you can see something already right now, but I don't see any benefit in it right now. I'll just post it in the chat and present something in the next KMS meeting.

A
Awesome, that's good!