From YouTube: Secrets Store CSI Community Meeting - 2021-04-01
A: Hello everyone, thank you for joining this CSI Secrets Store call. Today is April Fools' Day — April 1st, 2021. For those who are new or just joining, remember that this call falls under the CNCF code of conduct. Let's go ahead and get into our agenda for today.
A: Again, if you are joining, go ahead and add your name as an attendee, along with the organization you are affiliated with.
B: Yeah, so the release is planned for today — v0.0.21. There are three outstanding PRs; I think two of them have gotten merged — they were reviewed this morning — so there's one last PR to bump the version to 0.0.21. Once we merge that, we'll go ahead with the process, so hopefully we'll have a release tagged in the next few hours with all the optimizations that have gone in.
B: This is actually, I think, one of the biggest releases, because we've had a bunch of PRs go in, both in terms of features and in terms of optimization. We've also added a load test doc: it covers the load testing environment, the kinds of tests that we ran, and the memory limits that we saw. I've also added a small snippet in the doc for users to understand how memory consumption increases.
C: Yeah, so this kind of goes into the next topic. As I was looking at how the driver might do the writing of files instead of the plugins — there are some changes to the proto going into the next release that allow us to start exploring moving the writing of files to the driver instead of the provider.
C: But one of the things that Kubernetes supports is: if the secret changes and a key is removed from the secret, it will delete that file from disk. I'm not quite sure how we would want to support that kind of flow, because right now the plugins return all of the files and the provider just writes them — or, with what's going to be released next, the driver writes those files.
C: So until an unmount, nothing would delete a file. Actually, thinking about this, I'm not sure the GCP plugin handles this case of deleting files either — I'm not sure if any of the plugins do. It was just something I noticed: when I was copying from the atomic writer over to ours, I deleted a lot of code, and a lot of it was logic around these sorts of mutations.
C: I thought I would at least point that out and see if anyone —
B: — had any thoughts? Yeah, I mean, I think you're right: none of the plugins actually delete today. When we get a call from the driver to add these files, we basically loop through all of them and write those files, but we don't go and clean up anything that's already there on the filesystem. I think one possible approach — so right now we just write only those files.
B: We just write those — so basically we could write and then delete all the extra ones, making sure that only the latest files are there. It's possible that over time the user updates the SecretProviderClass, right: they have an SPC deployed, they have a pod deployed, and then they go and modify the SPC to remove some of the objects that they no longer need.
B: Then I think at that point they would just expect the driver to handle cleaning it up, so that the pod no longer has access to those files, either via the Kubernetes secret or from the files — because for the Kubernetes secret we do handle that case: we try to sync only the keys that are defined in the SPC.
C: Yeah — we probably shouldn't put a v1 label on the driver before it at least has that kind of feature parity with normal Kubernetes secrets.
C: The next topic I wanted to bring up is atomic file updates. Another big chunk of code that I deleted while doing PR 481 was this: for Kubernetes secrets, the kubelet writes to a separate folder and then, after all the writes are done, it does a symlink swap.
C: It symlinks the folder that is actually visible to the pod to the folder with the secrets in it, and since a symlink swap is, I guess, an atomic operation, this makes the view of all the secrets change at once.
C: Instead of, you know, seeing changes as each file is written — or, if something is reading a file, the driver isn't writing to it while another process has it open.
C: I don't think the PR added anything new — like a new problem — but it was kind of, "hey, there's this difference between Kubernetes secrets behavior and what our driver currently does," and whether or not we thought that was worth creating a bug for and tracking.
C: But yeah, I wanted to bring that up and see if anyone else had thoughts on that.
B: Maybe to understand the motivation we can ask on the Slack chat — maybe we can ping SIG Storage to see what the reason was behind adding the atomic writer and what the benefits of using it are. Then, if it's worth using in our code — so that it handles concurrent updates and is more secure — maybe we can switch to that.
D: The Kubernetes issue you linked does talk about refresh, right? So is this how Kubernetes refreshes the secret and config map and projects it in? Okay.
D: Okay, maybe that's why.
C: Yeah, they write to a separate directory and then do the symlink after all the writes are done — for, I think, config maps and secrets. But I guess I'm curious — it's interesting that we haven't run into a problem without doing this — so I'm not sure how much of a problem it is, but it seems worth at least addressing the difference between the behaviors there.
C: Yeah, I'll at least create two bugs then, for deleting files and for atomic updates, and then we can talk with the SIG Storage Slack, right?
D: Thanks. Okay, I'm not sure who the note taker is today — can someone update the previous items? I don't see any updates yet.
A: Okay — and Rita, discuss the comments raised on the disconnection scenario.
D: Yeah, I think there were a lot of people who commented on the doc. Let me find the link that I had shared.
D: I think one big item that I was kind of concerned about is the fact that if we make mounting an optional thing, we could end up in a situation where the source of truth may be different for the Kubernetes secret versus the mounted file. Because so far we've been saying the mounted file is the source of truth, and the Kubernetes secret content is just a mirror of that, right?
D: That has always been the stance, but with this potential change, we would basically now fall back to: okay, if mounting is optional, then we assume the Kubernetes secret data is the source of truth, and everybody will basically use that instead of the mounted file. So I just wanted to see if other people have more thoughts on this.
B: Could they just read the secret directly instead of mounting it through the CSI volume? So they do have the CSI volume, so that it forces the CSI driver to create the secret, and then they have another volume which basically mounts the Kubernetes secret, and they can read from the Kubernetes secret all the time. I think my concern there was the no-volume-mount case, right — like, for the first pod:
B: Yes, the CSI volume is being used, because the content needs to be mirrored as a Kubernetes secret; but if they scale that deployment to more replicas, then the CSI mount is just a no-op call and subsequently they're just using the secret. And the reasoning behind adding the Kubernetes secret as the source of truth is that it's fallback logic only in the case of the disconnected scenario. So only if the failure policy field is set does the source of truth become the Kubernetes secret; in all other cases the mounted files would still remain the source.
B: The Kubernetes secret is the source of truth there — but I think I understand what you're saying: it can get confusing for users, because there's a difference in behavior based just on a toggle. For that reason I also added the other proposal that we were discussing. So I just updated proposal two, which was using the dummy pod — or, rather than using a dummy pod...
B: I added it as a dummy deployment. They use a dummy deployment with the CSI volume to create the Kubernetes secret when the cluster has network access, and after that, none of their application pods have a CSI volume — they just mount the Kubernetes secret as a volume. That's it; they have no dependency on the CSI driver after the dummy deployment is deployed.
B: So in that way the Kubernetes secret will remain there all the time, and they can have a single way of doing things, which is always reading from the secret volume. One thing with that approach is also that we don't need all these TTL fields: we can tie the secret to the dummy deployment as an owner. And the reason for using a deployment is that, with the new release, we're going to be using the ReplicaSet as the owner reference.
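Concretely, tying the synced secret to the dummy deployment would mean setting an `ownerReferences` entry on the Secret that points at the deployment's ReplicaSet; Kubernetes garbage collection then deletes the Secret only when that owner is deleted, and scaling the deployment down keeps the ReplicaSet, so the Secret survives. A rough sketch with made-up object names and a placeholder UID:

```python
import json

# Sketch of the synced Secret's manifest. The names and the UID are
# placeholders, not anything the driver actually generates.
synced_secret = {
    "apiVersion": "v1",
    "kind": "Secret",
    "metadata": {
        "name": "app-synced-secret",
        "namespace": "default",
        "ownerReferences": [{
            "apiVersion": "apps/v1",
            "kind": "ReplicaSet",  # survives scale-to-zero of the deployment
            "name": "dummy-deployment-5d4f8b77c9",
            "uid": "00000000-0000-0000-0000-000000000000",
        }],
    },
    "type": "Opaque",
    "stringData": {"db-password": "hunter2"},
}
print(json.dumps(synced_secret["metadata"]["ownerReferences"][0], indent=2))
```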
B: So that means that even during the disconnected state, if a pod dies it will get recreated; but since we are using the ReplicaSet as the owner, the secret is still intact. The user can control the lifetime of the secret by controlling the lifetime of their dummy deployment. So we are not changing any default behavior by supporting a new TTL, and we don't need custom logic to go clean it up, apart from what we do today.
B: And I think one concern that had come up when we discussed this before was obviously having that deployment for each SecretProviderClass, because it could be a one-to-one mapping. There is the potential to support a one-to-n mapping — one dummy deployment for n SecretProviderClasses — so within a namespace, if they have multiple SecretProviderClasses, they can just use a single dummy deployment with all the SecretProviderClasses as different volumes.
B: No — it's no longer a no-op call; it's actually something that's doing the mount and succeeding, and then creating the Kubernetes secret for the dummy deployment. Once network access is back, the CSI driver will go and rotate all the contents for the dummy deployment and also the Kubernetes secret, and because Kubernetes automatically updates secret volumes based on updates to the Kubernetes secret, they still continue to have the rotation feature in the cluster.
C: I had only read it initially, a while back — yeah, I haven't thought about the disconnected scenarios in the last two weeks or so.
B: Right, so typically the pattern that we see is that for every SecretProviderClass there's a one-to-one mapping between the Kubernetes secret object and the SecretProviderClass. They want to mirror everything in the SecretProviderClass as a Kubernetes secret, so that it's available for them during the disconnect period. We could recommend a one-to-one mapping between the dummy deployment and the SecretProviderClass, but if they have lots of SecretProviderClasses, that can become too many dummy deployments.
B: Okay — and the reason for using a deployment instead of a static pod is just to make sure that they don't accidentally delete the Kubernetes secret by deleting their static pod during the disconnected period. We don't want to change the logic for the owner reference, so we'll still add the ReplicaSet as the owner. If they scale it up, or scale it down to zero, it still won't affect us, because the ReplicaSet is the owner; only when the deployment is deleted will the Kubernetes secret also be deleted.
D: I mean, I like this second proposal approach, because it requires fewer changes, I think, and it's more consistent with the current pattern. But I'd add another con to the list: now the user has to run and maintain this dummy workload that has a resource footprint on the cluster, which is not super great if they have concerns about additional things that take up resources.
B: Yeah, I think that's a valid concern, but maybe we need some feedback on this from users as well — to try to understand if they're okay with having one dummy deployment for the entire namespace. It can be a very small pod which is not really doing anything; in terms of maintenance, I mean — yeah, they do have to deploy this and maintain it over time.
D: Yeah, that definitely looks — yeah, curious what users will say about all this.
A: Yeah, and I think users would be selective about which secrets they want to maintain when disconnected — it wouldn't be all the secrets in the cluster; they would just elect maybe a few that they care about, possibly. Do you think this is a kind of global config a customer would use this for?
B: So typically, whatever they have in the SecretProviderClass, the assumption is that they need it even during the disconnect period. Today, everything in the SecretProviderClass parameters field is added to the mount, and then they can provide a subset of that in the secret objects to be synced as a Kubernetes secret.
B: So they could have four files in the mount but only sync two of those as a Kubernetes secret. If during this period they think they would require all of those fields, then they can just add all four parameters to the secret objects and have all of them synced; but if they still only need a subset, and they're fine with the rest of them showing up after network access is available again, then they can just have a few fields available there. I think it makes it easier for the user to understand, because when they deploy their application pod, they don't have to have a CSI volume which is going to be empty.
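The four-mounted-two-synced subset described here can be sketched as a SecretProviderClass shape. This loosely follows the driver's CRD (`secretObjects` with `secretName` and `data` entries), but the object names are invented and the `parameters` field is simplified — real providers encode their object lists differently:

```python
# Illustrative SecretProviderClass: four objects become files in the
# CSI mount, but only two are mirrored to a Kubernetes Secret.
spc = {
    "apiVersion": "secrets-store.csi.x-k8s.io/v1alpha1",
    "kind": "SecretProviderClass",
    "metadata": {"name": "app-secrets"},
    "spec": {
        "provider": "vault",
        # All four objects show up as files in the CSI mount.
        "parameters": {
            "objects": ["db-password", "api-key", "tls-cert", "tls-key"],
        },
        # Only two are mirrored, so only these two would be readable
        # from the Kubernetes Secret while disconnected.
        "secretObjects": [{
            "secretName": "app-synced-secret",
            "type": "Opaque",
            "data": [
                {"objectName": "db-password", "key": "db-password"},
                {"objectName": "api-key", "key": "api-key"},
            ],
        }],
    },
}
mounted = spc["spec"]["parameters"]["objects"]
synced = [d["objectName"] for d in spc["spec"]["secretObjects"][0]["data"]]
print(len(mounted), "mounted,", len(synced), "synced")  # 4 mounted, 2 synced
```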
A: Okay — so it sounds like maybe we can talk to a couple of customers who are really looking to have this feature and kind of have them round it out for us, so to speak. Yeah, okay, all right.
D: And for all the other folks on the call: please take a look at the different approaches, and please raise your hand if you think it's a bad idea or if there's a better way of doing this.
A: Yeah, let's see — Shin just replied. So, did we answer this: "How do you make sure that the files are no longer required?" Did we answer that for you, Shin?
B: Yeah, I think you briefly touched on it — looking at the mount response, right. So basically, looking at the mount response to see the files that are returned. And as I'm writing the notes, one thing that I think we should also consider is that we cannot use the mount files as the source of truth until we make that the default option, because until all the providers migrate to it, it's possible that the mount files returned are empty, and we don't want to accidentally go delete anything that's already there.
A: So hopefully we will see you then, and in the meantime we'll try to get some customer feedback on this disconnection scenario. So again, if you have customers that may be interested, let us know.