From YouTube: Secrets Store CSI Community Meeting - 2021-12-09
A: Hello, and welcome to today's CSI Secrets Store community call. It is December 9, 2021. Just the usual info we give out: this call is governed by the CNCF, so all of the code of conduct rules apply, if you're not familiar with that.
A: Please check out the code of conduct markdown in the repo. And with that, let's go ahead and get into it. Again, if you're new, go ahead and add your name to the attendees list so that we know you're here. What we like to do is a quick intro for any new community people showing up, and new to me is Drew. So, Drew, if you want to introduce yourself to the community, let us know how you feel about the project and what your motivation is.
B: No problem. I'm Drew, and I work for a consulting firm, so, without naming anything, because I'm not here officially by any means whatsoever, I'm interested in this project. We're definitely talking about getting secrets, or handling secrets, a little bit better. In particular we use AKS, but others are out there too, of course, AWS and whatever else, and integrating those with Key Vault, or whatever vault someone might be using, is fantastic. I'm trying to get away from using environment variables and stuff like that.
A: Welcome to the community, Drew, and thanks for that. All right, with that, let's go ahead and get into it. I'll start off with an announcement: the next bi-weekly meeting would land on the 23rd of December. I think that's probably going to be in the middle of the holiday season for most people, so I vote that we just suspend it for that week.

That is Christmas week, and I know that is actually a holiday for us here at Microsoft, and probably at other corporations as well. So, if everyone's fine with that, we'll resume in the new year on January 6th; that'll be the Thursday two weeks following. If that's okay, we'll go ahead and update everyone with that. Any objections? Is everyone cool with that?
A: Yes? All right, yeah, I just wanted to ask. All right, let's get into the agenda here. I've got one that's kicking things off. I think this one's kind of been sporadic, but we're actually getting a lot of signal, and I'd love to hear from some of the other maintainers on other clouds.
A
But
we
are
starting
to
get
a
lot
of
discussion
on
the
availability
of
having
the
csi
secret
store
experience
on
kind
of
like
serverless
platforms.
So
you
know
in
microsoft
we
have
our
azure
container
instances
and
I
believe
in
my
quick
research,
the
equivalence
on
gcpa
would
be
like
cloud
run
and
aws
would
be
fargate,
and
so
I
want
to
kind
of
open
this
up
and
and
see
if,
if
anyone
else
on
those
other
cloud
platforms
are
getting
a
similar,
similar
ask
and
then
at
a
high
level.
A
How
do
you
think
we
can
make
something?
I'm
assuming
this
wouldn't
be
exactly
the
same,
but
if
the
experience
is
the
same,
I
think
that's
what
people
are
looking
for,
which
is
hey.
Can
I
bring
in
some
volume
that
is
you
know,
ephemeral
or
or
some
way
for
us
to
make
that
experience
as
if
it's
ephemeral
if
a
container
goes
away,
and
you
know
that
that's
pretty
much
as
much
as
I
really
have
thought
about
it.
So
any
any
comments
or
any
questions
from
some
people
about
the
about
the
ask.
C: I can just tell you what we've done at Google. The Cloud Run platform has done an integration with just Secret Manager for the managed offering; that is separate and not using our CRDs and such. It was an effort that was started a while ago. I'm not that in tune with Knative and Cloud Run's governance, or how they choose what's in Cloud Run versus Knative, but I think the strategy there may be to engage. I'm not actually sure if ACI or Fargate is Knative, or if those are other platforms, but I could see a world where it would be nice to reuse the CRD that we've defined.
A: Just a little info on our side: we have ACI, which is the Azure Container Instances serverless platform. We have an integration where people can kind of burst out into ACI instances from our AKS platform, and so the thought is: hey, I'm using CSI Secrets Store on the cluster itself and I've got this burst-out option, but if I'm bursting out and already have this set up for my workloads, I would still want the portion of the workload that's bursting out to our serverless offering to still have access to those secrets for that application. That's kind of how this is coming about.

Real quick, tell me, when you mention Cloud Run has that integration with your secrets management, is that just through an API that's needed in the application, or is that underlying, just set up when a container gets deployed?
C: It's very similar to the actual experience of the CSI driver, but the implementation is going to be different; it's not using the CRD or the CSI driver interface, because I think Knative specifically doesn't support CSI. But the user's experience on Cloud Run is that it can load secrets to the file system or into environment variables. It's just not an open protocol.
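For context, a minimal sketch of the managed Cloud Run shape being described, where a Secret Manager secret is surfaced as a file or an environment variable in the Knative-style service spec. All names here are placeholders, and the exact fields belong to Cloud Run's documented secret support, not to this project:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-service                    # hypothetical service
spec:
  template:
    spec:
      containers:
      - image: us-docker.pkg.dev/cloudrun/container/hello
        env:
        - name: API_KEY               # secret as an environment variable
          valueFrom:
            secretKeyRef:
              name: my-secret         # Secret Manager secret (hypothetical)
              key: latest             # secret version
        volumeMounts:
        - name: secret-vol            # same secret as a file under /secrets
          mountPath: /secrets
      volumes:
      - name: secret-vol
        secret:
          secretName: my-secret
          items:
          - key: latest
            path: api-key
```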
A
Yeah,
maybe
we
can
maybe
we
can
chat
about
that,
because
I
guess
I
guess
the
biggest
thing
that
we
would
have
to
solve
for
is-
and
I
guess
if,
if
this
is
really
what
why
people
are
using
this
solution,
anyways
can
that
experience
be
like
a
ephemeral
thing,
because
I
guess
there'll
be
concerns
as
there'll
be
concerns
around,
because
we
can
probably
get
into
kind
of
mounting
some
file
system.
But
then
it's
like
okay.
A
How
can
we
ensure
that,
for
the
round
trip
of
that
container,
when
it
goes
away
that
you
know
we
screw
up
that
file
system
and
and
make
it
as
secure
as
like
a
femoral
storage?
So,
okay
yeah,
I
think
you
know,
maybe
we
can
figure
out
if
this
is
something
we
want
to
pursue
all
up
on
the
project,
initially
any
thoughts
on
that.
No,
so
if
k
native
doesn't.
C: Yeah, yeah, because I think it's just that CSI is usually associated with persistent storage, and I think that's kind of against a lot of the Knative goals, yeah.
A: This issue you put up here... Okay, yeah, I mean, what I wanted to do is see if there was... I know we're getting kind of a good bit of signal on our side; I was just trying to see if anyone else was getting some signal that this is something that we should pursue.
D: Okay, yeah. And then, even if they have to do a persistent volume, I mean, the CSI driver would still work with that, right? It all just comes down to how you configure it. But in general, I think we can see if this is something Knative is interested in. If they are, then maybe we can see if there are any changes we can make so it works with Knative, based on their feedback, and maybe we can get added to their docs as a possible solution or something, from the user's perspective. That might be a nice way in, at least for all the providers using Knative. And then, when it comes to ACI and all that, I think just having CSI support will probably be the way to go.
C: That would be the most compatible strategy, right. Yeah, I think we got a lot of signal too about wanting secrets on serverless platforms like Cloud Run, and I'd say that it works on our Cloud Run managed service, but I'm not sure if it works on Cloud Run on Kubernetes, because of the Knative-not-supporting-CSI thing.
C: I'm not sure on the bursting terminology, but we have Cloud Run managed, which is the one where you do not need a Kubernetes cluster. It's a lot more like Cloud Functions or Lambda, but it takes the Knative CRD stuff: you can apply it and scale it without ever managing Kubernetes.

C: With your cluster, yep. I don't...
A: Okay, all right. I think we'll investigate this a little more and go from there.

A: All right, so that's what we have on that. Next up is Drew. You want to chat about rotating secrets when the pod is not running?
B: This kind of piggybacked on a different issue, but I was kind of hoping it would be a middle-ground solution for it. Basically, whenever there's a cron job that's running, or, sorry, a cron job that has completed, if the pod is lingering there, the secrets remain, because the pod still exists. The pods still reference the secrets, so the driver does not remove them, and that's perfectly fine, but they never rotate.

I was hoping the secrets would stay refreshed, and the person who originally opened this incident, or, I'm sorry, this issue, would be able to keep their secrets if they didn't clear their pods. If they delete their pods, the secrets are gone, but that's a whole other thing. That's basically the gist of this: trying to keep secrets updated for pods that are basically sitting idle, where they're actually shut down, and they just don't rotate.
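To make the scenario concrete, here is a minimal sketch (all names hypothetical) of the kind of workload being described: the job finishes in seconds, its last completed pod is retained by the history limit so its logs can be read, and the driver keeps the mounted content around but never rotates it. The "interval" mentioned below is the driver's rotation poll interval (the `--rotation-poll-interval` flag, two minutes by default, when `--enable-secret-rotation` is turned on):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: sync-job                     # hypothetical job name
spec:
  schedule: "0 * * * *"
  successfulJobsHistoryLimit: 1      # the completed pod lingers here,
  failedJobsHistoryLimit: 1          # so its mounted secret never rotates
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: job
            image: busybox
            command: ["sh", "-c", "cat /mnt/secrets/password"]
            volumeMounts:
            - name: secrets
              mountPath: /mnt/secrets
              readOnly: true
          volumes:
          - name: secrets
            csi:
              driver: secrets-store.csi.k8s.io
              readOnly: true
              volumeAttributes:
                secretProviderClass: azure-kv   # hypothetical class
```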
B: Yes, if it's being deleted, it gets cleaned up; I think there's the deletion timestamp in there and all that fun stuff. But if it's just successful, before it gets pruned based on the job history limits, it just sits there as successful, and every two minutes, or whatever the interval is set to, it's just skipped.
B: Yes, in this case I've got four or five jobs that depend on it, and each of them logs into an Azure resource.
B: So if the password or the resource secrets get refreshed in Azure, of course the Key Vault has the update, but the Kubernetes Secret value itself doesn't get refreshed, so the next time the pod starts, it's going to fail. And these jobs take seconds, so they never run long enough for the interval to catch them the next time, essentially.
D: Yeah, I think initially, when we added rotation, we basically supported rotation for every scenario, right, other than terminating pods. But there was this specific use case where jobs were left behind, and that was adding way too many calls for the user, and that is when we decided to change it... I mean, I think there was a bug before as well, from when we initially supported rotation.
D: So with a dummy pod, it mounts the secret for only that container, so the jobs don't actually need to mount it, and then that will also create the corresponding Kubernetes Secret. So it will handle the rotation and all that periodically, irrespective of when your jobs come and go, and then your jobs can just reference that Kubernetes Secret as an environment variable.
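A minimal sketch of that workaround, assuming the Azure provider and with hypothetical names throughout: a long-running "dummy" deployment keeps the volume mounted so the driver can rotate the secret and sync it into a regular Kubernetes Secret, which the short-lived jobs then consume without mounting anything themselves:

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: azure-kv
spec:
  provider: azure
  secretObjects:                     # sync the mounted object into a
  - secretName: synced-secret       # regular Kubernetes Secret
    type: Opaque
    data:
    - objectName: password
      key: password
  parameters:
    keyvaultName: my-vault           # hypothetical vault
    tenantId: "<tenant-id>"
    objects: |
      array:
        - |
          objectName: password
          objectType: secret
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: secret-keeper                # the "dummy" pod
spec:
  replicas: 1
  selector:
    matchLabels: {app: secret-keeper}
  template:
    metadata:
      labels: {app: secret-keeper}
    spec:
      containers:
      - name: pause
        image: registry.k8s.io/pause:3.9
        volumeMounts:
        - name: secrets              # keeping this mount alive is what
          mountPath: /mnt/secrets    # lets the driver rotate and sync
          readOnly: true
      volumes:
      - name: secrets
        csi:
          driver: secrets-store.csi.k8s.io
          readOnly: true
          volumeAttributes:
            secretProviderClass: azure-kv
```

The jobs would then reference the synced secret with `env: valueFrom: secretKeyRef: {name: synced-secret, key: password}` instead of a CSI volume mount.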
B: Okay, that's what we're doing. The jobs reference that... the jobs actually have the volume mounts in them, but because of the successful-job history limits, and we always keep at least one so we can check our logs on it, they never prune all the pods, and so the secrets never get wiped.
B: I mean, I could try that; I'll have to talk it over with our devs just to make sure they don't have an issue with it. But I guess I have a follow-up question. So, the bug you reported: when I was going through the code, and I'm not greatly familiar with Go, so I could have been missing it, but is there any... oh gosh, how do I say it... deduplication? It looked like the code is going down pod by pod and finding anything that had a mount for the secrets, so whatever the SecretProviderClass was, excuse me, but it doesn't determine if that SecretProviderClass is also used in a different pod. So if you use the same secret in multiple pods, it's going to try to rotate it multiple times. Is that correct?
D: It is, right. The CSI driver runs in the context of a pod, so kubelet calls it initially for the mount for each individual pod, and a SecretProviderClass can be used by multiple pods. But when we have to do a particular operation, we only do it for that pod. So then, if the SecretProviderClass has a Kubernetes Secret defined, we have to also update that Kubernetes Secret, because we still don't know the purview of all the other pods that are running.
C: Yeah, to give you kind of an idea of where this is going, without you having to scroll through all of the meeting notes, some of the areas we're headed in the future: for rotation, there's a lot of code around rotation today, but we would like at some point to move that to a CSI interface addition, a way to get the mount call reissued repeatedly. So instead of us looping through pods and looking at intervals, we'd just let it reissue the mount call to us, where we think that might be better for rotation, or at least simplify the code around rotation. And then, secret syncing.
C: We had some thoughts about actually removing that from the CSI driver and making it kind of a standalone project that can do secret syncing without having to have this dummy pod that Anish mentioned, to allow the same use of the CRD but decouple it from the CSI driver interface, which is very much pod-to-volume. So, just as a heads up, that's kind of the next phase of development that we've got.
D: Yeah, one thing we often get asked about: some users typically use the Kubernetes Secret for environment variables, or they just use the secrets from the file system, right? So we often get asked about decoupling these two things, because today, if you want the Kubernetes Secret, you also have to do the mount, even though you don't want to use it. That's why we're actually going to look into the path where we can decouple them and have them as separate projects, like Tommy suggested, right.
D: Yeah, I think also one interesting thing, as Tommy was suggesting, is in the future for rotation. Today we had to build our own rotation reconciler that does this rotation periodically, but in the future we're switching to what Kubernetes CSI already has; it's something called requiresRepublish. What that will do is call the CSI driver periodically for every pod and say: hey, go do your thing; if you have to rotate something, rotate it, update the mount, and then give me a response back.
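For reference, that upstream mechanism is the `requiresRepublish` field on the CSIDriver object, which tells kubelet to re-issue the NodePublishVolume (mount) call periodically for every pod using the driver, so the driver can refresh rotated contents in place. A sketch of roughly how that registration looks:

```yaml
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: secrets-store.csi.k8s.io
spec:
  podInfoOnMount: true        # driver receives pod info on each mount call
  requiresRepublish: true     # kubelet calls NodePublishVolume periodically,
                              # letting the driver rewrite rotated secrets
```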
D: So one interesting thing is, even today, if we decide to remove this check and support terminating pods, I don't think kubelet actually does that for terminating pods. I think that is one interesting use case: if a pod is terminating, kubelet at that point doesn't call the CSI driver periodically, because at that point it thinks the pod is going to go away and it doesn't care about the volume.
B: Okay, so if you remove this, what you're saying is that further down, where it does the volume mount or update, whatever it was, it's not actually going to update.
D: Right, because I think at that point kubelet also skips pods which are in the terminating state, because terminating is just the last step before the pod actually goes away, right? So kubelet at that point is not going to go through the overhead of calling every driver for all the volumes associated with that pod. But I think once we do decouple, then all these problems won't exist, because you brought up a good point: right now we are checking for each individual pod and updating the same secret over and over again.
B: Gotcha, okay. I appreciate the input, and I'll definitely try that dummy pod situation and, I guess, keep an eye on it and see where the project goes.
C: Yeah, that just reminded me that it's early for the west coast folks, so yeah.
A: All right, okay, cool, good stuff. All right, I think that is the end of all the posted discussion. I don't know if anyone else has anything that they want to discuss while we're on the call.
D: I was just going to say, the 1.1 release will probably be something that happens next year, because things are winding down and folks are also gone for the holidays. But at some point I'm going to get together with Tommy just to see what the things are that we want for 1.1.
D: We created a milestone before, but I think we just want to revisit it and see if those are still the ones we want to do, and also set a date in January for the 1.1 release.
A: All right, that's it. We do have some time; I don't know if there are any outstanding issues we want to highlight. Well, we've got maintainers here to look.
A: Okay, got it. Okay, all right, if there's nothing else, we will go ahead and end the call. Thanks to everyone for showing up today. Again, as mentioned earlier, we're going to skip the next scheduled meeting, which would have landed on December 23rd. We will pick this up in 2022, on Thursday, January 6th, I believe. Hope everyone has a good and safe holiday with your family and friends.