From YouTube: Secrets Store CSI Community Meeting 2020 10 01
A: All right, thanks for joining. This is our Secrets Store CSI community call. It's October 1st, so we're into October; yeah, it's moving fast. Again, it's a small crowd, but just make sure you've tagged yourself in the attendees list. And with that, I am going to do double duty, or I'm going to attempt to do double duty: I think what I'm going to do is take notes in real time.
A: Oh, I'm always looking for that; it's tough doing the moderation and the notes at the same time. One thing I want to remind everyone: if we get into a conversation, feel free to use the raised-hand option in Zoom. I am looking at the attendees list, so if someone raises a hand I'll make sure to get you into the conversation. With that, let's go ahead and go through the agenda items.
D: Yeah, so the rotation PR was reviewed. Thank you, Tommy and Rita, for taking a look at it. We've merged the PR and we've also completed some testing.
D: Users can enable it by setting the flag enable-secret-rotation to true. What it does is, periodically (every two minutes, or a value configured by the user), the driver reaches out to the provider to check if there is a new version of the secret or other secret objects available.
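(For reference, enabling this via the driver's Helm values looks roughly like the sketch below; the value names follow the driver's chart at the time and are worth verifying against the current documentation.)

```yaml
# values.yaml sketch for the secrets-store-csi-driver chart
enableSecretRotation: true   # passes --enable-secret-rotation=true to the driver
rotationPollInterval: 2m     # optional override of the default two-minute poll
```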
D: If applications are actually using the contents from the file system, they can set up a watch on the file; and if they're using the sync-as-Kubernetes-secret feature to fetch the values as environment variables, they would need to restart the pod. We'll be adding all of this to the documentation as best practices for rotation, and the plan is to cut a release tomorrow.
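(To illustrate the second pattern: an environment variable sourced from a synced Kubernetes secret; the secret and key names below are hypothetical. Because env vars are read once at container start, a rotated value only appears after the pod restarts.)

```yaml
# Pod sketch consuming a secret synced by the driver (names are hypothetical)
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
    - name: app
      image: nginx
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: synced-db-creds  # Kubernetes secret created by the sync feature
              key: password
```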
A: Thanks for this. Does anyone have any comments on the issue? Are we good with any of that, or any questions?
A: Okay, I'll take silence as we're okay, but feel free to review the PR as well. Next up, the Helm chart update: is this still an issue?
D: Yeah, so I added this to the agenda last week. I've been working with the contributor; he's made the changes, and the PR currently looks good. But I just wanted to bring it up for discussion again, because there's one concern with moving the CRDs to the crds directory.
D: So, just to give a little background on how it's done right now: the CRDs are in the templates directory, so it works for Helm 2 and Helm 3. But the concern with that is, when the user deletes the Helm chart, the CRD definitions also get deleted, which is bad because it leads to data loss: the user loses all their custom resources. The right way to do it is to have them in the crds directory, but if we move them, the problem is that during upgrades the schema doesn't get updated.
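(For context, Helm does offer a middle ground here: a resource-policy annotation that tells Helm to leave a templated resource in place on uninstall. A sketch with a hypothetical CRD, assuming the CRDs stay in templates/:)

```yaml
# templates/ CRD protected from deletion on `helm uninstall` (hypothetical CRD)
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
  annotations:
    "helm.sh/resource-policy": keep  # Helm skips this resource on uninstall
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
```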
B: So, I'm new here, so sorry if I'm butting in, but this is something we did run into with cert-manager as well, and I think it caused some people pain and some people good things; it's been kind of an ongoing discussion.
B: I'm just adding a link to the agenda, or to the notes, as well, because there's been an ongoing upstream discussion for a while about what can be done. What we found, at least, was that using the crds directory at all just caused problems long term, because, like you say, we couldn't update anything after we'd initially put it out there.
B: The current stance upstream in Helm seems to be that we need some changes to Kubernetes itself, and it's been going back and forth for quite a while, in my experience in cert-manager at least.
B: I would probably also advise avoiding the crds directory, because even if the behavior is different between 2 and 3, some users will be upgrading from 2 to 3, or may have other setups; there are all sorts of different things that can be going on, because I think you mentioned Helm 2 as well. In terms of what your options then are: if you actually put your CRDs into the templates directory, Helm treats them just like normal resources.
B: So then, as long as you are a responsible CRD author and don't make breaking changes, or, if you do, you at least make that very clear to users, it does work. We recently added that as a default-off option, and we also ship the CRDs separately as their own YAML file, for users that just want to kubectl apply them ahead of upgrading. The only other thing I'll say is: if you get into the world of conversion webhooks, this whole thing becomes a lot more complicated, because you end up needing to manage the ordering of an upgrade much more carefully. In an ideal world, you'd upgrade the webhook and then go and update your CRD, since your webhook needs to understand the new API version before the CRD actually introduces that API version; and then, after that, your controller can be updated. So it almost becomes a three-stage process, and, as far as I've seen, there's not really much prior art in this area for how to manage it.
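(This is the CRD stanza that creates the ordering problem: the API server calls the named webhook service to convert between versions, so the service must understand a version before the CRD advertises it. A sketch with hypothetical names:)

```yaml
# Excerpt of a CRD wired to a conversion webhook (all names hypothetical)
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  conversion:
    strategy: Webhook              # cross-version reads/writes go through the webhook
    webhook:
      conversionReviewVersions: ["v1"]
      clientConfig:
        service:
          name: widget-conversion  # must be upgraded before the CRD adds a version
          namespace: widget-system
          path: /convert
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
```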
B: Yeah, that's kind of the reason why we also advise people not to use that option, which is why we didn't have it before and why it defaults to off. We put a big warning on there saying: if you do use this, then be very careful with how you uninstall. It doesn't really fit well. I don't think it's a particularly good idea, personally; I see a lot of pain coming from it, especially now that we have a conversion webhook.
B: You can get into situations where, if you update the CRD to introduce a new API version before the conversion webhook is updated, Helm can internally do an additional list on the resources it's creating and is then unable to actually list your CRD resource at the new API version, which will be the default depending on the preferred version you've configured. That can get Helm into a state where it then can't actually update the thing; and it's not just Helm that has this problem.
B: It's also other GitOps controllers that build on top of it, and it happens with plain YAML too. So, all in all, I would personally advise not having CRDs anywhere near your Helm chart at the minute; but then there will also be issues opened saying "we want the one-click install", so...
B: Yeah, I mean, oh, sorry, yeah. You can't.
F: Oh, sorry, I just want to clarify what you just said. It sounds like today you're recommending your users not use helm uninstall, right?
B: So, having that as a separate step: you could actually have it as a separate Helm chart, just put those in the templates directory, and have something like "secrets-store-csi-driver-crds". That way it's a lot more explicit, when users do a helm uninstall, that they're uninstalling the CRDs. That's something we haven't done in cert-manager, but it has always been something I've considered, at least.
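(A minimal sketch of that split, with a hypothetical chart name; the CRD manifests would sit in this chart's templates/ directory, so installing and uninstalling the definitions is an explicit, separate step.)

```yaml
# Chart.yaml for a hypothetical CRDs-only companion chart
apiVersion: v2
name: secrets-store-csi-driver-crds
description: CRDs for the Secrets Store CSI Driver, installed separately
type: application
version: 0.1.0
```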
D: Yeah, that's helpful, okay. I'll also mention: I think initially this was part of the 0.0.15 milestone, but we're thinking of moving it to 0.0.16, because there are some CRD changes in 15 and we don't want to spring this on users in the same release. We want to add it in a later release where there aren't a lot of CRD changes and the changes are isolated to just the Helm changes.
F: Right, but the other concern is that users may then not have those definitions in the cluster when they start running the solution, right? So it's something we'd have to explicitly call out as a dependency, unless we embed it as a dependency chart of the CSI driver chart, which kind of ends up back in the same place.
B: No, it's a real pain point, yeah. That's a good TL;DR: there is no good solution right now. I think the conservative solution is probably what I'd recommend, but we've definitely burnt ourselves most where we've tried to do things, rolled them out to all our users, and then realized where the issues come up. And, I mean, have you got v1beta1 CRDs, or... what's your minimum Kubernetes version as well? 1.16? So v1, okay, yeah.
B: In the end, I don't think there is a good solution: having kubectl apply as a manual step beforehand, having a separate Helm chart, or putting the CRDs into the templates directory with a warning to your users. Out of those, I'd say the last is probably the better one. The difficult thing with that, though, is if you know at some point you might need a conversion webhook and you do put them into the Helm chart.
B: We don't really know what the eventual support in Helm for all this is going to look like, and it may just be that one of your releases, when it comes to trying to fix this, involves: okay, you're going to have to back up all your resources and uninstall. Yeah, that might be an issue.
B: Yeah, okay, there's special behavior for CRDs in that directory. A lot of the discussion has been around what happens if two charts both define the same CRDs, and things like that. I think some of it's a little bit contrived because, and this is just my opinion, I don't think that should happen; I think there's one owner of a CRD, and that should be, you know, it's kind of an API-server-side thing, right?
B: API groups, right; yeah, it's a cluster-wide thing.
B: Yeah, there's a fair bit of history; there are a few different threads.
A: Yeah, I'm looking at this, and this goes right back. Okay, we'll move on; we'll investigate this and see how best to resolve it.
A: All right, next topic: we're going to discuss the comment on a sidecar for the HashiCorp Vault provider.
E: Okay, yeah. I apologize; Jason was going to turn up, but something came up for him, so he wasn't able to come. I can probably give a bit of context, and I think I've read the comment, but did you have a specific question about his idea of using a sidecar for the provider, basically?
D: Yeah, I mean, I think what we wanted to discuss was that rotation right now is handled at the driver level, right? The driver decides when the secret or the content has to be rotated and then invokes the provider. The provider then just rewrites the mount with the latest content, and after that the driver is still responsible for updating the Kubernetes secrets that were synced with those mounted contents.
E: Yeah, that makes sense; that's a bigger point. I guess our situation is kind of interesting, because we've already got a whole ton of functionality in the Vault agent that looks an awful lot like what we want to put into the provider. So where there's an opportunity to solve problems by reusing the Vault agent, it would be nice to, but yeah, that's a good point.
E: I suppose the concern from our side about the driver being in control of secret renewals is that it doesn't give the Vault operator much control over the load coming into Vault. If they set a TTL on their secrets of over an hour, but it gets renewed every minute, then it's just a bit wasteful. But I know this is an area of active development and we're still very much in the discovery phase ourselves.
E: So it's an interesting thing to discuss, but I don't have a clear idea of what we want this to look like yet, personally.
D: I mean, also, for the rotation: when the driver invokes the provider, the provider can actually keep state, right? So I'm assuming even with the sidecar, the sidecar is going to maintain state in memory to say the TTL for this particular object is such-and-such, so it's not going to update until that TTL has expired.
D: This was basically the parent comment that Jason had commented on; Tommy had added it. The request was, in the future, to add a reconcile interval per secret provider class, so the user can say: for secret provider class one, rotate every two hours, and for secret provider class two, rotate every four hours.
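(This per-class interval was only a feature request at this point; a purely hypothetical sketch of what the requested field might look like:)

```yaml
# HYPOTHETICAL: per-class rotation interval as requested in the issue;
# no such field existed in the driver at the time of this meeting.
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: spc-one
spec:
  provider: vault
  rotationPollInterval: 2h  # hypothetical field; the interval is a global driver flag today
  parameters:
    roleName: my-role
    vaultAddress: https://vault.example.com:8200
```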
D: So this was a comment on the rotation PR. I've opened an issue for discussion, to see what the use case behind it is and whether it's required in future releases.
C: Basically, I think you provided a good alternative there: if the plug-ins are able to maintain state, then they could have, as part of the plug-in configuration, a provider-specific parameter for how often to call out. It might be two minutes now, but you might be able to delay that to hours in the provider configuration. That's probably a good intermediate step. The use case was just: I didn't want to call my secret manager API so much when I know the secret only rotates every, say, 30 days; I don't need to call it every two minutes. That was the main use case there.
F: Just to clarify: that logic today is in the rotation reconciler, right? So are we suggesting that, if we do it at the SPC level, we still need the reconciler to actually trigger the call?
C: Yeah, I think so.
D: Yeah, so the rotation reconciler still makes the call to the provider, right? The default value we're setting is two minutes, because that also covers cases where the pod and the secret provider class are created but the user updates the secret provider class afterwards, say they add more secret objects to it. The rotation takes care of adding those new files to the mounted path, and the rotation reconciler is still going to periodically call the provider.
F: I see, so basically the tick that we have in the reconciler would be the lowest common denominator of all those settings, right, whatever this parameter is.
E: In terms of just thinking out an alternative: if the response to a mount request handed back from the provider included a TTL that says, please renew me in 10 minutes if my mount still exists, and the driver held that state, how would that solution look from your point of view? Because I'm not familiar enough with the state management side of things in the driver to know how feasible it is.
D: So today, all the state that the driver maintains is in the CRD, right? The secret provider class pod status is where the driver maintains all the required state. It basically caches the current secret objects in the pod and also the versions that are mounted, and the driver has the capability to send that to the provider, to say: these are the current versions; now you get to decide if there's a latest version, and if there's a diff, go ahead and update it.
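(A sketch of the state that lives on that status resource; the field names are abbreviated from the driver's SecretProviderClassPodStatus type and should be checked against the real schema.)

```yaml
# SecretProviderClassPodStatus sketch: one object per pod/SPC pairing
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClassPodStatus
metadata:
  name: demo-app-default-app-secrets  # hypothetical
status:
  podName: demo-app
  secretProviderClassName: app-secrets
  mounted: true
  targetPath: /var/lib/kubelet/pods/.../volumes/kubernetes.io~csi/secrets-store/mount
  objects:                  # the cached object/version pairs the driver compares
    - id: secret/db-password
      version: "3"
```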
E: I guess the wrinkle to that would obviously be that the pods are very ephemeral, and that in-memory state might get wiped at any time.
E: But as long as they're not so ephemeral that... I guess the default behavior, if they don't have that state held in memory, would just be to go and fetch the secret. That seems reasonable, because you wouldn't expect the pod to be so ephemeral that it's having to do that every time.
C: Yeah, I think the one thing here is: if it turns out that every plugin ends up writing the same kind of code for dealing with different intervals, then it may be useful to have, in the driver, a back-off signal like Tom proposed, or a generic poll-interval kind of thing.
D: Yeah, I think that makes sense. Once we have this released as an alpha feature, we can also see what the user feedback is: whether they actually want the reconciler running periodically, with the period also configurable by the user.
A: Okay, this is some good stuff. So, our last item is a general discussion around the secret provider class and the restrictions on volume attributes.
B: Yeah, so this was kind of more just me joining; I didn't want to take up all your time, so I put it at the end. We're basically building a similar, well, not a similar integration: we need to integrate with one of our internal secret stores, so we're building something out there, and obviously the Secrets Store CSI Driver is exactly the sort of project that we're looking to use.
B: So I went to look over the docs and the way providers work, and noticed the changes to the secret provider class, first of all. I was linked to the PR, or issue, where that was actually introduced and the motivations behind it, and I see it's to make all your pod definitions, the user side of things, very agnostic between clusters, which makes a lot of sense.
B: So, two things, actually. First of all: whereas here with Vault you might configure a secret provider class with attributes like the server URL and references to authentication credentials, things like that, we don't really have that as a requirement; there's an internal sort of identity that can be used to authenticate instead, and we're talking to a single service, and so on. So, ideally, we wouldn't, well, we don't really have much need for any of the things on the secret provider class, like the global configuration; there isn't really much global configuration for us. That isn't the biggest issue at all, though, and I think it does bring some benefits with being able to sync into Kubernetes secret resources too.
B: So I think that's okay. But then, on the other side of it, there are the restrictions around volume attributes and not being able to provide any more of them. It seems right now, and I'm sure you can give me some insight on how users actually end up using it, that users will need to configure exactly which secrets they want to consume in their pod in their secret provider class resource, which is adding an additional thing for users to understand.
B: I guess the way around that is to configure syncing into a Kubernetes secret and then mounting that, but that's specifically something we're trying to get away from doing. So, ideally, there'd be some way for us to reference the name of a secret, or a bucket full of secrets, or something like that, from within our pod definition, in a similar way to a Kubernetes secret: when you mount one in, you can define the name and the key of the resource to mount.
B: Without that, what we're kind of looking at doing is just mounting everything in if you reference it. I guess we're just trying to avoid introducing new concepts that every single end user, every developer, ends up needing to use. It's not so much an issue of us introducing CRDs; it's more just the fact that there's something that everyone will need to understand.
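(For reference, the native mechanism being compared against: a pod mounting one specific key of a Kubernetes secret, with both the secret name and the key chosen in the pod spec itself. Names are hypothetical.)

```yaml
# Native Kubernetes secret mount: the pod names the secret and the key
volumes:
  - name: creds
    secret:
      secretName: db-creds  # hypothetical secret
      items:
        - key: password     # only this key is projected
          path: db/password
```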
B: So, I guess, I've waffled on a lot there, but that was the first thing I was running into. I don't know if there are any thoughts or anything.
D: So the secret provider class today, the schema is not fixed, right? The only thing that's fixed in the schema is basically the provider name, which has to be set for each secret provider class; within the parameters field, it's mostly just a map of values that each provider could potentially use. For Vault, it's basically the Vault URL, the authentication method, and a few other things, and if you take a look at the Azure Key Vault provider, we have a couple of other details, like we need the tenant ID.
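(Roughly what that looks like for the Vault provider; the exact parameter names vary across provider versions, so treat this as a sketch with hypothetical values.)

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: vault-db-creds
spec:
  provider: vault                  # the one fixed field in the schema
  parameters:                      # free-form map interpreted by the provider
    roleName: my-app-role
    vaultAddress: https://vault.example.com:8200
    objects: |
      - objectName: db-password
        secretPath: secret/data/db
        secretKey: password
```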
D
So
all
those
parameters
initially
were
filled
out
as
volume
attributes
in
the
pod
spec
and
then
we've
slowly
migrated
users
to
start
using
the
secret
provider
class
and
then,
with
the
0.0.12
release.
It's
become
a
mandatory
feature
to
have-
and
I
think,
like
I
added
on
the
slack
issue.
One
thing
is:
it
also
covers
the
hardback
scenarios,
where
users
who
can
create
the
secret
provider
class
can
be
controlled,
and
then
we
have
the
one-to-one
mapping
right.
So
it's
name
space
code.
C: I think some of, like, the Vault KV default back end is somewhat similar, but the certs back end is only a single value, I think. Yeah, so it did then shove all of the path-to-file-name stuff into the secret provider class, since those can be different based on whatever abstraction you're using. But yeah, I guess you're saying that now every developer needs to know about a secret provider class and needs to create it for their pod or workload.
C: I think it finished, oh yeah, I'm done. I think it's unfortunate, but that is the lowest common denominator, I guess.
B: One way to maybe put together succinctly what I'm thinking here, though: the intention with moving to the secret provider class was to make your pods portable between clusters and so on. I think that's a good effort, but it kind of implies that users then don't need to think about it after they've done that. Because the mapping of which secrets you actually want to expose and your secret store connection are combined together, it kind of doesn't achieve that: users still have to configure the secret-store-specific parameters, they just configure them on a different resource. Whereas if we had, even if you didn't want to use volume attributes, some resource, really poorly named, like "the secrets that this pod needs", or some other structure, you could then actually start to unpick the concerns, so that your administrators configure the provider class, which tells it how to talk to the back end, and the users just say: I want this particular thing, or, I just want my secrets. I don't want to stomp in and say this is how you should do anything; it's more just the way I expected the thing to work when I started getting into it, I guess.
F: Yeah, hey, so I also linked to the issue where we introduced the SPC for the first time; it was requested by sig-storage, I guess. They were looking at this and recommended that we support pod portability and rely on RBAC for separation of concerns, because I think the thought is that the people who actually create this resource could be different from the consumer of that secret, like the app developer. We thought that was a great idea, and that's why we added the SPC construct as a resource. I kind of want to clarify something about the way users consume the SPC today: it's by specifying the CSI driver in the volume and providing the name of the SPC, right? In terms of Kubernetes resources, that is one resource referencing another resource in the cluster, so it's very similar to Kubernetes secrets. So I'm curious why you think there's a gap there, because it's very similar to Kubernetes secrets. I just linked it here.
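(The consumption pattern described here, for reference: a pod references the SecretProviderClass by name through an inline CSI volume. The SPC name is hypothetical.)

```yaml
# Pod volume referencing an SPC by name
volumes:
  - name: secrets-store-inline
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: vault-db-creds  # the attribute users set today
```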
B: Yeah, I'll take a look over the links. I guess, for me, it feels like it doesn't achieve the portability bit, because, where you said you could have two separate people configuring the different things, I would have thought that probably doesn't end up being the case, because the developers need to say what secrets they need. From what I understand, you can't just configure it and say, I want to talk to Vault, and now everything in Vault gets mounted. The developer still needs to say, I want to talk to Vault, and also, well, they don't necessarily want to say Vault, but they need to say: I want the API key for this service made available to my application. And right now the place where you go to say "I want the API key for my service" is the secret provider class. So you don't actually get the RBAC benefit, because your users still need to actually manage those secret provider classes; from what I'm understanding, at least, and I may have misunderstood, they're having to edit the secret provider class anyway to say that they want that API key, or this one, or, sorry, a certificate from a particular authority in the case of the PKI back end. I guess that's where it breaks down for me.
F: I see. So, in a way, the app developer has to know: this is the mount path, and everything I expect to be mounted is in that path; and I have to assume that whoever created the SPC has specified those things already, right? I think that's your concern, right?
B: So: can you create me a different secret provider class? Then you'll actually be duplicating the connection information between them, as well as having just one string different, the path. Whereas I guess I expected the secret provider class to just contain the connection information, and either volume attributes or some other resource to say the path for that pod; the volume attributes could just do it embedded in there, but either way, yeah. That's it. I feel like there's still quite a lot of duplication if we're separating out the roles.
E: Yeah, I was just going to say, for what it's worth: I can see why it is the way it is, but I also share the same usability concerns with regard to having to separate out the operator and the developer side. It's going to be quite an unpleasant experience for users if they need, you know, 20, 30, 40 secrets in their cluster, and I don't think that's going to be an uncommon case either. It's a difficult one, though. I'm not offering any solutions here, but yeah, just adding that it would be nice if we could come up with something to solve this.
F: Yeah, I definitely understand the use case here and the concerns. And just to clarify, this is more for the mounted-file scenario, right? Assuming users don't use the sync-to-Kubernetes-secret feature, right?
B: Yeah, I mean, again, I would have expected that to maybe be its own distinct resource. We actually had a similar talk, and I think it kind of overlaps with cert-manager again, where people wanted to change where they're storing their secrets and things like that. So, instead of storing their signed certificates and private keys in Kubernetes secrets, they were talking about storing them in Vault, actually, which, you know...
B: Different people have different requirements, and there we were talking about something like certificate projection as a resource, so you're almost projecting a secret from a secret store into a Kubernetes secret. We didn't really go there, and there is no projection resource, but it was a lot of the discussion we were having. So I guess, in a way, I almost expected that to be separate as well. Although, I suppose, you sync from the driver, right? So yeah, okay. I don't know.
B: So, I mean, I don't have a concrete proposal, no. I would think, I mean, it depends on whether you actually want to allow volume attributes. I guess, ultimately, regardless of whether you call it a volume attribute or some other thing, it's about who has to edit it; and if the developer still has to edit it, then it's really about what's easiest for them, because it's their responsibility.
B
It
is,
whilst
also
being
structurally
sound,
even
if
it's
just
like
some
opaque
string
that
we
have
a
defined
key
for
in
the
volume
attributes,
which
is
something
like
the
name
or
like
a
uri
for
a
secret.
And
that
way
you
can
have
volt
colon
slash,
slash
or
you
know,
I
don't
know
the
product
name
for
the
azure
one.
But
the
c,
maybe
something
like
that.
I
I
don't
know
almost
like
object,
storage
style,
things,
gs
and
s3.
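(Purely hypothetical, sketching the suggestion above; no such attribute existed in the driver.)

```yaml
# HYPOTHETICAL: an opaque per-pod object reference in the volume attributes,
# alongside the existing secretProviderClass name
csi:
  driver: secrets-store.csi.k8s.io
  readOnly: true
  volumeAttributes:
    secretProviderClass: internal-store           # connection info stays on the SPC
    objectURI: "vault://secret/data/db#password"  # hypothetical per-pod selector
```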
E: Presumably you're talking about the secret provider class still configuring the actual connection to the secret store, so you wouldn't need anything provider-specific showing up in the volume attributes; it would just be a path, some opaque string that refers to something internal to the specific SPC provider. Is that what you're talking about?
C: Yeah, so I think, if you want to separate the connection information from the exact secrets that the thing needs: the storage equivalent, I believe, is that the storage class defines the connection information, then you have persistent volume claims that reference the storage class, and then your pod references the persistent volume claim. But that adds another hoop. That, I believe, is how it's currently solved on the persistent volume side, but I don't know that it makes anything better, because now there are even more objects to deal with.
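(The storage analogy in manifest form, for reference: admin-owned connection details on the class, a developer-owned claim naming the class, and the pod referencing the claim. The provisioner is hypothetical.)

```yaml
# StorageClass: admin-owned connection/provisioner details
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: example.com/provisioner
---
# PersistentVolumeClaim: developer-owned request that names the class
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  storageClassName: fast
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
```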
B: So, in a way, if we're replicating that, that kind of makes sense, because then what they call a persistent volume claim is almost our resource that tells it to sync resources into a Kubernetes secret, and then something like the CSI driver, the ephemeral thing there, is kind of like defining an inline persistent volume claim, if that makes sense, in terms of concepts used to solve different things.
B: Not the class, because you don't define an inline storage class; you define an inline persistent volume claim. So I'd say it's more just like what we've got today with CSI. And then, I guess, the thing is that volume attributes is a map, so we'd need to define the schema for it, the way the schema for a persistent volume claim is things like the fs type and the storage class name, which we do have an equivalent of: the secret provider class name.
F: I think that would definitely be helpful, in terms of making sure we make more progress on this discussion.
B: So I wouldn't expect to be able to skip that; I would expect to still have to reference the name of the secret provider class. I think that does make sense when you map it to things like storage classes, similar to how, if you're defining an actual inline PVC, you have to say which storage class you want.
B: I almost see the secret provider class as being analogous to that: you have to define your storage class name for storage, and your secret provider class name for secrets. It's more around the additional parameters, like, as you say, an opaque name string or some common denominator, which I don't think is easy to find. But yeah.
A: All right, yeah. James, if you can, I don't know if you want to drop what you're thinking in the notes; I guess maybe you have. And then, yeah, we could focus on this topic.
A: All right, okay, that's it. We went a little long, but definitely some great conversation about some of this Secrets Store stuff, so that's good. The next meeting will be October 15th; that's in two weeks' time, and I'll go ahead and prepare the doc for the next session. If you want to put agenda items in, feel free to do so. Before we depart: are there any other comments anyone wants to make? And does anyone wish to be the moderator and/or the note taker for the next meeting? You can go ahead and add your name once I format the doc, and we'll go from there.