From YouTube: Secrets Store CSI Community Meeting - 2021-06-24
A
Hey everyone, welcome to the CSI Secrets Store community call. Today is June 24, 2021. This call falls under the CNCF code of conduct, and the video is being recorded and will be published to YouTube. Please go ahead and add your name to the list of attendees. I will be moderating the call today. Does anyone want to help me with notes for the call today?
A
Awesome, so let's jump into the agenda. The first item is one that I added: it's a demo of rotation using the CSI driver RequiresRepublish feature, and I wrote up a small design doc for it. Just a little overview: the Secrets Store CSI Driver supports auto rotation of mounted contents and synced Kubernetes secrets today. We added support for this in the 0.0.16 release, and it was done by adding an additional controller that runs periodically.
A
So it runs on a poll interval, and what it basically does is list all the SecretProviderClassPodStatuses assigned to a particular node. It then enumerates through them, calls the provider for each one, gets the latest contents from the external secret store, updates the mount, and subsequently also updates the Kubernetes secret. In this proposal, what I'm suggesting is that we switch to using a new feature called RequiresRepublish, which was added in Kubernetes 1.20. Essentially, kubelet calls NodePublishVolume during the initial pod volume request, and with RequiresRepublish set to true it keeps calling NodePublishVolume periodically.
A
So essentially it takes care of the periodic logic that we have built in. This RequiresRepublish mechanism is also what is used today in Kubernetes for other resources that get mounted into the pod: if you mount a Kubernetes secret as a volume in a pod today and later update that secret, the reconcile loop in kubelet is what updates the mount with the latest secret.
A
So basically we are trying to use the same feature that has been battle tested in Kubernetes for a while now. This feature was added in 1.20; it requires a feature flag in 1.20, but from 1.21 onwards no feature flag is required. In terms of the implementation details, the first thing is that in the CSI driver we need to enable this, to let kubelet know that it needs to call us periodically. For that we need to set requiresRepublish to true, and then change the NodePublishVolume implementation.
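For reference, enabling this corresponds to the upstream CSIDriver object field; a minimal sketch (the driver name matches the Secrets Store CSI Driver, but treat the rest of the manifest as illustrative, not the project's actual chart output):

```yaml
# Sketch: enable kubelet's periodic re-publish for the driver.
# requiresRepublish is the standard storage.k8s.io/v1 CSIDriver field
# (feature-gated in 1.20, on by default from 1.21).
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: secrets-store.csi.k8s.io
spec:
  podInfoOnMount: true      # kubelet passes pod name/namespace/UID/SA name
  requiresRepublish: true   # kubelet keeps calling NodePublishVolume periodically
  volumeLifecycleModes:
    - Ephemeral
```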
A
Today, when the volume is already mounted and kubelet calls us, we say the volume is mounted and just return early from the call. Instead, we need to go through the entire NodePublishVolume: call the provider to get the contents, write them in the pod mount, and then, since we create the SecretProviderClassPodStatus at the end of NodePublishVolume, instead of just creating we would also need to update. That way, when kubelet calls us periodically,
A
we handle the rotation of the mounted contents, and then, when we update the SecretProviderClassPodStatus with the latest secret versions, the controller that we have to sync the Kubernetes secret will take care of also updating the Kubernetes secret that's there. That way we handle rotation for the mount as well as for the synced Kubernetes secret.
A
Okay, so if you look at the pod here, you can see I have not enabled secret rotation; I have only enabled RequiresRepublish on the CSI driver. Let's tail the logs here. I also have a SecretProviderClass and a pod spec that are already defined.
A
So if you look at this, it's a standard SecretProviderClass. It's syncing secret objects: three secrets, one TLS, one opaque, and one Docker config. It's using the Azure provider, and it's also getting a couple of objects that need to be mounted (secret1, secret2, and then different certificates). The deployment is also pretty standard.
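A minimal sketch of what such a SecretProviderClass could look like (the object names, key vault details, and secret names here are placeholders rather than the ones from the demo, and only two of the three secret types are shown for brevity):

```yaml
# Illustrative sketch: mount objects from the Azure provider and sync them
# as Kubernetes secrets of different types. All names are placeholders.
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: azure-sync
spec:
  provider: azure
  secretObjects:
    - secretName: my-tls-secret        # synced as a TLS secret
      type: kubernetes.io/tls
      data:
        - objectName: tls.crt
          key: tls.crt
        - objectName: tls.key
          key: tls.key
    - secretName: my-opaque-secret     # synced as an Opaque secret
      type: Opaque
      data:
        - objectName: secret1
          key: value
  parameters:
    keyvaultName: my-keyvault          # placeholder
    tenantId: <tenant-id>              # placeholder
    objects: |
      array:
        - |
          objectName: secret1
          objectType: secret
```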
A
Okay, so we've confirmed the secret. Now I'm just going to go ahead and rotate it in the external secret store, and if we give it a minute, we should see kubelet call the driver again for the pod volume mount. It's always a minute plus some amount of jitter, which is more frequent than what we set as the default today, which is two minutes. Let's just wait for the log to show up.
A
Okay, so now we can see that we were called. Basically we said the volume is already mounted, but because of the changes that I have, we also went ahead and called the provider again to update the mount. And now, if we look at the pod, we can see that this value has been updated to the latest. Let's also check...
A
Yeah, so if you look at the Kubernetes secret, that also got updated. So this fits really nicely with the flow that we have today. In terms of the trade-offs: with this change we don't need the rotation reconciler built into the driver. We can just rely on kubelet to always call NodePublishVolume, which means there's no extra set of API calls.
A
We don't need any of the caches that we have to maintain for rotation, and it becomes a single code path for the volume mount, which we already have and which has been tested a lot. So it's more streamlined, and it's already idempotent, so we always just need to test that one code path. Today, as part of the NodePublishVolume request, kubelet gives us quite a bit of information about the pod, like the pod name, pod namespace, and service account name, and then, with 1.21,
A
it gives us the service account token as well. With this change we will be able to use the service account token for the initial NodePublish as well as for the republish part. That means we can pass it on to the providers every time we call them, and the providers can just use it and skip generating the token. One of the big caveats is that the CSI driver is supported on Kubernetes 1.16+, but RequiresRepublish is only available from 1.20+. So that means there is a period of time during which we still need to support the rotation reconciler in the CSI driver, even if we have RequiresRepublish as an option. This is one thing that I still need to test and we need to design, but I think using this feature will also give us the option to define the rotation poll interval per SecretProviderClass without a lot of complexity.
A
So I think this opens the door for that. One thing I added is that we also need to test what the impact on the republish time is when there are a large number of volumes on a single node. That is one thing I still need to test, but that's the gist of it, and that was the demo.
D
So are you saying that the poll interval, or the republish interval, is not configurable in Kubernetes?
D
I would think that's going to be important, because there are customers who have tens of thousands of instances, and there's fractional-cent billing per API call. It adds up over a day or a month, and it could be quite expensive if that feature's not there.
A
Right. I think right now it's at the driver level, and providers do some caching on their end for the same reason: they don't want to make too many calls. But this does give us that flexibility.
C
Question: is this feature meant to phase out rotation after 1.20? Are we relying on this feature moving forward?
A
Yes, that's the way I'm seeing it, but basically I'm looking for feedback on the design and what other folks think. If we do rely on this: like I said, this part of RequiresRepublish has been battle tested in Kubernetes, because kubelet already does this today for all the other volume types; it was just added for CSI recently. It also removes a lot of the overhead that we have to maintain today as part of the rotation reconciler.
A
So
here,
basically
we
try
to
reconstruct
everything
that
we
know
that
the
cubelet
passes
to
the
csi
driver.
So,
like
the
pod
name,
namespace
part
uid
service
account
name,
so
it
removes
all
this
overhead
from
the
driver.
So
instead
we
can
always
rely
on
cubelet,
giving
us
a
single
config
for
the
pod,
and
then
we
just
pass
that
on
to
the
provider.
D
So currently, in rotation, the existing versions are passed, but not on the initial mount. How will it work in the remount case?
A
Yeah, so I have the implementation in my branch now if you want to take a look. It basically changes all the create flows to create-or-update, and then in NodePublishVolume we can also do a get to fetch the current versions and pass them to the provider. That way, every time we are called, we check whether it exists; if it does, we pass the current versions to the provider, and the provider can decide what to do with them.
B
So I just have one last question. Obviously this means we could use the mount for rotation, so there's no need for the rotation reconciler, right? What about the other proposal, where we're saying we want to decouple the volume mount, if we do decide to do that down the road?
A
Yeah, I think once we decouple it, it comes down to the CSI driver still having the benefit of RequiresRepublish, with kubelet calling it for rotation, while the decoupled sync controller handles the rotation reconciling. It doesn't even have to have a rotation reconciler per se; it just requeues the item periodically and then does a create-or-update. So we don't need an explicit rotation reconciler.
E
Sorry, the question I had was about the first point, that the rotation reconciler runs periodically as part of the CSI driver. I understand that piece, but I didn't understand the point under it, that this removes the extra set of API calls the rotation reconciler needs to make. Wouldn't it need to make the same set of calls to get the updated secret?
A
So today... I mean, we're working towards that change: we're trying to switch to the controller-runtime cache for the sync controller and rotation, so we maintain a common cache for pods, the secrets created by the driver, and the SecretProviderClassPodStatuses. But there is also an additional cache that we need to maintain for the nodePublishSecretRef.
A
But if we switch to this feature, kubelet already gives us the credentials from that secret, so that call is no longer required. We can just rely on kubelet giving them to us all the time, and we pass them on to the provider, instead of doing this extra set of operations.
A
Yeah, so for the secret that the driver is using for the mount, kubelet will call you during the next republish. Even today, if you go and update the nodePublishSecretRef, nothing changes immediately; the rotation reconciler picks it up in the next periodic update. Similarly, kubelet will give you the latest credentials in the next periodic update. So basically this shifts the API call from the rotation reconciler to relying on what kubelet already has today for the nodePublishSecretRef.
F
Yeah, I was just going to say that it really complements the token request PR that I think Micah has out. Right now the GCP provider has to have permissions to get service account tokens and pods, so there are some security benefits to it too: you can remove a bunch of RBAC rules for being able to access secrets or create tokens for pods.
D
So, for us, issue 11: it's natural for our customers to store their secrets as JSON blobs. Our secrets can be in any format, but most of our customers are storing them in JSON format for various reasons. What they would like to do, when they're setting Kubernetes secrets through the driver, is to be able to parse values out. For example, if they have a username and a password, they would want to set one environment variable for the username and a different one for the password.
D
Now, I've noticed, and the poster of this issue pointed out, that the Vault provider does this in the provider, and I'm wondering if there is a reason it's not done in the driver itself, whether there's a reason not to do it or people have objections to doing it that way, or whether we want to do it that way, because it seems like that would be more natural.
D
We wouldn't have to fetch the secret multiple times if they want to set multiple fields, and doing it in the driver makes it available for everybody; otherwise each provider will have to implement this separately. So I'm wondering what everyone's opinion on that is.
F
Oh, I see, this doc here.
F
Yeah, so I think it would be easy to do in the driver for syncing secrets, but in terms of mapping the external secret to files: if there are customers that want to be able to do this with the files in the file system, then I think it's harder to do in the driver.
D
Yeah, the request we got was specifically for setting environment variables. If they wanted to set the environment variable and we implemented it in the provider, the side effect would be that it also only sets the one JSON value in the mounted secret, which might be what they want, but I don't think most of them want that.
D
Yeah, well, in this case some of them actually use the JSON: they parse it in their application and use it. Some of them never even see the secret, because there are other components they can use that parse the JSON for them. There's a JDBC driver plug-in, for example, or a wrapper, that will parse it and pass it on to JDBC.
D
So they would never see it. But in this case they have some existing applications that are using environment variables, and they want to just integrate this into those without having to make code changes.
A
Okay, yeah, so I think I was referring to the second subset, folks who are reading it from the JSON. And I think, at least from the Vault perspective, from what I've seen, they basically parse it out in the provider and just write it out as different files, where possible, so that the user can
A
access it as different files. The one benefit I see for it being in the provider is if there are different types. Today we're supporting JSON, but tomorrow, if there's another type that we need to support: I think providers know the different types that they have, and it might not be the same for every provider. So if we introduce this as a concrete type in the driver's spec, it could be a little different per provider. That is where my concerns come from.
D
Yeah, that's kind of what the options are addressing. This would be specific to JSON, using JMESPath as the way to specify it.
D
But we would always have the JMESPath either in the namespace of the object or in the keyword somewhere, so that if another type were introduced, say XML or PEM or something like that, you would either create a different variable name with PEM or use a different prefix for whatever the value is.
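A purely hypothetical sketch of what such a JMESPath-based mapping could look like in a SecretProviderClass; none of these field names are settled (jmesPath, path, objectAlias are invented for illustration), this only sketches the proposal being discussed:

```yaml
# Hypothetical illustration of the proposal, not an agreed-upon API:
# extract individual values from a JSON-blob secret via JMESPath
# expressions and expose each one under its own alias.
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: json-extraction
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: db-creds          # secret stored as a JSON blob
        jmesPath:                     # invented field name
          - path: username            # extract .username from the JSON
            objectAlias: db-username  # written/synced under this name
          - path: password
            objectAlias: db-password
```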
F
But I think my only objection is that it only helps the secret sync feature, which I'm not a big fan of anyway. It could be confusing: it's already confusing what is provider configuration versus driver configuration, and developers and consumers of the CSI driver can get mixed up there. And then, if we had the feature in both places, it could be confusing.
F
But I could see this being generally useful across the providers. So I don't know; I'm having a hard time forming strong opinions about it. It seems reasonable to me.
F
I would like it to be able to apply to the files, not just the synced secrets, but I'm not sure of a way to do that in the driver, because of the way the driver works.
D
Yeah, right now we're trying to decide whether we implement it in our provider or do it through the driver, and that's why we're looking to see what everybody else's opinion on this is. We're probably going to do this either way; it's just a question of whether we do something that is more generally useful or something specific to our implementation.
A
But I can also share that link, and then maybe we can see what commonalities are there, and we can also look at the Vault provider to make a decision, because, like Tommy said, I don't have a strong preference right now. But we can review the doc that you shared; I can read it and add any feedback that I have.
B
But yeah, there's no issue yet, right? So maybe just open one so we can cross-reference it, and that way, if people are interested, they can plus-one it or whatever. But one thing I have a concern about: I really see the value of this, but my concern is that it's kind of a slippery slope in scope, where it could be the beginning of "oh, but I want this other thing," so then how do we...?
D
Yeah, I don't know what the criteria would be for other features. For this particular thing, I could see people wanting other formats, and whether or not you choose to support a given format, I'm not sure how you would decide that, except maybe on upvotes. But if we do open an issue for this one: I think there are a lot of customer requests for it on our side.
D
So I don't think we would have a lot of time to wait around to see whether the other feature was upvoted.
D
So if you guys aren't overwhelmingly in favor of this, it's likely we'll just go and implement it in our driver, or rather our plug-in, right now or soon.
G
For what it's worth, I do agree that this sort of feature probably belongs in the provider, because what you're going to pull out of a secret's contents is heavily coupled to the upstream provider you're talking to, and we do have quite a nice, clean separation of concerns between driver and provider, with files being the interface. This would kind of muddy the water on what each half is responsible for.
G
That said, I don't have strong opinions against it going into the driver. It just feels like a provider feature to me, personally.
D
Open the issue on the driver, and we'll probably go from there and make our decision. It doesn't sound like there's a lot of overwhelming support or desire for this at the driver level, at least not from the community here, so at this point it sounds likely that we'll just implement it in the provider. But I guess we'll give it a little bit more time and see what everyone's opinion is on that.
C
Yeah, that's me. If it's okay, can I go ahead and share my screen?
C
Yeah, so last time when we talked about this, it seemed like it was best to just narrow the focus down to the secretObjects, instead of also trying to implement a wildcard feature in the parameters section of the SecretProviderClass.
C
It's a property on the SecretProviderClass. So my design proposal is that when the user is listing out all the secrets in the parameters field, they can also add a secret list to add that secret to. In this case we're adding both of these secrets to the db-creds list, and then in secretObjects they can just reference the list, and that will be transformed to fit the shape that already exists, which is just the data property with the key and objectName fields.
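A hypothetical sketch of the proposed syntax; the secretList and syncList field names are invented here purely to illustrate the idea, since this is a design proposal and not an existing API:

```yaml
# Hypothetical sketch of the proposal: tag objects in parameters with a
# list name, then reference that list from secretObjects so it expands
# into the existing objectName/key data entries.
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: list-sync
spec:
  provider: azure
  parameters:
    objects: |
      - objectName: username
        secretList: db-creds   # invented field: add this object to a list
      - objectName: password
        secretList: db-creds
  secretObjects:
    - secretName: db-creds
      type: Opaque
      syncList: db-creds       # invented field: expand the list to data entries
```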
C
So yeah, I basically just have the secret list that I'm adding the db-creds entries to, I'm referencing it down in secretObjects as the list, and I'm still getting all the secrets.
C
Yeah, so the secret list is in the parameters field for the provider, and in the reconcile method of the SecretProviderClassPodStatus...
A
Okay, yes. I think that is a little tricky, because the schema for that is not set. Typically, anything within parameters is more just like a map, a map of string to a generic interface, so the schema is not something that's well defined. If different providers have their configurations shaped in different ways, then it's difficult for the driver to parse that field out, because it's within the provider configuration.
F
Yeah, as an example, the GCP provider doesn't call it objects; we call it secrets under parameters, and it just has resourceName and fileName instead of objectName. A lot of that is pretty different for the GCP one, which I think would make it difficult to do in a way that all of the providers can support. I think I can get an example of the secret syncing on the GCP side; I'll send it to you on Slack.
C
Okay, yeah, so the main concern there is just that the shape each provider is expecting could be different.
A
For syncing as a Kubernetes secret, it was added as a core type with explicit fields, so that the user can set those and the driver understands them. That is the reason the user has to define the same thing in two different places: once within the provider spec, which can be very generic with any type of keys, and then in the secretObjects field with the two mandatory fields, objectName and key, so that the driver can understand those fields.
C
What kind of recommendations could you think of to modify this that would make it a little more generic, usable regardless of what provider is being used? I guess that's where I'm getting stuck.
B
I feel like what you want is basically an easy way to just say: hey, look at this mounted path, loop through the files in that path, and assume that the name of the secret you want to sync is the name of the file. I feel like that's what you want, an easy way to say "use everything in that path," and I guess the driver kind of knows that, right? The driver knows where the path is.
G
Oh sorry, that was me plus-one-ing what you were saying, and I think that seems to be the extent of the knowledge we can confidently say the driver has: the file name. The object name and the file name are kind of synonymous here, and we could just say: list out every file name from this path.
B
Yeah, so if we can assume that you want everything in the file path to be mounted, and that the secret name is exactly the object name, then potentially you could just do it in a wildcard format. But I'm not sure; for example, how would you know the secret type? There are a lot of implications here.
F
Right, I mean, they could still specify the type. I'm not understanding the problem there.
B
Oh, I think what Ding wants is a simple way of saying: hey, for the number of files in this path, I want to sync everything, so that you don't have to loop through them and specify them one by one. But the problem with that is that the driver does not know what type of secret you want to create out of that content.
B
Well, right now we do; right now we expect the user to tell us exactly what type, the name, and the mapping, and we'd have to do it one at a time. But I think the proposal here is looking at one mount path and then everything in that path.
F
Yeah, my understanding is that you're trying to reduce the amount of boilerplate for all of the keys within a secret, but that each secret would be defined separately.
B
The secret type, though: is it an opaque one or is it a TLS one? That's one-to-one, right?
A
The data is going to be empty, or a wildcard. But I think there are a couple of assumptions here. One is that we're saying everything in the SecretProviderClass is going to be synced as a Kubernetes secret; that is one assumption we're making. The second one, and the reason I brought up TLS, is that there would be a .crt and a .key. But let's say there is also a CSR that the user wants...
A
But again, that is also an assumption, because in the example I was showing in the demo earlier I had TLS certs, but I also had individual secrets that were added to the mount. So that could be a use case where there is a mix. If we do assume this, then we have to tell users explicitly that each SecretProviderClass has to be only for a specific secret object.
F
I was going to say that the new atomic writer does support subfolders, so the assumption that everything needs to be synced could change. Right now I think some of the providers don't support paths; it's only file names in the SecretProviderClass. But yeah.
A
Not a blocker, but just in the interest of time: we have seven minutes and three more agenda items. In terms of action items, what do you think would be a good next step? Do you think we should add whatever we discussed today to the design doc and then go over it, or should we add it to the GitHub issue? What would you prefer?
C
Well, I do want to review everyone's list of concerns that were brought up today. I haven't had as much time to invest in this as I've wanted, so I do want to spend a little more time on it and see if I can take those other factors into consideration, for sure.
F
Yeah, the second item is just a reminder that we're trying to stick to the monthly release cycle, the second Wednesday of the month. So the next release of the driver is slated for 15 July, and we're targeting a 0.1.0. One of the issues that we had slated for that was the gRPC interface.
D
Which feature are we talking about? This isn't just the support for Helm, right? This is...
F
No, this is: instead of the providers writing the files directly, they respond with the view of the file system, right.
D
Okay, cool. Yeah, we saw this; we were just planning to implement it, behind a feature flag, when we got a chance. We don't have it scheduled at this time. The only concern was what we talked about a few meetings ago: the fact that we have that optimization where we don't write the secrets out again if they haven't changed. But I think you addressed that, right?
F
Yeah, in the design doc I can post a link to how I tried to address it, if it's not there.
F
There are no partial responses or partial writes. The way the file-writing library works is that it does a diff of the file system and the intended state, so it always needs the intended state; but if you return an empty response, it won't do the write.
F
So if, say, one of 100 secrets changes, you would need to fetch all 100 secrets; but in the case where nothing changes, you could keep the optimization.
D
Yeah, I understand what you're saying. I don't think we would want to do that; we would have to try and find the existing secrets and return them somehow, and still keep the optimization.
D
Yeah, that's what I'm implying there; that is probably what we would have to do. But in any case, we do plan to provide something when we can fit it into our schedule.
A
This is where it is, right? So we said we're going to add it, and then it's going to be made required at the next release. This one basically translates to 0.1.0; in terms of schedule, v0.1.0 is planned for July 15th, and 1.0 right now is planned for August, provided we don't hit any blockers.
D
So I guess I missed that: do you need it by August or by July 15th?
F
Yeah, that's just why I wanted to bring it up. I imagine we'll probably just push it.
D
Yeah, we could also document that we don't work with the current release of the driver. That would be another option if you guys don't want to wait.
F
We can; I can stay on and keep going, but yeah, I just wanted to call out that case.
B
The last one is hopefully quick. I just want to make sure: there were some Helm 3 CRD changes that went in, and I think we want to ship 0.1.0 with those, right? I'm just kind of curious whether we could make sure that the upgradability of the CRDs is included as a 0.1.0 blocker.
A
Yeah, I looked at some of the projects in Kubernetes, and one of the most common approaches for CRD upgrades with Helm is to have the user do a kubectl apply before they do the helm upgrade: a kubectl apply of the crds directory from the Helm charts, and then a helm upgrade to upgrade all the control plane components.
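As a rough sketch of that sequence (the chart path and release name below are placeholders, not the project's actual chart layout):

```shell
# Common CRD-upgrade pattern with Helm 3: Helm installs CRDs from the
# chart's crds/ directory on install but does not upgrade them, so apply
# them explicitly first. Paths and release name are placeholders.
kubectl apply -f charts/secrets-store-csi-driver/crds/

# Then upgrade the control-plane components via Helm as usual.
helm upgrade csi-secrets-store charts/secrets-store-csi-driver
```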
B
Sure, I mean, we can decide if this has to go into 0.1.0, but it would be nice if it did; that's my point. Okay.