From YouTube: Secrets Store CSI Community Meeting - 2021-10-28
A
Hey everyone, welcome to the CSI Secrets Store community call. Today is October 28th.
A
Please add your name to the attendees list if you haven't already. We have a very short agenda, so I'll help moderate. Does anyone want to help take notes for the call today?
A
So the first item... actually, we don't have any announcements here, but one announcement is that we published the 1.0 release a couple of weeks back. Thanks to all the contributors for helping out with all the features and fixes over time. It's been a while, and I think we've gotten it out. Hopefully we'll start seeing more adoption now that we have the 1.0 release, and the APIs are also v1.
A
Okay, the first item on the agenda is the file system permissions. Tommy, do you want to talk about it?
B
Yep, I just wanted to call out that there are these two issues that are basically asking for the ability to set different file system permissions on the mounts. I think our file writer is in a place, and the gRPC interface, where a lot of it is wired up. I think there are kind of two ways to do it: one is to extend the CRD config with a file system permission that applies to all files, and the other...
B
...is in the provider-specific YAML, where there's per-file information configured. That may require providers to extend their YAML to support something like "hey, this file has this permission and comes from this secret", that kind of thing. I just wanted to raise that before I dig too much into writing up a design, to see if people like the idea of being able to set file system permissions, and if anyone has any opinions on the...
B
...you know, approach: whether it should be the CRD, or the providers, or both.
B
And then I think the other thing is that right now we require the volumes to be mounted as read-only, and whether or not we want to support write operations to the volume from the pod, which I think would be needed for workloads that want to delete the secrets from the file system after reading them.
A
Yes, the read-only requirement was actually mandated when we moved to kubernetes-sigs, and it was mandated by SIG Storage and SIG Auth: since it's the CSI driver writing the contents, they didn't want the workload to modify them. So if that's something we want to change, then we probably want to check again with SIG Auth and SIG Storage to see if it's something they would like us to have.
A
But apart from that, for the file permissions, I think there has been quite a bit of ask in other providers too. As far as I've seen, I mostly direct users from the Azure provider to just open an issue here, so we have a summary of the issues. I think that is something we can support; I would plus-one it.
B
Okay, I'm just making a note of that.
A
Yeah, extending the current YAML that we have to get the permissions from the user. We already have it as part of the gRPC response; today we just set it to the default that we had before, but I think we can have that be based on what the user configures.
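(For illustration only, a minimal sketch of option 1 from the discussion above, a CRD-level default permission. The field and type names here are hypothetical, not the project's actual API types, and this is not an agreed design.)

    // Hypothetical sketch: add a default file permission to the SecretProviderClass
    // spec. The driver would forward it as the mode on each file in the provider
    // gRPC response instead of today's hard-coded default.
    package v1alpha1

    // SecretProviderClassSpec is an illustrative, simplified stand-in for the real spec.
    type SecretProviderClassSpec struct {
        // Provider is the name of the provider (e.g. azure, gcp, vault).
        Provider string `json:"provider,omitempty"`
        // Parameters are the opaque, provider-specific settings.
        Parameters map[string]string `json:"parameters,omitempty"`
        // FilePermission (hypothetical field) is a default mode such as "0400"
        // applied to every file written for this SecretProviderClass.
        FilePermission string `json:"filePermission,omitempty"`
    }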
B
Yeah, okay. I'll probably try to get a little write-up done first, before digging into coding, to see what the changes to the CRDs and the providers would be.
B
I'm working off memory here, so I'll look into that, because I guess we might already have a spot for the default permission that the driver sends the providers. I'm not sure where that permission is originally populated from.
B
Oh, so it would need to change. I'm just doing a search through our secret provider class YAML, and there's no way to set permissions in the secret provider class. So I think that permission is just a hard-coded thing that we tell the provider in the RPC.
B
Where we say the file system permissions, I believe that's hard-coded. So if we wanted a configurable default that applies to all files, then the CRD would need to be updated. So, do...
B
Yeah, yeah, that might be a good way to start: make it a provider feature first and then promote it to the driver level.
E
But wouldn't that apply as a default to all the permissions? I think one of the benefits of the secret provider class is that it's namespace-specific, right; it is bound to a particular namespace. So if it's at the namespace level, it would be restricted to that particular namespace and would not be the whole...
B
Sorry guys, I was going to say, I think what Anish and I were discussing is it being part of the provider YAML data portion, the provider-specific YAML within the secret provider class, versus it being a configuration in the secret provider class itself.
A
Okay, the next one is 1.0 and the current alpha functionality that we have.
B
Yeah, I added this here. Like I said, the announcement is we cut a 1.0, and as part of that we kind of have this core functionality, and then there are some things that are not yet stable. The core functionality is basically the secret provider class, the providers work, and we will write files to the pod's file system. But right now...
B
What's still alpha is rotation, and then syncing as Kubernetes secrets. I wanted to specifically call out in the meeting that we do have these things still as alpha and see if there were thoughts on priorities, kind of our strategy for moving either of these to stable. My two cents is that auto rotation is the feature that would be best to focus on...
B
...first, and try to promote that to stable. It's probably blocked, I would say, on the RequiresRepublish type work, but I just wanted to see if the community had any other thoughts on the alpha functionality.
A
Yeah, and then just to give a little update on the auto rotation. So today we have the rotation reconciler, which does it every two minutes, right, and I think a couple of months back I had a demo with RequiresRepublish. RequiresRepublish is where kubelet will automatically call the driver periodically, about every minute, to go and refresh the mount and also do any back-end operations that are required.
A
It'll be good for us to switch to RequiresRepublish in the long term, because then we don't need to have this custom controller running in the driver; we can rely on kubelet to give us all the required information. So, for instance, as part of rotation today we need to get the NodePublishSecretRef every time we do rotation, because that's not something we have. But if kubelet calls us periodically, the benefit is we get that as part of the mount request.
A
And then the second thing is, if you're using the token requests feature, which is available in 1.20 and 1.21, kubelet will also give us the service account tokens as part of the mount request. So basically we have all the required information, and at this point it's as simple as the driver just determining which provider to contact and then sending out all the required information. But RequiresRepublish is only available from Kubernetes 1.21, and we still need to support 1.20.
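(For reference, a minimal sketch of the kubelet-side configuration these two features rely on, assuming the storage/v1 API types available in recent client-go; the audience value is illustrative, and this is not the project's actual manifest.)

    // Minimal sketch: the CSIDriver fields that enable periodic republish and
    // service account tokens in the NodePublishVolume request.
    package main

    import (
        "fmt"

        storagev1 "k8s.io/api/storage/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        requiresRepublish := true
        driver := &storagev1.CSIDriver{
            ObjectMeta: metav1.ObjectMeta{Name: "secrets-store.csi.k8s.io"},
            Spec: storagev1.CSIDriverSpec{
                // kubelet periodically re-calls NodePublishVolume (about every
                // minute, per the discussion above).
                RequiresRepublish: &requiresRepublish,
                // kubelet requests tokens for the pod's service account and
                // includes them in the mount request. Audience is illustrative.
                TokenRequests: []storagev1.TokenRequest{{Audience: "api"}},
            },
        }
        fmt.Println(driver.Name)
    }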
A
1.19 is going to be end of life, probably today, so we still need to support 1.20. So what we were thinking, based on my discussion with Tommy, was that we can make a few changes in the rotation reconciler today to have some kind of backward compatibility that will work with 1.20, and for 1.21 we can gradually switch over to just using the RequiresRepublish feature. The required change for backward compatibility is that today, providers like GCP and HashiCorp generate the service account token that they require by contacting the API server directly.
A
One thing we were thinking is we'll make that code part of the driver, which means that during rotation the driver will generate the token and then pass it on to the provider in exactly the format that kubelet passes it to the driver.
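(To make the backward-compatibility idea concrete, here is a rough sketch, not the project's actual implementation: the driver mints the token itself via the TokenRequest API and forwards it in roughly the shape kubelet would use. The volume-attribute key and JSON shape are stated from memory and should be verified against the driver and kubelet source.)

    // Rough sketch of the 1.20 fallback: request a token for the pod's service
    // account and serialize it the way kubelet does for CSI drivers.
    package tokenfallback

    import (
        "context"
        "encoding/json"
        "fmt"

        authenticationv1 "k8s.io/api/authentication/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func tokenAttribute(ctx context.Context, client kubernetes.Interface, ns, sa, audience string) (string, error) {
        tr, err := client.CoreV1().ServiceAccounts(ns).CreateToken(ctx, sa, &authenticationv1.TokenRequest{
            Spec: authenticationv1.TokenRequestSpec{Audiences: []string{audience}},
        }, metav1.CreateOptions{})
        if err != nil {
            return "", fmt.Errorf("creating token for %s/%s: %w", ns, sa, err)
        }
        // kubelet passes tokens as a JSON map keyed by audience, under the
        // "csi.storage.k8s.io/serviceAccount.tokens" volume attribute (simplified here).
        out, err := json.Marshal(map[string]map[string]string{
            audience: {"token": tr.Status.Token},
        })
        if err != nil {
            return "", err
        }
        return string(out), nil
    }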
A
So that part of the code will work for 1.20, and then when we switch to 1.21 with RequiresRepublish, we don't need that part of the code; kubelet will just give us all the required information. So until 1.21 the driver will handle it, and post 1.21 kubelet will provide us with everything.
A
So we were thinking we want to start these changes as part of the 1.1 milestone. There's a RequiresRepublish issue which we want to do, and once we have that, we can basically put RequiresRepublish behind the same enable-rotation feature flag. Based on the Kubernetes version, we can switch between RequiresRepublish and our own rotation reconciler, and then eventually get rid of the rotation controller that we have in the driver long term. That will be the way we can actually get this to stable.
F
Oh, that sounds fantastic to me. I just have one question for my own understanding: how do you define or configure how frequently kubelet will call with RequiresRepublish?
A
That's a good question. So that is not configurable, but what we were thinking is keeping the current format, where we define the rotation poll interval on the driver. We'll keep that, and then basically the driver will check whether the RequiresRepublish call is made within that duration. If it is, then it will skip it; it will just send an OK response to kubelet and not call the provider. Only if the next republish call is beyond the rotation poll interval will it send it to the provider. One other good thing is this also opens up the option for us to define a rotation poll interval per secret provider class.
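(A small sketch of the gating idea just described; the names are illustrative and not the driver's actual types, and the real check would live in the driver's NodePublishVolume path.)

    // Sketch: on a kubelet republish call, only contact the provider if the
    // rotation poll interval has elapsed since the last provider call.
    package rotation

    import (
        "sync"
        "time"
    )

    type republishGate struct {
        mu           sync.Mutex
        lastCall     map[string]time.Time // keyed by a volume/pod identifier
        pollInterval time.Duration        // e.g. the driver's rotation poll interval
    }

    // ShouldCallProvider reports whether this republish should be forwarded to the
    // provider; if not, the driver just acknowledges kubelet and does nothing else.
    func (g *republishGate) ShouldCallProvider(key string, now time.Time) bool {
        g.mu.Lock()
        defer g.mu.Unlock()
        if last, ok := g.lastCall[key]; ok && now.Sub(last) < g.pollInterval {
            return false // still within the poll interval: skip the provider
        }
        if g.lastCall == nil {
            g.lastCall = map[string]time.Time{}
        }
        g.lastCall[key] = now
        return true
    }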
A
So we were thinking, once we introduce these changes into the driver and get a release, maybe the next phase we can think about is having a rotation poll interval in the secret provider class, so the user can decide what the poll interval is for different secret provider classes.
F
Yeah, that would make a lot of sense, okay. And until we have that additional option, not much will change from the provider's point of view; it'll still be gated by the driver. So yeah, that sounds like a good path to me.
A
Then, we already have a design doc for RequiresRepublish. I will update that with what we discussed today and the backward compatibility plan, and once we get a sign-off, I can open up the ones I have.
So this is for rotation; that's what we've been thinking to do to get it to stable in terms of features. The other one is the secret sync, and for secret sync we were thinking we want to go down the route of extracting it out of this project and having a standalone controller.
A
So this was initially discussed with SIG Auth, and SIG Auth liked the idea. The next step is, we have an initial design doc; we'll basically update that to cover all the odd scenarios and a full end-to-end design on how we want to implement it, and we can take that back to SIG Auth again in the next couple of weeks.
B
I was just going to add that, yeah, there have been a number of feature requests along the lines of "I'm just trying to sync this, do I really need to make a mount?", and making it separate would mean people could use the secret provider class but not have to have unnecessary mounts of their secrets.
E
So my question is: if we are going to spin off a different sub-project anyway, are we focusing on making it stable in that project? I mean, we should not be focusing on it here, right?
A
That's the goal, because for users who are already using the CSI driver it will be a seamless migration; they can still use their existing secret provider class, and that will just be used to sync as Kubernetes secrets. The secretObjects field is what really tells the controller today that this needs to be synced as a Kubernetes secret, so that field will still stay, and then the objects field, the parameters which are specific to the provider, stays the same.
A
So the provider gets all the contents and returns them in the RPC response, and then the sync controller can just look at secretObjects and sync them as Kubernetes secrets.
A
The advantage with the driver is that all your authentication permissions are given to the application pod rather than the driver, right. So what we're trying to do is design the controller in a similar way, where all the required permissions are given to your application and not to a single controller that's going to do everything.
A
So if we handle that, then it's basically just a spin-off; I would say it's just another direction for the driver. What was supported in the driver, we just have as a separate project. We're splitting the current project into two, and then it'll just be part of the SIG Auth community, because it's essentially built as one project.
A
Okay, I see, thank you. Yeah, and there was initial interest when we presented it, but again we just need to get a sign-off, right. So when we have the design, if SIG Auth still thinks this fits well as a SIG Auth project, then it will just fall under SIG Auth; but if not, then we need to see how we want to do it.
A
But I think the goal for this part is: how do we separate out mount and sync? Today most of the users want it mostly for sync, and making them do the mount just because they want sync is really hard to explain; in terms of user experience it can be really hard, right, because they just have something in the mount which they are not really using. So the end goal is to try and make these two things separate.
A
So the user gets to decide what they want to do, either sync or mount, but just not rely on one for everything. And also, I think with sync we don't really need this to run as a DaemonSet. When it's running as a driver, it needs to run as a DaemonSet because kubelet on every node needs to talk to the driver on that node, but for sync all you really need is a single controller that can do cluster-wide operations.
F
How would identity work for that project? Because obviously we're currently using the requesting pod's identity, and I'm just trying to figure out what identity the providers would use for fetching secrets.
A
So we had a very brief discussion, I mean me and Tommy had chatted about it, but at least for the identity piece we can still leverage the application pod's service account. Today, for the CSI driver, for mount we generate the token based on the pod's service account, right; we see what service account it is referencing and then we just generate a token for it. So for the controller we can still try to do the same, even though there's a secret provider class. We were thinking about...
A
...what the secret provider class is referencing, so rather than just having a pod tied to the service account, we can have the secret provider class tied to the service account entity. Those were the initial thoughts. But those are the key pieces that SIG Auth needs before it can sign off: if we can keep the same model, where the identity and permissions come from the app instead of being given to the controller, that would be the desired approach.
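(Purely illustrative sketch of the idea just mentioned, with made-up type and field names that do not exist in the project today: the standalone sync controller would learn which application service account to use from the resource itself, mirroring how the driver uses the mounting pod's service account.)

    // Hypothetical sketch only; not an agreed design.
    package secretsync

    // SecretSyncSpec shows one way a standalone sync controller could be pointed
    // at an existing SecretProviderClass and an application identity.
    type SecretSyncSpec struct {
        // SecretProviderClassName references the existing SecretProviderClass
        // whose secretObjects should be synced as Kubernetes Secrets.
        SecretProviderClassName string `json:"secretProviderClassName"`
        // ServiceAccountName is the application service account whose token the
        // controller would request (via TokenRequest) and forward to the provider,
        // instead of the controller holding broad permissions itself.
        ServiceAccountName string `json:"serviceAccountName"`
    }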
A
Yeah, and for permissions on the controller, there are already a good number of projects today that do it really well, where they take the permissions on the controller. So I think the key piece would be the identity: having a separate identity to use, rather than having global permissions for the controller.
B
At least the providers hopefully will not need to change; they would just then support secrets syncing separately.
F
I did have one small question for the end of the agenda. I was doing some version upgrades the other day and I noticed the proto definitions, let me just put a link in the chat, the proto definitions for the providers are still under v1alpha1. I wasn't sure whether that's on its way to getting upgraded, or if we still expect that to stay labeled v1alpha1 for a while.
F
Okay, yeah, I just wasn't sure if it was meant to be tied to the CRD versions or not.
A
So, at the least, I think there are a few fields that we want to deprecate in the current one that we have, right, like, for instance, the permission.
A
Probably we want to deprecate that in the future, where, if there's no permission specified, the driver picks a default, and then the provider accepts the permissions in the YAML and gives them to the driver. So if there are fields that we want to remove, I think we can target getting rid of those, and then once we think we have the most minimal set that we need...
A
...then probably we can say: let's plan to go to v1beta1, because once we go to beta, I think we can't remove any of the existing fields. Additional fields would still be fine, but we just have to make sure that we remove all the fields that we don't want currently, and then we can actually plan to go to v1beta1. It doesn't have to be tied to any of the driver or provider milestones.
F
Yeah, I was just trying to get rid of all the v1alpha1 references.
A
I think the upgrade test is failing, so I'll open an issue for this and we can take a look. But I think the long-term plan is we move this to the e2e provider as well, so we move the upgrade test to the e2e provider, and then the only provider tests that we have are just the basic tests for every provider, just for conformance.
E
I think we should. I think there are a couple of other things in there that we thought, you know, let's let them sit there and then we will remove them. Probably we should take a holistic look at all the tests we want to remove, but I think we can hopefully do something.
A
But I think the other jobs are running fine, the rest of the jobs that we have. There was a small issue with the new change in the GitHub UI and helm install, but I think Helm fixed that, so most of the other jobs are passing.
A
So, on this PR, Tommy, I had a comment: if we don't have any plus-ones on this, then I'm fine with closing the PR, but it'll be great if others can take a look as well. The idea is switching to errors.Wrap, just so that we wrap all the errors that we return today.
A
Or the other way we can do it is to use fmt.Errorf with %w; we'd go through our code, audit it, and make sure that all the errors are wrapped. So take a look at this PR.
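(For reference, the two wrapping styles mentioned. Which one the PR actually uses is the open question, so both are shown as generic examples rather than the project's code.)

    // Two equivalent ways to wrap an error so callers can use errors.Is / errors.As.
    package wrapping

    import (
        "fmt"
        "os"

        pkgerrors "github.com/pkg/errors"
    )

    func readWithPkgErrors(path string) ([]byte, error) {
        b, err := os.ReadFile(path)
        if err != nil {
            // errors.Wrap annotates the error and keeps the original cause.
            return nil, pkgerrors.Wrap(err, "reading secret file")
        }
        return b, nil
    }

    func readWithStdlib(path string) ([]byte, error) {
        b, err := os.ReadFile(path)
        if err != nil {
            // %w wraps the error using only the standard library.
            return nil, fmt.Errorf("reading secret file %s: %w", path, err)
        }
        return b, nil
    }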
A
This one is still a draft; I'm updating some of the helm labels, but we can take a look when the user moves it to review-ready. And for the sync-all feature, I think Tommy had added a comment that it requires a rebase, and then...