From YouTube: Secrets Store CSI Community Meeting - 2021-08-05
A
Hey everyone, welcome to the CSI Secrets Store community call. Today is August 5th, 2021. This call falls under the CNCF code of conduct and is being recorded; the recording will also be published to YouTube.
A
So we released v0.1.0. One issue that we got, which was reported on the provider side, was that the upgrade jobs don't truncate the name. So if the user is using an extremely long name for the Helm release today, then the release fails with "name cannot be longer than 63 characters."
A
This
is
a
more
obvious
issue
if
the
secret
driver
charts
are
used
as
dependencies
in
provided
charge,
and
I
think
like
one
way
to
do
that,
one
way
to
avoid
hitting
that
issue
is
by
actually
setting
a
full
name
override
for
the
dependent
charts
in
the
provider
charts.
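One hypothetical way to apply the workaround described above is to pin a fullnameOverride for the driver sub-chart in the provider chart's values.yaml, so generated resource names stop inheriting the (possibly very long) parent release name. The sub-chart alias below is an assumption, not necessarily the actual chart's dependency name:

```yaml
# Illustrative provider-chart values.yaml fragment (sub-chart alias assumed).
secrets-store-csi-driver:
  # Pin the dependent chart's base name so its generated resource names
  # no longer inherit an arbitrarily long parent release name.
  fullnameOverride: secrets-store-csi-driver
```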
A
So that is something that we'll be including in the next release, but that was the only issue that we got. And then the second one, which Nilekh talked about before the recording, is about installs.
A
That
is,
if
you
install
the
v0.1.0
helm
release
and
then,
if
you
want
to
roll
back
to
a
previous
version,
which
is
0.23
because
the
crds
have
been
moved
to
crds
directory
helm
will
not
allow
you
to
do
that
and
then
the
error
would
be
that
the
cids
don't
have
the
annotation
saying
it's
managed
by
helm.
So
the
rollback
is
not
possible
and
right
now
it's
not
there
in
our
release,
notes
and
we'll
be
adding
it
to
our
policemen.
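For background, Helm 3 refuses to upgrade or roll back onto resources it does not consider its own; a resource must carry Helm's ownership metadata before Helm will adopt it. A minimal sketch of the labels and annotations Helm checks (the values shown are illustrative):

```yaml
# Metadata Helm 3 expects before it will adopt an existing resource,
# e.g. a CRD that an older chart version rendered from templates/.
metadata:
  labels:
    app.kubernetes.io/managed-by: Helm
  annotations:
    meta.helm.sh/release-name: csi-secrets-store    # illustrative release name
    meta.helm.sh/release-namespace: kube-system     # illustrative namespace
```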
A
So this was one issue that I wanted to discuss. What we've been seeing is that when we actually cut a release, the image build on Google Cloud Build takes about 1 hour 15 minutes. We build six images: two for Linux, that is amd64 and arm64, and then for Windows we build 1809, 1903, 1909 and 2004.
A
This build on GCP takes about one hour 15 minutes, and most of the time is spent downloading dependencies and then building the binary. So there were a couple of options we could try to speed it up. One is to build the binary outside the image build: build once for linux/amd64, once for linux/arm64, and then once for windows/amd64.
A
That was the initial approach that I had documented in this issue. But one problem with that is these builds will no longer be reproducible, because the binary is built on the host and then loaded into the Docker container.
A
So the other approach, which most of the Kubernetes projects take, is to vendor the dependencies: they maintain a vendor directory, and the Go build is just go build -mod=vendor. So I wanted to bring that up for discussion here and see what the other contributors think about going with this approach. If we do this, it'll speed up the build process considerably.
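As a rough sketch of the vendoring approach under discussion: after committing the output of go mod vendor, the build compiles against the local vendor/ tree instead of downloading modules. The Cloud Build step below is hypothetical; the image, output path, and package path are assumptions, not the project's actual cloudbuild.yaml:

```yaml
# Hypothetical cloudbuild.yaml fragment: compile against the committed vendor/ tree.
steps:
  - name: "golang:1.16"
    entrypoint: go
    env: ["CGO_ENABLED=0", "GOOS=linux", "GOARCH=amd64"]
    # -mod=vendor skips module downloads entirely; the package path is assumed.
    args: ["build", "-mod=vendor", "-o", "_output/secrets-store-csi", "./cmd/secrets-store-csi-driver"]
```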
B
I think that makes sense. Is there any... I guess, what do other projects do to make sure it's always up to date, since it's built outside?
A
Thank you. And I think one pressing issue that we've seen lately is that the Windows tests have been really flaky, and the reason for that is a bug involving node-driver-registrar. node-driver-registrar is the component that registers the CSI driver with the kubelet during startup, and with Windows, what we've been seeing is that when node-driver-registrar comes up, the kubelet does not make the RPC call to invoke the driver registration. So basically this is the call that the registrar has to get from the kubelet.
A
What we're seeing on Windows is that this call never comes, and it has been happening a lot with 1.21. So two or three weeks back we upgraded our Kubernetes versions to 1.21 for all the Linux and Windows tests, and then we saw Windows failures quite often, so we rolled back to 1.20, and then we have been discussing it on this issue.
A
I think the issue is somewhere in the kubelet code, but node-driver-registrar has implemented a temporary fix, which is basically adding a new node-driver-registrar mode that enables a liveness check. So we can configure a liveness probe on node-driver-registrar to check if the driver is registered. If the liveness probe fails, it will restart node-driver-registrar, and it will do that until the driver is registered successfully. The PR to add that mode has been merged, and the release is planned for the next two weeks.
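A sketch of how that new mode could be wired up once it ships, based on the description above: an exec liveness probe re-runs the registrar binary in a registration-check mode, and fails until the driver is registered. Flag names, image tag, and socket paths here are assumptions; the released node-driver-registrar docs are authoritative:

```yaml
# Hypothetical node-driver-registrar container with a registration liveness check.
- name: node-driver-registrar
  image: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.3.0   # assumed tag
  args:
    - --csi-address=/csi/csi.sock
    - --kubelet-registration-path=/var/lib/kubelet/plugins/secrets-store.csi.k8s.io/csi.sock
  livenessProbe:
    exec:
      command:
        - /csi-node-driver-registrar
        - --kubelet-registration-path=/var/lib/kubelet/plugins/secrets-store.csi.k8s.io/csi.sock
        - --mode=kubelet-registration-probe   # the new mode described above
    initialDelaySeconds: 30
    timeoutSeconds: 15
```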
A
So once that is done, we can update to the latest image, and then we can also start testing on the 1.21 Kubernetes version for Windows.
B
Okay, is there a test for the node registration? Like, why did we not see this before the release?
B
Maybe we can add a test for this, if it doesn't exist already, for SIG Windows. Yeah, maybe that's a good follow-up for them, I don't know.
A
Yeah, and I think those are the main issues. But we have a stable milestone planned for September 13th, so that's what we're working towards: basically enhancing the tests that we have, making sure they're not flaky, and then also making sure all the docs are there, and then also just the production readiness.
A
The stable release will be with what we have today that has been well tested and used. So that is basically features like rotation and sync secrets, which are there today, and we'll have a 1.0 release with that. Before the 1.0, I'm thinking we will have another minor release, because the last release was the first one where we actually did release branches and all the other release processes, so we had to make changes in the pipelines and everything to get it working.
A
So now that we've got that done, we're going to try another release, which will include a few bug fixes, like on the enhancements side, and try out a release branch, just to get the process hardened. And then we also have a PR to document the process, so when we do the next release we'll basically merge the documentation as well, so that others can cut the release. After that, the next one will be the stable release that we're planning for.
B
One question about the stable release: do we differentiate which features are alpha or beta? When we say stable, is it just the driver, or are all the features stable as well, all the feature flags?
A
So it's only the driver's default feature, that is the mount; the sync and rotation remain in alpha. That's because we eventually want to move to RequiresRepublish; that is where we want to head.
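RequiresRepublish here refers to the CSIDriver spec field that makes the kubelet periodically re-invoke NodePublishVolume for mounted volumes, which is the mechanism rotation would eventually move to. A minimal illustrative manifest (field availability depends on the cluster version):

```yaml
# Illustrative CSIDriver object enabling periodic republish of mounted volumes.
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: secrets-store.csi.k8s.io
spec:
  podInfoOnMount: true
  requiresRepublish: true   # kubelet periodically re-calls NodePublishVolume
```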
B
Gotcha. Something to kind of think about is if we were to move the sync to another project.
A
Yeah, we haven't actually published an issue saying that is where we're headed, but Tommy and I are working on the design doc. Once we have that, we'll create an issue and link the design doc, so we can start the conversation there and see if users have any feedback on it as well.
B
Okay, great. And definitely open an issue, so people can chime in.
E
I noticed that I got all the secrets to sync, and the user could specify the type, but when I deleted the pods with those mounted secrets, the secrets weren't deleting. So I wanted to ask how the driver knows to delete the secret when the pod has been deleted.
A
Yeah, so for the synced Kubernetes secret, what the driver does when it creates the secret is also add the pod as an owner reference. So you create a pod initially that's referencing a SecretProviderClass and also requesting syncing as a Kubernetes secret. At that point the driver creates a Kubernetes secret, and in ownerReferences the driver adds the pod as the owner of the Kubernetes secret.
A
So when you delete the pod, the owner is deleted, and Kubernetes will automatically garbage-collect the secret, rather than the driver doing it periodically.
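An illustrative shape of what the driver sets on a synced secret (the names and UID below are hypothetical); once the owning pod is deleted, the Kubernetes garbage collector removes the secret:

```yaml
# Hypothetical synced Secret: the pod that triggered the sync is its owner.
apiVersion: v1
kind: Secret
metadata:
  name: my-synced-secret            # assumed name
  ownerReferences:
    - apiVersion: v1
      kind: Pod
      name: my-app-pod              # assumed pod name
      uid: 7b0b4f5e-0000-0000-0000-000000000000   # the owning pod's actual UID
type: Opaque
```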
E
Okay, so it needs to have that owner reference in order to know to delete it. Okay, that's probably enough info for me to dive in and try to figure out why it's not deleting, but I might message you to confirm whether I'm understanding that correctly, if I can't find it.
E
Baby steps, yeah, very small steps, but I would like to hopefully have it done by the next one.
E
Oh, that was another thing: I've been doing all of the testing with Vault. I might need help on seeing if it works with Azure as well as GCP, because I don't have accounts for either of those, and I'm not sure I'll be able to find time to figure it out. So I might reach out for some help on testing with those providers.
A
Okay, without putting you on the spot, Nilekh, do you want to do a demo now?

C
Oh yeah, sure.

C
Okay, so what we are demoing is basically the following. Recently we implemented CRD upgrade hooks, with which we will be managing how we install the CRDs and how we upgrade the CRDs, since with Helm 3, Helm is not going to manage them; it leaves that to the user. So we implemented custom Helm hooks which allow us to do that. Before showing you what we implemented, I will first show how it works, then we'll take a look at how we implemented it, and then afterwards we will try to modify our CRD and see if the Helm upgrade works as well. Okay, so we have a make target which will install the Helm release.
C
Okay, so I think I was testing before; that's why it was saying it was already there. But if you can see over here, we created basically three resources: we created a service account, we created a role and a role binding for that service account, and then we actually create a CRD hook in order to create, or install, the CRDs. So if you see over here, we first delete any existing resources it has.
C
If,
if
there
are
any
any
of
these
resources
present
in
the
cluster,
it
will
delete
itself
and
then
it
will
create
it
again.
So
we
have
service
account
created
and
we
have
cluster
role
created
and
then
customer
role
binding
created
and
then
it
will
install
the
I
mean
it
will
install
this
job,
which,
which
will
run
the
pod
and
inside
the
part.
We
are
just
doing
the
code
catalog.
C
So that way it will install the CRDs. And then, if we come down here, we will see all of our hook resources, the resources that we are creating, and it's probably better if I show it.
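A rough sketch of the hook being described, assuming a pre-install/pre-upgrade Job whose pod runs kubectl apply against the chart's CRDs; the names, image, and paths are illustrative, not the chart's actual manifests:

```yaml
# Hypothetical Helm hook Job that applies the CRDs before install/upgrade.
apiVersion: batch/v1
kind: Job
metadata:
  name: secrets-store-csi-driver-upgrade-crds     # assumed name
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "10"                   # run after the hook's RBAC resources
spec:
  template:
    spec:
      serviceAccountName: secrets-store-csi-driver-upgrade-crds  # the hook's service account
      restartPolicy: Never
      containers:
        - name: crds-upgrade
          image: bitnami/kubectl:latest           # assumed image bundling kubectl and the CRDs
          command: ["kubectl", "apply", "-f", "/crds"]   # assumed path to bundled CRD yamls
```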
C
So basically, this is the upgrade hook we have, and as I was mentioning, we are creating different resources in order to give the right permissions for this. So we have a cluster role and then a cluster role binding.
C
We have a service account which we give permissions to, so that the pod which is actually doing the CRD install will have all the necessary permissions, and then we have a job. So initially we had a pod; the way we had implemented it, we just had a pod, and the pod used to do the same thing, I mean it was using the same image, but then there was an issue there.
C
What we found was it was not waiting for this command to be fully completed; it was exiting before that. And what Anish found is that we can use a job instead, and that way the job will actually wait for the pod to finish, or to go to the completed state, and that way we know for sure that the CRDs are installed before we move ahead with the remaining install.
C
So that was the update we did to this installation. So this is pretty much what goes with the resources. One thing that I personally learned was the Helm hook annotations, and specifically the delete policies. This was something I was worried about: we are giving a lot of permissions to this service account, and how can we get rid of it? But with this, if you specify this annotation, Helm takes care of deleting all the hook resources. So if we switch back here...
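A minimal sketch of the delete-policy annotation being discussed; attached to each hook resource, it tells Helm to clean the resource up again, so the elevated RBAC objects don't linger. The policy values are real Helm 3 options; the placement is illustrative:

```yaml
# Each hook resource carries these annotations so Helm deletes it again.
metadata:
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    # Remove any leftover copy before a new hook run, and remove the
    # resource once the hook has succeeded.
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
```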
B
While you do that, I do have a question about the job-versus-pod thing.

C
Sure.
B
Just
wondering
other
projects
have
been
using
pod
for
that
hooks
for
the
upgrade
hook.
So
I'm
wondering
if
other
projects
have
maybe
seen
the
same
issue
where
the
the
the
you
know
the
cube
ctl
apply.
It
hasn't
finished
before
all
other
components
run
in
the
helm,
upgrade
process.
A
So this could also just be a Helm-related issue, because I was trying this out a few days before the release, and I observed that Helm is only waiting for the pod to reach the completed state. Every time I was looking at the logs, the pod starts up, and even before it starts running it just goes to completed and starts terminating. And then we tried to repro this with an older version: Nilekh had an older version of Helm, but we actually didn't see the issue there.
B
Right
and
I
guess
in
this
case
it
doesn't
matter
right,
because
it's
it's
the
process
that
will
that
just
runs
at
the
beginning
anyway,
right
meaning
it
doesn't
need
to
be,
it
doesn't
need
to
stay
around.
So
that's
right!
Okay,
I'm
I'm
gonna,
ask
other
project
if
they're
seeing
something
similar
with
the
pod,
because
almost
everybody
use
pod,
just
because
it's
the
first
thing
that
people
think
about
right,
curious,
like
which
helm
did
you
try
that
had
this
issue
or
version.
C
So what I was trying to do was just see the YAML.
C
Okay, I'm not sure; I'm actually running off of my custom branch as well, in which I'm making modifications for the provider, so I'm not sure if that is creating any problem. But I pretty much wanted to show the installation part. The only thing I was going to try to show is: maybe we can add another parameter over here and do a helm upgrade, so it also reflects into that YAML and the CRDs.
E
Sorry, try the command that Anish put into the chat.
B
The change you made in the YAML: how does that get applied?
C
How
does
it
get
it?
Sorry,
actually
you're
right,
we
are
using
the
release
chart.
So
we
are,
we
didn't,
do
any
changes
over
there
and
I
am
just
doing
the
changes
directly
in
here.
So
if
I
apply
directly
from
this
directory,
then
it
will
take
effect.
Sorry,
my
bad-
I
am
installing
directly
from
this
guy
here.
C
That's why it is not reflected. Sorry, maybe I can prepare it really well for next time and then demo it again.
B
Yeah, I'll take a look, and I'll test it on a couple of other repos just to make sure. If it's a common issue, then we should definitely open up an issue on Helm, though.
A
Okay, I think that's it. Is there anything else that anyone wants to discuss?