From YouTube: Secrets Store CSI Community Meeting - 2021-03-18
A: Okay, welcome everyone. Today is Thursday, March 18th. Welcome to our Secrets Store CSI community call. Just a reminder if you're new to the community call: this call is governed by the CNCF Code of Conduct, so just be nice to everyone and I think you'll be okay.
A: And I guess before we kick it off, let's see. We do have someone who I believe is new to the community call: Josh.
A: Good morning. Hey, good morning. All right, so let's get into it. I'll be moderating, and I'll try to do double duty unless someone wants to help me out with taking some notes. And with that, let's go ahead and get into the first agenda item, by Anish: let's talk about some of the highly talked-about disconnected scenarios with this design. So Anish, take it away.
B: Yeah, so I have a design doc out there for disconnected scenarios. Initially, when I drafted it, it was kind of a summary of what we had already discussed in the previous community call, and I also shared it with folks internally and got some feedback. Based on that, I made a few changes last night. The idea is, in the case of disconnected scenarios, if an existing deployment is scaled up or an existing pod gets restarted, we want to allow the pod to run without any mount failures, and we also have to figure out a way to provide the secrets from the external secret store to that particular pod without any network access. So for the first problem, the way we can solve it is to add an optional field, allowMountFailure, in the SecretProviderClass.
B: So when the CSI driver gets a NodePublishVolume call for a restarted pod, or an existing pod that's scaled up, if allowMountFailure is set to true, then it will mount the volume, the tmpfs, and then it will call the provider to get the secrets. But if the provider call fails, it will still assume it's a success and return to kubelet, so that the pod can start successfully.
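The flow described above, mount the tmpfs, call the provider, and treat a provider failure as success only when the opt-in flag is set, can be sketched roughly like this. This is an illustrative Python sketch, not the driver's actual code; names like `node_publish_volume` and `allow_mount_failure` mirror the discussion, and the callables are hypothetical stand-ins:

```python
def node_publish_volume(mount_tmpfs, call_provider, allow_mount_failure=False):
    """Sketch of the proposed NodePublishVolume handling.

    mount_tmpfs:   mounts the in-memory volume for the pod (needs no network)
    call_provider: fetches secrets from the external store; raises on
                   network failure
    """
    mount_tmpfs()
    try:
        call_provider()
    except ConnectionError:
        if not allow_mount_failure:
            raise  # default behavior: the pod's volume mount fails
        # Opted in: report success to kubelet so the pod can start;
        # the volume is mounted but has no provider content yet.
    return "success"
```

With the flag set, a provider outage no longer blocks pod startup; the separate fallback step discussed next is what actually fills in the contents.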
B: So that's the first step, right: to allow the pod to start. Then the second thing is: how does the pod get access to the secrets from the external secret store? The last time we discussed this, the idea was that the application would have fallback logic: if the files are not found in the pod mount, the application would rely on the Kubernetes secret that was synced before.
B: One concern with that is that applications designed for disconnected scenarios have to have an understanding of Kubernetes secrets, and they need this custom logic just to be able to work in disconnected scenarios. But one thing we can do is, instead of having that fallback logic in the application, move it into the CSI driver. So in the case of disconnected scenarios, we use the Kubernetes secret that was synced before as our external secret store.
B: So after the mount, we invoke the provider call and ask the provider to give us all the contents for the SecretProviderClass. But if the call fails, then we look through the SecretProviderClass and see if all those contents were previously synced as a Kubernetes secret. If they were, then we can read the Kubernetes secret from the CSI driver and write the contents of the secret into the pod mount. That way, even pods in disconnected scenarios will start up with the volume mount and will have all the contents.
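The driver-side fallback being proposed, try the provider first, and on failure reconstruct the files from the previously synced Kubernetes secret, might look roughly like this. All names here are hypothetical illustration, not the driver's real types:

```python
def populate_pod_mount(call_provider, read_synced_secret, write_file, wanted_objects):
    """Sketch of moving the disconnected-scenario fallback into the driver.

    call_provider:      returns {object_name: content} from the external store
    read_synced_secret: returns the previously synced Kubernetes secret's
                        data as {key: content}, or None if never synced
    write_file:         writes one file into the pod's tmpfs mount
    wanted_objects:     object names listed in the SecretProviderClass
    """
    try:
        contents = call_provider()
    except ConnectionError:
        synced = read_synced_secret()
        # Fall back only if every requested object was synced earlier;
        # otherwise the pod would start with a partial view of its secrets.
        if synced is None or not all(name in synced for name in wanted_objects):
            raise
        contents = {name: synced[name] for name in wanted_objects}
    for name, data in contents.items():
        write_file(name, data)
    return contents
```

This matches the caveat raised later in the call: the fallback is only complete if everything in the SecretProviderClass is also listed in the synced secret object.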
B: So this is what I have in the design doc. One thing is: when users create a SecretProviderClass in disconnected-scenario clusters, they have to make sure that everything they want to get from the external secret store is also synced into the Kubernetes secret object. If they have it synced in the Kubernetes secret object, then we can read from that and mount it for all the pods during the time when there is no network.
B: Yeah, in terms of applications running during disconnection, or prior to disconnection, it should be the same: they should still be able to have a single piece of logic which reads the secrets from the pod file system, and we just move the fallback logic into the CSI driver. So the CSI driver knows how to handle it during disconnected periods or when it has network access.
C: Actually, yeah, reading the comment that you had around the fallback logic, I actually had a question. As an application developer, if I'm looking for the Kubernetes secret volume and not the CSI volume, then there really shouldn't be anything that I need to do, right? Because I just assume that the Kubernetes secret will have what I need, and if it's disconnected, it would still have the data, right?
B: So that's one option, right, but that also means they always use the Kubernetes secret volume instead of having fallback logic. So they define the CSI volume, but the CSI volume only triggers creation of the Kubernetes secret, and then they mount the Kubernetes secret as a Kubernetes volume.
B: I think, as a user, they would just want to use a single volume, right? They wouldn't want the CSI volume to trigger the Kubernetes secret and then use that as another volume. So they can always just rely on the CSI volume, and we can say that during disconnected periods the Kubernetes secret is our external secret store. We will automatically read from that and populate it for you in the single volume, so you can always read from this one place.
C: Gotcha. And did you say that the assumption is that they've turned on other features, right, like the secret rotation stuff?
B
Not
really
required
with
this,
we
don't
need
to
have
rotation,
the
only
assumption
is,
or
actually
we
should
tell
users
recommendation
is
every
secret.
That's
critical
for
them.
You're,
even
during
disconnect
period,
has
to
be
part
of
their
sync
secret
object.
So
in
the
secret
object
they
need
to
have
all
the
credentials
as
key
values
so
that
everything
is
already
there.
And
then
we
populate
all
that
in
the
pod
mount
when
it's
required.
B: So, for instance, today in the SecretProviderClass you could have secret1, secret2 and secret3, but in secretObjects, if you only have secret3 because you need that as an environment variable, then we would miss out on secret1 and secret2, right? If they need all of those in the pod mount, they just have to add everything to the secretObjects, so that when we read from the Kubernetes secret, we give them an exact copy of what they really want.
B: Now, they can use it, right, but I'm just saying that if they have to use it, they essentially need two volumes, and I don't think, as an app developer, I'd want to do that, because the CSI volume has no real significance at all here. If the Kubernetes secret is all I want, they could just automatically create the Kubernetes secret on their own and use it as a secret volume; but having the CSI volume exist just to create the Kubernetes secret might be a major ask from the application developer.
B
Sorry
yeah,
yeah
tls
purposes
are
for
environment
variables,
but
for
disconnected
scenarios,
if
we
are
saying
the
app
has
to
always
rely
on
the
secret
volume,
then
the
csi
volume
has
no
real
significance
at
all.
Here.
B: So I think the user would assume it's still required in some way, and if it's still required, we can try to give them what they need in the single mount path. That way they don't have to do anything special for disconnected scenarios; they don't have to mount a different path for disconnected use and the default path for non-disconnected use.
A: Okay, next up, you again, Anish: we're talking about the v0.0.21 release.
B: Yeah, so we merged the load-test-based changes and optimizations this week, and we also merged a few e2e tests. The initial plan was to release today, but there's also this one PR that Tommy opened. Basically, that's the first step where the driver starts writing the files into the pod mount instead of the providers doing it. It's a backward-compatible change; it's still optional.
B
So
I
was
thinking
we
should
probably
review
and
get
that
also
merged
in
the
next
release,
so
that
providers
can
start
implementing
the
changes
with
an
available
release
for
this.
So
if
you
want
to
do
that,
I
was
thinking
we
can
move
the
release
by
two
or
three
days,
so
we
can
do
it
early
next
week
and
I
wanted
to
know
what
others
were
thinking.
A
Okay,
all
right
next
up
rita,
wants
to
talk
about
the
seagull
and
the
cici
jobs
for
the
project.
C
Oh,
it
was
just
something
that
that
anish
brought
up
yesterday
I
dropped
earlier
in
the
six
segoth
call,
and
apparently
they
were
also
looking
at
this
project's
ci
jobs
as
part
of
their
usual
issue,
triage
and
ci
health.
So
I
thought
just
want
to
mention
that
sigoth
is
looking
at
this
project's
health,
so
yay.
A: Let's talk token requests; let's open it up.
E: Yeah, thanks. So I'm pretty new to this project, but I'm very excited about it.
E
I
was
looking
at
the
the
token
request
issue
and
I
started
a
pr
just
to
try
to
wrap
my
head
around
like
what
would
be
needed
to
support
that,
and
I
think
my
my
my
understanding
was
that
basically,
when
cuba
calls
like
the,
I
think
it's
no
published
volume
that
if
it's
already
been
called,
then
it
just
basically
skips
calling
mount
again,
but
that's
how
the
cube
propagates
token
refresh
to
propagate
it
to
the
driver.
E
So
I
was
wondering
I
guess
it's
kind
of
a
it's,
a
pretty
big
change,
probably
to
to
assume
that
mount
could
be
reinvoked
on
the
driver.
I
just
wondered
if
that
was
like
an
expectation
or
if
that
wasn't
an
expectation
for
for
providers
to
say
mount
could
be
reinvoked
or
if-
and
you
know,
a
second
call
needed
to
be
made
to
that
might
be
expected
to
be
atomic
or
something
like
that.
B: Let me describe that better. Yeah, so as part of the mount request we have this field called objectVersions. We tell the provider which object versions are currently mounted and synced as the Kubernetes secret, and the provider can read this and decide whether it wants to make an external API call, based on any internal cache mechanisms, or just respond back without doing any external call and say everything is good.
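The objectVersions handshake being described can be sketched as: the driver sends the versions it already has mounted, and the provider compares them against its own view to decide whether an external call is needed. This is an illustrative Python sketch with hypothetical structure, not the actual gRPC message or any provider's real implementation:

```python
def provider_mount(requested_versions, cached_versions, fetch_external):
    """Sketch of a provider short-circuiting on objectVersions.

    requested_versions: {object_name: version} the driver already has mounted
    cached_versions:    the provider's own view of current versions
    fetch_external:     callable hitting the external secret store; returns
                        (contents, versions)
    """
    if requested_versions and all(
        cached_versions.get(name) == ver
        for name, ver in requested_versions.items()
    ):
        # Everything mounted is already current: answer without an API call.
        return {"files": None, "object_versions": requested_versions}
    contents, versions = fetch_external()
    return {"files": contents, "object_versions": versions}
```

The design choice here is that the driver never interprets the versions itself; it only relays them, leaving caching policy entirely to the provider.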
B: Any additional call for the same target path and same volume should still always yield the same result. We have it that way, and right now in NodePublishVolume, if the volume is already mounted, we just say it's already mounted and return without calling the provider. But I think with the republish-volume call we should go ahead and call the provider again, to make sure that everything is correct and the values haven't been overwritten. I think that would be the right approach.
D
Oh
sorry,
are
there
any
plugins
that
use
that,
like?
I
know
that
gcp
one
doesn't
consider
the
object
version
stuff.
I
don't
know
if
the
azure
one
is
using
that
field.
Yet.
E: Gotcha, okay, cool. Well, it's a draft PR right now, so I'll try to spend some time on it and add anything else it needs before someone really reviews it. And in doing this, I also kind of realized: there's not an AWS provider publicly yet, it's in development, and I was just reviewing the code for it this week. It would be awesome to have an e2e test provider that didn't require a real external API call, so you didn't have to have any external infrastructure or API keys set up, but could just simulate the invocation, purely for testing purposes. I don't necessarily need to tie that to this PR, because that might be more work.
E
That
would
make
me
get
kind
of
gargantuan,
but
I
might
also
try
to
like
start
that,
just
so
that
in
my
own
testing
of
this
I
got
my
head
around
what's
going
on,
but
I
wondered
if
that
could
be
like
a
separate
pr
and
if
that
would
be
cool.
E
Okay
and
then
I
sorry
one
other
issue
that
came
up
as
I
was
like
doing
this
was,
I
think
you
commented
on
this
already,
and
it
was
the
propagating
the
volume
context
down
to
the
provider
like
that's
not
done
today,
even
if
it
was
just
for
setup,
even
if
it
wasn't
on
like
republish
or
anything,
but
if
it
was
just
for
setup
that
way
like
right,
because
right
now,
you're
propagating
like
pod
name,
name,
space,
uid
and
service
account,
which
is
great-
and
you
know
with
this
other.
E
This
pr
have
like
the
tokens
themselves,
but
any
metadata
that,
like
a
pod
creator,
defines,
could
be
useful
to
the
to
the
product.
You
know
to
the
provider.
The
provider
can
choose
what
to
do
with
it
or
not,
but
it
could
be
additional.
You
know
key
value
metadata
just
for
auditing
or
whatever
like.
I
could
see
that
being
a
useful
feature,
I'll
I'll.
B: Yeah, I'm not sure if it was intentional. I think when it started off, initially only a few fields were required, so the driver would just set the required ones. But the more I thought about it, it did make sense to just pass the entire thing, because when new features get added, we don't want to have to update the driver just to support each feature if it's all just part of a map of key-value pairs.
B
So
we
could
just
update,
note,
publish
to
send
the
entire
volume
context
and
then
for
rotation
purposes.
We
still
have
to
rely
on
manually,
constructing
the
volume
context,
so
we'll
keep
updating
that,
but
I
think,
as
we
think,
about
using
republish
for
rotation.
Maybe
it's
just
all
one
single
path
right.
So
then
we
don't
have
to
have
these
multiple
ways
to
do
things
so.
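Passing the whole volume context through, instead of copying selected fields, keeps the driver out of the loop when new attributes appear. A sketch of the difference (the `csi.storage.k8s.io/...` keys are the standard kubelet-injected pod-info attributes; the function names and the rest are illustrative only):

```python
# Today (sketch): the driver copies only the fields it knows about,
# so any new attribute requires a driver change before providers see it.
def build_provider_attributes_selected(volume_context):
    known = [
        "csi.storage.k8s.io/pod.name",
        "csi.storage.k8s.io/pod.namespace",
        "csi.storage.k8s.io/pod.uid",
        "csi.storage.k8s.io/serviceAccount.name",
    ]
    return {k: volume_context[k] for k in known if k in volume_context}

# Proposed (sketch): forward everything as an opaque map; each provider
# picks out the keys it cares about and ignores the rest.
def build_provider_attributes_full(volume_context):
    return dict(volume_context)
```

In the forward-everything version, custom pod-creator metadata (for auditing, for instance) reaches providers with no driver release in between.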
C
Can
you
open
an
issue
for
that
to
so
we
can
track.
B
As
an
option-
and
then
I
think
last
month
is
when
all
providers
migrated
to
using
it
so
now
that's
the
default
way
to
do
it.
B
So
yeah
the
rfp
has
been
extended
to
march
26th
for
the
security
audit.
We
have
one
proposal
from
one
vendor
and
the
order
team
is
basically
reviewing
it
right
now.
The
csi
driver
is
still
part
of
it.
It's
under
secrets,
management
and
yeah.
So
the
rfp
is
extended.
The
audit
team
is
going
to
be
reaching
out
to
few
other
vendors
to
see
if
you
can
get
a
proposal
and
then
after
that,
they'll
make
a
decision
on
which
vendor
to
go
with.
A
All
right
that
takes
us
to
the
end
any
other
comments
or
anything.
Anyone
else
wants
to
talk
about
before
we
in
the
meeting.
A
Looking
forward
to
that
all
right,
well,
thanks
everyone
for
showing
up.
Ironically,
our
next
call
will
land
on
april
fool's
day
so
april,
1st
in
a
couple
of
weeks.
A
So
with
that
we'll
go
ahead
in
the
call,
if
you
got
any
agenda
items
from
the
next
call,
please
go
ahead
and
I'll
have
the
format
ready
for
you
and
this
call
which
has
been
recorded.
We
should
have
this
published
within
the
next
24
hours
and
we'll
link
that
out
to
the
slack
site
for
others
to
view
all
right.
Thank
you
everyone
and
have
a
good
day,
and
we
will
see
you
on
the
next
call.