From YouTube: Secrets Store CSI Community Meeting - 2023-08-31
A
Hey everyone, welcome to the Secrets Store CSI community call. Today is August 31st, 2023. This call is recorded and will be published on YouTube, and it follows the CNCF code of conduct.
A
Okay, so there are only a few items on the agenda. The first one is a follow-up discussion on Secrets Store CSI driver caching. For some context, we discussed this in the SIG Auth call just today. Do you want to talk about it, just to go over what we discussed today?
A
So I don't think Amit was on that call. I was just thinking, should we give him like a TL;DR on what it is?
B
Yeah, I can take that. So I guess at a high level, we spoke about two distinct but, at least at some level, related changes that we're considering in Secrets Store CSI.
B
One of them is the capability of running Secrets Store CSI in an environment that may be disconnected from the remote service that provides the secrets for some amount of time, and that may go through node upgrades or restarts during that time. So it's not just disconnected by itself; it's disconnected plus restarts.
B
The other is the sync feature. It's come under scrutiny for basically violating some of the principles of the Secrets Store CSI project, but it also has issues at a technical level, just in the way it's implemented, because it arbitrarily couples a CSI mount action with a Kubernetes sync action, which confuses everybody, because they're like, "I thought I was going to sync a secret; what is this mount thing?" So that causes its own category of problems, and the discussion around that was focused on the idea of: could we split up the project in a way where people that want to sync Secrets can continue to do so, people that want to use Secrets Store CSI for CSI-based things can also continue to do so, and if you want to do both things, you can just run both projects? One of the aspects of that was SIG Auth leadership also adopting the second project as a SIG Auth project. With my SIG lead hat on, I obviously don't have an issue with that.
B
Otherwise I wouldn't have asked for this to be talked about. But, for example, Jordan had no issues, because, I think, we already own Secrets Store CSI, and adding basically the same thing twice but with better separation of concerns seems like a net positive. And I think, because we retain the aspect that providers are still out of tree, it retains the spirit of the project of not becoming a kingmaker in the secrets space. So I think all of those things are appropriate. Did I miss anything, y'all?
D
Questions, yeah. So, kind of a basic one: I haven't really gone through the doc in depth yet, but for anyone who wants to use the sync feature, in that case they would have to install this new controller, right?
A
Right, so I can give you a little context there. Today it's all bundled into the CSI driver, like Mo mentioned, and the way it works is, if you want to sync as a Kubernetes Secret, you also have to do the mount.
A
Which is not a desirable behavior. With the proposal, we are saying it becomes its own project, and then the controller is there. And the good part about it is that providers don't have to make any changes; all the providers, like the GCP provider, keep the same content and everything. The only thing is, it all gets packaged in the pod as a sidecar container. So you have this controller container, and then you have, say, the Google provider, the Azure provider, HashiCorp, all the providers that we support.
A
Today the user gets to decide which providers they want to install in the cluster, and then the communication is through a Unix domain socket, so it reuses the same interface that the CSI driver uses today. And all it will do is sync as a Kubernetes Secret; it will skip the mount and all of those aspects. You create a SecretProviderClass, and then another custom resource which wraps that SecretProviderClass and also gives, like, a workload identity to use. The controller will use that to go and talk to the provider, get all the secrets from the external secret store, and then it will sync that as a Kubernetes Secret. And then periodically it'll also update the Kubernetes Secret if there is a newer version available in the external store.
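To make the shape of that flow concrete, here is a minimal Go sketch of a wrapper resource and sync loop along the lines described above. Every name in it (SecretSync, RefreshInterval, the Provider interface) is an illustrative assumption, not the API from the actual proposal doc.

```go
// Hypothetical sketch only: the real types live in the proposal doc.
package sketch

import (
	"context"
	"time"
)

// SecretSync wraps an existing SecretProviderClass and adds the
// workload identity the controller should use when talking to the
// provider (all field names are assumptions for illustration).
type SecretSync struct {
	Name string
	Spec SecretSyncSpec
}

type SecretSyncSpec struct {
	SecretProviderClassName string        // existing SPC to resolve
	ServiceAccountName      string        // workload identity to use
	RefreshInterval         time.Duration // how often to re-check the external store
}

// Provider abstracts the per-cloud sidecar reached over a Unix domain
// socket; it reuses the same interface the CSI driver already speaks.
type Provider interface {
	Fetch(ctx context.Context, spcName, serviceAccount string) (map[string][]byte, error)
}

// reconcile is the controller's core loop for one SecretSync object:
// fetch from the provider, then create or update the Kubernetes Secret.
// No mount is involved; this path is sync only.
func reconcile(ctx context.Context, ss SecretSync, p Provider,
	upsertSecret func(name string, data map[string][]byte) error) error {
	data, err := p.Fetch(ctx, ss.Spec.SecretProviderClassName, ss.Spec.ServiceAccountName)
	if err != nil {
		return err // requeue and retry
	}
	return upsertSecret(ss.Name, data)
}
```

A real controller would requeue each object when RefreshInterval expires, which is the periodic-update behavior described above.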
D
Got it, so it will keep on polling, or some kind of a busy wait. Yes, got it. Okay, yeah, I'll go through the design doc, I didn't get a chance yet, and if there are any questions, I'll post them on Slack. Okay.
A
Yeah, I think the caching part of it is a driver feature that is still opt-in from the provider perspective. To give you a little detail on how this would work: in the caching approach, the driver will still call the provider to say, hey, go get me these secrets from your external store, and the provider will go get them. So at that point it is still controlled by providers; providers get to decide if they want to consume it or not. But in the case of the second feature set that we're talking about, splitting the project and making it its own thing, we want all the supported providers today to say it is something that they're okay with, because it's a completely different project and they will get asked questions around it. It also stays an optional feature in the driver; it's already there, but again, it's just going to be an optional project. But yeah.
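As a rough illustration of the "driver still calls the provider, and the provider stays in control" flow just described, here is a hedged sketch. The opt-in flag and all names are assumptions made for illustration, not the design doc's interface.

```go
// A rough, assumption-labeled sketch of the caching fallback path.
package sketch

import (
	"context"
	"fmt"
)

// fetchFunc stands in for the provider call over the Unix domain socket.
type fetchFunc func(ctx context.Context, spcKey string) (map[string][]byte, error)

// cache is a placeholder for whatever store the driver keeps for
// disconnected operation (in-memory in a first phase, per the call).
type cache interface {
	Get(key string) (map[string][]byte, bool)
	Put(key string, data map[string][]byte)
}

// mountWithCache tries the provider first and falls back to cached
// content only when the provider opted in (providerAllowsCache is an
// assumed signal; the provider stays in control of stale serving).
func mountWithCache(ctx context.Context, key string, fetch fetchFunc, c cache, providerAllowsCache bool) (map[string][]byte, error) {
	data, err := fetch(ctx, key)
	if err == nil {
		c.Put(key, data) // refresh the cache on every successful fetch
		return data, nil
	}
	if providerAllowsCache {
		if cached, ok := c.Get(key); ok {
			return cached, nil // disconnected: serve the last known content
		}
	}
	return nil, fmt.Errorf("provider unavailable and no usable cache: %w", err)
}
```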
A
We need buy-in from all the providers to say, yeah, this is something we are okay with; we're okay with being packaged in that Helm chart and all of that. So I think we need consensus there, and hopefully in the coming community calls we'll have more providers join us to say what their opinion is.
D
Sure, yeah, thanks for that context. I will definitely go through the split proposal, and I'll give my feedback in one of the subsequent calls. One question about the caching thing: you mentioned that the provider can say whether it's okay for the driver to look up the cache and serve from there, but does it mean that the driver has already decided that it is going to cache everything, or...
A
Okay, yeah. And Mo, you were mentioning whether anyone had thoughts; I think for this split proposal, most of them agree that it's something that we want to do, and I think we just have to build that consensus among all the providers. If that's okay, I can start the process of, like, the lazy consensus.
A
So I can try and get that over Slack, to see if all the providers are okay with us splitting the project, and then build out the next steps from there. For the caching one, I think the concern that they brought up around the CRD was interesting, the whole node isolation and all of that, but also some of the other suggestions that they had, like a TPM or any of that; that sounds way too complex.
B
It's a full hour, basically, of us talking in painful detail. Anish, what were you saying? I think you said something in summary about yesterday's call in regards to, like, complications and such.
A
I was saying the other suggestions that they had, like a TPM or a PVC, storing it with those, or some building of consensus and all of that; that becomes a much bigger problem. I was saying that makes for a complex design and all of that.
B
Yeah, so I don't disagree with the complexity. Having thought about it some more, I do think we have to come up with something, though, to help with the node isolation aspects.
B
I'm not 100% sure what that would look like. One thought process I had was: can we start smaller? In the first phase, instead of supporting restarts and cross-node movement of workloads, what if we start off with a local cache? That lets us build out the provider aspect, the SPC aspect, and a bunch of the driver changes, and I don't think any of those are controversial, because then you don't have any node isolation issues.
B
In memory, right? So basically, instead of having the CRD right now and using it to coordinate across nodes and across drivers, if you just had it in memory, we move towards the goal of what we want without, basically, all the concerns that were raised in the call; those are at least alleviated, and we move one step closer to the thing we want. And then we can start figuring out: okay, so now I have this in-memory cache.
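A minimal sketch of what that first-phase, node-local, in-memory cache could look like follows; nothing here is from the proposal doc, and the TTL handling is just one possible choice.

```go
// Assumption-labeled sketch of a phase-one, node-local cache.
package sketch

import (
	"sync"
	"time"
)

type entry struct {
	data      map[string][]byte
	fetchedAt time.Time
}

// memCache is node-local and in-memory: it survives provider outages
// but intentionally not driver restarts or workload movement, which is
// what keeps it free of the node-isolation concerns raised on the call.
type memCache struct {
	mu      sync.RWMutex
	entries map[string]entry
	maxAge  time.Duration // 0 means entries never expire
}

func newMemCache(maxAge time.Duration) *memCache {
	return &memCache{entries: make(map[string]entry), maxAge: maxAge}
}

func (m *memCache) Put(key string, data map[string][]byte) {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.entries[key] = entry{data: data, fetchedAt: time.Now()}
}

func (m *memCache) Get(key string) (map[string][]byte, bool) {
	m.mu.RLock()
	defer m.mu.RUnlock()
	e, ok := m.entries[key]
	if !ok || (m.maxAge > 0 && time.Since(e.fetchedAt) > m.maxAge) {
		return nil, false // missing or too stale to serve
	}
	return e.data, true
}
```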
B
The vague thoughts I had around that were: if we built a small service that runs co-located with the driver, like in the same namespace as the driver, but not as a DaemonSet, so probably just a Deployment, and if it exposes some kind of API for putting things in the cache and pulling things out of the cache, then we could enforce the boundary right there. So if you're a driver instance, right...
B
And it would just be a regular, what am I trying to say, it would be a regular Deployment. So not a DaemonSet, and not scheduled across everything. So you could schedule it on your infra nodes if you want to; you don't have to run it everywhere.
A
But if we move the problem of node isolation from the drivers to that, then I think the problem of node isolation is solved, but it just comes down to: do we want to persist that as API objects or not?
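One speculative shape for such a co-located cache service: a single Deployment exposing a put/get API, with the isolation boundary enforced at that API rather than in each driver. The interface and the node-scoped policy below are assumptions for illustration; a design that aims to support workload movement would likely relax the scoping.

```go
// Speculative sketch of the centralized cache service idea.
package sketch

import (
	"context"
	"fmt"
	"sync"
)

// cacheServer is the centralized state holder: one Deployment (not a
// DaemonSet), so it can be pinned to infra nodes, and every driver
// instance talks to it instead of keeping its own on-node state.
type cacheServer struct {
	mu    sync.RWMutex
	store map[string][]byte // opaque, possibly encrypted blobs
}

// Put and Get are where a real implementation would authenticate the
// calling driver and decide which node may read which entry; that is
// the "enforce the boundary right there" idea from the discussion.
func (s *cacheServer) Put(ctx context.Context, callerNode, key string, blob []byte) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.store[nodeScopedKey(callerNode, key)] = blob
	return nil
}

func (s *cacheServer) Get(ctx context.Context, callerNode, key string) ([]byte, error) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	blob, ok := s.store[nodeScopedKey(callerNode, key)]
	if !ok {
		return nil, fmt.Errorf("no cached entry for %q", key)
	}
	return blob, nil
}

// nodeScopedKey is one trivial isolation policy; allowing cross-node
// reads here is exactly what would support workloads moving nodes.
func nodeScopedKey(node, key string) string { return node + "/" + key }
```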
B
Right, so I was wondering if we should let that at least partially be a deployment configuration. The thought process there would be: I don't think the restart problem is the foundational one. I think the actual problem is, no, sorry, not nodes, workloads move across nodes, and that's a normal thing in Kubernetes, and so having a cache that's purely driver-local, meaning node-local, would just produce weird flakiness when you're offline. Because it'll be like: oh, if I happen to be scheduled on the right node, I work, and if I don't happen to be scheduled on the right node, I don't work. So I don't want that; that to me is just broken. But if you have a centralized place that is holding the state, then you don't have that weird flaky behavior, and I think what that enables is: then you can pick, do you care about restarts or not?
B
But I think there is a significant benefit in actually giving people the choice, because instead of arbitrarily saying that if you use this feature you must support restarts, and thus you lose the ability to never store anything in the API, I think giving the flexibility there might be useful. In regards to how we could maybe protect the key, yeah, that one gets hard.
B
I guess you could purposely only have one instance of this cache thing running, instead of, like, multiple instances, purely around the aspect of only interfacing with one node and maybe one TPM or something. I'm just trying to think about, if you're less concerned about that kind of stuff for such a service, and more concerned about it being able to retain the key without ever having to write it to the API, yeah.
A
Is it safe to assume that whoever is going to do this disconnected stuff with the CSI driver would also figure out a solution for KMS? Then we can do some kind of hierarchy in that single service that could encrypt the contents of the custom resource using, like, a key stored in the external world.
A
Just have a remote key, and then you can encrypt a local key with that remote key, then use the local key for encrypting all the CRD stuff, and then store that local key, the encrypted local key, with the CRD.
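That is essentially envelope encryption. A compact sketch of the shape of it follows; the remoteKMS interface, the AES-GCM choice, and all names here are illustrative assumptions, not anything from the call.

```go
// Assumption-labeled sketch of the envelope-encryption hierarchy.
package sketch

import (
	"context"
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
)

// remoteKMS wraps and unwraps the local key with a key that never
// leaves the external service (an assumed interface, not a real client).
type remoteKMS interface {
	Encrypt(ctx context.Context, plaintext []byte) ([]byte, error)
	Decrypt(ctx context.Context, ciphertext []byte) ([]byte, error)
}

// sealedPayload is what would be stored alongside the CRD: the data
// encrypted with a local key, plus that local key wrapped by the KMS.
type sealedPayload struct {
	WrappedLocalKey []byte // local key, encrypted by the remote key
	Nonce           []byte
	Ciphertext      []byte // CRD contents, encrypted by the local key
}

func seal(ctx context.Context, kms remoteKMS, plaintext []byte) (*sealedPayload, error) {
	localKey := make([]byte, 32) // fresh AES-256 data-encryption key
	if _, err := rand.Read(localKey); err != nil {
		return nil, err
	}
	block, err := aes.NewCipher(localKey)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	wrapped, err := kms.Encrypt(ctx, localKey) // one remote call per (re)wrap
	if err != nil {
		return nil, err
	}
	return &sealedPayload{
		WrappedLocalKey: wrapped,
		Nonce:           nonce,
		Ciphertext:      gcm.Seal(nil, nonce, plaintext, nil),
	}, nil
}
```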
B
Oh, I mean, but yeah, okay, I see what you mean. Why would we not just... why would we not reuse the KDF logic, I guess? I mean, again, yeah, I guess that's a separate problem. But at a high level, the suggestion is: why not just use the same interface that Kubernetes exposes for KMS v2 as the encryption interface?
B
That could work, I guess. It would allow people to have existing KMS v2 plugins, which will be all the cloud providers in, like, a couple of months, and to just use them if they want to; again, if they enable the feature to begin with.
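For context, the KMS v2 plugin contract is a small gRPC service (a KeyManagementService with Status, Encrypt, and Decrypt calls; see k8s.io/kms/apis/v2 for the real definitions). A Go interface paraphrasing it, with approximate shapes, would look roughly like this, which is what makes reusing it for wrapping the local key attractive:

```go
// Paraphrase of the Kubernetes KMS v2 plugin contract; field shapes
// here are approximate, not the canonical gRPC definitions.
package sketch

import "context"

type kmsV2 interface {
	// Status reports plugin health and the current key ID.
	Status(ctx context.Context) (healthz, keyID string, err error)
	// Encrypt wraps plaintext with the remote key; the returned keyID
	// lets callers detect rotation and rewrap cached material.
	Encrypt(ctx context.Context, uid string, plaintext []byte) (ciphertext []byte, keyID string, err error)
	Decrypt(ctx context.Context, uid, keyID string, ciphertext []byte) (plaintext []byte, err error)
}
```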
A
But, I mean, going back to what you said: starting off with just building the interfaces and all of that, and starting off with the local cache, I think, sounds like a good first step.
B
Yeah, I didn't get any feeling from anyone on the call that they were against the feature as a matter of principle, like "we shouldn't do the thing at all," so I don't think we're blocked there. It was, I think, Michael and Tahir and Jordan, so basically three separate folks from Google. Am I frozen again? Yes.
C
So I guess I'm okay with building this, like the ephemeral caches, as well, and then going with that. Should we maybe go to SIG Auth more frequently to get this sort of review? Should we go to SIG Auth more frequently with our thoughts on the caching bit, to get there?
A
We were talking, you were talking, about SIG Auth not opposing the feature in general. It's just like, yeah.
B
So I didn't see any pushback as a matter of principle, so I don't think we're blocked there; I think we can keep progressing. I guess I don't fully know how releases for Secrets Store CSI work, but I don't particularly feel bad if we have a release where various iterations of the feature are there. As long as it's not somehow insecure or broken or whatever, I don't think that's a big deal.
B
Yeah, normally I think anything new starts out as an alpha feature, yeah.
B
I think, Andrea, you were asking about having more frequent discussions with SIG Auth. Yeah, I mean, any sub-project is always welcome, or really anyone is always welcome, to come have discussions with us. If I had any feedback, I think maybe we waited a little bit too long to have that discussion, but it's okay; I don't think we necessarily are too far behind or anything. Also, did that spell "separate" wrong?
B
On the other aspect, the separating-a-project-out one: I think you have the next sort of big action item there, which is the doc, so that we can sort of formalize what we're requesting, yeah.
A
So I will have the doc. I'll also add it to the SIG Auth agenda for next week, just to share the doc and then do a demo, the one I did in the CSI call. And I will start a thread on Slack, tagging all the providers, so that we get that consensus that everyone's okay with doing something like this.
B
Okay, I guess I was gonna ask: Amit, I see you're from Google. Do you foresee GKE using either of the features, the controller or the caching? Just curious.
D
I'm not really sure at this point, but yeah, I think both of them are really helpful features for customers, so the disconnected thing and sync itself. We advertise sync, which is right now an alpha feature, right, if people look up the docs. So if it's like a separate component, I think that might actually be good. So...
B
But that does remind me: at some point after the separate project exists, or I guess what I would say is, once the separate project exists, I think to me that is the point where we would deprecate that alpha feature. Is that contentious?
B
After the separate project, that is, the controller, exists, you actually have a different way of doing the same thing. I don't know if it's really very valuable to deprecate the feature with no other option; I don't really know what that means, beyond "it's deprecated, sucks to be you," and that's not useful. But I think once the separate project exists, we can deprecate it.
B
There just has to be some definition of, like, how long V1 is maintained after V2, if that's the approach we take.
B
Okay, I had one other thought about the other feature that I forgot. I think I spoke with Andrea about this a while back, but I want to ask the larger group. So I know caching, as described in that doc, is very much focused on offline scenarios, and I don't think we need to discuss this as part of that doc, but I was just curious if folks had opinions on caching purely as a performance improvement.
B
So, for example, I could imagine a provider, if we gave them a hint that we had something in our cache and it was a certain age from the current time, could simply say: I'm not going to bother fetching that again; if you have it, it's probably good enough, just use it. So again, it would end up being opt-in, because you would have to enable the cache and you would have to have a provider that supported the feature.
D
So, if I understand that correctly: a provider could sort of advertise that, let's say, if the driver has a secret and it's less than X seconds old, don't even bother asking me, kind of a thing?
B
Could Amit and Andrea repeat exactly what you just said? Because my wife turned on the Bluetooth speaker and my Mac decided to start using it, so I'm sure she was just immediately blasted by this call in the other room.
D
So I was just asking: does it basically mean that if, let's say, the secret was fetched in the last X number of seconds, then the driver doesn't even ask the provider to get it again; it just uses what it has, kind of a thing?
B
So I think, on the order of seconds, such a thing the driver could just automatically do, literally on the order of, like, 10 seconds. But I'm more asking about, say, a longer time window. In that case the driver would ask the provider: hey, I want you to fetch the secret, but by the way, I happen to have it in my cache; my cache is, like, four minutes out of date from the current time; you can decide whether to go fetch it again.
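A sketch of that handshake follows, with the hint carried as extra request metadata; the field names and the provider-side threshold policy are purely illustrative assumptions, not anything from the doc.

```go
// Assumption-labeled sketch of the age-based cache hint.
package sketch

import "time"

// cacheHint is an assumed extra attribute the driver could attach to a
// fetch request when it already holds a cached copy of the secret.
type cacheHint struct {
	HasCached bool
	CachedAt  time.Time // when the cached copy was fetched
}

// providerPolicy is an illustrative per-provider knob: below MaxStale,
// the provider tells the driver to reuse its cache instead of fetching.
type providerPolicy struct {
	MaxStale time.Duration
}

// decide returns true if the provider should fetch fresh content, or
// false to signal "your cached copy is good enough, just use it".
func (p providerPolicy) decide(hint cacheHint, now time.Time) (fetchFresh bool) {
	if !hint.HasCached {
		return true // nothing cached, must fetch
	}
	return now.Sub(hint.CachedAt) > p.MaxStale
}
```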
D
Yeah, so I'm just wondering: the provider doesn't really have enough context to decide this on a case-by-case basis. Like, for example, how would it take a different decision for a secret foo and a secret bar? There's nothing that tells it, "I should behave differently for one of them," assuming all other parameters are the same, like it's four minutes uniformly.

So could this be sort of a thing that's configured in the mount configuration by the pod owner? Like, you know, one says, "I can tolerate, like, a 10-minute delay," whereas the other pod owner configures, "I can only tolerate, like, a one-minute delay," and based on that, maybe the driver, or the provider, changes its decision.
B
I think either way you have to have some way of communicating the configuration between both disparate systems, right? The driver has to know some way, per provider. I don't remember: is there any place where you have such a config, like a per-provider config that the driver knows about?
D
I don't think so. I'm not actually sure, so I'll probably let Anish or someone else answer. No.
A
Typically, if you're adding some new feature, I mean a new attribute, it comes down to: is this only provider-specific? If it is, then providers can just add it as part of their parameters schema in the YAML, and then, if we think it benefits all the providers, then basically it becomes part of the core API.
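Concretely, a provider-specific knob could just ride along in the free-form parameters map a provider already receives; the sketch below parses a hypothetical cacheMaxAge attribute, which is not an existing parameter of any provider.

```go
// Sketch of reading a hypothetical provider-specific parameter.
package sketch

import "time"

// parseCacheMaxAge reads an assumed "cacheMaxAge" entry from the
// free-form parameters map a provider gets from the SecretProviderClass;
// absent or invalid values fall back to def.
func parseCacheMaxAge(params map[string]string, def time.Duration) time.Duration {
	raw, ok := params["cacheMaxAge"] // e.g. "4m", set by the SPC owner
	if !ok {
		return def
	}
	d, err := time.ParseDuration(raw)
	if err != nil || d < 0 {
		return def // ignore malformed settings rather than failing the mount
	}
	return d
}
```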
B
Yeah, so I mean, I don't know the answer to the question of whether we should have that level of, like, per-mount granularity. Maybe. I could imagine a provider having its own config that defines what is allowed and what's not.
B
It could have a config that's basically static for all things, or it could have a config, I don't know, a regex or something, that says: oh, these are my important secrets, so those have to have a small cache window because they change frequently, and these are my secrets that don't change. Okay, so I do think there is a certain nuance there: if you have a certificate that lasts for, like, a year, and you don't expect it to be revoked, then you don't need to refresh it all the time; you don't need to be asked. But if you have, like, a token that's changing on the order of hours, then you might be like, yeah...
B
...no, just keep asking me, because it's a much more hot secret or whatever. But I don't know if it should be static, because I feel like we end up limiting providers then. I could go either way. Mostly, what I don't like today is that if you just run a bunch of pods at the same time, we will just hammer the hell out of the provider, and if the provider doesn't have, like, an inbuilt cache to do something about it, it's just like: oh, you have 10 pods, and each of them has, like, five secrets; now that was awesome. It just sort of scales out really poorly, especially on a restart.
C
Just another question here: in this case we would also probably have to pay attention to what we're doing if we ever consider, like, rotating secrets, because it seems a bit contradictory that we want to rotate the secret and, at the same time, cache it.
A
Okay, yeah. The only other discussion item I added was the release, 1.3.5. So there are no CVEs in 1.3.4, but there was, like, a Helm chart change to basically use digests for images. So I was thinking we can cut a release next week for this one.