From YouTube: Secrets Store CSI Community Meeting - 2023-03-02
A: Hey everyone, welcome to the CSI Secrets Store community call. Today is March 2nd, 2023. This call is recorded and will be published on YouTube, and it follows the CNCF Code of Conduct. If you haven't already added your name to the list of attendees, please go ahead and do so. In terms of newcomers, there's Anjali on the call. Anjali, do you quickly want to introduce yourself?
A: Okay, in terms of announcements, we didn't add anything here, but I just quickly wanted to say that we released 1.3.1 during the last release cycle last month, which had CVE patches and a bug fix. And I think we are again up for the next release next week, after Patch Tuesday, so we will cut the 1.3.2 release.
A: I think I'll have the release written up this time, and I think that'll be the base image patches again. We've also merged a couple of PRs, so we'll probably cherry-pick those to get that in, and we can cut the release after next Tuesday, once Patch Tuesday is done. In terms of moderating, I can moderate the call; does anyone want to help take notes?
A: Okay, so in terms of the agenda, we only have the one item that Xander added. Do you want to talk about it?
D: Yeah, I can give a little primer. The number one customer request I've been hearing, and granted there's a certain amount of bias there, because a lot of this is coming from teams inside Microsoft that are using the Secrets Store CSI driver, so it's kind of skewed towards Microsoft's needs, but I think there's value outside too, is using the driver in disconnected scenarios. So we've been getting a lot of, okay...
D: How can I still use the driver? Up until now we've been steering those folks towards the External Secrets Operator and Kubernetes Secrets, because then you don't have the issue where, when the pod starts, it tries to use the driver to reach out to the secret store and fails. So I started putting this design doc together, just to noodle on ways that we could support this pattern through a cache of the secrets that were fetched, and I've got a little section on there.
D: That was kind of informed by chatting with Mo, very briefly, on how we could store the cached secrets. If folks could take a look at this and add comments, or I guess we can chat about the design here, to help flesh out what this could look like. So the initial thought was, on the provider class:
D: You could add a TTL field. I think you'd have the TTL at the whole provider level rather than on individual secrets, but that's one of the open questions, whether we want a separate TTL on a per-secret basis. And then on the status you can have a timestamp that reflects the last fetch, when the secret was refreshed. And then, scrolling down a little bit, this is the part that came out of chatting with Mo.
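As a sketch only, the shape being discussed might look like the following on a SecretProviderClass. Note that neither the `cacheTTL` field nor the `status.lastFetchTimestamp` field exists today; the names are assumptions drawn from this conversation, not actual API.

```yaml
# Hypothetical sketch: cacheTTL and status.lastFetchTimestamp are proposed
# fields from this discussion, not part of the current SecretProviderClass API.
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: app-secrets
spec:
  provider: azure
  cacheTTL: 1h                 # proposed: a single TTL at the provider-class level
  parameters:
    objects: |
      array:
        - |
          objectName: db-password
          objectType: secret
status:
  lastFetchTimestamp: "2023-03-02T17:00:00Z"  # proposed: when the secrets were last refreshed
```

The open question mentioned above would be whether `cacheTTL` instead belongs alongside each entry under `objects`, on a per-secret basis.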
D: You can have a cache volume that stores the secrets, with a Kubernetes Secret in the driver's namespace that could store a key that would be used to encrypt the volume storing the actual cache. So it's kind of a rough design right now; that's what I have so far.
E: So I haven't thought about the whole problem space enough to necessarily have super good suggestions yet, but one very similar approach I thought about, and I think I talked to Anish about this in passing a while back, was whether it would either make sense, or maybe even be necessary, instead of having this purely at the driver level, to expand the scope of the API of the actual providers.
E: So when they respond with a secret to the driver, at that instant they tell us whether they want caching, for how long, and what parameters the cache is valid under. I'm unclear about this because I just haven't dug deep enough; it's been a really long time since I looked at the driver and provider APIs. But what I don't think we want to happen is what happens today with kubelets, where if you have a protected image on a node, it is only pullable...
E: I don't think we want to have that same sort of misbehavior. It's one thing to say you can pull data that is just code to run (I don't like that at all anyway), but this is worse to me, because it is the thing you were calling a secret. So I don't think you want: because I happened to hit the driver in a scenario where it can use the cache, it just lets me have the cache.
E: The other thing, which I don't know if it's written in here but is what I talked to Anish about, is that the cache would not be a traditional cache. It would be more of a fallback cache: the driver would always try to do a live pull, and only if the error it got was concretely something it could define as "the network is down" or whatever would it then say, all right...
E: ...let me look at my cache. For example, if it gets an error like a 401, meaning this user or this entity is not allowed to pull the secret, we want to respect the fact that access was revoked, even if the secret was previously present. But that's about the limit of how much I've thought about this, so others, go for it.
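The fallback-cache behavior described above (always attempt a live pull, fall back to the cache only on connectivity failures, and never mask an authorization error) can be sketched roughly like this. The names and the error taxonomy are illustrative only; the driver itself is written in Go and nothing here is its actual code.

```python
class NetworkError(Exception):
    """Live fetch failed for a reason concretely attributable to connectivity."""

class AuthError(Exception):
    """The secret store rejected the caller (e.g. a 401): access may be revoked."""

def resolve_secret(fetch_live, cache):
    """Fallback cache: prefer a live pull; consult the cache only when offline."""
    try:
        value = fetch_live()
        cache["value"] = value     # refresh the fallback copy on every successful pull
        return value
    except NetworkError:
        if "value" in cache:
            return cache["value"]  # offline and a prior value exists: serve it
        raise                      # offline with an empty cache: nothing to fall back to
    # AuthError (and any other error) propagates: a revoked credential must not
    # be masked by a previously cached value.
```

The key design point is the narrow `except`: only errors positively classified as connectivity failures open the fallback path.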
F: I think, yeah, you're bringing up what the keys of the cache are, and that is probably the biggest thing. I'm less concerned about it being at the driver versus the provider, or at the secret level. I think generally the SecretProviderClass is probably a fair place to put it, but I guess you do need something at the individual secret level, like the auth responses, right? So yeah: the cache key, and then whether or not that's doable at the driver versus the provider.
A: Yeah, we did start the discussion at the provider level, and then I think later we were thinking it could be moved to the driver level, just because the provider gets the secrets and then returns them. So when it returns them, it can also add this additional metadata telling the driver, hey...
A: ...this is something that you can use. And if you think about it, the provider could still cache it. I was looking at the AWS SDK: their SDKs already have some kind of caching, so if you make calls for the same secret again, at least the Go one has caching where it caches the value the first time, keeps it with a TTL policy, and then the next call with that SDK will just fetch it from the cache rather than making an external call.
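The SDK-side behavior just described (cache on first fetch, honor a TTL, serve from the cache within the window) looks roughly like the sketch below. This is an illustration of the pattern, not the AWS SDK's actual implementation; all names are invented.

```python
import time

class TTLCache:
    """Cache a fetched value per name and reuse it until its TTL expires."""

    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock        # injectable clock makes the behavior testable
        self._entries = {}        # name -> (value, fetched_at)

    def get(self, name, fetch):
        entry = self._entries.get(name)
        if entry is not None:
            value, fetched_at = entry
            if self.clock() - fetched_at < self.ttl:
                return value      # fresh enough: no external call
        value = fetch(name)       # expired or missing: go to the secret store
        self._entries[name] = (value, self.clock())
        return value
```

Note this is the opposite of the fallback cache discussed earlier: here the cache is consulted first to save calls, rather than only after a live pull fails.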
A: So even if you keep this at the driver level, the provider would still do whatever it needs to; it can serve from its own cache, or it can go and talk to the external secret store, and then return the secret to the driver. I think having it at the driver would make it a little more complex, just because of all the semantics, but it would benefit every provider that wants to support disconnected scenarios, and it's still an opt-in kind of feature.
F: I think some disconnected conversations do come up. I don't think we've had an explicit ask for the driver.
F: I'm not sure; I haven't been as involved in the secret manager stuff lately, so I can ask our PM about it. Sorry, as you were talking I was thinking through: part of the cache key would just be the identity of the pod making the request, right?
F: Oh yeah, though the pod identity is provided through the service account, right? It's one of the inputs to the mount request. It should, I think, be stable across scaling events. I think we have the identity, not just the token, right? So I need to look up the code to see.
E: Yeah, and I don't fully remember exactly the details provided there. I think it would maybe be sufficient to just say it's the service account, with the asterisk that we should then also make sure to encourage people not to reuse service accounts across distinct things, because then our cache key is based off of one thing but their workloads are based off another, and that doesn't seem like a recipe for success.
F: Oh, I'm sorry, maybe I'm missing something: the Kubernetes service account identity is what at least the GCP provider is using to make the call to the external secret API, so I think as long as you're using the same service account within Kubernetes for two different things, you'd still be able to access those secrets anyway.
E: Yeah, I suppose that's fair. I vaguely remember looking...
E: So I was a little bit confused when I was looking at the CSI driver spec, the mounts and stuff, and at the Kubernetes code base. Is it that the provider is given unique service account tokens, with unique audiences, for that particular deployment or whatever?
A: Yeah, so in the CSIDriver object there's this field called tokenRequests. Basically, there you can configure what audiences you want the token for, and you can also configure multiple different audiences. Depending on how many audiences you configure, you will get a token for each audience, and you can also configure the TTL. This is the same as the projected service account token volume; you just add these things in the tokenRequests field.
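For reference, the tokenRequests field sits on the CSIDriver object like this; the audience values below are just examples.

```yaml
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: secrets-store.csi.k8s.io
spec:
  podInfoOnMount: true
  tokenRequests:                 # kubelet mints one token per audience listed here
    - audience: "vault"          # example audience
      expirationSeconds: 3600    # the TTL mentioned above
    - audience: "api://AzureADTokenExchange"
  requiresRepublish: true        # re-publish the volume so refreshed tokens reach the driver
```

Each token the kubelet mints this way is bound to the requesting pod's own service account, which is the point made in the next reply.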
E: So when the driver, sorry, when the provider, is presented with all of this information, it's passed a list of tokens, and those tokens are all matched one-to-one with the audiences requested there. But the actual service account in the token is the pod's service account, right? So it's the pod's identity. Okay, so I guess that's probably sufficient; I don't know. Something was bugging me about this and I was concerned, and I was like...
F: Yeah, I've just been dumping notes in the comments as everyone's talking. I do know that for the GCP one we at least have a few different options where we get that identity as input, but it is possible, based on other configuration for the provider, to use a different identity, like the identity of the node instead of the pod. So I could see that at least breaking some flows.
F: The vast majority of flows on GCP use the pod identity, but we do have some ways to do other things, so I tried to mention that in the comment. I think one way would be for providers to just return a caching key, or some opaque object, where the provider says: this is an okay thing to cache on.
F: Yeah, managing the cache, and the encryption on the cache and so on, I could see implementing in the driver. As I'm hearing the discussion, I'm less convinced about the caching key and identity, and making sure we're not creating a confused deputy.
F: Yeah, I think it's easy enough to add proto fields and check whether or not a provider supports this functionality.
D: Well, I'm just going to keep working on this design doc then. Feel free to keep checking in on it and adding stuff, and I'll continue soliciting feedback in the community meeting. At least on the Microsoft side it'll be a little while before we can actually play with any kind of implementation, at least until after 1.27 launches. So the design will continue.
E: Do folks have any ideas on the storage of the cache? The suggestion I made about having an encrypted volume was literally the first thing that came to mind.
A: Go ahead. So the only thing I was going to say is: if we use a local volume cache, then we would still be in that behavior where, in a disconnected scenario, only pods that land on the particular node which has the cache would work; the cache is not distributed across all the nodes.
E: Yeah, that would be annoying, I guess. And certainly you could get very fancy and replicate your little cache.
C: That sounds like a difficult problem.
C: But it could still be fine, right, unless the nodes are out of capacity? I mean, we can just say the cache will only be on those nodes where the pods have previously been running, and even in a disconnected scenario, when a new pod tries to come up, it will eventually probably get scheduled on that particular node; they can have a node affinity or something like that.
C: Right, right. What I was thinking is: maybe a workload that needs to run in a disconnected scenario will always be scheduled on a particular node, so we can have that node affinity from the get-go. Every time the workload gets scheduled it will be on a specific node, and naturally that node would then have the cache.
E: So right now, if you have this problem, you're basically stuck with using something like ESO, which will put the secrets into Kubernetes Secrets in the namespaces of the workloads. I suppose you could make the argument that if we put the secrets in the driver's namespace, they're still protected by the namespace boundary, but they're not protected from actors on the cluster that have wide secret read access, like ingress controllers.
E: I suppose we could solve this by making a new CRD, called something like EncryptedSecret, and then just having one Kubernetes Secret that holds our encryption key, with the encrypted blobs stored in the custom resources.
E: So the Kubernetes Secret that would be stored on the cluster is just a random encryption key. And at this point you could make it one-to-one: for every custom resource that's going to hold encrypted data, there would be a one-to-one mapping with a Kubernetes Secret, but that Secret is just an encryption key, used once to encrypt and decrypt that one value, while the custom resource holding the data is just encrypted blobs.
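A sketch of the one-to-one pairing being floated here; everything in it is hypothetical (there is no EncryptedSecret CRD, and the group, version, and field names are invented for illustration).

```yaml
# Hypothetical sketch from this discussion: the custom resource carries only
# ciphertext, and a paired Kubernetes Secret in the driver's namespace holds
# the per-resource encryption key.
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: EncryptedSecret
metadata:
  name: app-cache-entry
  namespace: secrets-store-csi
data:
  db-password: "<ciphertext, base64>"   # unreadable without the paired key
---
apiVersion: v1
kind: Secret
metadata:
  name: app-cache-entry-key             # one-to-one with the EncryptedSecret above
  namespace: secrets-store-csi
data:
  key: "<random encryption key, base64>"
```

The effect is the indirection described next: broad secret readers see only keys or only ciphertext, never both joined, so only a component that can read both objects (the driver, or a cluster admin) can recover the plaintext.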
E: Nothing should have access to read those, including any ingress controllers or any users. So you sort of offset it; you're basically indirecting the check, so that the only thing that has access is the driver, or a cluster admin, which we don't care about because they can just exec into the driver and do whatever they want anyway.
D: I'm also talking to HashiCorp, probably tomorrow, since Vault is the only big secret store I know of that can run on-cluster. I just wanted to chat with them and see if they've had any users mentioning this problem space before, trying to use a cloud provider's secret store in a disconnected scenario.
F: One of the design considerations years ago was a desire for one node not to be able to act as the service accounts of all nodes. I'm not sure if that ever fully got realized in Kubernetes, where some service accounts can only execute on a subset of nodes, or in a pool or something like that, and so...
F: One of the things we've been doing on the CSI driver was moving more towards pod-identity protection of the secrets, so that we're not using a global identity, or globally having full access to secrets on the cluster. So I'm not sure if that goal of Kubernetes security was ever realized, but there might be something there: having a cache...
F: ...that's usable across nodes in the cluster may make that configuration more difficult, and the people that want that configuration may not overlap with the people that want disconnected environments. I just remember it being discussed a while ago; I'm not sure if it still applies.
E: Yeah, so for kubelets there's the NodeRestriction admission plugin and the node authorizer. If a kubelet attempts to fetch a service account token for an identity that's not scheduled on it, it will fail as unauthorized, but otherwise it will succeed, so that's already enforced. In terms of workloads that are running, this one's a little bit more nuanced, I think, because the relationship, that this driver services all of these workloads, is not known to Kubernetes.
E: But "this instance of this driver is on this node, which has these pods": I think that part of the graph is unknown to it. And in the design with the custom resources, it does end up that every driver instance across every kubelet would be able to read all of the secrets that are in its namespace.
F: I guess this means a node that wasn't able to fetch a secret may now be able to, if there's a global cache. Something to think about some more, yeah.
F: It may just be worth calling it out in the design; I'm not sure if it needs to be solved here.
E: No, I think it's a good point, though. Yeah, I cannot immediately convince myself that we haven't made it worse, so I think figuring that out...
A: No? Sorry, okay. I think we have a few other open questions, and we can summarize them here, but I think the primary ones are the cache key, and then also understanding whether you want to replicate this cache or keep the local-volume behavior. Those are the two areas I think we need to work through as a next step.
B: Yeah, so probably this was already discussed; I have just newly joined. One thing that was flagged by one of our customers was that the Secrets Store CSI driver does require root-level privileges, and especially with OpenShift SCCs in restricted mode, how does this impact things? Do we need changes in the SCCs? Most likely we do, because it requires root privileges. Any thoughts on restricting privileges in future?
A: Then on the provider side, I think it was mostly around being able to create the socket; that was one of the reasons. There is an open issue to see if we can drop the root requirement on the providers, but the socket is still a blocker. I mean, we can go look at it, but as of now I think both of these are required, which is why we have this documented here.
A: We can certainly go and see if we can drop it. I think the last time we tried it, a provider had actually opened the issue asking us to do some kind of investigation. We did try that, but figured it is not possible to drop it, at least in terms of the requirements. We tried to get rid of a lot of the things that would need those privileges, but I think there's still one permission which needs them. So as of now there's not a lot of work going on, but we can revisit in the future and see if we can drop it.
B: Yeah, if it's on the roadmap, that would also help.
B: It is coming; I think Ronald from Red Hat is going to work on it at some point, is what I've heard.
B: Yes, yeah. That's definitely something we've heard; everyone wants it, right? So...
E: I think that kind of stuff might be a prereq, right? You have to have, I think, a little bit more support from the underlying OS to express the permissions you need as tightly as you can, and I think right now we've done that: we've said that we need root, because at the end of the day we have to mount.
E: Being a CSI driver is a privileged thing if you're going to do host path, UDS, and mount stuff, and then have the ability to keep changing the mounts, because that's what we need to be able to do to refresh secrets. We've kind of put ourselves in that catch-22.
B: Sure, yeah. I think that opens up a whole bunch of permission-level things that we have to handle at the SCC level and at the PSA level, to make sure that, you know, this is...
B: I think the user namespaces stuff, yeah. Sorry.
A: I was going to say we will re-evaluate when that is available, just so that users know we are waiting on something before we try this out again; we can document that.
A: Okay, any other items that anyone wants to discuss on the call?
A: No? Okay, then we can also quickly look at testgrid.
F: Yeah, I don't know. I have found it necessary on certain services to have much better histogram buckets; I haven't seen the CSI driver behaving poorly enough to require it, but I have definitely seen other production services needing to tune this.
F: So a runtime configuration of that I could see as being useful. It'd be best if there are other services, other code we could copy, for what a flag to configure this would look like, but I think it's a reasonable request.
E: Yeah, I was just going to say: this sounds really strange to me, because none of the Kubernetes metrics are configurable like this, as far as I know. You get the metrics that you get, and if you don't like them you make an issue and we make them better, but you don't get to configure them at runtime.
F: That's probably also true. You need to match the boundaries to how things normally perform, or how they perform in bad situations. So if these aren't good values, it'd be useful to know what good value boundaries would be, or what they would want, and we could just change it here.
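For context on widening the range: Prometheus-style client libraries typically generate histogram bucket boundaries exponentially, so adjusting two parameters changes the whole spread. A minimal sketch of that helper, mirroring the semantics of, for example, the Go client's ExponentialBuckets; this is illustrative, not the driver's code.

```python
def exponential_buckets(start, factor, count):
    """Return `count` histogram upper bounds, each `factor` times the previous.

    Widening the observable range is then a matter of raising `factor` or
    `count` rather than hand-picking every boundary.
    """
    if start <= 0 or factor <= 1 or count < 1:
        raise ValueError("start > 0, factor > 1, count >= 1 required")
    return [start * factor ** i for i in range(count)]
```

For example, `exponential_buckets(1, 2, 4)` yields boundaries 1, 2, 4, 8.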
A: So we can just increase the range and add more values here; that would be a good start. Beyond that, if more users say, hey, this doesn't work out for me, I need something more, then I think we can revisit whether we want to expose this as a configuration.
E: Is it the driver that misbehaves, or the provider that you're using, such that you need to know by looking at the metrics that something has gone wrong and you need to have your engineers go fix it? That just sounds concerning to me at a different level, like the thing is just not functional enough for you.
F: That sounds like the thing where we have the open PR for a way to mutate the secrets, to add additional information in order to split them out into different files.
C: I think for additional data we said providers can do it, right?
F: Yeah, I think we want to pick that back up, but the way I read that bug, it could be solved by this: if they just want other non-secret data statically added to the mount, and to configure that as part of the SecretProviderClass, then the generic mutation stuff would be a fit.
A: Okay, yeah. I'll reach out today, and I'll see if he can join the call next time; if not, maybe we can pick it up then.
A: Yeah, this one is about adding the OCI provider. They reached out, and they're planning a new provider; I actually saw they created a repo, so they have a provider now, and I think they're testing images. Whenever they reach out about being added as a supported provider, we can work with them through our process and see if we can get them onboarded.
A: Okay, in terms of pull requests: this one is adding Akeyless. I think in terms of being a supported provider they've satisfied all the criteria; the only last pending thing was the code review that we had to do. I did that and opened new issues on the repo, and the issues are not closed yet, so I've reached out to this person on Slack telling them, hey...
A: ...if you can resolve those issues, we can do another review, and then, if everything looks good, we can add that as a supported provider. They already added the CI pipeline for the optional tests and all of that, so the only thing pending is the action items from the code review. I'm following up on this and I'll give a status update at the next meeting.
A: And this one is to add Alibaba as a supported provider. They also opened a testing PR, and I think they are testing it right now, but once it's ready for review I'll tag all the folks here so we can review this PR, and the testing setup, and then help them merge it, so that we can eventually add Alibaba as a supported provider.
A: No? Okay, so see you all at the next community call on March 16th. Thank you, everyone, for joining.