From YouTube: Secrets Store CSI Community Meeting - 2020-10-29
A
Okay, it's recording. Hey everyone, welcome to the Secrets Store CSI community meeting for October 29th. This is a CNCF meeting, so it falls under the CNCF code of conduct, and the video will be published to YouTube.
A
Yes, so we talked very briefly about this proposal last week. Basically, what we've done is we've tried to add a list of items that we think should be done before we can cut a stable release for the driver, and also list a couple of criteria that the providers should complete so that the providers can also cut a stable release.
B
I'm realizing that I do not have comment access on this doc. I think it's set so anyone with a link can view: at the bottom there, you click "Change" and then you can change "Viewer" to "Commenter". Okay, thanks.

B
Yeah, I've been meaning to look at this more these past two weeks but got distracted, but I'm getting to look through it now too.

C
Hey, yeah, sorry, I'm in the same position; also happy to look through it now.
A
Okay, so going over the list: basically, for the driver, what we propose right now, the first thing is high availability, because the CSI driver is in the critical path when applications get deployed. If the CSI driver is not up and running, or if it's running but having issues, then this could lead to downtime for the applications, in terms of new deployments as well as upgrades.
A
Also, it's running as a DaemonSet, so there's one replica running on each node. We just need to ensure that when multiple controllers are updating a single Kubernetes secret, they're able to do that gracefully, and even when there's a conflict, they're able to do the update. Also, I think when we consider this part, we have to ensure that the number of API calls incurred is not too high, because Kubernetes returns a 409 Conflict and then we have retries in place.
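The retry-on-conflict behavior described above can be sketched in Go. This is a minimal illustration, not the driver's actual code: the real sync controller would retry the secret update through client-go, and `errConflict` here just stands in for a 409 from the API server.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// errConflict stands in for a Kubernetes 409 Conflict response.
var errConflict = errors.New("409 Conflict")

// updateWithRetry retries fn with exponential backoff, stopping after
// maxRetries so persistent conflicts don't hammer the API server.
func updateWithRetry(fn func() error, maxRetries int, baseDelay time.Duration) error {
	delay := baseDelay
	var err error
	for i := 0; i <= maxRetries; i++ {
		if err = fn(); err == nil {
			return nil
		}
		if !errors.Is(err, errConflict) {
			return err // not retryable
		}
		time.Sleep(delay)
		delay *= 2 // exponential backoff keeps API-server load bounded
	}
	return fmt.Errorf("giving up after %d retries: %w", maxRetries, err)
}

func main() {
	attempts := 0
	err := updateWithRetry(func() error {
		attempts++
		if attempts < 3 {
			return errConflict // the first two updates lose the conflict
		}
		return nil
	}, 5, time.Millisecond)
	fmt.Println(attempts, err)
}
```

Capping retries and backing off exponentially is what keeps the loop from amplifying load on the API server when many nodes conflict on the same secret at once.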
A
So that's the item "ensure retries for critical API calls without exerting too much load on the Kubernetes API server". And in terms of API server load, what we have done already in the code for rotation and the sync controller is to do filtered watches wherever possible. So we try to only filter on pods running on the same node as the driver, and then also for secrets.
A
We only filter on secrets with a particular label. That way we ensure that the memory growth is not too high based on the cluster size. But I think one issue right now is that controller-runtime doesn't offer an option to do filtered watches, so a filtered watch can only be done with client-go, and at some point, when controller-runtime does support it...
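The filtered-watch idea can be illustrated with a small stand-in. A real implementation would pass a field selector such as `spec.nodeName=<node>` to a client-go informer so the watch only delivers pods on the driver's node; the `pod` type and `onNode` helper below are purely illustrative, not the driver's code.

```go
package main

import "fmt"

// pod is a tiny stand-in for the Kubernetes Pod object.
type pod struct {
	Name     string
	NodeName string
}

// onNode mimics the effect of a filtered watch with the field selector
// spec.nodeName=<node>: only pods scheduled to this driver's node are kept,
// so the cache (and memory use) doesn't grow with cluster size.
func onNode(pods []pod, node string) []pod {
	var out []pod
	for _, p := range pods {
		if p.NodeName == node {
			out = append(out, p)
		}
	}
	return out
}

func main() {
	all := []pod{
		{Name: "app-0", NodeName: "node-a"},
		{Name: "app-1", NodeName: "node-b"},
		{Name: "app-2", NodeName: "node-a"},
	}
	for _, p := range onNode(all, "node-a") {
		fmt.Println(p.Name)
	}
}
```

The difference in practice is where the filtering happens: a field selector filters server-side, so the unwanted objects never reach the driver at all.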
A
And if we send the entire gRPC call to the provider, then it's idempotent, so the provider can handle scenarios where certain objects have already been mounted to the pod and the rest of them are being rewritten. The provider should be able to handle such scenarios as well.
C
That's an interesting one for Vault, because we've got dynamic secrets, so I'm not sure, implementation-wise, how we could do that. Have you had any thoughts on dynamic secrets?

C
Yeah, sure. A common example is you might have a database dynamic role set up, and every time you read the database credentials path, it will create a new user and return a JSON blob with a username and password back to you.

C
Yeah, so I guess it depends on how partial a partial failure can be. If we ensure that the units of retry are around whole blobs, so something that includes both the username and the password...
A
Then probably the writes for those particular keys have to be atomic, so write all or write nothing, instead of just a partial write. Because what I mean by partial failures today, for Key Vault for example, is: if they provide two objects and one of them is not available in Key Vault, then we write the first one, but we fail the entire gRPC call. So then the driver fails the mount and says it's not succeeded, so the pod still doesn't start up.
C
Yeah, I think that's a pretty reasonable take on it; just something to be careful about while we're doing the provider side on our end. Yeah, sounds good.

D
Is that a feature that you can configure, or...?

C
It varies from endpoint to endpoint. Most databases support both static and dynamic roles, so it just depends how the user wants to set it up for their database.
B
And then, right, your suggestion was that partial failures should just fail the mount. That may create extra database users that should get cleaned up by Vault later, but...

C
The extra database users wouldn't be too much of a concern, because they should have a TTL on them, so they should get cleaned up automatically. I think as long as we have atomicity on mounting all parts of a single credential read, then we should be all right.
A
What we want to do is enable all of them by default, so run the driver with all the features enabled, and run load tests, basically to set a benchmark on the memory and CPU usage. And I think what we want to work on from there is to see further optimization in terms of watches, ensure that the API server doesn't crash, and, in scenarios where there's a burst, ensure that we're able to handle that burst gracefully by having some kind of rate limiting at the driver.
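Rate limiting at the driver to absorb bursts is typically a token bucket. In practice the driver would more likely use client-go's rate limiters or `golang.org/x/time/rate`; this hand-rolled version is just to show the mechanics.

```go
package main

import (
	"fmt"
	"time"
)

// bucket is a minimal token-bucket rate limiter: a burst of requests drains
// the bucket, then calls are admitted only as tokens refill over time.
type bucket struct {
	tokens   float64
	capacity float64
	rate     float64 // tokens added per second
	last     time.Time
}

func newBucket(rate, capacity float64) *bucket {
	return &bucket{tokens: capacity, capacity: capacity, rate: rate, last: time.Now()}
}

// allow reports whether one more API call may proceed right now.
func (b *bucket) allow() bool {
	now := time.Now()
	b.tokens += now.Sub(b.last).Seconds() * b.rate
	if b.tokens > b.capacity {
		b.tokens = b.capacity
	}
	b.last = now
	if b.tokens >= 1 {
		b.tokens--
		return true
	}
	return false
}

func main() {
	b := newBucket(10, 5) // 10 calls/s sustained, burst capacity of 5
	admitted := 0
	for i := 0; i < 20; i++ { // a burst of 20 calls arrives at once
		if b.allow() {
			admitted++
		}
	}
	fmt.Println(admitted) // only the burst capacity is admitted immediately
}
```

The capacity bounds the worst-case burst hitting the API server, while the refill rate bounds the sustained load.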
A
Okay, yeah. Tom, do you have any questions?

C
Yeah, sounds good to me, and it'd be nice to publish some of these benchmarks in our readme as well.
A
So the next one is a very important one, the testing part of it. Basically, what we want to do is determine all the test gaps and improve the test coverage, and that includes e2e as well as unit tests. For e2e today we use Bats, and we are trying to cover all the features, but there is no coverage for all the providers in the driver.
A
So basically we are only testing some features with some providers. What we want to do is figure out which features haven't been covered and then add e2e tests for all the providers with those, and the same applies for unit tests. I think our unit test coverage is pretty low.
A
I think for this one, the next step is basically the PR that we have for moving to Ginkgo. I think that's the right next step in terms of the ability to add new tests more easily. It's under review; hopefully we can get that merged soon, and then we can add more tests with the new framework.
A
So that's basically the point: we should have the same test coverage for the driver with all the currently supported providers, so that we have more coverage from there.
D
Okay, a question about the previous one, where you said enable all features and determine memory and CPU usage. Does that also imply that we will be running this with all the supported providers?

A
No, it's only for each individual provider. I mean, the load test will probably have three instances, one per provider.
A
So I think for the load test metrics, the impact could be around latency; one is latency, but the other one could be on the API server, because I know the GCP provider queries the API server for service accounts. But other than latency, most of the other metrics should still rely mostly on the driver side of it.

A
The provider load tests are different: there, the providers basically determine what kind of load they can accommodate when they talk to the external secret store. That's where the rate limits come in, and how many calls they can make concurrently to their back end, Azure Key Vault or the GCP secret store.
D
I guess, because we're enabling all the features, that implies this is only going to be tested with providers that have implemented all the features.

B
I was also going to say, the individual providers will now need their own memory and CPU limits, so that'll be on us to determine, I guess.
A
Okay, so for testing today, we build a kind cluster and then we test against it. That means it's testing the latest Kubernetes version supported with kind, and also testing the latest of the driver. One test that we want to cover is basically backward compatibility: to see that an upgrade from n minus 2 minor versions of the driver to the current version does not break any existing workload, so it still continues to work, and new pods work with the new latest driver.
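The n minus 2 support window can be expressed as a tiny helper. The version strings and parsing below are simplified illustrations, not the project's release tooling, which would use a proper semver library.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor number from a "vMAJOR.MINOR.PATCH" string.
// Parsing is deliberately simplified for illustration; no error handling.
func minor(version string) int {
	parts := strings.Split(strings.TrimPrefix(version, "v"), ".")
	m, _ := strconv.Atoi(parts[1])
	return m
}

// upgradeSupported reports whether upgrading from `from` to `to` stays
// within the n minus 2 minor-version support window.
func upgradeSupported(from, to string) bool {
	d := minor(to) - minor(from)
	return d >= 0 && d <= 2
}

func main() {
	fmt.Println(upgradeSupported("v0.3.0", "v0.5.1")) // two minors back: supported
	fmt.Println(upgradeSupported("v0.2.0", "v0.5.1")) // three minors back: not supported
}
```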
A
This is something that we want to test, because users are constantly upgrading their driver versions. And in terms of how we deploy the driver for e2e: today we deploy it with Helm, but currently in the driver repo we also support deployment manifests, and there is no way to validate whether those deployment manifests are valid.

B
The integration tests right now, do they use Helm or not?
A
The
e2e
test
use
help
today,
so
in
the
make
file
we
have
an
install
driver
which
installs
it
and
then
the
provider
is
just
applied.
Using
cube,
ctl
apply
okay.
So
if
the.
A
Like
so,
I
think
yeah,
so
I
think
that's
where
the
disconnect
is
so
the
providers
do
support.
If
the
providers
do
support
handset,
I
think
we
still
want
to
test
out
just
the
head
shots
and
the
ammo
files
in
the
driver
reaper.
So
that's
everything
under
manifest
staging
directory,
so
for
driver
test
we
still
install
the
driver
separately
and
then
the
provider
for
the
provider.
A
Okay
and
then
basically
determine
test
flakes
and
deflate
the
test,
so
we
want
our
e2e
runs
to
be
more
reliable,
so
instead
of
relying
more
on
the
retesting
scenarios,
so
in
the
last
week
we
had
we
had
to
do
re-test
quite
a
bit
and
we've
pushed
a
couple
of
years
in
effort
to
deflate
the
tests
and
then
now
they're
a
little
more
reliable,
but
I
think
we
basically
want
to
avoid
sleeps
or
other
non-deterministic
weights
in
the
test
and
then
switch
to
a
more
deterministic
where
we
way
where
we
have
re-tries
and
basically
wait
on
a
pod
to
start
or
wait
for
something
to
happen.
A
And the last one for the driver is the ability to run the e2e tests locally. Today, for the HashiCorp Vault provider it's still possible, because everything is set up as part of the e2e test suite: the Bats setup sets up the keys and loads them, and then it runs the tests against it. But for Azure and GCP, I mean those two providers, but also for every other provider that gets added...
A
And then the next one is documentation. Basically, we want to document all the feature gaps and known limitations. Some of the known limitations we do communicate to users on GitHub issues, but if we can document them, then it's easier to just point people to that link, and we can keep growing that list if there are issues. Then the next one is basically a best practices guide, based on our testing and what we think is the right way to configure.
A
We
should
probably
write
a
best
practices
doc
so
that
we
can
point
users
to
it
and
say
how
to
configure
the
secret
provider
class,
how
to
do
it
within
the
namespace
and
also
best
practices
around
the
secret
provider,
class
and
external
secret
strip.
So
basically
we
want
to
say
use
one
secret
provider
class
for
external,
so
you
could
store
it
or
how
to
split
around.
So
basically,
it's
just
going
over
general
best
practices
that
we
think
and
then
we
can
keep
adding
on
to
that
list.
A
As
we
add
more
features
to
the
project,
then
the
today
we
have
a
couple
of
readmes.
Apart
from
the
main
readme,
we
have
have
a
docs
folder,
which
goes
over
all
the
different
features
individually.
So
what
we
want
to
do
is
we
want
to
consolidate
all
of
those
and
just
host
it
on
a
netflix
site.
A
So
I
have
a
open
pr
for
it
and
right
now,
I'm
in
the
process
of
requesting
a
netlife
account
from
kubernetes.org
once
they
do
that,
we
should
be
able
to
see
a
preview
on
every
change
in
the
pr,
and
then
we
can
use
that
to
basically
see
what
the
rendered
web
page
will
look
like
and
then
go
on
with
that.
A
So
I
think
this
will
also
provide
us
the
option
to
write
more
docs
for
each
individual
feature
like
we
can
go
as
elaborate
as
possible
yeah,
and
then
this
is
the
docs
for
the
new
contributor.
So
this
will
be
around
how
they
can
start
contributing
to
the
project
and
also
how
to
run
the
test
locally,
based
on
the
changes
that
they
make.
A
So
the
update
demo
in
the
readme
is
the
readme
currently
has
a
demo
gif,
but
that's
outdated.
So
I
will
appear
that
I'm
working
on
to
update
that
so
that,
when
users
just
land
on
the
repo,
if
they
want
to
take
a
look,
they
can
still
see
the
entire
flow
from
deploying
to
actually
viewing
the
secrets.
In
the
part.
A
And
then
troubleshooting
guide,
so
basically,
what
we
want
to
do
is
add
a
troubleshooting
guide
and
also
expand
it
based
on
github
issues
that
we
help
users
with
so
as
in
when
we
help
users.
If
you
think
this
is
something
that
makes
sense,
we
should
add
it
to
the
troubleshooting
guide,
starting
with
how
they
can
identify
this
particular
issue
and
then
what
actions
that
they
can
take
to
mitigate
this
particular
thing,
and
I
think
if
we
can
do
that,
it's
probably
easier
for
users
who
are
facing
issues.
A
And we want to document the load test results based on all the runs. Then the future roadmap: we want to maintain that as part of the project. Today we have a project board, and we're trying to add the new issues that come in to either backlog, to do, or in progress; backlog is basically something that we haven't started on yet.
A
But
what
we
want
to
do
is
we
want
to
maintain
the
project
board
and
then
keep
it
up
to
date,
so
that
any
user
who
goes
and
looks
at
the
roadmap
knows
what
we're
working
on
currently
what
they
can
expect
to
see
in
the
next
release,
and
it
kind
kind
of
gives
them
the
whole
picture
and
the
release
management
dock,
and
I
this
so.
We
have
a
basic
dock
right
now,
but
most
of
the
releases
I've
been
doing
it.
D
And
just
to
add,
like
I
think
of
part
of
this,
we
should
define
how,
when
to
patch
things,
you
know
supportability,
as
I
kind
of
mentioned
up
there,
like
n
minus
two
minor
versions.
D
So
what
does
it
mean
when
we
patch,
when
do
we
patch
and
how,
which
minor
versions
we
actually
patch
for
security
issues
or
bug,
fixes
and
then
also?
How
do
we,
what
are
like
the
supported
kubernetes
versions
or
just
supportability
matrix?
I
guess
that
should
be
part
of
that.
D
A
Okay,
the
next
one
is
metrics,
basically,
the
basic
system,
health
metrics
that
show
the
system
is
working.
So
I
think
this
also
so
we
already
have
a
liveness
probe
today
for
the
csi
driver
that
the
csi
community
provides
so
there's
a
sidecar
container,
which
constantly
checks
the
socket
to
make
sure
that
the
driver
is
up
and
running,
but
we
also
want
to
have
an
option
to
do
readiness
probe
so
that
the
application
pods
can
rely
on
that
in
terms
of
deployment.
A
So
they
could
do
that,
something
within
it
container
to
check
if
the
drive
is
up
and
running,
because
there
are
scenarios
where,
if
a
new
node
comes
up,
our
community
scheduler
schedules
the
workload
part
before
the
driver
part
and
in
those
scenarios
drivers
the
workload
part
sees
mount
failures
because
the
driver
is
not
it
running
and
it
basically
just
populates.
Quite
a
few
events
with
mount
fail
mount
fail.
A
And
basically,
we
want
to
see
if
you
can
add
metrics,
to
identify
and
expose
the
drifts
between
the
state
of
the
pods
or
provide
a
fail
to
update
one
or
more
parts.
So,
basically,
when
there
are
a
burst
of
pods
that
get
deployed,
we
want
to
make
sure
that
even
if
a
single
failure
happens
for
a
part,
we
want
to
highlight
that
and
then
show
that
in
the
driver
logs
right.
A
So
the
next
thing
is
logging
framework,
so
right
now
we're
using
loggers
and
then
in
terms
of
debug
or
warning
it's
the
log.
The
logs
are
printed
out
pretty
frequently
even
debug,
and
I
think
what
we
want
to
do
is
revisit
that
and
switch
to
using
klog.
So
I
opened
a
pr
last
night
to
switch
to
using
k
log
and
also
there's
a
really
good
blog
on
structured
logging
with
k
log
with
v2,
so
I've
gone
through
that
and
I've
tried
to
incorporate
that
into
the
driver.
A
It
does
it,
I
mean
right
now
in
the
pr.
I
varies
a
little
bit
for
different
packages,
because
the
part
detail
is
not
available
for
all
events
right
like
for
node
unpublished.
We
only
get
the
pod
uid
and
for
node
publish
we
have
the
pod
name
and
for
the
other
consulars
we
have
the
pod
name
as
well.
A
Also, the next one is basically to audit the current logging. Once we do the klog changes, I think what we want to do is do an audit of all the logs that we have, to make sure that we have everything that we need across the different log levels for debugging. So if a user comes with an issue, and we can have them increase verbosity and still get all the logs, then that will be good for us for debugging, and configurable log verbosity is something that we'll be able to achieve with klog. As part of this audit, we also have to determine whether the different log levels that we define make sense, whether we want to bump any log levels, and we just need to have complete log coverage, so that any time we have issues...
A
The
logs
are
the
single
source
of
truth
for
us
and
in
terms
of
security.
Basically,
the
we
want
to
go
through
a
cncf
security
audit,
so
have
the
cncf
sponsored
security
audit,
go
through
threat,
modeling
and
also
the
results
so
that
we
know
that
it's
secure
and
then,
if
there
are
any
vulnerabilities,
then
we
fix
them
before
we
call
the
project
is
stable
and
also
for
other
attack
scenarios
or
threat
modeling
that
we
do
with
the
audit
as
well
as
outside
of
it.
A
I
think
we
want
to
translate
those
to
e
to
e
test
once
we
have
a
mitigation
so
that
we
know
that
any
future
prs
don't
break
or
cause
the
issue
to
reoccur.
Basically,
and
then
we
want
to
add
psp's
port
security
policies
so
that
out
of
the
box,
we
tell
the
user
how
they
can
configure
the
least
privilege
for
the
driver
so
and
then
also
the
our
back
requirements
for
each
feature.
So
today
we
do
have
our
all
our
back
rules,
but
we
don't
document
it.
A
So
I
think
it
will
be
a
good
thing
to
document
the
user
knows
why
certain
outback
permissions
are
being
required
by
the
driver
and
for
which
feature
is
being
used.
So
that
means,
if
they
disable
a
feature,
they
cannot
go
through
the
dock
and
say
yeah.
This
is
not
required
for
this
feature,
so
let's
go
disable.
These
are
back
permissions.
A
I
think
the
next
one
for
provider
is
basically
to
hide
hand,
chart
support,
so
the
for
the
provider
is
mostly
suggestions
based
on
what
we've
seen
from
users
and
what
we
are
currently
doing.
So
this
we
still
add
to
it.
But
the
first
thing
is
basically
headshot
support.
So
today
for
folks
installing
the
driver
and
provider
they
have
to
do
the
installation
in
the
driver,
repo
then
go
back
to
the
provider
and
then
install
the
provider.
A
The
next
one
is
windows
support,
so
driver
added
support
for
windows
in
0.3.9.
So
I
think
we
also
it'll
be
great.
If
the
providers
support
running
on
windows,
nodes
in
terms
of
provider
changes,
the
changes
are
very
minimal
and
for
actually
supporting
windows
the
I
think
it
requires
building
multi-arch
images,
so
basically
building
binaries
for
linux
and
windows,
and
also
publishing
docker
images
for
linux
and
windows.
So
that's
the
most
that's
required
for
the
windows,
support
and
also
provider
testing.
A
So
we
do
the
driver
provider
tests
on
the
driver
side,
but
that's
again
just
to
validate
the
driver
features,
but
on
the
provider
side
it's
possible.
There
are
different
back-end
scenarios,
that's
being
supported,
so
basically
we
want
to
have
test
coverage
in
the
provider,
so
that
includes
backward
compatibility
tests.
So
again,
this
is
update
to
n
minus
two
minor
versions,
so
this
could
be
the
provider
versions
or
the
driver
versions,
with
whatever
the
provider
is
using
and
again
determine
test
gaps
and
improve
test
coverage.
A
So
this
is
the
basically
e2e
and
unit
test,
so
the
providers
are
confident
before
they
cut
the
release
and
the
high
availability
applies
to
this.
One
too,
so
the
provider
is
basically
robust
to
infra
failures.
Network
field
is
another
operational
field
is
because
the
provider
is
the
only
part,
that's
actually
talking
to
the
external
secret
store.
So
we
want
to
make
sure
that
it's
reliable,
because
any
failure
there
will
just
result
in
the
pod
mount
to
fail.
A
So
and
then
this
is
an
important
point
because
they
found
an
issue
in
one
of
the
driver
releases
where
provider
binary
is
in
work,
but
then
there
is
no
context
being
passed.
So
if
the
provider
doesn't
honor
the
context,
then
what
happens?
Is
it
just
times
out
and
sometimes
it's
just
like
false
negative
and
the
mount
succeeds.
But
then
the
files
are
not
there.
So
right
now
for
grpc,
we
are
passing
the
context.
The
parent
contacts
from
cubelet
all
the
way
to
the
provider.
A
So
the
provider
must
ensure
that
at
every
request
it
uses
that
context
and
then
it
corners
it
in
the
right
way
so
that
if
it
is
not
able
to
do
it
within
the
context
timeout,
it
just
returns
an
error,
and
this
is
the
same
as
the
one
for
the
driver.
So
basically
the
provider
should
be,
providers
should
be
able
to
handle
partial
failures
and
when
driver
calls
something
repeatedly,
the
provider
should
be
provide.
A
call
should
be
added
basically
and
in
terms
of
documentation.
A
So right now, in terms of features, we have sync secret and rotation, and going forward we'll have more. For the provider docs, it's basically to say which particular features are supported for this provider, and also to elaborate more in those docs in terms of setup and all that. That would be great as well, because most of the time users land directly on the provider repo and go through the docs there.
A
And
if
you
provide
a
handshake
to
also
install
the
driver,
they
are
doing
everything
just
in
the
provider.
So
it's
good
to
have
these
docks
there,
so
that
providers
I
mean
the
users
just
go
through
the
single
dock,
install
setup
and
then
they
just
get
going
with
it
and
for
troubleshooting
guide.
Basically,
this
could
be
provider
specific
and
also
if
the
provider
wishes
to
add
driver
dog
driver
debug
there.
That
will
also
be
great.
But
with
the
grpc
changes,
all
the
provider
logs
are
in
the
grpc
port,
I'm
in
the
provider
part.
A
And in terms of performance: basically, determine system failure at scale. This involves the rate limiting, so basically the scale of API calls to the back-end secret store. We want to make sure that the provider can handle a burst of calls, and also what the load is and how gracefully it can handle it.
A
So
if
there
is
rate
limiting
if
it's
honoring
the
retry
after
header
and
then
atomicity
also
comes
into
play,
because
if
it's
getting
2000
requests
and
the
rate
limit
is
1500,
it
has
to
make
sure
all
the
1500
succeeds.
But
the
500
failures
are
all
atomic
so
like
it's
all
or
nothing.
So
this
scale
test
is
basically
to
ensure
atomicity
at
the
driver
of
the
provider
and
also
honoring
retry
afters,
and
also
the
rate
limiting.
A
And
this
part
is
the
same
as
that
of
the
driveway
right,
so
we
want
to
enable
all
the
features
that
the
provider
can
work
with
for
the
driver
and
then
run
a
set
of
scale
tests,
because
in
terms,
if
we
look
at
rotation,
then
the
provider
is
being
called
periodically
by
the
driver.
So
we
want
to
ensure
that
the
provider
can
handle
such
scenarios,
and
if
it's
a
limitation,
then
we
can
also
go
and
recommend
what
is
a
good
value
to
set
as
the
polling
timeout
for
the
rotation
as
well
and
in
terms
of
metrics.
A
Basically
basic
system
health
metrics
that
show
if
the
system
is
working,
so
we
don't
have
any
kind
of
metrics
today.
So
if
it's
running
as
a
binary,
there's
no
metrics
and
for
grpc,
we,
we
don't
have
any
kind
of
liveness
or
readiness
health
check
for
the
provider
uptime.
So
I
think
that's
something
that
we
should
definitely
add
before.
We
call
stable
and
grpc
server,
because
that's
the
direction
that
we
want
to
go
with.
A
So
we
want
to
ensure
that
all
providers
implement
the
grpc
server
and
then
they're
running
the
latest
version
and
in
terms
of
logs.
Whatever
applies
to
the
driver
applies
here
as
well,
so
base
we
basic
logs
for
diagnostics
and
configurable
versity.
So
we
want
to
pick
a
login
framework.
C
Sounds good to me. One question I do have is: will the CNCF security audit cover the providers as well, or just the driver?

D
I think usually it's the CNCF project, so in this case it'll be the driver, but I don't really know how they can realistically do the audit without the providers. So, yeah.
B
One
idea
I
had
on
that
is:
maybe
there
being
like
a
reference
provider
that
uses
kubernetes
secrets
as
the
external
superchargers
like.
So
it's
just
like
a
circular
dependency
there,
but
like
look.
D
D
I
don't
know
I
do
like
the
idea
of
like
a
stub,
I
mean
I
think,
anish
added,
like
a
provider,
a
mock
provider,
so
in
theory
they
could
use
that.
D
But
well,
I
guess
we'll
figure
it
out
when
we
get
there
like,
we
have
to
like
kick
off
the
whole,
like
figure
out
who's
like
there's,
there's
a
company,
I
think
called
trail
bits
or
something
they've
done
these
type
of
audit
for
other
projects
before
so
we
just
have
to
work
with
them
to
figure
out
how
they
want
to
do
it,
and
then
we
can
write
issues.
I
guess.
D
Lgtm,
I
guess
my
question
is
like
what
do
we
do
next,
so
I
love
that
we
did
this
together.
I
think
we
should
perhaps
think
about
creating
issues
and
tag
them
in
like
a
stable,
like
a
milestone
called
stable.
That's
like,
I
think,
that's
super
helpful
for
consumers
of
this
solution
to
know.
Okay,
how
far
are
we
from
cutting
a
stable
release
for
both
the
driver
and
the
providers
right?
So
what?
What?
D
What
do
we
want
to
do
about
creating
issues
in
those
pers
in
in
in
in
the
in
all
the
different
rebugs?
I
guess.
A
Okay, we don't have any other items on the agenda.
C
Yeah, I've kind of got it working locally, but it's not polished enough to actually open it on the repo yet. We've been fairly heads-down on getting ready for the next Vault release, but we should be getting some more time to finish off the design in the near future. So yeah, the gRPC support will be one of the first things we do once we've got that nailed down.
D
Sorry,
I
think
I
was
just
kind
of
saying,
like
as
a
user.
I
think
it
may
be
helpful
to
also
like
create
milestones
in
the
vault
and,
like
other
providers
repos
just
to
give
people
signal
like
hey
we're
working
on
this,
and
this
is
like
the
the
day
that
we
we
think
we
want
to
deliver
that.
I
I
keep
I
mean
we
do
see,
questions
on
on
on
github
a
lot.
D
A
Once providers have the gRPC support, we will add them to the default gRPC-supported providers list in the driver, so that when users install it, they know that every provider just works, because today the flag needs to be manually configured for certain providers. So I think we can just have it pre-configured, and then users can just start using it.
C
Right
yeah,
sorry,
I
hadn't
realized
is:
is
the
last
hold
out
on
that
yeah?
Okay,
we
can
we
try
and
punt
priority
on
that
up
a
bit.