From YouTube: Secrets Store CSI Community Meeting - 2021-02-18
A
Hey everyone, welcome to the Secrets Store CSI community meeting. It's Feb 18th. This call falls under the CNCF Code of Conduct, and the video will be published.
A
I think we have a small agenda for today. The first one on the agenda is to discuss CSI driver token requests. I think Tom, you added that.
B
Yeah, it was just a quick question really. I recently read about the alpha feature in 1.20 that allows CSI drivers to request service account tokens. It's still behind a feature gate and in alpha in 1.20.
A
Yeah, I think this one, and also the other feature for the remount. I've been exploring that as an alternative option for rotation as well. Both these features require the feature gate to be enabled, so we can definitely explore this, add it with the documentation, and see if users are interested in trying it.
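For reference, the two alpha fields being discussed live on the CSIDriver object in Kubernetes 1.20, behind the CSIServiceAccountToken feature gate. A hedged sketch (the audience and expiry values here are illustrative, not what the driver ships with):

```yaml
# Illustrative only: the 1.20 alpha fields discussed above, on a CSIDriver
# object. Requires the CSIServiceAccountToken feature gate (alpha in 1.20).
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: secrets-store.csi.k8s.io
spec:
  podInfoOnMount: true
  # Ask kubelet to pass a service account token for this audience
  # to the driver on NodePublishVolume.
  tokenRequests:
    - audience: "example-audience"   # hypothetical audience value
      expirationSeconds: 3600
  # Ask kubelet to periodically re-call NodePublishVolume ("remount"),
  # the rotation alternative mentioned above.
  requiresRepublish: true
```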
B
Ah, cool. Let me know if I can give any help. It'd be great to have that on the horizon.
A
Okay, Tom, did you have anything else for that one?
B
No, no. If that all sounds straightforward, then yeah, awesome. Sounds good, thanks.
A
Okay, and then for the next one, I just added that now, for our disconnected scenarios. I've created a GitHub issue for it, and I've started a design doc, basically discussing what we had based on all the discussions during the last call. I will share it with everyone on Slack this week, and then we can probably go over it in the next community call. If that sounds good, we can go over the doc and then discuss it during the community call again next week.
A
And the next one is the proposal that Tommy added.
D
Yeah, thanks for the comments. Tom just left some, and I haven't gotten to those yet. I ran some load tests on the HTTP driver, and I think the largest issue is: is there a max size of a SecretProviderClass currently, or can we enforce one in the future, and how breaking of a change would that be?
D
If we don't want to enforce any limits there, then I've been looking into streaming RPC responses, where just one secret at a time could be streamed back to be written. Another possible intermediate step is a library: write the code that the driver would do in a shared place, and then let individual plugins use those functions.
D
So for to-dos: yes, I will, I think, continue looking a little bit at the streaming stuff, and maybe at authoring a library that could be used either way. But the big question before I want to go further with a proposal is deciding whether or not a max size of a SecretProviderClass is something that's feasible, and the one area where there kind of is one is the Kubernetes syncing.
A
Yeah, that makes sense. Typically, for the sync secrets, most of the users I've seen do it either for ingress TLS certs, so they do it only for a subset of secrets or certificates in the SecretProviderClass, or for the few username passwords that they want as environment variables, right.
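A minimal sketch of that subset pattern: the secretObjects section of a SecretProviderClass lists only the objects to sync to a Kubernetes Secret, independent of everything else fetched from the external store (the names and keys below are made up for illustration):

```yaml
# Illustrative SecretProviderClass: many objects may be fetched from the
# external store, but only the TLS pair here is synced to a K8s Secret.
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: example-spc          # hypothetical name
spec:
  provider: vault            # any supported provider
  parameters: {}             # provider-specific config elided
  secretObjects:             # only this subset becomes a K8s Secret
    - secretName: ingress-tls
      type: kubernetes.io/tls
      data:
        - objectName: tls-cert   # object fetched from the external store
          key: tls.crt
        - objectName: tls-key
          key: tls.key
```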
A
So there is a distinction between the objects that they actually fetch from the external secret store and the ones they sync as a Kubernetes secret. It's mostly just a subset of the contents that they fetch from external secret stores. But I think the only path we really have to worry about is during the NodePublish, where everything has to be fetched and written. That is when we could hit the memory limits.
B
I think it would be extremely rare for users to get over four megabytes in a single SecretProviderClass, such that I feel it would be a reasonable path forward to just stick with that default size limit initially. And if we get requests, it's not hard to add an escape hatch: a config option that says we will set the max received message size for you and bump it up, or not.
B
You know, even make it not configurable for everyone, because there's not much harm in bumping up that max size, and it's quite a low-risk change to make. So yeah, I'd kind of just say go ahead with it, personally.
D
Okay, so you're supportive of just adding flags and documenting: hey, you bump this flag and this container limit up if you're running into the problem. Document how you might know if you're running into the problem, and then document the fix.
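For illustration only, the escape hatch being discussed might look something like the deployment fragment below. The --max-call-recv-msg-size flag name and its values are hypothetical, not a confirmed driver option; gRPC's 4 MiB default max receive message size is the real limit being referenced:

```yaml
# Hypothetical sketch: exposing the gRPC receive limit as a driver flag
# and raising the container memory limit alongside it.
containers:
  - name: secrets-store
    image: secrets-store-csi:example        # placeholder image
    args:
      - "--max-call-recv-msg-size=8388608"  # hypothetical flag: 8 MiB,
                                            # up from gRPC's 4 MiB default
    resources:
      limits:
        memory: 200Mi                       # bumped along with the size
```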
B
Yeah, as long as we make it nice and easy: there's a good log message for when you bump into this problem, and we can surface that in a good way for the user. I think the vast majority of people won't bump into it at all, and I think we can make it easy to solve for the people who do.
A
Yeah, I think we can also test out the limits and see. I mean, the 120 number that I have, maybe I can help test that with you if you need, and then we can come up with a number that we think is appropriate. If it still falls under 4 MB, I think what Tom said totally makes sense; we can document it then. Essentially it won't be a breaking change, right, because it's just introducing an improvement, and it will probably affect future users.
D
Okay, so should I look to spec out an implementation, and we just keep the limits in mind?
C
Actually, having a limit is really good for supportability, especially when we do load tests. We should establish: this is what was tested, and this is what is supported. Anything beyond that is, you know, at your own risk.
D
Yeah, I might suggest that we start with a limit of one megabyte, not four, so that you could still use the secret syncing feature, and so you're not surprised that part of your secret syncing isn't working. There's just one clear limit to bump, and I can document that if you go above it, then some secret syncing may break.
A
So the next two on the agenda, I just started those. The first one is the security audit. The RFP PR has been merged, and the audit team has created Google Sheets through which they will be asking questions.
A
So if there are any questions related to the CSI driver project, I am the point of contact right now, so I'll get an email, but I'll also share it with the broader community so that folks can help answer them. It's a bi-weekly meeting, so we had a meeting yesterday and we'll have the next one after two weeks. And the next one that I added was... oh sorry, did anyone have any questions about it? Yeah.
D
Has the scope for that been determined? Like, is there a point in time where they're going to snapshot the repo, and that is what they are evaluating?
A
I think so. The next item was the 0.0.20 release, right. So that release is going to strip out all the pieces of code that we no longer use, so that at least now we have a streamlined set of code, I mean changes, that we want them to test. But yeah, like we discussed on the last call, we also want to have the driver changes where the driver is doing it, right, so whatever we want to have for GA, and have them test it out.
A
Yeah, and the last one that I had on the agenda was that the release is going to be cut today for 0.0.20.
A
Actually, I think one interesting thing that came up: a user pinged me on Slack about logging to standard error and the standard error threshold. I've been discussing that offline with Tommy. We use the klog framework for logging, and users want to only log error and warning, and that is configurable with stderrthreshold. If they set the threshold, only logs at that level and beyond will be logged. But the problem with that is you also need to set logtostderr to false.
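A sketch of the flag combination being described, as container args. klog's logtostderr defaults to true and, while true, sends everything to stderr regardless of stderrthreshold, which is why both flags have to be set together (the container name and image below are placeholders):

```yaml
# Illustrative: only WARNING and above reach stderr.
containers:
  - name: secrets-store            # placeholder container name
    image: secrets-store-csi:example
    args:
      - "--logtostderr=false"        # must be false, or stderrthreshold is ignored
      - "--stderrthreshold=WARNING"  # send WARNING, ERROR, FATAL to stderr
```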
E
I guess just a question for the community, and hopefully it doesn't get too loud here, but I'm curious, from the other community members that work for cloud providers: how are you, or are you, getting questions about your serverless platforms and how you can integrate CSI on some of your service offerings?
D
I haven't heard too much about that, but I could see supporting more ephemeral containers. The secrets driver could be something that would be valuable to stateless services and the like.
F
Oh yeah, thanks. This is Ten from Uber Seattle; I am a software engineer. At Uber we do not use Kubernetes at all. Previously I was working on a Kubernetes admission controller, so I'm just curious how Kubernetes handles secrets. At Uber we deploy the HashiCorp Vault enterprise version, and we have services that directly fetch from the Vault, and we also have some very similar usage patterns, like an operator that mounts the secrets into the container at launch time.
F
But all of this happens on top of Mesos instead of Kubernetes. Uber does have some PoC work, and I know there are a bunch of teams trying to evaluate Kubernetes this year.
F
So I'm just curious how Kubernetes handles the secrets, especially gathering the secrets from HashiCorp Vault and then mounting or injecting them into the container. I happen to know this project, so I'm very curious how these things work.
D
I think that is interesting. We've been pretty Kubernetes-focused on the driver here, but I think the actual CSI driver should work on Mesos; the CSI interface should work between the two. The auth at the plug-in level, to be able to auth to the secret manager, is one area where, at least on the GCP secrets plugin that we have, it assumes Kubernetes.
F
I didn't notice that this project can be directly applied to Mesos, but since we are evaluating Kubernetes and trying to introduce Kubernetes into Uber, I would say we'd try to bring the new usage directly to Kubernetes. I guess in the next two or three years we'll start slowly migrating, but it's going to be very slow, again because of scale.
D
What I just meant was that the CSI driver interface is written as a standard so that it can be applicable across orchestration frameworks, whether that's Kubernetes or Mesos. But there are a number of features, right, like the driver syncing to Kubernetes secrets, that just assume Kubernetes, and specifically in the GCP one we assume a Kubernetes identity.
F
At work we use SPIRE, so every container has its own SPIFFE cert, and from the SPIRE server we get a standard x509 certificate. We use that certificate to talk to HashiCorp Vault to get the secrets. Basically, from there you generate a JWT token, very similar to a service account token in Kubernetes. So yeah, we do have our own way of doing service identities on top of Mesos.
B
Yeah, feel free to open an issue on the Vault provider if you'd like to talk about features or things that would help you, because it's super interesting to hear about your use case. Thanks for sharing.
F
Yeah, okay, one thing to share: we do have another deployment on GCP, as tier zero, for lower-level infra. So that directly fetches the secrets from GCP. The GCP part here is that we deploy HashiCorp Vault on top of GCP.
G
I am exploring this project as a potential way to figure out how to reference secrets in git, and my use case in general is about the whole GitOps or declarative git world. We need to figure out how to store secrets somewhere and then consume them in our clusters. Storing them in git is generally not considered a good thing to do, even though that's more GitOps. So I was wondering if this is a project which I could potentially use to store secrets in Vault but consume them in my workloads.
G
I think I could give a more realistic scenario. Let's say I have my Kubernetes manifests in git, and I have something like Argo CD which applies them on the cluster and gets them to run in a continuous fashion.
G
Today we have explored something like Sealed Secrets to encrypt secrets and keep them in git, but I know that a lot of folks, a lot of companies and customers, would never want to do that, no matter how encrypted things are. They would typically prefer referencing the secret, but not necessarily storing the secret contents in git itself, which is where I'm trying to get to.
G
I would need to have, let's say, a pod or a deployment or any workload, with the pod spec of that workload having a CSI section which references a nodePublishSecretRef. Then probably the only thing that an admin would need to get onto the cluster would be something like the credentials needed to access Vault or Azure.
G
But then in general, I'm interested in figuring out if we can at least try out an example of how we could have a typical workload which consumes secrets, with all manifests stored in git, including the Secrets Store CSI driver deployment manifests as well. I was trying to explore that with respect to this project. Does that make sense?
B
Complete sense, and yeah, it sounds like it fits your use case quite well. As you say, the contents of the SecretProviderClass are not considered secret; they're referencing secret materials.
B
I just wanted to mention, if you'd like to see a fully worked example, the end-to-end integration tests in the Vault provider are probably a good launching point. You can follow through all the way from cluster creation in the CircleCI file, through to installing in the Makefile, and then there are some Bats tests that actually run through the whole end-to-end flow. That might be a good reference point.
G
Thank you, yeah. I was taking a look at those and they were really helpful. I had a quick follow-up question on that, if that's okay. Actually, give me a minute, I'll probably turn on my camera as well. I think my general follow-up question there, and this probably defeats the purpose of having a CSI storage driver, is what I was kind of wondering:
G
Is there a way for me to request the secret and put it on the cluster without consuming it in my workload? In that case it's no longer a CSI driver altogether, but I'm kind of thinking about something like GoDaddy's external-secrets, which had a way to provide an interface to different secret stores out there.
B
Syncing to a Kubernetes secret without having to mount it into a pod, essentially? Totally, yeah. I think there's an open feature request for that in the CSI driver's issues. I don't know what the latest on that is, but that's certainly a question that other people have asked as well.
G
Got it. Is that something I could go and explore a little further? Just checking in case it's already gotten a thumbs down from a bunch of folks, or do you think it's something that's definitely worth exploring?
C
You should definitely chime in on that issue. I think we actually talked about this in the last community call, about some alternatives, maybe feature toggling for people who need that. But there are a lot of implications with this, in terms of, you know, when do you delete and recycle the secret?
C
Right, so there are a lot of open-ended questions there, but definitely you should chime in on that issue with what your use cases are and what the desirable outcome is.
E
Awesome. I think we're at the end of our agenda items, I believe.
C
Yeah, I'm trying to find that issue so we can link it, so that it's easily accessible for people to chime in.
G
On that note, I would like to quickly point out, and this is probably an FYI for this project: GoDaddy's external-secrets (I'm not related to GoDaddy in any way; it's just something that I discovered). I think they and Container Solutions together are collaborating on another project, which is significantly younger than this one.
G
It's probably two months old, and it's kind of trying to solve that use case of bringing in the secret from another store and getting it onto your cluster as a Kubernetes secret. It's just an FYI; it might be worth taking a look.
C
Okay, thank you for pointing us to that, and Tom, thanks for the link for that.
C
I guess in theory you can check out the mock provider.
F
Can I use the open-source mock provider as the secrets provider, if I understand correctly?
B
Yeah, definitely. Again, the integration tests are kind of an isolated unit that runs completely on open-source stuff, so that should work fine.
A
Yeah, the way we have it is, we support three providers today, and the e2e tests mostly run once you set up certain parameters. The Vault e2e runs by bootstrapping Vault within the cluster, and for the other two cloud providers, the way we do it is you can set up an Azure Key Vault or a Google Secret Manager outside of the e2e test, and then you can just point the e2e to run against it.
E
Let's see, this will be our last meeting for this month, so we will reconvene on March 4th, in a couple of weeks. And as Rita and the others mentioned, feel free to use the Slack channels and the issues; the community's pretty active on those.
E
With that, we'll go ahead and end the meeting. Thanks everyone, see you in a couple of weeks.