From YouTube: Secrets Store CSI Community Meeting 2020-09-17
B
So what we decided was we want to release 0.0.14 today, so that providers can vendor in the latest release and implement the gRPC server changes for the provider. We've merged all the required changes for 0.0.14. Right now we have a PR open for bumping the images; once that's merged we'll have the automated build, and then we'll cut a release today. And there's the rotation PR, which is out there.
B
We have moved it to the 0.0.15 milestone because we want to isolate the changes, so that the big change in 0.0.15 will only be rotation, and then users know that the new feature is being added as part of that release. But the rotation PR is also out there, and Rita and Tommy have taken time to review. It'll be nice to get more reviews on that PR when everyone gets a chance.
A
All right, thanks for that, Anish. I'm just going down the line. Next we have Tom to talk about the default. I think that's supposed to be gRPC; it says "pg" here.
D
So, thanks. I was just wondering if we have, like, a reference implementation. I hadn't realized we had a fake service, so I checked that out, but I was looking at the Azure provider and saw that the proto definitions were copied over. I guess I was assuming we'd have kind of one place for them to be defined, Anish. Do you have any thoughts on how we should share those proto definitions?
B
Yeah, so if you look at the provider, I mean, sorry, the Secrets Store CSI driver, we have the proto definitions under the provider directory; I just shared the link in the Zoom chat. So the expectation is that all the proto definitions exist in the driver under this directory, and then the provider would just vendor this in at that particular tag. So once 0.0.14 is released, what I'm going to try and do for the Azure Key Vault provider is just vendor in this directory and then use the proto definitions from there.
D
Okay, nice one, cool. Yeah, that's what I was hoping for. And am I right in thinking that to do this I'll need to make the Vault provider pod a privileged pod and share the host networking as well?
B
No, so for the Key Vault provider I'm piggybacking on host networking, because there are certain token request calls that we make that we want to stay on the host, but host networking is not mandatory. I mean, if it's required, then you probably need to turn it on, but it's only for the Key Vault provider that we have a necessity to use the host network.
D
Right, cool, okay, thanks. Yeah, I'm just getting my teeth stuck into this, so thanks for clearing that up.
F
Yeah, I just wanted to give a quick update on the Vault CSI driver. Tom and I have started this CSI update project. We had our kickoff earlier this week, and we are, you know, agreeing internally on what a GA driver might look like.
F
So, the features that we want to include, like rotating and renewing of secrets and leases... I'm not able to recall everything off the top of my head, but there's a list of things that we're going to be going over in the next month, digging into the code and seeing if they're even possible. So I suspect next month we're going to have a lot more questions about CSI in general and how we can make some of our requirements work with the driver.
A
Awesome, that's good news. I know we've been getting a lot of people asking about what you guys provide, so it's good to see that moving along.
G
Yeah, Jason, if you want to share, even if it's a draft form of "hey, here are the bullet points of what we're looking at," as soon as possible, I think that would be really good. That way, at least we can also say, "hey, here are the issues tracking these items that we're already aware of," versus "hey, here are bullet points that we never really thought of as a requirement for stable GA," right? So I think, yeah, I see the link that you just shared.
F
I just shared a gist in chat. That is just some notes I've been taking; we have internal documentation that I can't share as well.
F
But these are some of the high-level bullet points. We want to have feature parity with our Vault Agent injector, our other secret injection tool. So, trying to get as close to feature parity as possible, plus having things like environment variables and Kubernetes secrets syncing, would make the CSI driver really valuable to us. So it's mostly just trying to get this in sync with the other project and add the cool stuff on top.
G
So may I ask that everybody on this call maybe take a look at this, and then, you know, maybe we can follow up and talk about this in more depth on the following community call. I don't know if we have enough time, but we can also talk about some of these here on this call if we need to, if we have time.
G
Okay, let's definitely follow up. We'll definitely take a look at this and follow up asynchronously before the next call.
A
All right, next, Rita, you're up, talking about the default RBAC required for the driver.
G
Yeah, so if you can click on the link: this was a discussion that came out of the rotation PR. Essentially, you know, we're adding additional RBAC based on what users select as the features that they want to use, for example sync secrets or the rotation reconciler, right? So based on that, there is a discussion around, hey, what is the default RBAC we want for the driver? And, you know, is it a good user experience when people have installed it and then later on realize, "hey, I actually need more RBAC because of these feature flags that I turned on"? So I just want to open this up to the group here and get more thoughts around the default RBAC for the driver, and, as you can see...
A
No, I guess I was going to ask, like, what's the current security posture now? Where do we see our vulnerabilities, if any?
G
Right, so by default, you know, sync secrets is an opt-in feature.
G
Obviously that is not needed if you don't need to synchronize with Kubernetes secrets, right? And within this rotation reconciler there's additional get access required for pods, and I think for the nodePublishSecretRef, that's the secret, right, the pod secret that we fetch. So in general, the guidance is pretty much: you know, you should only deploy or request additional RBAC if the solution requires it. And I think Tommy had brought up a good point about, hey, if you're not using the secret syncing feature, why require, why grant, additional access to the driver?
E
And then the one driving factor behind that is, I can't remember what the feature is called in Kubernetes, but it's something like node isolation, I think, where you might have different pools of nodes, where you want some workloads to be able to run on some, but then not have access to the workloads that are on different nodes. And I think the secret syncing feature required just blanket secret read permissions over, I think, the entire cluster. So I'd just like the default policy to work with node isolation, so that the driver doesn't then become a path to accessing data from workloads that shouldn't be able to run on the node.
A
I was actually talking about, like, if you're using the taints or tolerations to land workloads on a node, building kind of like an affinity.
E
It's having different security boundaries per, like, the actual instance nodes, so that the kubelet process on one node can't access information for a pod that isn't supposed to be scheduled on that kubelet, or on that node.
E
I probably need to look more into that feature, but that's the one where I would just like the default permissions to not break it. I don't think that addition breaks it, being able to get the pod.
E
Yeah, and I think, going back and explaining the separation of the secret syncing feature into a separate RBAC: I believe the permissions on that could allow, you know, the driver to get a secret for a workload that isn't allowed to run on the driver's node, whereas being able to get pod metadata I don't think has that same kind of escalation.
E
What do we want in the base RBAC permissions? My comment is just that I'd like the base RBAC permissions to not allow that kind of escalation between node workloads.
B
I just want to bring up another PR in the driver, basically to revamp the end-to-end test suite. It's actually a huge PR, moving to using Go and Ginkgo, so it'll be nice to get more eyes on that PR. Right now we're using bats, and we're trying to move to the Go test framework so that adding more tests is easier. And this PR actually changes the test framework for all the providers.
A
Yeah, which PR is that?
F
Can I ask a question about this rotation secret PR? How do you expect to signal to whatever pod is consuming the secrets that they have changed? Is it up to the app to kind of be checking that periodically and reloading its configuration, or are you thinking, like for Kubernetes secret syncing, of having internal mechanisms that check the sha of the secret, see if it has changed, and reschedule the pod?
B
So for the phase one part of it, the phase one that this PR implements, what we do is we update the values in the mount path, and then we also update the Kubernetes secret. And I have an issue open to add an event recorder in general, so that we can report any kind of errors for syncing secrets, or other Kubernetes-secret-related events, against the pods. But the path forward for the workload pods...
B
And then, if they're using the synced Kubernetes secret for populating environment variables, they would need to restart their pod. But as part of phase one we are not going to initiate any kind of restart, because the primary functionality of the driver is just to mount the contents and then not do anything else, right? And in terms of notification, I think the events would probably give the user details that, yeah, something has been rotated or something has been updated, and then they could take actions based on that. But we don't want to go and restart any of their pods just yet, because today the way Kubernetes does it is: if you have a Kubernetes secret and you mount that as a volume in a pod, updating the secret only updates the contents in the volume, but it doesn't do anything else apart from that. So we're trying to follow the similar practice that Kubernetes has today.
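Since rotated contents only land in the volume (and the synced Secret) without restarting anything, a common workaround, not part of the driver itself, is the checksum-annotation pattern: put a digest of the secret data into the pod template's annotations so any content change triggers a rolling update. A minimal sketch of computing such a digest (the function and annotation names are illustrative):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"sort"
)

// secretChecksum returns a deterministic sha256 hex digest of a
// secret's key/value pairs. Keys are sorted first so the result does
// not depend on map iteration order.
func secretChecksum(data map[string][]byte) string {
	keys := make([]string, 0, len(data))
	for k := range data {
		keys = append(keys, k)
	}
	sort.Strings(keys)

	h := sha256.New()
	for _, k := range keys {
		h.Write([]byte(k))
		h.Write([]byte{0}) // separator so "ab"+"c" != "a"+"bc"
		h.Write(data[k])
		h.Write([]byte{0})
	}
	return hex.EncodeToString(h.Sum(nil))
}

func main() {
	data := map[string][]byte{"password": []byte("s3cr3t")}
	// Placing this digest in the Deployment's pod template annotations
	// (e.g. "checksum/secret") makes Kubernetes roll the pods whenever
	// the secret content changes.
	fmt.Println(secretChecksum(data))
}
```

Whether anything wires this digest into the pod spec is up to the user's tooling; the driver's phase one, as described above, deliberately does not restart workloads.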
F
Yeah, that makes sense. With the agent injector we have the ability to send signals if we share the process namespace, but I don't think that's possible, since the CSI pod and your app pod are different. So yeah, I guess the only way would be rescheduling or watching the file system. So thanks.
A
Okay, so that's the end of everything that we have tagged for discussion.
A
Okay, he's having some difficulties with it, yeah, his audio. Okay, I would say, Rajon, if there's anything that you want to say, you can just maybe update the Google doc.
C
When we were speaking about the reconciler... so I had a question about the reconciler, right? When we mount the secret into the pods, like what Kubernetes does today, does it mean that the pods, or the applications, will be able to consume the secrets as and when we update them, or would they need to restart to consume the secrets? How does that work?
B
So, I mean, if the pod, the workload pod, is actually watching the file system, then it would automatically refresh its config based on changes in the file system, right? It would see that the file it was watching has changed, and then it would just read the new credentials and re-initialize on its own. Other than that, the only other way possible, if you don't have file watches and you need to consume new credentials, is to do a rolling update of your deployment, so that new pods get created, and they will obviously contain the new mount, so they would just start fetching the new credentials.
C
Yeah, Azure, sorry. In the Azure provider you're using pod identity, right? So how does that work today?
B
So for pod identity, the way we do it is we configure custom resources, and the user provides which managed identity to use for the pod. The usePodIdentity setting is just part of the generic parameters field in the SecretProviderClass. So we send the entire SecretProviderClass to the provider, the provider unmarshals the whole object, and from there it knows that this pod wants to use pod identity, and then it requests the token for the particular identity from the pod identity components.
E
Oh, okay, yeah, right. For the Google provider, what we currently do is obtain, through the Kubernetes API, a Kubernetes service account token, and then we trade that with the GCP APIs for a GCP identity. So, similar but different. And so the CSI driver enhancement would make it so that the Kubernetes service account token is directly passed to the CSI driver, so that our plugin would not have to talk to the Kubernetes API for the service account token.
A
If you would like to be a moderator, or you want to take notes, feel free to sign up for that. And with that, we can go ahead and end the meeting, unless there's anything else we've missed that anyone wants to quickly chat about.