B
A
Sorry about that, everyone, we'll get better at this every other week. Welcome. This is July 23rd, so this is the second CSI Secret Store call. Thanks for joining, and I think we've got some new people who've joined, which is awesome, so I want to kick off with a round of intros.
A
C
We'll go down the list here: I'm Tommy Murphy, I'm from Google, working on the Google Secret Manager CSI plugin for this driver. Colin?
D
Hi everybody, I'm Colin Mann. I work with Tommy at Google. I'm just getting ramped up here, so I'm just attending this meeting to catch up. Welcome.
E
B
F
G
A
Gibson, I'm one of the senior PMs here at Microsoft, managing our secure upstream projects, this being one of them. Perfect, I think we've got everyone, right? I will moderate and I will attempt to take notes again.
E
Yeah, so last week we released 0.0.12 for the CSI driver. There were a few notable improvements that we added: basically a separate reconciler controller that would create the secrets, instead of having the secret creation as part of NodePublishVolume. And one of the major bug fixes that we had was basically honoring the context when invoking the provider.
E
Kubelet has a context timeout of about 2 minutes 3 seconds when trying to mount the volume, and when invoking the provider we weren't setting the context. So we added that, so that if the provider doesn't respond within that stipulated time, the mount would actually fail and it would be retried the next time.
E
Also with 0.0.12, we deprecated an old option of configuring the volume attributes as part of the pod spec, and this has been added in the release notes. I think Tommy added it for the Google provider plugin, and they're already using the secret provider class, but for HashiCorp maybe we also need a doc update, just to remove references to providing the values in the pod spec.
B
I think what we need to do in the future is to make these deprecations a bit more obvious, at least a few releases ahead of when we actually deprecate it. We need to start making it more obvious, maybe as part of a release note, saying that we're going to deprecate something a few releases from now. Yeah.
E
Yeah, also with this release we switched to using the GCR repo for hosting the images. Previously it was in Docker Hub, but now we have moved to GCR, and the way it works is there's a docker directory in the driver repo where we update the image version.
E
Once the image version is updated, there's a prow job that automatically builds the image and pushes it to the staging repo, and then there's a manual process to promote the image to the prod registry. I'm working on a doc for the release guidance so that everyone knows how it's done.
B
By the way, if anyone wants to be part of the release train or release team, please raise your hand. We always need more help there, and more eyes are always welcome.
A
All right, so we've got the next milestone. Looks like we're looking for more contributors, so here's the milestone. Rita, is this all the issues that need to be burned down for this milestone? Is that what I'm understanding?
B
Yeah, so initially I went through all the issues in the backlog just to save some time from this call this round, but next time when we do another backlog grooming we should do it as part of this call. We did it ahead of time this round just to save everybody the time, but yeah.
B
As you can see, there's a lot of good first issues, and some of these already have PRs in progress.
A
Yeah, and then what I'll probably do, I'm just looking here, so yeah, we don't have a project board, so I'll probably just put a simple kanban board on this so we can track this as well.
C
Okay, sorry, I have a question there: that milestone is just going to be, like, another point release, not like for v1? Like it's just the next one?
B
A
Now that's a good point. Tommy, maybe does it make sense to kind of have a semantic type of versioning here?
B
A
E
Yes, okay, I did that, right. So in 0.0.12, like I said previously, we fixed the issue by setting the context for the provider calls. Let me see if I can find it here. What I was thinking was: maybe we can also try and propagate the context timeout to the provider, so that the provider can just automatically use a context with that particular deadline when it makes calls to the external secret store.
E
Right now, for the Azure provider, what I've done is I've added a default context timeout in the provider, but there's no way to configure that. If we want to do that through the driver, we basically need to add another flag, apart from the four flags that we already support, that would just be context-timeout=, and then the driver would determine what the timeout should be and just invoke the provider with that.
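As a rough illustration, here is a minimal Go sketch of what a configurable timeout on the provider side could look like; the -context-timeout flag name, its default, and the helper function are assumptions for illustration, not an existing provider interface. The idea is that the driver would pick the value and the provider would wrap its calls to the external secret store in a context with that deadline.

```go
package main

import (
	"context"
	"flag"
	"fmt"
	"time"
)

// fetchAndWriteSecrets stands in for the provider's real work: fetching objects
// from the external secret store and writing the contents to the target path.
func fetchAndWriteSecrets(ctx context.Context) error {
	select {
	case <-time.After(1 * time.Second): // simulate the fetch-and-write work
		return nil
	case <-ctx.Done():
		return ctx.Err() // deadline exceeded or cancelled
	}
}

func main() {
	// Hypothetical flag, in addition to the flags the provider already accepts;
	// the driver would determine the value and invoke the provider with it.
	timeout := flag.Duration("context-timeout", 110*time.Second, "deadline for calls to the external secret store")
	flag.Parse()

	ctx, cancel := context.WithTimeout(context.Background(), *timeout)
	defer cancel()

	if err := fetchAndWriteSecrets(ctx); err != nil {
		fmt.Println("provider failed:", err)
	}
}
```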
E
H
E
Right, so I mean there was an issue a user had posted on Slack. What they were seeing was: the provider call was basically timing out, and it turned out that the issue was they had a network policy configured on the cluster, because of which there was no egress traffic, and there was no context in the exec command used to call the provider. So what happens?
E
So then, the next time kubelet tries to call the driver for NodePublish, it checks if the volume is already mounted, and if it is already mounted it does not remount. So it just responds back saying it's already mounted, and because of that the pod starts up without the file in the path, and if they are using the sync secret feature then the secret does not get created, because NodePublishVolume did not create it in the old release.
E
So what we did as part of this PR is, instead of exec.Command, we just switched to using exec.CommandContext, so that we include the kubelet context in the provider call, and if the context times out then the process is killed and there is a context timed out error. So in that case we successfully unmount, and then the next time the driver invokes the provider we remount the whole thing.
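For reference, a minimal Go sketch of the change being described, assuming a hypothetical provider binary path and flag arguments (this is not the driver's actual code): exec.CommandContext kills the spawned provider process once the kubelet-supplied context expires, so the mount fails and gets retried instead of hanging.

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// callProvider invokes an external provider binary. With exec.CommandContext
// (instead of exec.Command), the process is killed as soon as ctx is done, and
// the timeout is surfaced as an error so the volume mount fails and kubelet
// retries NodePublishVolume later.
func callProvider(ctx context.Context, providerBinary string, args ...string) ([]byte, error) {
	cmd := exec.CommandContext(ctx, providerBinary, args...) // previously exec.Command(...)
	out, err := cmd.CombinedOutput()
	if ctxErr := ctx.Err(); ctxErr != nil {
		return out, fmt.Errorf("provider %s: %w", providerBinary, ctxErr)
	}
	return out, err
}

func main() {
	// kubelet's mount timeout is roughly 2m3s; the driver passes that deadline down.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute+3*time.Second)
	defer cancel()

	// The binary path and arguments below are illustrative only.
	_, err := callProvider(ctx, "/etc/kubernetes/secrets-store-csi-providers/example-provider", "--attributes", "...")
	if errors.Is(err, context.DeadlineExceeded) {
		fmt.Println("mount failed, will be retried on the next NodePublishVolume:", err)
	}
}
```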
E
So this context is configured for the driver using the kubelet context, and now what we want to do is propagate whatever that timeout is to the provider as well. For the Azure provider today, the default that I've set is one minute 50 seconds, just so that it can finish all the operations of fetching from the external secret store and also writing the contents to the files.
E
G
E
Is that pretty much the only action, right? It is, so I mean the provider usually invokes a lot of external calls, right, and it's always easier to embed the context with the timeout in those, so that it responds back within the time. Otherwise, most of the time it's just a background context, and the call just carries on until the process gets killed.
E
C
Yeah, this seems like a one-off, and it would provide a little bit more context, like in our logs we'd just get deadlines being too short for requests to finish, things like that.
E
A
All right, that's the context. Replacing the end-to-end test suite with Ginkgo.
E
So I added that on behalf of another contributor who created the issue and also has a PR open. Today the end-to-end suite is using bats, and what we want to do is move from that framework to either the Ginkgo framework, or something similar to what Cluster API does.
E
But again, that's using the Kubernetes client rather than just running bash commands. So the PR that's open switches to using Ginkgo, but it still continues to use kubectl apply, kubectl create and so on; basically it's just doing an exec.Command. I just want to bring this up so that we can have a discussion on what we think would be the right way to move the e2e test suite to a different framework.
E
I think if we move it to Ginkgo using the Kubernetes client, similar to how Kubernetes does it, then it'll be easier for new users to add e2e tests, and that way we can also have complete coverage for all the new features being added to the driver.
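As a point of comparison, a minimal sketch of what a Ginkgo-based e2e test using the Kubernetes client could look like, rather than shelling out to kubectl from bats. The namespace and label selector are illustrative guesses, not the driver's actual deployment labels, and this is not the code from the open PR.

```go
package e2e

import (
	"context"
	"testing"

	. "github.com/onsi/ginkgo"
	. "github.com/onsi/gomega"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

var clientset *kubernetes.Clientset

func TestE2E(t *testing.T) {
	RegisterFailHandler(Fail)
	RunSpecs(t, "secrets-store-csi-driver e2e")
}

var _ = BeforeSuite(func() {
	// Build a client from the local kubeconfig instead of exec'ing kubectl.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	Expect(err).NotTo(HaveOccurred())
	clientset, err = kubernetes.NewForConfig(config)
	Expect(err).NotTo(HaveOccurred())
})

var _ = Describe("secrets-store-csi-driver", func() {
	It("has the driver pods running", func() {
		pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{
			LabelSelector: "app=secrets-store-csi-driver", // illustrative selector
		})
		Expect(err).NotTo(HaveOccurred())
		Expect(pods.Items).NotTo(BeEmpty())
		for _, p := range pods.Items {
			Expect(string(p.Status.Phase)).To(Equal("Running"))
		}
	})
})
```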
B
Yeah, so his work-in-progress PR definitely is, I think, an enhancement to what we have now, but I would like to see a consistent way. As you said, using kubectl apply, it seems like it's trying to do it in different ways.
E
B
And hopefully adding this with Ginkgo will help make it easier for users to add e2es.
C
I was just gonna say I don't have much context yet on the testing frameworks here, so I have nothing to add.
C
B
I mean, so maybe the ask is, like, maybe folks on the call could take a look at this PR and chime in, kind of do a comparison between what we have with the bats e2es and this PR, and see which one you would like to work with. It's actually a good way to see, you know, if this actually enhances the contributor experience.
C
Is that your main hesitation, I guess, on this right now, just making sure that there's more consensus that it's...?
B
B
A
E
Yeah, I think so. Tommy had posted the link last week for this, and then I saw that the KEP has been merged.
E
I got a chance to review the KEP and I think it's a very good addition, and also it's something that we can reuse for the secret rotation as well, because there's going to be a new remount... I mean, the flag is still not determined, but there's going to be a new flag which will make the kubelet invoke NodePublishVolume at every periodic interval, and that is something that we could piggyback on for the secret rotation feature as well.
E
C
Yeah, I think I'm still reviewing the rotation, which is like the next thing, yeah. I just want to keep this in mind because it will save our plug-in... Our plug-in has to get the service account for the pod, and that takes a number of round trips and a number of permissions and stuff that this would save, so I'm just making sure that they can kind of converge in the future.
B
Okay, we should definitely add this to the rotation design doc, if it's not already there, as something to keep track of in the future.
A
Okay, so follow-ups from last sync: secret rotation feature design review.
E
Yeah, I think the doc is being reviewed right now. Brian couldn't make it because he had another commitment.
C
E
But I believe he has a branch which has the changes, so maybe next time we can... I mean, I think we probably have to go down that route, because the KEP, like I said, is probably only going to come out in 1.20. But also, if there's a way possible, we need to design it in such a way that when 1.20 comes, and from 1.21 onwards, we could just have users move to using that, instead of having the overhead on the driver for the rotation.
A
All right, vault provider: update the vault provider to use the latest release.
G
Probably good for me to give a quick status generally on the vault provider. So yeah, for a few months now we've been really, really strapped for bandwidth, so stuff is backed up a bit, and it kind of came to a head when we were doing the webinar, because we were going through some tutorials with 0.0.5 and there are actually issues in what had been brought in at that revision.
G
Jason and I were talking about it, and it needs a little bit of a revamp, because some of the stuff that was put in to handle KV v1 versus KV v2, like all that, can be handled cleanly if we just use the Vault API package. Right now this is all being done manually with individual gets and stuff, and we kind of want to get to the next level.
G
On this thing, clean that up, which will kind of clean up the latest release, get it stable and good, and we can have that be our latest. That's sort of our nearest-term plan, and then kind of burn down some of the open issues.
G
And, you know, see what's hanging out there, to that end. So I think y'all met Tom; Tom just started and he'll be helping out with that, along with Jason, so we've kind of stabilized on the team a little bit and are carving out some actual dedicated cycles for CSI. I really can't speak yet to whether just using the latest release is useful enough, or to things like the helm chart or Windows support; I would need to discuss that with the team as far as timelines and feasibility. But yeah, so that's kind of the status of where we're at.
G
I noticed there was an issue a day or two ago, like, hey, what's the status of this project? So yeah, I did want to make clear that we kind of want to get it to a... I don't know, longer term, like what the steady load of resources we will apply and how we'll track it will be, but we definitely want whatever is there to be stable and production-usable; within the capabilities we're offering, they're solid. So that's our immediate-term goal.
B
Thanks, yeah, that makes sense. I recall you were talking about cleaning up the implementation a bit a while back, so yeah, this is probably a good time to do that. If there's anything you need, you know, in terms of integrating with the latest CSI release, and any integration issues you see, please definitely reach out to the team here. We'll definitely want to address any of the integration issues you see.
G
Yeah, I think, as I recall, most of it was fairly, like, literally cleaning things up and stabilizing. There was one open, almost design-level decision that was hanging out there. I think you and I had both commented on it; I can't remember the detail, but it had to do with where a secret was rendered, because you have the secret name and then the value within the secret, and there are multiple ways to denote what you're fetching and where it's going.
G
B
Right, right. I think how you want to present the metadata, or, you know, the parameters that users can provide in the secret provider class, is up to the provider, right? As long as, you know... obviously we will need to update the e2e to make sure we're testing the right objects now, and the right behavior, the updated behavior, and notify users of the changing behavior in the next release.
B
I think it would be helpful, you know, similarly to the driver release planning, to also do that for each of the providers, so that at least people know, like, hey, this is the scheduled release and here's what's coming. That way it's more visible to users.
C
One thing I have noticed is, I've been reviewing some of the newer features, like the Kubernetes sync and some of the rotation stuff.
C
It seems like the provider-specific notations of which objects, and where they go, and the path, that knowledge is used by the sync thing, where right now our driver has a different way of denoting the secrets and the files.
C
And we'll either need to update that to more closely match the stuff that the Kubernetes sync feature uses, or address some of the schema differences somehow. But I haven't fully formed the thoughts on that yet; it's just something that I've noticed.
B
Right, so you're talking about the sync object property, right, on the secret provider class?
B
Yeah, yeah, I mean, if we have time we should definitely address this, but just so we're on the same page, I just want to make sure we're talking about the same thing. So I guess, can I share a screen? Is that cool?
B
B
Oh, I think you have to enable sharing. I think, at the bottom, if you see permissions... so Phil, you will have to do it because you're the host.
A
Yeah, I just bumped you up to co-host, so...
C
Like in the secret provider class?
E
B
Okay, wow, it is taking a long time to load.
C
Right, but it seemed like the reference to where you have object name and then secret alias.
B
B
B
Obviously, this really should just say the name of the file; however, for a different provider this could be, like, file name or whatever, right, as long as it matches the logic within the plug-in, to make sure it's the same. Yeah, great, cool, hope that was helpful.
C
B
That's a good, very good point, though.
G
B
Yeah, speaking of the Google provider, I think what I would love to eventually do is have like a table here in the readme that has all the providers, and then a check mark next to all the capabilities as well, so people can see, like, okay, which feature is available in which provider, and then a link to, say, an example or readme or something. That would be super helpful for users.
C
Yeah, like where our status is kind of right now: it works, but it hasn't been tested too much, and there's no integration.
C
We're working on doing, like, a published Docker image, where right now with the repo you have to build it yourself, which...
B
Okay, yeah, that makes sense. Okay, cool, so yeah, I guess once you've done that, we do like a code review, right, and then, speaking of e2es, you'll have to go through those, so yeah. Once that all happens, then we can get the provider added to the readme.
C
And then I think we had maybe some of the same issues as the vault provider about, like, bootstrapping auth, or like workload identity, or the... So there may be some code on the GCP one that the vault one can, like, piggyback on.
B
Okay, yeah, anything you can share, like on Slack, and, you know, tag them and folks, right, that'll be super helpful.
C
B
A
That is all the agenda items. We've got about 10 minutes left for the call; any other announcements, or anything else anyone wants to chat about while we have this time?
H
Yeah, I want to bring one issue up really quick.
E
In the last community meeting, we discussed optionally installing the ability to sync Kubernetes secrets, right, like in helm. So one thought around that was, if we do do that, do we enable it by default and just provide a knob for users to disable it, or do we disable it by default and, if they need it, they explicitly need to enable it? Because one concern with disabling it by default is what happens when users install it with helm:
B
E
sync secret is not enabled, and then when they try it out it's going to fail, and then they'll probably open an issue, and then, after we tell them that you need to set this flag to enable that feature, they might try that. So what do we think is the best way to do that?
C
E
C
Yeah, I've actually never used helm, so I'm digging into that now too. But yeah, I think HashiCorp wasn't on the last meeting, so just to recap:
C
My concern was that this feature allows you to copy secrets into Kubernetes secrets, but that could be, like, an escalation of privilege, or you might not want your secrets copied into durable storage that isn't your, like, Vault or Key Vault or GCP, and it gives the driver permissions over your secrets, to read and write all secrets in the cluster, which you might not want to grant to this for some reason. So that was just kind of why I filed the issue and started looking into it.
A
Yeah, I haven't seen the spec on that. After the sync, do the secrets persist in Kubernetes, like, as secrets?
B
B
This was requested by a lot of users because they didn't want to change their application code to read from a file. A lot of applications, you know the 12-factor pattern, right, they actually just use environment variables, and they're used to reading from Kubernetes secrets or whatever, right.
B
So I definitely agree that we should not expose this feature by default. I feel like it should be an opt-in feature, given that we want the default to be the most secure and users should opt in. Having said that, I also think the initial concern about backward compatibility is a big one.
G
If I could ask a question, as I'm just kind of thinking about this now: yeah, that sort of rounding down to off by default seems like intrinsically the right way. I'm a little... I'm not quite following the backwards compat. Like, we don't have secret syncing now, right? So what's the backwards compat issue?
B
So currently the Key Vault provider is the only one that supports the syncing aspect; however, the feature is on the driver, right. So right now we're basically trying to turn it on and have it configurable in the driver, so that there is a way to opt in and opt out. The current behavior is we give elevated privilege to the driver assuming that users want that feature on, which is problematic. So this is why Tommy is looking at making it configurable.
G
C
A
All right, just a couple of minutes left, but I think we got through everything. This was a good meeting. Again, I did some real-time note-taking; if I missed anything, feel free to update or add to what I put out there.