From YouTube: Secrets Store CSI Community Meeting - 2021-05-27
A: All right, hello everyone, thank you for joining. It is May 27th, and this is the Secrets Store CSI community call. Just to let everyone know, for those that may be new: this call is under CNCF governance for the code of conduct, so to sum it up, just be respectful to everyone.
B: Kellen McAvoy — I'm a software engineer at Electronic Arts, really interested in some of the use cases of this project, and also a maintainer of a similar project. So I thought I would discuss some use cases here and, I guess, take a look at this project.
A: All right, thanks for joining. As usual, if you have the doc, go ahead and mark yourself as an attendee if you have not done so. Let's see — I don't think we have any announcements, so let's go ahead and jump right into the agenda.
A: Okay, thanks — you've been awesome on that, so thank you so much. All right, so this is taking the notes for us. All right, let's jump into it, Kellen! Let's talk about it: we have you on to talk about similar project goals, so go ahead and discuss what you want to talk about. Go for it.
B: Yep, so this is kind of related to what was previously owned, or governed, by GoDaddy — kubernetes-external-secrets — but this is essentially a Golang rewrite using controller-runtime and other practices. It seems like these kinds of projects have very similar goals, and so we were interested in aligning where we can. I think our main use case was around...
B: ...having this kind of secret provider resource, and then a secondary resource, an ExternalSecret, that can be deployed by, say, users in a Helm chart and packaged separately — where maybe an administrator of a namespace or a cluster manages the secret provider, and then users use this ExternalSecret resource. So it seems like it has a lot of alignment with what...
B: ...the CSI driver and the Secrets Store project are working on. I think there are a couple of issues that we've added and opened up, and so we're interested in thoughts around that and thoughts on alignment there.
B: So I guess, in terms of a question: one thought we had was whether there was any interest in a separate CRD that just uses the secret provider — because right now it seems the main use case here is around the CSI driver in a pod volume, but that can be a bit trickier to use for environment variables or Docker pull secrets, where you now need to go back to that secret provider resource.
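For context, the pod-volume use case being described looks roughly like this — a minimal sketch, where the class name, vault name, and object names are made up for illustration, and the `parameters` shown are specific to the Azure provider:

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: app-secrets            # hypothetical name
spec:
  provider: azure
  parameters:
    keyvaultName: my-vault     # hypothetical Key Vault
    objects: |
      array:
        - |
          objectName: db-password
          objectType: secret
---
# The secret is consumed as a CSI volume mount on the pod:
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: secrets-store
          mountPath: /mnt/secrets
          readOnly: true
  volumes:
    - name: secrets-store
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: app-secrets
```

Consuming the same values as environment variables or image pull secrets is what currently requires the extra step back through a Kubernetes Secret.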
D: ...the mount for the syncing as a Kubernetes secret. I think, as the CSI driver project community, we have been thinking about it — as in, separating the two, maybe providing an option to sync as a Kubernetes secret without relying on the mount — but we have not really gone into the design; we haven't deeply gone into the design. Looking at it from the top...
D: It feels like, if you have to support it, it's more like looking at the SecretProviderClass, which is the source of truth for everything, and then basically having watches on it; and then on create events, instead of relying on the pod mount, we would just call the provider, get the content, and then sync it as a Kubernetes secret. We can do that because we have moved away from using binaries for providers and have instead started using gRPC — so that gives us the flexibility to do it.
D: I think this is where maybe we can see if we can do something with the external-secrets project, right? Maybe there is a way we could use the same SecretProviderClass and support syncing as a Kubernetes secret and environment variables. That way, users can use one of the two with just a single custom resource, instead of having multiple custom resources that they have to handle.
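The existing mount-driven sync already expresses this shape on the SecretProviderClass via `secretObjects`; a sketch (names hypothetical, Azure-provider parameters assumed):

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: app-secrets               # hypothetical name
spec:
  provider: azure
  # Mirror the mounted content into a Kubernetes Secret,
  # which can then back env vars or image pull secrets.
  secretObjects:
    - secretName: db-secret       # Kubernetes Secret to create
      type: Opaque
      data:
        - objectName: db-password # file written by the provider
          key: password           # key inside the Secret
  parameters:
    keyvaultName: my-vault        # hypothetical Key Vault
    objects: |
      array:
        - |
          objectName: db-password
          objectType: secret
```

Today this sync only runs after a pod mounts the volume; the idea discussed here is to drive it from watches on the SecretProviderClass instead.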
A: Any other thoughts from anyone else? I guess, Kellen, what's your ultimate goal here — to see a consolidated project, with this being a certain use case within it, using the secret provider class?
B: Yeah, I think the consolidated project was part of it, because I think some of the users of the other external-secrets operator also have the CSI driver use case, where they have some secrets they don't want to store as Kubernetes secrets. And so today, to do that...
B: ...you have to use both the external-secrets operator and the CSI driver for those two kinds of secrets. Got it, okay — but yeah, aligning on the secret provider seems to make sense, at least as the first action there.
E: I have a quick question: in order to read the secrets from AWS Secrets Manager or Azure Key Vault, do you need to configure some credential, like a database credential, for the external-secrets part?
B: Yeah, so there are a couple of different options: you can either use pod identity if you're installing the external-secrets operator — like using IRSA, or similar for other cloud providers — or, in the secret provider resource for the external-secrets project, you can configure a secret reference, like there are IAM credentials and an IAM role in this secret.
F: Just to add something here really quick: the demo that I'll show later might scope that down a little bit, with the token request feature that is going in. So you can basically scope a pod to only get credentials — or only get the things that its service account, or whatever credentials that service account can be exchanged for, can do. But yes, you're correct there.
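The token request feature referenced here is configured on the CSIDriver object; a minimal sketch (the audience value is illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: secrets-store.csi.k8s.io
spec:
  podInfoOnMount: true
  # kubelet mints a service account token scoped to the mounting pod
  # and passes it to the driver on each NodePublishVolume call.
  tokenRequests:
    - audience: "provider-audience"   # illustrative audience
      expirationSeconds: 3600
  # Ask kubelet to periodically re-call NodePublishVolume so the
  # driver can refresh short-lived tokens.
  requiresRepublish: true
```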
A
Okay,
good
stuff,
what's
the
plan
going
forward
here?
Do
we
need
some
of
the
maintainers
kind
of
go
a
little
deep
on
this
project
and
kind
of
understand
what
the
integration
points
me?
Maybe.
D: Yeah, I think that sounds good. We can probably get started on a doc, so the maintainers from CSI and also external-secrets can take a look at each other's projects first, and then maybe collaborate on a common doc. And if we can get that, then maybe we can review it before the next community call.
C: I guess not, right? Okay, so I will just summarize it. We have already discussed it internally with the Microsoft side of the CSI maintainers — Anish and Rita — and we just want to get the whole community's perspective on it, and on how we go about implementing it.
C
So
what
currently
happens
is
that,
if
specified
secrets
do
not
exist
in
key
vault,
and
I
only
verify
this
with
azure
keyboard-
that's
what
our
primary
interest
in
so
the
volume
amount
fails
and
the
container
start
fails
right
and
there
is
a
warning
in
the
bot
description
secret
not
found.
So
clearly
you
get
a
clear
404
distinction.
What
what
exactly
happens
for
which
secret.
C
So,
in
that
case,
user
will
be
able
to
say
that
if
secret
is
not
found,
just
continue
mounting
all
other
secrets
leave
it
to
my
application
to
deal
with
empty
or
missed
secrets
at
runtime.
C
So
I
listed
three
options
that
I
can
see
what
actually
csi
driver
can
do
about
in
this
case,
when
the
secrets
are
missed
right,
so
we
can
put
an
empty
value
and
the
status
we
can
put
some
user
defined
value.
Basically
default
user
can
say
it's
it's
another
parameter,
of
course,
but
user
can
say.
I
want
to
have
some
default
value
instead
of
missed
secret
and
maybe
not
create
a
file
or
mount
not
sure.
If
that's
possible.
C
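To make the three options concrete, here is a purely hypothetical sketch of what such opt-in parameters might look like on a SecretProviderClass — none of these fields exist in the driver today; this only illustrates the proposal being discussed:

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: app-secrets
spec:
  provider: azure
  # HYPOTHETICAL field: continue mounting the remaining secrets
  # when one is missing, instead of failing the whole mount.
  failurePolicy: BestEffort      # hypothetical; default would stay Fail
  parameters:
    objects: |
      array:
        - |
          objectName: optional-flag
          objectType: secret
          # HYPOTHETICAL per-object choices: write an empty file,
          # a user-supplied default, or no file at all.
          missingValue: "disabled"
```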
G: Yeah, I'm generally supportive of this.
G: And just that it be opt-in behavior, because of the difficulty of debugging — I think the fast fail helps people find out "I didn't have the permission right" or "I didn't get the name right." But yeah, for more advanced use cases, being able to skip over a secret that's not there, and have it become available later in the pod's lifecycle, is probably useful from a reliability...
G: ...standpoint — you don't necessarily want to block the starting of your application for something that may only affect, you know, 10 percent of requests to it. That kind of thing.
G: Exactly. So yeah, I haven't dug much into whether it's a pod feature or a driver feature or a plug-in feature, that sort of stuff, yet — but yeah, I agree it's a valuable enhancement.
C: That's great — that's great to hear. Any other thoughts or questions?
D: No — I think, just to add: the way the secret gets refreshed is based on the rotation reconciler; right now we have it set at two minutes. I think the only thing is that it depends on what the external secret store recommends, because I had a conversation with the Key Vault team internally, and one of their recommendations was that two minutes is rather too aggressive — they would have liked something like four hours, because rotation is not a common scenario.
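The rotation settings mentioned are driver-wide flags, surfaced in the driver's Helm chart roughly like this (the two-minute default is the value under discussion):

```yaml
# values.yaml for the secrets-store-csi-driver Helm chart
enableSecretRotation: true
# Applies to every mount the driver handles; the Key Vault team's
# suggestion above would mean something like "4h" instead.
rotationPollInterval: 2m
```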
G: Yeah, I think we might have an issue about that, where it's a flag, so the interval applies to the entire driver — all secrets, or SecretProviderClasses and mounts. But I think in the demo that's coming up, there's also a mechanism on the CSI side for re...
G: Was it remounts? I think.
G: Yeah, but that might eventually be configurable out of the box per mount, which is a little bit more granular than per driver.
G: The refresh interval is set at the driver level currently, so that two-minute configuration is configured at the driver — it applies to every mount that the driver handles.
C: Okay, okay, I missed that part — got it. But for the potentially new parameter that says "best effort enabled": can it be set per secret object, or on the entire SPC, or is it for the entire driver?
G: ...the same knowledge of which secrets, at the start of a mount operation — so for per-secret configuration: I think basically any per-secret configuration has to be implemented by the providers currently, if that sounds right.
D: Yeah, that makes sense. I had a brief chat with Tommy yesterday, but I think, at least if we want to add this, initially we want to do it at the driver level, so that each provider doesn't have a different way of doing it — because if we do provide it at the secret level, then each provider would have a different flag for configuration, which can be difficult for users.
D: So I think a good starting point would be to have it on the SecretProviderClass at the driver level, so that we can still make sure of the driver validation and how the code path works, and then also impose how providers handle some of these scenarios — because the sync-as-Kubernetes-secret and rotation depend on the gRPC response from the providers. So having it at the driver means we would be able to harden that interface, make sure the providers comply with it, and then the driver can handle the different scenarios.
G
It
it
did
sound
like
that
there
was
yeah
like
I
think.
The
way
many
of
the
providers
are
written
currently
is.
If
there
are
any
errors,
it
returns
an
error
response
to
the
driver.
G
So
getting
like
you
know
like
one
of
four
secrets
to
actually
like
it
seems
like
in
the
issue
as
you're
describing
it
you
might
want
partial
secrets
rather
than
just
like.
Oh,
there
was
a
partial
failure
in
the
mount.
The
pod
can
still
turn
off.
G
There
might
be
some
difficulties
there,
just
with
the
that,
like
I
don't
believe
any
of
the
providers
currently
will
return
partial
responses
to
the
driver,
so
it
may
be
like
yes,
there
was
a
problem
doing
this
mount,
but
the
pod
will
still
turn
off.
I
think
it's
probably
more
complicated
to
get
the
drivers
to
say,
there's
a
problem
in
this
mount,
but
some
of
those
secrets
are
available.
D
Yeah,
I
think
if
we
still
rely
on
the
mount
for
the
sink
kubernetes
secret,
like
one
option
is
once
provided,
is
move
to
driver
writing
the
file.
If
this
optional
feature
flag
is
set,
then
the
providers
would
just
send
an
empty
data
field
for
those
particular
secret
files
right
and
then
the
driver
would
have
everything
without
any
error.
D
It'll
just
go
right
whatever
it
is,
so
I
think
what
all
I
get
pointed
out
initially
like
having
an
empty
file
or
maybe
a
default
value
like
that,
would
just
work
so
that
from
the
driver
perspective,
it
still
thinks
all
the
files
provided.
Whether
given
by
the
provider,
is
enough,
and
then
it
will
just
go
right
that
and
then
the
sync
path
which
relies
on
the
mounted
files
will
just
read
those
and
sync
that
kubernetes
secret
as
well.
G: One other thing I actually want to mention here: with this sync period, we probably also want to think about throttling on the provider side. I mean, two things can happen here, as we were discussing: if we are setting our sync period maybe too short, or something like that, we could end up throttling the providers; or — the second scenario I can think of — if the workloads being run are using this thing, and these pods are too short-lived, then every time they come up and try to get the secrets, it can also get throttled, depending on the provider's specification. And that is probably beyond our control.
G: I think it would probably be nicer at the per-driver level, yeah. Okay — and I think the way we've done these in the past is just a concrete proposal doc that we review, and then we distribute the implementation.
A: No, I was going to say — yeah, let's see what we can come up with as far as documentation, and then we can work offline with you on helping get that proposal doc going.
D
You're
not
on
the
slack
channel
for
the
on
the
kubernetes
slack
it'd
be
great
if
you
could
join
so
that,
like
we
can
work
on
the
slack
channel.
So
if
you
have
any
questions,
all
the
project
collaborators
can
help
answer
it,
and
then
that
will
be
a
more.
G
Community,
oh
and
I
think,
on
timelines,
we've
started
moving
to
month
later
releases
right,
and
I
think
this
was
something
discussed
in
the
last
meeting
that
we're
trying
to
move
to
a
monthly
release.
And
there
is
an
issue
open
for,
like
our
versioning
plan.
D
G: I think the proposal is to do the second Wednesday of each month — that's the current proposal. Okay.
D
Yeah,
I
think
we
can
start
with
the
design
and
then
we
can
decide
which
milestone
it
will
go
to.
I
don't
think
it
will
go
into
the
zero
zero
twenty
milestone,
but
we
can
see
based
on
where
the
design
is
and
how
quickly
we
can
implement
it.
A
Okay,
so
next
steps
yeah
like
we'll
work
with
you
on
design
again
as
inish
mentioned,
join
the
slack
channel.
A
lot
of
async
activities
can
happen
there
yeah
and
then
I
guess
you
know
again.
This
is
a
bi-weekly
call.
A: All right — any other comments, questions, or concerns about the proposal?
A: Okay, no worries. Okay, so we're just waiting on you. Okay — while Micah gets that set up: any other discussions anyone wants to have while we've got some time?
A: Here we go — okay. Yeah, so if everyone can take a look at the PR here, that'll be helpful.
I: Hey, there is one other thing — that other PR that I opened, about debugging. I'm not sure how other contributors are doing it in general, but I found it helpful to sort of have a live-debug feature, so I've added the steps and docs for that as well.
A: That's part of the same PR?
I: Five-six, yeah. I don't believe we have an issue for that; it's just something I found.
F: I think I broke caching — something I'm relying on that I changed.
F: Right — I changed a certificate, and the certificate is cached in an AWS system where I need it to not be cached. So in probably about 10 minutes it'll work, independent of my demo, but I can show what I have. Let's — let's do this.
A
We're
now
starting
to
get
into
looking
at
some
of
the
latest
issues.
So
if
you
want,
we
can
just
kind
of
crawl,
the
the
issue
list
and
just
yeah
that
sounds
great,
so
I'll
talk
with
everyone
and
then
again
you
just
love
to
know
when,
when
you're
ready.
D
So
I
think
there
are
a
couple
of
breaking
changes
that
we're
going
to
be
making.
I
mean
for
folks
on
the
community
called
for
the
next
release.
One
change
is
we're
going
to
disable.
The
our
back
roles
for
sync
is
kubernetes
secret
by
default,
because
we
want
to
keep
it
secure
out
of
the
box.
D
So
just
following
on
the
practice
of
least
privileges,
we
are
going
to
set
the
flag
to
false
by
default
in
the
end.
Charts
so
for
users
who
are
using
the
csi
driver
to
sync
is
kubernetes
secret.
In
addition
to
mount,
they
would
need
to
explicitly
set
it
to
true
when
they
do
a
helm
upgrade
from
next
time.
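After this change, users who rely on the sync would opt back in at upgrade time, roughly:

```yaml
# values.yaml override for the secrets-store-csi-driver Helm chart
syncSecret:
  enabled: true   # re-creates the RBAC rules needed to write Secrets
```

Equivalently, something like `helm upgrade ... --set syncSecret.enabled=true` on the command line.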
D
So
we'll
add
that
to
the
release,
notes
as
well,
and
then
the
550
issue
after
0.0023
release,
the
next
release
could
be
on
v0.1.0
or
0.024,
but
then
that
release
we
are
going
to
set
filtered
watch
secret
to
true.
D: So what this does is: if you're using nodePublishSecretRef — I think it's more commonly used in the Azure provider, for providing the service principal credentials — then setting this to true means that secret has to be labeled with a specific label, which we have added in the documentation for the load test. And the reason we're going to default...
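With filtered watch enabled, a secret referenced via nodePublishSecretRef would need the driver's label, along these lines (secret name and keys are hypothetical; check the project docs for the exact label your version expects):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: sp-credentials            # hypothetical name
  labels:
    # Label the driver's filtered informer selects on.
    secrets-store.csi.k8s.io/used: "true"
type: Opaque
stringData:
  clientid: <client-id>           # placeholder service principal credentials
  clientsecret: <client-secret>
```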
D: As for anything else: I have a PR to promote Tommy to approver. He's been doing a great job as a reviewer — he's been reviewing every PR and contributing a lot to the driver — so I think it's time. So if everyone wants to look at it and then plus-one it: we are hoping to merge that PR this week, so that all of us can count on Tommy to merge pull requests.
A
All
right
we
got
my
plus
one
on
that
tommy
you've
been
doing
great
work
for
us.
I
really
appreciate
it.
Thank
you
all
right.
While
we
are
in
the
prs
and
I'm
still
tap
dancing
for
you
micah
any
other
prs
that
we
want
to
put
some
signal
on
that.
We
need
the
community
to
take
a
look
at.
A
Oh
you're,
working
okay,
let's
just
finish
out
with
the
with
the
pr
any
pr's
anyone
needs
something
some
eyes
on,
etc.
A: Okay, throw that in the chat. And with that — Micah, yeah, you should be a co-host. I'll go ahead and stop my share; feel free to take it away.
F: Sure — do I need to rejoin? It's saying "host disabled participant screen sharing" when I try to share. Okay, one second, let me double-check.
A: Okay, yeah — well, while Micah reboots: just want to put out there, are we planning on anything to showcase for KubeCon North America later this year? Does anyone have anything they think they want to showcase, since the last time we did that awesome session at the last EU one?
F
Great,
so
what
I
did
was
I
just
to
what
you
have
what
you
see
in
pr
471
is
what's
going
on
here.
I
just
had
a
few
modifications,
so
I
needed
to
add,
like
the
token
request
field
to
the
csi
driver,
so
I
just
specified
like
the
audience
and
some
expiration,
and
I
just
use
my
custom
image
in
the
deployment
but
other
than
that
the
damage
set.
So
if
I
get
pods
all
names
spaces
you'll
see,
I've
got.
I
just
created
a
cops
cluster
using
the
eks
distro.
F
I've
got
the
csi
secret
store
driver
here
with
my
custom,
which
is
with
the
code
in
that
pr,
and
then
I
have
this.
I
created
a
just.
This
is
more
just
kind
of
a
proof
of
concept.
Prototype
driver
I'm
playing
around
with
basically
getting
aws
credentials
to
pods
so
basically
similar
to
how,
if
you're
familiar
with
aws
item
rules
for
service
accounts,
exchanging
the
the
pods
token
request
token
for
aws
credentials,
but
right
now
that's
baked
into
the
sdks.
F
But
if
any
change
to
that
procedure
requires
updating
all
the
aws
sdks
and
then
having
every
application
that
needs
that
uses
that
to
have
an
updated
sdk,
so
I'm
experimenting
around
with
okay.
How
can
I
do
this
exchange
of
getting
a
service
account
token,
giving
you
to
applaud
without
necessarily
needing
to
do
that
in
an
aws
sdk
that
needs
to
be
baked
into
a
bunch
of
applications,
so
secret
store
driver
is
really
nice
for
constraining
down.
F
While
I
want
to
use,
I
want
to
get
some
credential
and
mount
it
into
a
pod,
but
I
don't
necessarily
want
to
write
a
whole
csi
driver
for
that.
I
really
like
how
the
like
the
mount
our
pc
just
says:
here's
here's,
the
here's.
What
I
got
from
csi
you
don't
need
to
implement
all
the
csi
driver
methods
and
here's
the
token,
based
on
my
pr
that
cubelet
gave
me
give
me
back
bytes
that
I
can
mount
into
the
into
the
volume.
F
So
this
little
provider
basically
takes
the
request
from
the
pod
or
from
from
the
from
the
driver,
gets
the
token
and
then
calls
for
now.
Just
assume
rolls
web
identity.
So
it's
just
like
a
one
for
one.
Instead
of
doing
it
in
an
application
inside
a
pod
uses
the
aws
api
to
say,
here's
a
here's,
a
jaw
give
me
back
aws
prints.
F
So
what
I
have
is
writer,
so
I've
got
just
the
provider
class
and
it's
just
a
really
simple
configuration
right
now,
just
for
this
prototype,
but
just
to
say,
okay
for
this
kubernetes
service
account
and
in
a
certain
name,
space
assume
this
aws.
I
am
role
and
then,
on
the
provider
side,
it's
a
pretty
standard
service
account
like
the
provider,
has
just
very
minimal
permissions,
unlike
like
say
the
current
aws
secret
secrets
manager
provider,
where
it
uses
the
the
provider's
identity
to
call
secret
manager.
F
The
only
kubernetes
permissions
I
need
are
all
right.
Right
now
are
token
review
and
then
just
listing
and
watching
pods.
The
token
review
is
when
it
gets
the
token.
I
call
token
review
just
to
validate
that.
Okay,
this
is
a
valid
token.
I'm
not
getting.
You
know
fooled
by
someone,
I'm
not
trying
to
submit
a
bad
token
to
aws
or
anything,
I'm
I'm
and
I'm
doing
a
little
bit
of
caching.
So
I
don't
want
to.
F
I
also
want
to
validate
that
this
token
is
correct
and
I'm
not
giving
you
something
out
of
the
cache
that
was
correct
for
a
valid
token,
and
then
I'm
just
listing
pods
so
that
I
have
for
this
internal
cache.
I
can
clean
up
the
cache
automatically,
so
that's
the
only
kubernetes
permissions
I
have
and
I'm
not
actually
even
giving
this
any
this
this
provider
any
aws
permission.
It's
just
doing
some
exchanges.
F
So
the
it's
really
simple,
it's
just
you
know
like
a
normal
provider,
is
dropping
its
socket
in
the
the
provider's
directory
and
it
gets
the
node
name.
Just
I
pass
in
the
known
name
just
again
for
the
for
caching,
so
I
can.
When
I
watch
pods
in
the
cube
api
server,
I
can
scope
it
down
to
just
just
pause
on
my
node,
because
I
don't
need
to
cache
anything
from
other
nodes
and
I
can
show
so
what
I
have.
F
I
don't
know.
Okay,
so
I
don't
have
any
pods
yet
the
pod
that
I'm
gonna
launch
really
quick
is
just
really
simple,
just
an
nginx
pod
just
for
for
demonstration
sake,
but
what
I've
done
is
said.
F
Okay,
this
pod
is
using
that
default
service
account
that
I
specified
in
this
provider
in
the
default
name
space,
so
service
account
name
is
not
even
set
here,
but
it's
just
default
default
and
I'm
you
know
selecting
my
aws
credential
driver
and
I'm
mounting
that
to
root.aws,
so
my
driver
is
going
to
change
that
token
write
the
credentials
file
and
return
it
back
to
the
tsi
driver.
So
if
we
launch
this
pod.
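The demo pod spec being described is roughly the following — the SecretProviderClass name is hypothetical, and the class it names would select this prototype AWS credential provider:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
  namespace: default    # uses the default service account
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: aws-creds
          mountPath: /root/.aws   # provider writes a credentials file here
          readOnly: true
  volumes:
    - name: aws-creds
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: aws-credentials   # hypothetical name
```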
F: ...it should get that role. So it was able to read the credentials from disk and call AWS — GetCallerIdentity is basically the "who am I" API in AWS. And I'll just use head — there's a session token, so I just need to show the first three lines of this file, and that way you won't be able to steal my creds. What's that — the AWS credentials file.
F
Yeah
there,
so
you
can
see
axis
secret
key,
not
exposing
the
token,
so
anyone
watching
this
recording
can't
stumble,
but
it's
a
cool
little
demo.
It's
again.
This
is
mostly
just
like
I'm
doing
just
some
prototyping
and
so
like
I'll,
probably
put
this
on
github
in
like
a
samples
repo,
but
I'm
I'm
playing
around
with
kind
of
credential
management
or
getting
credentials
to
pause
and
making
it
trying
to
make
it
a
little
bit
easier
than
all
the
setup
required
for
imrules
for
service
accounts.
F
So
this
is
just
kind
of
a
step
one
in
that
prototyping,
but
I
I
really
like
secret
store
provider
and
and
just
wanted
to
show
that
that
pr
that
I
have
been
kind
of
being
used
and
kind
of
what
it
could,
what
it
could
be
used
for
in
one
example,
you
can
see
a
lot
of
other
examples
where
that
could
be
used
to
do
things
like
in
this.
The
secret
aws
secret
manager
case
like
get
credentials
for
that
pod,
or
do
some
other.
D
F: I believe it's the kubelet, based on the token lifetime — it's like 70 percent of the token lifetime. I haven't tested the republish yet; it's very much prototype, proof-of-concept level at this point, so I haven't checked to see if that gets republished yet, but it's pretty cool that everything pipes down and works.
A: All right, cool — yeah, we had a pretty good session today. I think we went through everything: we highlighted the issues and the PRs, I think everyone's in sync with that, and we've got some tasks going forward here with Oleg.
A
Call,
okay
I'll,
take
silence,
there's
no
all
right,
so
our
next
meeting
will
be
june,
the
10th
in
the
next
couple
weeks.
So
we
look
forward
to
seeing
you
there
and
hopefully
everyone
gets
a
chance
to
recharge
the
batteries,
enjoy
your
memorial
day
weekend.
Hopefully,
you'll
take
your
friday
off
as
well
with
that
we'll
go
ahead
and
end
the
meeting
and
we'll
see
everyone
in
the
next
couple
weeks.