This may not be possible depending on your deployment. You may not have the code to edit, or it may be prohibitively expensive to implement these changes. And if you're targeting deployments against multiple secret providers, this effort would need to be duplicated across every single secret storage system.
The third option is that you could use a sidecar to fetch and write these secrets. The sidecar may be injected using a mutating webhook here. The identity that is being used to access the external secret store would still be the pod identity, but having a sidecar and an additional mutating webhook may add operational complexity.
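As a rough sketch of that sidecar pattern (the image names, mount paths, and container names below are illustrative assumptions, not from the talk; in practice the sidecar container is often injected by a mutating webhook rather than written by hand):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-secrets-sidecar
spec:
  volumes:
    # Shared in-memory volume the sidecar writes fetched secrets into
    - name: secrets
      emptyDir:
        medium: Memory
  containers:
    - name: app
      image: example.com/my-app:latest          # hypothetical application image
      volumeMounts:
        - name: secrets
          mountPath: /mnt/secrets
          readOnly: true
    # Sidecar that authenticates to the external secret store using the
    # pod's identity and writes the secrets to the shared volume
    - name: secrets-fetcher
      image: example.com/secrets-fetcher:latest # hypothetical sidecar image
      volumeMounts:
        - name: secrets
          mountPath: /mnt/secrets
```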
In terms of the features that the Secrets Store CSI driver supports: it has a familiar file-system mount experience for workloads, it's pluggable, and it can support multiple external secret providers. It can also load new values of secrets throughout the lifecycle of the pod without the application having to be modified.
We talked about the driver supporting multiple external secret providers. The supported providers that we have today with the Secrets Store CSI driver are Azure Key Vault, Google Secret Manager, HashiCorp Vault, and AWS Secrets Manager.
So we saw a brief introduction to what the Secrets Store CSI driver is. So how does the Secrets Store CSI driver work? The driver is installed as a DaemonSet on each node in the cluster. In addition to the driver being deployed, there needs to be a provider: provider-specific DaemonSets are deployed on every single node.
So we talked about how the driver works, and now let's get into some of the YAML files that are required for the configuration. So here we have a sample pod spec.
So this is a pod spec for a pod that's implementing API calls using the Azure SDK to talk to Key Vault. The Key Vault name is kind-kv, and the secret that it's trying to fetch is secret1.
Similarly, the GCP pod was able to get the secret from Google Secret Manager, and then it was able to log it. So here we have two different pods that are implementing the APIs required for each secret store backend. Now, instead of having two different application implementations, one per external secret store, I have a third application that was written to consume the secret from the file system instead. Using the Secrets Store CSI driver, I am going to show how this application gets the secrets from either of the secret backends, that is, Azure or GCP.
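A minimal sketch of the volume section that makes this possible (the pod, image, and class names are illustrative; the driver name `secrets-store.csi.k8s.io` and the `secretProviderClass` volume attribute are the driver's documented ones):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secrets-consumer              # illustrative name
spec:
  containers:
    - name: app
      image: example.com/file-reader:latest   # hypothetical image
      volumeMounts:
        # The app only reads files from this path; it contains no
        # provider-specific SDK code at all.
        - name: secrets-store
          mountPath: /mnt/secrets-store
          readOnly: true
  volumes:
    - name: secrets-store
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          # Pointing at a different SecretProviderClass (e.g. one per
          # namespace) is what lets the same pod spec read from Azure
          # in one namespace and GCP in another.
          secretProviderClass: my-provider-class   # illustrative
```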
So if you see here, I'm setting a couple of values required for the Helm chart. I'm enabling the secret rotation feature, and I'm also setting the rotation poll interval to five seconds, which is aggressive, but it's only for the purpose of this demo. And then I'm also enabling the sync-secret feature for the driver.
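Those settings correspond to values like the following in the secrets-store-csi-driver Helm chart (value names follow the chart's documented options; the 5s poll interval is demo-only, the default is considerably longer):

```yaml
# values.yaml overrides for the secrets-store-csi-driver Helm chart
enableSecretRotation: true
rotationPollInterval: 5s   # aggressive; for demo purposes only
syncSecret:
  enabled: true
```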
We typically recommend using a separate namespace for the CSI driver pods, other than the one that's used for your workload. The default that we set in the manifest is kube-system, because the CSI driver pods are privileged and it's recommended to run them in the kube-system namespace. Okay, so now let's just quickly check what the Helm chart deployed.
Let's also look at what the SecretProviderClass looks like for accessing secrets from Azure Key Vault. So if you see here, we have provider set to azure in the SecretProviderClass, and these are the provider-specific parameters that are required for accessing the secret.
Now that we have the SecretProviderClass created in both namespaces, I am going to deploy the exact same pod spec, without any changes, in the azure namespace and also in the gcp namespace. So if you see here, I have the exact same pod YAML, which is referencing a SecretProviderClass, being deployed in two different namespaces, which means it's accessing two different secret store backends.
So when these pods get scheduled onto a node, the kubelet sees the volume definition and, based on the CSI volume driver name, it invokes the CSI driver to mount the volume. The CSI driver will mount a tmpfs and make an RPC call to the provider to fetch and write the secrets to the file system.
So now let's quickly check the logs for the demo pod in the azure namespace, just to confirm it's working as expected. And then, if you look at the same pod, which is reading from the file system, in the gcp namespace, it has the secret from Google Secret Manager. So there you go: with this, you can see that the exact same application, without any changes to the pod YAML, is able to fetch secrets from Google Secret Manager as well as from Azure Key Vault.
Secret auto-rotation: a generally accepted best practice is to periodically rotate your secrets. So if your external secret store has an automatic rotation feature, you may be interested in how the workload that's running on your Kubernetes cluster can get the new value of a secret whenever it changes. The driver supports automatic rotation by periodically re-issuing the mount request and updating the mounted content.
So we talked about two features. Let's also jump into a demo to look at how the driver does the syncing as a Kubernetes secret.
So for this part of the demo, we're going to do it in two steps. The first one that we're going to look at is how the driver can sync the mounted content as a Kubernetes secret. So the first thing we're going to do for this is enable an application to work with the NGINX ingress controller, and we're going to store our TLS certificates in Key Vault and access them using the driver.
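The sync-as-Kubernetes-secret behavior is configured via secretObjects in the SecretProviderClass. A sketch for this TLS case (resource and object names are illustrative; the field layout follows the driver's documentation):

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: azure-tls                     # illustrative
spec:
  provider: azure
  secretObjects:
    # When a pod mounts this class, the driver also creates a
    # kubernetes.io/tls Secret from the mounted content, which the
    # ingress controller can then consume.
    - secretName: ingress-tls-csi     # illustrative
      type: kubernetes.io/tls
      data:
        - objectName: tls-cert        # must match an object in parameters
          key: tls.crt
        - objectName: tls-key
          key: tls.key
  parameters:
    keyvaultName: kind-kv
    objects: |
      array:
        - |
          objectName: tls-cert
          objectType: secret
        - |
          objectName: tls-key
          objectType: secret
    tenantId: "<tenant-id>"           # placeholder
```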
So I've already created a self-signed certificate for localhost, using the step CLI. So let's inspect the certificate before we actually import it into Azure Key Vault.
So I'm going to deploy the SecretProviderClass in the default namespace, and then I have a single file which defines the pods, the services, and the ingress. So let's take a look at that. So here I have a pod which is called foo-app, and it has a volume mount; in volumes, it's referencing the Secrets Store CSI driver. And we have a service to expose that pod. Then similarly, we have another pod, called bar-app, which has a service exposing it. And then finally, we have an ingress resource which is set up to use TLS.
So when we inspect it with openssl, we can see that the validity for the new certificate that I have is March 18th instead of March 10th. So I'm going to import this rotated certificate into Azure Key Vault.
I also said that I would do the rotation demo with the first setup that we had, so let's try that. I'm going to rotate the secrets that were used by the application in the pod portability demo, to see if it can pick up the latest value. So the application that was used in the first demo watches the file system: it basically reads the file from there and logs it, and it also has a file watcher implemented, which means any time there is a change in the file system, it will automatically pick up the new value and log it back. So let's tail the logs before I actually go and rotate the secret in Azure Key Vault. So if you see here, it has logged the value from Azure Key Vault that was set during startup. So now I'm going to go and rotate secret1 to say "I am rotated."
So we've looked at all these demos, and I'm sure the next question is: what is the current state of the project? The Secrets Store CSI driver core functionality is stable. This includes the interface defined for supporting multiple external secret providers, and pod portability with the SecretProviderClass custom resource.
So what does the future look like for the Secrets Store CSI driver? The auto-rotation feature that we talked about is currently in alpha. We are working towards moving this feature to stable by reusing some of the CSI core functionality. CSI has a RequiresRepublish feature, in which case the kubelet will automatically issue an RPC call periodically to update the mount.
And in terms of resources, here are some resources from the presentation: there is a documentation link for the driver, and also the GitHub repo link for the driver. Each provider has its own specific documentation in terms of what's required for the SecretProviderClass, so there are links to the documentation for the individual providers. And for the demo, I've reused some of what I used during my KubeCon talk, so all the artifacts that I've used today are available in that GitHub repo.