The recent Twitter breach is a great example: some folks got access to an internal Slack channel where tool credentials were posted, actually pinned in the channel, and that gave them the credentials they needed to do the bit of mischief that they did.

Last week we saw something similar with Tesla, where a Kubernetes console had been configured to not require any credentials to access, and someone had put AWS access keys in Kubernetes. The attacker was able to reach those secrets through the browser, via the Kubernetes console, see them, copy them, and use them. So the point is: how do we keep credentials safe?
We can't actually build applications without secrets, but how do we keep those secrets out of the hands of people who would do mischief with them?
We often see some tension between the application development teams, the DevOps groups, and the cloud teams that are trying to move quickly to deliver business value, and the security teams who are trying to keep controls in place. That friction sometimes plays out as development going off and doing its own thing.
What we really want to do is enable DevOps collaboration between the security teams and the development teams: enable developers to be secure as transparently as possible, relieve them of the reporting burdens that security is used to dealing with (audit, risk, compliance, and so on), and empower the security team to deliver a service or capability that developers won't find cumbersome and, more importantly, won't find a bottleneck or an impairment to their workflows.

So the good old shift-left mentality is alive and well here, and this is what we're enabling folks to do. We want to make development secure, but we want to do it with the oversight of the security team, so it is a shared responsibility, and we have itemized these best practices.
The first step is absolutely getting hard-coded secrets out of your applications. Most people have gotten that memo and have at least taken steps to do that. But of course, as soon as you start removing secrets, you need a place to put them, and so there is a tendency to put secrets in the most at-hand place: if you're in AWS, you might look at AWS Secrets Manager; if you're in Azure, you might look at Azure Key Vault; if you're in Kubernetes, obviously you're going to look at Kubernetes secrets.
We call these security islands: little pockets of security. They may be okay in and of themselves, but the audit, risk, and compliance practice really requires insight. The security team needs to understand how credentials are being used and how they're being secured, and that's very difficult when you have multiples of those islands. Now, in this time of COVID and of very porous network boundaries, it has never been more obvious that perimeters don't work anymore, so identities really are the only way of securing privilege.
So we need to create identities for absolutely everything, especially anything that's going to be accessing a sensitive system. We need to authenticate each identity strongly and limit its scope so it's authorized to access only what it needs to (the principle of least privilege), and we want to eliminate the problem, which I'll speak more to, around secret zero: how do we secure the bootstrap credential that applications need to kick all of this off?
Credential rotation is a fundamental best practice for ensuring that if a secret gets compromised (and you have to assume that credentials will, at some point, get compromised), it does no lasting harm: rotation effectively nukes that secret in such a way that it is no longer of any use to whoever has it. So aggressive, regular rotation of secrets, especially secrets being used by applications, is critical.
But that raises the question of how we rotate in such a way that applications aren't disrupted, that their connectivity and their ability to connect to back-end systems isn't impaired, because the last thing we want is application downtime triggered by good credential management. Applications always have to be able to get their secrets and always have to be able to connect to back-end systems, but we want to do all of that securely.
We'll talk a fair bit here today about Kubernetes secrets, some of the challenges with them, and some of the things we can do to help mitigate their risks. But this is the current state we're seeing most organizations in, and it is where we're coming to market: as a security company, putting security first, but providing that developer-friendly capability. I like to say we're a security company that gets DevOps, and so we're empowering the security team to do that.
That means governance, risk, and compliance reporting, but without getting in the way of the development workflows. So we want everything to have an identity, whether it's a person or a process; we want strong authentication for all of that; we want to authorize with least privilege, so we're only granting access to what the identity needs and no more; and then we obviously want to audit everything.
All activity needs to be audited so that if there is an issue, we are able to detect it as quickly as possible, and certainly to do that sort of post-mortem analysis to understand what happened or which identity went rogue.

The secret zero problem is unique to applications. Humans have a built-in vault, where they can mostly remember their own passwords, or at least the answers to security questions; not so the non-humans.
Where does that bootstrap password go, or that token, or the cert, or whatever that credential is? Where do you store it in such a way that the application can get it but nobody else can? How do you secure it but still leave it accessible to the application? We call this the secret zero problem, and we've devised ways around it. This is often one of the first questions I get when I'm talking about this space; people have wrestled with it, and it is a difficult problem.
It's much harder to steal fingerprints, and so we want to use this same type of approach with applications, but we need a way to verify these attributes. The idea is that we are going to allow-list: we pre-enroll, or pre-define, identities along with the attributes that will be used to validate them, and then check those at runtime.
If an identity is on the allow list, then we can call back to the platform to validate it. This is the approach we take in Kubernetes, as well as on the cloud platforms and even with some tools: we can treat each of these as a trusted authority that understands and knows what's running in it, and we can use the attributes of a Jenkins job, of an IAM role in AWS, of metadata or JWT tokens in Azure, as platform attributes to validate these workloads.
These actors are the pods, the applications themselves. So, on to the flow. I'm going to be talking specifically about the open source solution, Conjur: CyberArk Conjur is an open source vault for storing and retrieving secrets.
It is available at conjur.org, and there's lots of content there: lots of good blog content talking about the secret zero problem and about various aspects of secrets management for applications. The APIs are well documented there, and we've got just a ton of content; we'll be referring back to it. So we're going to be talking about secrets management in the context of open source Conjur. The workflow is that you authenticate using some strategy; we support multiple different strategies for different platforms and different use cases.
However authentication happens, successful authentication results in us issuing a short-lived JWT. This is a token with an eight-minute time to live, and it's basically a bearer token that can be used to retrieve secrets. Secrets are retrieved based on authorization per policies: we authenticate to validate the identity of the application, that identity is constrained to access only the things it has been allowed to access, and assuming it makes a request for a secret that it has access to,
it can retrieve that secret and use it. That secret could be a certificate, an SSH key, a password, a token: basically, any binary value we want to use as a credential can be used to connect to these target, back-end systems.
At the end of eight minutes, though, that token will expire and the application has to re-authenticate. This will play into some of the use cases we'll be demoing here shortly, because when that access token expires, you've basically lost access to secrets, and given that we want applications to always have access to their secrets, certain things have to be done to address that. So, to dig into Kubernetes authentication in the Conjur environment a little bit more, this elaborates on the workflow
that I talked through a minute ago. Basically, the application identity is allow-listed (or white-listed), and it is defined, or its attributes are defined, in terms of the cluster and the namespace it's running in. We effectively give an identity to the cluster, and of course namespaces are native in Kubernetes, so these are natural ways of validating an identity. Now, this means that applications running in the same namespace would share the same identity, and sometimes you want to go more granular than that.
So we also give you the ability to add a Kubernetes service account as an attribute that can be validated for that identity. The identity itself is just a friendly name, but these attributes are annotations on that identity that we can use to validate it at runtime. The identity gets defined via policy, and that policy gets loaded into Conjur, defining the identity along with its attributes.
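As a rough sketch (not taken from the demo), a policy fragment pre-defining such an identity might look like the following; the host id, namespace, and service-account values are illustrative assumptions:

```yaml
# Hypothetical Conjur policy: a host (non-human) identity whose
# annotations are the attributes validated at authentication time.
- !host
  id: myapp
  annotations:
    authn-k8s/namespace: app-namespace
    authn-k8s/service-account: myapp-sa
```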
At runtime, a helper container, running either as a sidecar or as an init container, does what is effectively a SPIFFE workflow.
This is where the authenticator formats a certificate signing request; the ultimate goal is to create a mutual TLS connection with the Conjur server. The authenticator submits that CSR along with attributes from the pod metadata, attributes that can be used to validate that pod with Kubernetes. So when that request comes in, Conjur parses the CSR and calls back to the Kubernetes API to validate those attributes.
We then issue a cert and a private key that can be used as credentials for authentication over that mutual TLS protocol, and that authentication gives us the access token. If you're familiar with SPIFFE, this is basically that same workflow; in fact, the certificate that is issued, the credentials issued here, contain a SPIFFE SVID. We're very bullish on SPIFFE and the whole idea of defining identities for workloads, not for infrastructure.
We want to authenticate workloads, not the infrastructure they're running on. SPIFFE is under the umbrella of the CNCF, and they're doing really great work around how you establish identities, strong identities and strong authentication, for applications. So we're basically using that workflow, where the authenticator is the client and the other party is the Conjur server, to create a SPIFFE SVID, a SPIFFE Verifiable Identity Document, which is that X.509 cert.
So that's a bit about that. We're on to the demos now, which I think is the more interesting part of any presentation; feel free to ask questions if anything I went over wasn't clear. We're basically going to go through some examples of how authentication works and various ways of retrieving secrets. We've got several different demos here; I call them labs.
This is actually set up to be a multi-user lab, if anyone ever wanted to run a clinic or attend one of our workshops, and we're going to walk through several different ways of retrieving secrets that Conjur supports. Sometimes people just want an API, and a lot of times developers just ask: where's the documentation for your APIs? Well, it's right here: the API docs.
If you go to the developer section, here are our REST APIs, and here is all the material on how to retrieve secrets and how to authenticate. It's all right there, there's no gate on it, so you can go look at it at your leisure. What we're going to show is how to pull database credentials via the REST API so the app can connect to a database.
Now, I don't actually have a database to connect to, except in the last example, so in these first three labs we're just going to show retrieval of the secrets and echo them. But to get on with that: this is my demo environment. We do a kubectl get pods here; I'm just running with Docker Desktop Kubernetes, which is hugely convenient.
What I'm going to do is alias that, so I don't have to keep typing it and you don't have to watch me type it. Now I can just say kgp, which is much simpler. You can see these applications have been running for a while. I'm going to first walk through the setup where the helper container, the authenticator client that initiates that SPIFFE-based authentication workflow, is running as a sidecar. I can exec into the application container using this handy little script.
Now I'm in the application container, and I can run this script, which simulates what an application would do using the REST API.
So here's the REST call: basically, this call here to get secrets. The notation doesn't include the URL, but you can see that here's our URL and the endpoint for getting a password. We're using some environment variables, and the authenticator drops the JWT in a shared memory volume.
The authenticator is running as a sidecar, and the token will be refreshed every six minutes: the authenticator stays running and continually refreshes the token so it never goes stale, and I always have the ability to run my application and retrieve secrets. So when I run this application, it picks up the JWT and encodes it for the request, trims the control characters out of it, and URL-encodes the name of the variable, because the variable name has slashes in it.
It basically converts those slashes to %2F, and then we make our call to retrieve the secret, get the value, and echo the value. So all of that happened there.
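The sequence the script performs can be sketched roughly as follows; the token path, server URL, account name ("myorg"), and variable id here are illustrative assumptions, not values from the demo environment:

```shell
# Illustrative sketch of the demo script's REST retrieval.
TOKEN_FILE=/run/conjur/access-token
CONJUR_URL=https://conjur.example.com

# Variable names contain slashes, so URL-encode them as %2F first.
VAR_ID='db-credentials/password'
ENCODED_ID=$(printf '%s' "$VAR_ID" | sed 's|/|%2F|g')
echo "$ENCODED_ID"   # db-credentials%2Fpassword

# The sidecar keeps the token file fresh; encode it for the header
# and strip newlines so the header stays on a single line.
if [ -f "$TOKEN_FILE" ]; then
  TOKEN=$(base64 < "$TOKEN_FILE" | tr -d '\r\n')
  curl -s -H "Authorization: Token token=\"$TOKEN\"" \
    "$CONJUR_URL/secrets/myorg/variable/$ENCODED_ID"
fi
```

The %2F encoding is the detail that trips people up: these variable names routinely contain slashes, and they must not be interpreted as path separators in the request URL.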
I can go back in and edit my application (I'm making air quotes when I say "application", because it really is just a bash script), and I can say username here and retrieve the username just as easily. Oops: wq.
So now I've got the username, and we'll see this in the next couple of examples: an Oracle DB user as the username, and a good strong password with uppercase, lowercase, numerals, and special characters. This password is the thing we would want to rotate, but now we're dynamically retrieving it: it's not part of the application, it's being dynamically retrieved from the service, with the identity of this pod strongly authenticated using that SPIFFE-based authentication protocol we walked through.
Under the covers everything's a REST call, but there are also higher-level bindings for the languages you may be using that provide a somewhat higher-level interface. The upshot is that your applications can always pull secrets: given that the sidecar is running, the token is always there, always fresh, and always usable to retrieve secrets.
So that's our first example: an application using the API to retrieve secrets; it would simply use that Oracle database username and password to connect to the database. The second example uses another open source project that CyberArk sponsors, called Summon. Summon is a hugely useful tool.
It solves just a ton of problems; it is that level of indirection that solves so many problems in computer science. Summon retrieves secrets and then calls an application with those secrets populated in environment variables or in memory-mapped files. The goal is to keep the secrets ephemeral, but not require the application to know how to authenticate or how to retrieve secrets.
In other words, the application is kept blissfully unaware of where these secrets are coming from, which means you may be pulling secrets from different places in different environments while the application stays immutable. The application's configuration doesn't have to know anything about where it's running; the secrets are simply injected into its environment
by Summon. Summon calls a provider, and it's a plug-in architecture: we have providers for keyrings, for S3 buckets, for lots of different back-end systems. This creates a level of abstraction where secrets can be pulled from different back-end systems and provided to an application, and the application doesn't have to know how to retrieve them or where they're coming from.
So we're going to use Summon in a Kubernetes application where the authenticator is running as an init container. Summon starts up the application: typically, summon would be the entry point for the pod, where it pulls the secrets and then calls the application, and the application is off and running with those secrets. So there's never an opportunity for the application to retrieve secrets once it's started.
This lends itself to the init container pattern. If we go over to my environment and look, we've got the init container here; note it's been running 79 minutes, and given that the init container ran the authenticator, we may have an issue with our JWT, because we've already established that it only lives for eight minutes.
If I go into this environment, we can see that I've got a JWT over here, but that token is suspect. So before I run summon, a little more on how Summon works: by default, summon looks for a local file called secrets.yml, and this file describes the names of the secrets to retrieve. It doesn't say what provider to use, and it doesn't say what back-end system the secrets are coming from.
The contract of a Summon provider is that it takes a name in and returns the value: it takes the name of a secret and returns the value of that secret. In this case I'm using the Conjur Summon provider, which will use that access token to retrieve the secret with this name and place it in an environment variable with this name for the username, and likewise for the password.
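A minimal secrets.yml along the lines described might look like this; the variable paths and environment-variable names are illustrative:

```yaml
# Keys are the environment variables summon will populate for the child
# process; !var values are the secret names handed to the provider.
DB_USERNAME: !var db-credentials/username
DB_PASSWORD: !var db-credentials/password
```

Note there is no mention of Conjur in the file itself: swapping the provider swaps the back end without touching it.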
We can see this work: if I say summon env and then grep for DB_, it's not returning anything.
If I say summon env without grepping, we can see why: I've got an invalid access token. So what I need to do is just go bounce that pod. This is the upside and the downside of using an init container: in this scenario, if the application should ever want to go re-retrieve secrets (and first off, we've kind of built in the fact that it doesn't know how to retrieve secrets),
then if the application is ever going to get secrets again, it has to be restarted. We can see now that we've got a new init container running here; I'm going to exec into that, and now, if I say summon env and grep for things beginning with DB_, we get a somewhat happier path: we see that same Oracle database user and that same strong password.
If I run the grep by itself, it has nothing to show, because there is nothing in the environment with DB_ in it unless I run summon first. So I can run the web app under summon, and now the application has access to those credentials; but as soon as the application exits, those credentials disappear. They are completely ephemeral. And the cool thing is that Summon can also pull secrets into memory-mapped files.
So if you have SSH keys or certificates or even configuration files, you can store and retrieve those as dynamic, in other words non-persistent, files. What Summon does is put the path to the memory-mapped file in the environment variable, so you retain file-system semantics, and that's a very cool thing.
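In the secrets.yml format this is commonly expressed with a file-flavored tag; treat the exact tag as an assumption to verify against the Summon docs, and the certificate path as illustrative:

```yaml
# SSL_CERT ends up holding the PATH to a temporary file containing the
# certificate, not the certificate itself; the file disappears when the
# summoned process exits.
SSL_CERT: !var:file certs/server/tls-cert
```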
Summon is actually our most active open source project: I was told that by Jerry, who runs our integrations and open source team, and it's for good reason. It's just enormously useful, especially for doing integrations with tools that can consume environment variables and into which it would be very hard to add REST calls so they could pull secrets for themselves.
We use it a lot where we don't have native integrations with the myriad CI/CD tools out there: many of them can read environment variables or files, and we can use Summon to populate those and still keep secrets ephemeral. So, a big advertisement for Summon. But of course, summon has to be baked into the application image, and when I was doing a POC a while ago, someone asked: why don't you just push the secrets to Kubernetes secrets?
They said: we've got all these applications that are already using Kubernetes secrets; why don't you give us the option of using Kubernetes secrets, but address some of the known issues around them? So this is again a setup where the authenticator runs as an init container, but what we're going to do is dynamically populate a Kubernetes secret. This is kind of the best of both worlds, and it has proven to be pretty popular.
It addresses some of the acknowledged risks that Kubernetes secrets have, and hopefully none of this is news; this is probably all firsthand knowledge to everyone on the phone, but there are security issues here. First off, Kubernetes secrets are encrypted at rest in etcd only if you set them up that way: you have to enable encryption in the etcd store for Kubernetes secrets to be encrypted.
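Enabling that is done with an EncryptionConfiguration passed to the kube-apiserver via --encryption-provider-config; a minimal sketch (the key material below is a placeholder):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      # New writes are encrypted with AES-CBC using this key.
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>
      # Fallback so existing, unencrypted secrets can still be read.
      - identity: {}
```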
Second, and this is probably the most egregious issue: version management is mandatory. You always want to version-manage your stuff; version-manage everything is kind of DevOps 101. But now you've got a manifest that merely base64-encodes the username and password, and you check that into GitHub. So now somebody has very easy access to those credentials: anybody that can read your GitHub repo can go through and trivially base64-decode your Oracle database username and password.
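To make the point concrete, decoding is a one-liner; the secret value here is made up:

```shell
# base64 is an encoding, not encryption: anyone who can read the manifest
# recovers the plain text instantly.
SECRET='Sup3r-S3cret!'
ENCODED=$(printf '%s' "$SECRET" | base64)   # what lands in the manifest
printf '%s' "$ENCODED" | base64 -d          # prints Sup3r-S3cret!
echo
```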
This is the problem we're most able to address: applications protecting the value of the secrets. This is a little bit of foreshadowing for the Secretless solution we're going to show in our fourth example, because once applications get the secret, you don't know what they're going to do with it. They could leak it in a log; they could exfiltrate it for nefarious purposes; and any user that can access the secret can see it. The application side of this is something we can address.
Users, and anyone with root permissions, are something your own native security discipline has to address by keeping people from getting root. We're fond of saying that once they're root, it's game over; there's really nothing you can do once somebody's root, because they can do memory scans, they can access keychains; there's nothing someone can't do once they're root.
A big part of our core business is just keeping people from getting root on any system they're not supposed to be root on, and if they are, making sure we know who they are and what they're doing.
But the user creating a pod also has the ability to look at that secret. So, foreshadowing a little bit, we'll come back to this when we talk about Secretless, but I want to show how we address this concern, because I think this is the most common experience developers have: they do the right thing (and I'd bet a hundred dollars there's at least one person listening to this webinar who has experienced this), they put their credentials in a file, and they version-manage that file.
And suddenly somebody has access to those secrets; it's just the way things happen these days. Fortunately, GitHub has started adding hooks that will alert you that you may have just checked in credentials, but as far as I know, Kubernetes secrets manifests aren't caught, because their base64-encoded values are not obviously credentials. This is something we want to fix: we want to get those base64-encoded values out.
We want them out of the committed secret, and we want to dynamically bind them in Kubernetes. We want to keep the Kubernetes secret and have the application use Kubernetes secrets natively, but we don't want that manifest to be checked in with the credentials intact. To show how we do this, I'll have to go find my manifest with the DB credentials.
Okay, here we go: the k8s secret template. This is the manifest we're using, and this is what gets checked into GitHub. We've got our Oracle database username and password, the name of the secret here, and this annotation: basically a YAML map that looks kind of like the secrets.yml file that summon uses, and the idea is very similar. The Secrets Provider container is an init container
that does the authentication, that initial authentication, in order to retrieve secrets, but it has a directive pointing at this Kubernetes secret. It looks for this annotation, iterates over it, retrieves the value of each listed secret, and patches the Kubernetes secret with base64-encoded values of that username and that password.
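Shaped roughly like this (names and variable paths are illustrative, and the exact field the Secrets Provider consumes may vary by version); the point is that what gets committed carries only the mapping, never the values:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  # Read by the Secrets Provider init container: each key names the
  # secret entry to patch, each value names the vault variable to fetch.
  conjur-map: |
    username: db-credentials/username
    password: db-credentials/password
```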
So if I go up to my environment here: this is a great use case for the init container pattern, because the Secrets Provider instantiates that Kubernetes secret and then exits. The application has access to the Kubernetes secret just like a native Kubernetes secret, but it's dynamically instantiated, and when that pod exits, or when you delete the deployment, those secrets are gone. So the point is we're never checking in base64-encoded secrets.
This value here, the name of the database, is not a secret; presumably if it were, we could also store it as a secret, but in this case we're saying that's not a secret. It's really those access credentials.
So I'm going to exec into my injector, since that's the way I did it, and I can walk through the manifest if anybody wants to see how this is done, but basically I have mounted the Kubernetes secrets both ways. Actually, let me do this.
Let me do a kubectl edit secret db-credentials; this will just show you the effect. Remember our manifest: the username and password here, and our map down here as an annotation. But now we've got the username and password here as base64-encoded values.
The secret initially existed without those base64-encoded values; the Secrets Provider iterated over that conjur-map and instantiated them. So when I go into my environment now, they can be mounted as either environment variables or as volumes.
If I do an env and grep for DB: there's my Oracle DB username and password mounted as environment variables, the same environment variables we've been using in the other examples. But they're also mounted as volumes, and we would always recommend mounting them as volumes, because environment variables are much easier to discover from outside.
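A pod-spec fragment for the recommended file-based mount might look like this (container and secret names are illustrative):

```yaml
# Expose the secret as read-only files under /etc/db-credentials rather
# than through the process environment.
containers:
  - name: webapp
    image: webapp:latest
    volumeMounts:
      - name: db-creds
        mountPath: /etc/db-credentials
        readOnly: true
volumes:
  - name: db-creds
    secret:
      secretName: db-credentials
```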
So we would always recommend that you mount them as files and access them as files, and that's basically what this example does. Let's see, where did I do that? I guess I don't have a great example here.
Oh yes: my webapp-summon. The simple application that just uses those environment variables can simply run, but now we're using Kubernetes secrets: we don't have to use summon to retrieve them, and we don't have to bake summon into the application image. In another demo I've got a version that actually reads the file and uses the file-based secrets, but in this case these environment variables are populated by mounting them from that Kubernetes secret.
This really gets at that aspect: we're dynamically binding values retrieved from Conjur into those Kubernetes secrets, patching them, and from the application's perspective it's just a Kubernetes secret and can be used as one. Then, when that pod exits and you delete that Kubernetes secret, it's gone.
The real point is that nothing is being checked into GitHub: no secrets in any form, whether plain text or base64-encoded. These other issues remain, though.
Those come down to good discipline in setting up your cluster, and just good security discipline. But let's talk about these last couple of things, because we use this example a lot: you can vault things in storage,
and you can encrypt things on the wire, but as soon as the application gets that plain-text secret, you really don't know what it's going to do with it. We see this as a general issue: all our best efforts may be for naught if the application is irresponsible. So what we have devised is a solution called Secretless.
We want to give the application the ability to connect without giving it the keys necessary to do so. We do that with a proxy running as a sidecar: the proxy is the thing that actually retrieves the secret, establishes the connection, and brokers that connection for the application.
So
the
application
never
gets
a
secret.
The
application
you
know
has
to
do
its
own
authentication
for
users
and
and
things
like
that,
but
as
far
as
connecting
the
back-end
systems,
the
applications
simply
get
the
connection
that
they're
authorized
to
get,
and
so,
if
they're,
if
the
identity
that
this
pod
is
running
as
is
authorized-
and
you
know
successfully
authenticates
and
is
authorized
to
connect
to
a
database,
it
will
get
the
connection
to
the
database.
But
the
application
never
sees
those
database
credentials.
They
stay
within
the
broker
and
therefore
can't
be
leaked.
A
So, you know, you're still suspect; as we said, once you're root you can do anything, so keep people off root. But barring that, we've addressed a lot of these issues: the applications don't have access to the secret and can't inadvertently leak it in a potentially irresponsible manner. So I'm going to start up my whole environment here (I actually should have done this while I was talking), because it deploys multiple back-end systems. The cool thing about Secretless is that it's multi-protocol.
A
It
supports
http,
https,
ssh
and
then
multiple
back-end
databases,
a
growing
list
of
backend
databases,
so
we
support
postgres,
mysql
and
sql
server.
Now
I'm
told
oracle
is
is
on
its
way.
We
get
a
lot
of
questions
around
that
oracle
and
sql
server.
You
know
the
most
deployed
databases,
so
what
I'm
going
to
do
is
exec
into
this.
So
I've
set
up
an
environment
here
where
this
window
is
going
to
be
my
application.
So
I'm
going
to
exact
into
my
secret
list
app.
A
Yeah, and so we'll do a few things here. In here I've got some predefined connection strings, just so I can remember, because I can't remember all the syntax for all these things. So I've got connection strings for HTTP, SQL Server, MySQL, Postgres, and SSH connections, and what I'm going to do is walk through some of these. This window over here on the left is basically my pod; this is my application.
A
What I'm going to do here is watch the Secretless broker log, so this is just the log for that container. We can see that it started up listeners on different ports. The way the broker knows what to connect to is that it has service connectors listening on different ports: 1443 for SQL Server, 3306 for MySQL, 5432 for Postgres, 8081 for HTTP, and 2222 for SSH. So we've got listeners.
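Those listeners are driven by the broker's configuration file. As a hedged sketch only (field names follow the v2-style Secretless config, and the Conjur variable paths are invented), such a configuration might look like:

```yaml
# Hedged sketch of a Secretless config: one service per listener port
# mentioned in the demo. Check the Secretless docs for your version.
version: 2
services:
  nginx-basic-auth:
    connector: basic_auth
    listenOn: tcp://0.0.0.0:8081
    credentials:
      username:
        from: conjur
        get: demo/nginx/username   # invented Conjur path
      password:
        from: conjur
        get: demo/nginx/password   # invented Conjur path
  pg-db:
    connector: pg
    listenOn: tcp://0.0.0.0:5432
    credentials:
      host:
        from: conjur
        get: demo/pg/host
      username:
        from: conjur
        get: demo/pg/username
      password:
        from: conjur
        get: demo/pg/password
```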
A
We've got service connectors configured such that when we make this connection, and we watch the Conjur audit log over here, we'll see the broker authenticate and see it retrieve secrets. And then what I'm going to do up here: this is one of the only ones that really echoes its activity.
A
My
nginx
server,
I'm
going
to
just
watch
the
log
of
my
nginx
server,
so
the
first
one
we'll
do
is
this
http
connection,
I'm
just
going
to
say:
curl,
I'm
just
going
to
paste
this
in
because
environment
variables
don't
always
work,
it
doesn't
work
for
sql
server
for
some
reason,
so
I'm
going
to
say
I'm
going
to
connect
to
nginx
on
8081..
Now,
what's
really
listening,
there
is
my
my
broker,
and
so
this
is
basically
going
through
an
http
proxy
for
localhost.
A
That
proxy
connection
is
is
going
to
this
port
where
the
broker's
listening.
So
this
happens
very
quickly,
so
I'm
gonna,
I'm
gonna
talk
through
it
and
then
then
I'm
going
to
do
it.
I'm
going
to
hit
return,
we'll
see
the
the
broker.
Wake
up
and
authenticate
we'll
see
it
hit
conjure
to
retrieve
the
secrets
for
the
http
connection.
This
is
using
basic
auth.
This
is
just
using
basic
off
back
over
here,
we'll
see
a
a
200
message
come
up
here
in
the
nginx
log
and
then
we'll
see
the
client.
A
Echo
is
just
doing
a
basic
index
get
on
that
that
top
level
entry
point
in
nginx.
So
so
the
flow
kind
of
goes
like
this,
so
it
happens
quickly,
so
we'll
go
joint.
There
did
happen.
A
Oh
wait!
I
didn't
do
my
my
nginx.
A
For
some
reason,
I'm
not
I'm
not
seeing
engine
x
over
here,
so
we
saw
it
successfully
authenticate.
We
saw
it
return
the
value
over
here
for
some
reason:
I'm
not
tracing
nginx
nginx
log.
Here
we
saw
it
authenticate
over
here
we
saw
it
retrieve
secrets
that
it
needed
to
do
its
work,
and
so
this
is
the
the
workflow
that
we're
we're
looking
at
to
to
authenticate
dynamically
retrieve
secrets
and
then
use
those
secrets
to
connect
to
a
back-end
system.
A
Now, we have other things that we can connect to, so let's look at SSH. What I've got here are the credentials, the SSH keys, to one of my EC2 instances in AWS, but you can see my connection string is just going to say foo at localhost. That's garbage; it's just there so the SSH client works. I'm directing it to port 2222, where the broker is listening; that is the service connector for SSH.
A
So
when
I
hit
return
here,
saying
hey
you
haven't
connected
to
this
before.
Are
you
sure
you
want
to
connect?
We
saw
it
hit
over
here
now,
I'm
in
aws,
so
I
have
connected
to
aws
without
having
access
to
that
ssh
key
the
broker
had
access
to
it
because
it
retrieved
it
from
conjure.
It
retrieved
that
ssh
key
from
conjurer
and
used
it
to
connect
to
my
backend
system.
Now
I
can
do
stuff
up
here.
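Roughly, the invocation looks like this (`foo@localhost` is the deliberately meaningless connection string from the demo):

```
# "foo@localhost" never matters: the broker's SSH connector on 2222
# authenticates to the real EC2 host with the key it pulled from Conjur.
$ ssh -p 2222 foo@localhost
```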
A
I
can
say
curl,
you
know
check
check
the
status
of,
because
this
is
something
I
leave
running
for
doing
demos
up
in
in
amazon
or
in
aws
and
so
there's
you
know
I
can
check
the
status
of
my
my
conjure
thing
running
up
there.
So
that's
ssh.
We
can
do
similar
things
for
my
sequel.
A
So
if
I
look
at
look
at
my
my
sequel
connection
here,
here's
I've
got
a
test
app
running
over
there,
so
I
can
say:
mysql
use
the
my
native
mysql
client
local
host
connection,
but
now
it's
connected
to
the
mysql
database
and
I
can
say,
show
databases,
the
databases,
don't
do
a
really
good
job
of
showing
you
the
work.
Their
logs
aren't
very
interesting
from
a
connection
monitoring
standpoint,
so
you
have
to
kind
of
jump
through
hoops
to
make
them
do
that
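Sketched as a session (no username or password is passed to the client, since the broker on 3306 supplies the real credentials):

```
# The native mysql client connects to the broker's MySQL listener;
# the broker injects the credentials retrieved from Conjur.
$ mysql -h 127.0.0.1 -P 3306
mysql> SHOW DATABASES;
```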
For the last trick, we'll just show SQL Server, because a lot of people are really interested in SQL Server. What this will do is just a real quick SQL addition; sqlcmd is the client for that. I paste that in, and when I run it I've got my SQL Server answer here.
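As an illustrative session (port 1443 is the demo's SQL Server listener; the exact query the speaker ran wasn't shown, so the SELECT here is a stand-in):

```
# sqlcmd talks to the broker's SQL Server listener; the broker holds
# the real credentials, so none are given on the command line.
$ sqlcmd -S localhost,1443 -Q "SELECT 2 + 2;"
```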
So then, if I, you know, do my kgp (actually, kgp isn't defined here) and we look at all the things that are running in here now, we can see that there are quite a few more pods running in my space: I've got my Postgres database, I've got my pet store app, I've got my nginx server, and I've got the SQL Server and MySQL servers. Then, of course, the SSH is going through the SSH protocol to AWS.
A
So
the
point
is,
though,
in
none
of
these
cases
did
the
application
in
this
space
get
access
to
those
secrets
is
able
to
connect
to
all
these
back-end
systems
without
using
those,
and
if
you
look
at
the
the
way
this
works,
this
is
very
similar.
In
fact,
secret
list
is
and
could
be
very,
you
know
very
easily,
positioned
as
a
a
a
broker,
an
access
control
broker
for
the
control
plane.
A
If
you
start
thinking
in
service
mesh
type
type
situations,
so
hopefully
everybody's
familiar
with
the
terms,
control,
plane
and
data
plane,
but
the
control
plane
basically
is
where
all
the
complex
stuff
happens.
Applications
we
want
to
stay
in
the
data
plane.
In
other
words,
we
want
them
to
be
working
at
a
business
logic
level.
We
don't
want
them
directly
involved
with
the
mess
of
running
the
services
and
so
secrets
management
kind
of
kind
of
has
that
aspect
to
it.
A
We
want
to
keep
applications
as
blissfully
unaware
as
possible
of
the
mechanics
of
authentication,
retrieving
secrets
of
the
effects
of
secrets
rotation.
We
want
to
actually
keep
them
away
from
the
secrets
entirely
and
secretless
gives
us
the
architecture
for
doing
that,
and
so
it
is.
It
is
that
proxy
for
the
control
plane
that
applications
can
avail
themselves
of,
and
it
also
gives
us
a
point
where
we
could
put
telemetry
on
that.
A
We
could
start
monitoring
how
applications
are
are
consuming
secrets
and
that
then
starts
informing
a
lot
of
the
workflows
that
security
can
do
in
terms
of
reacting
to
anomalous
situations
and
and
other
you
know,
sort
of
forward-looking
type
thing.
So
this
is,
you
know,
very
much
a
work
in
progress,
but
secretless
is
a
a
big
part
of
the
open
source
initiative.
That
cyborg
is
sponsoring
around
conjure.
It's
all
here
under
secretless
patterns,
so
you
go
into
fundamentals.
You
can
see
how
it
works.
A
You
can
see
the
currently
supported
service
connectors,
most
of
which
I
exercised
here.
So
we
see
our
https
our
database
connectors
our
ssh
connector,
etc.
It
also
has
an
sdk,
which
is
very
cool
if
you
know,
for
some
reason,
you
have
a
back-end
system.
We
get
questions
about
things
like
mongodb
and
other
things.
You
can
build
your
own
and
that's
the
beauty
of
open
source
is
we've.
A
A
Well, you limit access, and that's good segregation of duties, but the impact is also part of that, because the fewer secrets an application has access to, the smaller the blast radius, as we call it. So segregation of duties is something that you hear about a lot: being able to very precisely define the credentials that something has access to. Now, in terms of identifying an offender, that's where your audit logs come in, but in many ways audit logs are backward looking.
A
In
other
words,
they
they
record
what
happened,
but
they
don't
give
you
that
proactive
ability
to
do
something
about
it
and
that's
what
I
think
is
exciting
about
secretless
is
that
it
does
give
you
that
monitoring
point
where
you
could,
if
you
wanted
to-
and
of
course
there'd
be
some
overhead
in
this,
but
you
could
monitor
the
the
actual
real-time
usage
of
secrets
and
see
if
things
were
happening
in
a
much
more
immediate
fashion,
but
your
audit,
logs
and
and
and
you
know,
we
we
keep
audit
logs
non-repudiation,
you
want,
you
want
to
be
able
to
prove
something
did
or
did
not
happen,
and
you
want
to
say:
if
something
happened,
you
know
what
was
the
identity
responsible
for
now
that
identity,
you
know,
may
move
around.
A
So
an
ip
address
may
or
may
not
be
useful
in
that
context.
But
fundamentally
it
comes
down
to
what
was
the
identity
in
question
when,
when
we're
looking
in
that
doing
that
kind
of
sort
of
forensic
analysis,
the
contra
solution
is
in
the
google
marketplace,
but
you
can
always
go
to
conjure.org.
I
was
I
was
showing
a
lot
of
the
content.
That's
at
conjure.org
there
is
community-based
support
for
the
conjure
open
source
solution
as
well
as
for
summon
and
and
secretless.
A
You
can
go
to
discuss.cyberx.coms.org
and
see
some
of
the
back
and
forth
there.
We
do
regular
workshops,
we
do
regular
devops
workshops,
just
kind
of
walking
through
how
to
secure
jenkins,
workflows
and
pipelines,
as
well
as
kubernetes
examples,
the
secrets
broker.
We
saw
a
good
bit
of
today
as
well
as
summon
so
lots
of
places
to
go
lots
of
content
at
conga.org
for
you
to
consume.