Description
How to Securely Inject Secrets into Applications and Manage Machine Identities with Conjur - Kumbirai Tanekha (CyberArk)
Kumbirai Tanekha and Naama Schwartzblat are the lead developers on Conjur who both worked directly on the Conjur-OpenShift integration. They will be demonstrating how secrets can be managed and delivered securely to applications running in OpenShift without developer impedance, and how OpenShift security policy for secrets and machine identity can be managed as code.
A
Yeah, it's moving slowly, but yes, it's coming. I'm going to share some of the roadmap for this integration with you, and as a disclaimer, things might change a little in what I'll share regarding the roadmap, so just take that into account. First, I want to share some information about CyberArk and who we are.
A
So, first of all, when we were talking about securing the application containers running inside all those containerized platforms, we understood that the popular methods for providing secrets to containers are either environment variables, volume mounts, or just secret encryption, and the challenge there is that secrets can be easily exposed. There's no runtime authentication to verify that the pod or container pulling the secret is really your application container or pod, and not a malicious one. There's a lack of segregation of duties, and there's no audit of the secrets fetching.
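As an illustration of the pattern being criticized here, a typical setup injects a Kubernetes Secret straight into the pod as environment variables or mounted files. This is a minimal sketch; the names (`my-app`, `db-credentials`) are hypothetical:

```yaml
# Illustrative only: the common (insecure) delivery pattern described above.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: app
      image: my-app:latest
      # Secret exposed as a plain environment variable:
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: password
      # ...or exposed as files through a volume mount:
      volumeMounts:
        - name: creds
          mountPath: /etc/creds
          readOnly: true
  volumes:
    - name: creds
      secret:
        secretName: db-credentials
```

Anything that can read the pod's environment or filesystem can read the secret, and nothing authenticates the consumer at runtime, which is exactly the gap being described.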
So the integration goals and benefits that we wanted to achieve in this area: first of all, to securely provide secrets to the application containers running in environments like OpenShift. We wanted the integration to be easy to use, and as seamless as possible, so developers will not need to change their application code in order to fetch secrets securely.
A
We also wanted our solution to run inside OpenShift, because we wanted it to be elastic and able to scale out very easily, since the number of containers can change and we can have a massive number of containers starting up. Running on the same physical host also means better performance, of course, and we gain from that as well. We also wanted to provide central audit and secret rotation options, of course. Next slide.
So here are some milestones and roadmap plans that we have. One thing we are going to add is more flows that exist only in newer versions, like the init container option that we added in the beta but which does not yet exist in the OpenShift integration. We are going to add those flows in the next versions of the OpenShift integration with Conjur, and we are also going to look at newer things like the service broker that was just released recently, and more. Kumbi, I will hand it over to you now for the next slide. Thanks.
B
So Conjur, as Naama mentioned earlier, is a secrets management solution that provides authentication, authorization, and all that sort of stuff. From an authorization standpoint, the permissions model that we have is based around policy: it's a declarative policy that allows you to say who has access to what, and it's role-based access control. Taking a step back and looking at the architecture of Conjur, what we have is a high-availability architecture with masters, standbys, and followers, and that matters in particular for this integration.
B
You can essentially have a whole Conjur cluster running within OpenShift, and that's what's shown on the right-hand side here. So there's a Conjur master, some standbys, and followers. Followers are for read-only purposes: they're for consuming secrets, as opposed to anything else. The authenticator that we have, which leverages the OpenShift API in order to verify the identities of workloads running within OpenShift, runs on a Conjur follower. How the whole architecture works from the standpoint of the actual application is: the application runs in its own project.
B
Conjur runs in a specific project of its own, the whole cluster, even though the different entities within the Conjur cluster could be running on different nodes, and they probably should be. The application speaks specifically to Conjur followers. So you might have several followers, and these might be part of a deployment within OpenShift. The applications speak to the followers through a service, so there's some load balancing there: an application doesn't really need to know which follower it's speaking to.
B
It just speaks to some follower. Great. From the application standpoint, how we have it laid out is that a typical application is within a pod, and there's the main app container, which is responsible for actually running the application. We have a sidecar — we're using the sidecar pattern — as a means of orchestrating this whole communication with Conjur, so that ultimately the app container doesn't really need to know much about authenticating against Conjur.
B
All it needs to do is essentially consume an access token and grab its secrets. Moving forward, the high-level steps of the process of authenticating against our integration are: we have to verify that a pod exists, and essentially what we're doing there is authenticating the pod, so we know the pod is indeed who it says it is. Then, once that's done, we go through the whole process of authorization using an RBAC model, and the auditing and all that sort of stuff just kind of happens.
B
What happens is the sidecar will issue a certificate signing request against the Conjur authenticator, and within that certificate signing request it will provide the relevant details. The Conjur authenticator will validate this certificate signing request and, if everything checks out, it will issue a signed certificate out-of-band and actually inject it directly into the sidecar container. This is where we get some strong security, because the signed certificate goes specifically to the requesting pod and nowhere else. Once that signed certificate is actually available within the sidecar container...
B
Then the sidecar container can make a request, through mutual TLS once again, to the Conjur authenticator. What happens is the sidecar container gets an access token for the particular identity that it's signed up for. This access token is encrypted, and obviously the sidecar has the private key in order to decrypt that particular access token. Once it has this access token, it's really just back to secrets management as usual with Conjur, because if you have an access token, it means you can be authorized and get your secret.
B
Right, so I've set things up already, in the interest of, you know — the demo gods aren't always kind. I already have a Conjur cluster set up, and this Conjur cluster has a master here, two standbys, and two followers, and that's what these are. We have some kind of load balancer in front, but in any case, these are the followers that I was speaking about.
B
The load balancer is what the application actually speaks to. So I'm gonna take a second and head over here to my IDE. The process of actually setting up this whole integration is: first you have to deploy Conjur, and instead of going through every single one of these scripts, I'm just going to go through it at a very high level. This is something that's available on GitHub — both of these repositories — so this is something that could be reproduced, I suppose, by anyone.
B
So the first step is for us to create an OpenShift project, from a Conjur deployment standpoint, and this is the project that I spoke about earlier that essentially scopes Conjur within it. Before I run all these scripts, I actually have to specify all these environment variables, like the Conjur project name. And if I head over here, to where this project is actually running...
B
I have my OpenShift client already set up, and if I head over to that project and we list the pods here, you will see that I'm not really making this up — there are actually pods in there. So we have some followers and we have some of the members of the cluster: standby, master, etc. When you run these scripts, you create the OpenShift project, you build and push the relevant container images, and our recommendation is to do that with a private registry within OpenShift.
B
OpenShift is very good for that. You deploy the Conjur cluster, and this is just, you know, running your manifests as normal. You configure the master, you create a load balancer, you configure your standbys, you configure your followers. These are very straightforward steps that you could go into in detail if you're actually trying to set up a Conjur cluster. Where things get interesting is, once you've got your cluster set up, how do you then start interacting with it from an OpenShift application?
B
So this is where this repository comes in, which is the OpenShift Conjur demo, and it makes use of, I guess, similar environment variables. Once again, it needs to know the Conjur project name in order for it to be able to generate the URL, etc.
B
So we initialize Conjur, which just makes sure that, you know, we're able to log in. We have admin credentials, which allow us to run and load all the policy that we need to. We go to the second step: we load some policy, and this is just the workflow from a Conjur standpoint — we have a CLI that allows you to load policy.
B
We have a CA for each, I suppose, OpenShift cluster, and the responsibility of that CA is to sign valid workload identities, and then those identities can be used by applications in order to authenticate. Each application is represented by a particular host, and this is what's happening right here. So the host in this example takes the form of the namespace — or rather the project — followed by these wildcards, which essentially say that we're scoping identity to the namespace.
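A Conjur policy expressing this kind of namespace-scoped identity might look roughly like the following sketch. The service ID (`my-cluster`), project name (`test-app`), and layout here are illustrative assumptions, not the demo's exact values:

```yaml
# Sketch of an authenticator policy with a CA and a scoped host.
- !policy
  id: conjur/authn-k8s/my-cluster
  body:
    # The authenticator endpoint that workloads authenticate against.
    - !webservice

    # CA material used to sign workload (pod) certificates.
    - !policy
      id: ca
      body:
        - !variable cert
        - !variable key

# Host identity scoped to an OpenShift project via wildcards:
# any service account / pod within the test-app namespace.
- !host test-app/*/*

# Allow that host to authenticate through the webservice.
- !permit
  role: !host test-app/*/*
  privileges: [ read, authenticate ]
  resource: !webservice conjur/authn-k8s/my-cluster
```

The wildcard segments are what make the identity namespace-scoped rather than tied to one specific pod.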
B
Once we've actually initialized Conjur, we load some policies, and that's one of the policies that I was just speaking about. This policy is what allows RBAC to be respected: we specify in here that this host is actually part of a particular grouping of other hosts, and we specify permissions over here, where we say that this particular application can read this test-app database password.
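That grouping-plus-permissions policy could be sketched like this; the `test-app` names mirror the demo, but the exact ids are assumptions:

```yaml
# Grouping (layer) that the application host belongs to.
- !layer test-app

- !host test-app/*/*

# Membership: the host is part of the test-app grouping.
- !grant
  role: !layer test-app
  member: !host test-app/*/*

# The secret itself.
- !variable test-app-db-password

# Read and execute (fetch) privileges for the grouping.
- !permit
  role: !layer test-app
  privileges: [ read, execute ]
  resource: !variable test-app-db-password
```

Granting to the layer rather than to individual hosts means new application instances inherit permissions simply by joining the grouping.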
B
It has read and execute privileges on it, and this is what happens throughout the course of authorization. Going back to those scripts that I was speaking about earlier: we go through the process of essentially setting up the test-app project. We've run the policy against Conjur already, we create the test-app project, we build and push containers for the test-app image, and then ultimately we deploy the test app. So I'm actually going to redeploy that test app so that you can see it working from scratch.
B
Let me get the URL for the OpenShift dashboard; that will be the first step. So if you just give me a moment.
B
Right, so I'm in the OpenShift dashboard — I was showing everything we scoped earlier, but you can actually see that everything is running within the OpenShift dashboard. We have a deployment of the Conjur cluster and the followers, etc. But we're not interested in that; we're interested in the test app. So here's the test application; there's one instance running. I mentioned earlier that there's a sidecar associated with the deployment of the test app, and you can actually see that from the manifest for the test app, which is available here.
B
Let's just go down a little bit. So this is just a deployment for the test app. The test app is a really simple Ruby application. It has these environment variables that tell it how to communicate with the Conjur cluster, and it has that volume mount that I spoke about earlier for the Conjur access token, and then the sidecar authenticator that I spoke about is right here. It uses an image that's publicly available on Docker Hub, and it essentially just plays the role of authenticating on behalf of the application.
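A deployment along these lines — app container, authenticator sidecar, and a shared in-memory volume for the access token — might be sketched as follows. The image names, URLs, and paths are illustrative assumptions, not the demo's exact values:

```yaml
# Sketch of an app deployment using the sidecar pattern described above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-app
spec:
  replicas: 1
  selector:
    matchLabels: { app: test-app }
  template:
    metadata:
      labels: { app: test-app }
    spec:
      containers:
        # Main application container: it only reads the token file.
        - name: test-app
          image: example/test-app:latest
          env:
            - name: CONJUR_APPLIANCE_URL
              value: https://conjur-follower.conjur.svc.cluster.local/api
            - name: CONJUR_AUTHN_TOKEN_FILE
              value: /run/conjur/access-token
          volumeMounts:
            - name: conjur-access-token
              mountPath: /run/conjur
              readOnly: true
        # Sidecar: authenticates against Conjur and refreshes the token.
        - name: authenticator
          image: example/conjur-authenticator:latest
          volumeMounts:
            - name: conjur-access-token
              mountPath: /run/conjur
      volumes:
        # Shared in-memory volume; the access token never touches disk.
        - name: conjur-access-token
          emptyDir:
            medium: Memory
```

The app container never handles authentication itself; it just reads whatever token the sidecar keeps current in the shared volume.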
B
If we dive into what the application actually does for a moment: it is a really simple server. It uses the Conjur Ruby API in order to retrieve this variable — I showed this variable a little bit earlier in the Conjur policy. So this will be using the identity of, I guess, the namespace in order to grab the secret, and rightfully it will get it.
B
So allow me to log in, and we can see that any activity that's actually happening outside, in terms of consuming followers, is reflected within the Conjur UI. Here's the Conjur UI, and here's the app DB password that I spoke about — if I list our secrets, there's the test-app DB password. The test application is running in the background and I'd used it a little bit, but you can see the audit log that Conjur provides immediately.
B
You can see the host that is actually grabbing the secret: every time it tries to read the secret, there is an audit event, so we have some visibility into access. Any time there are failed updates or failed reads, we can see all of that. But right now, I'm gonna go ahead and actually go to that application.
B
Okay, great, so that's running on localhost for me, and if I head over to localhost:1234 and refresh this, you can actually see that it's handling connections from my laptop and it grabs the secret value — this is actually the secret value. And if we head over to the Conjur UI and we refresh, we should see that there was an access. Hopefully I'm on the right one: the test-app DB password.
B
And it should be relatively fast: it will spit out whatever the value of the password here is, because it's just an environment variable that I'm creating here. So this is the new value, and you can see that rotation works right out of the box. If I refresh this — well, it won't work, because I just killed my port forwarding — but if I port forward again and refresh, you can see that we now have the new value, and that's rotation right out of the box.
B
Let me see, I would like to refresh and just see — alright, yeah. So now you can see that the audit log has updated. Two minutes ago I read the secret, by going to the route; about a minute ago I actually modified the password, and you can see that admin was allowed to update here; and then a second ago, when I accessed the secret value, you can see that there's another audit event. Yeah, that's the flow in a nutshell. I guess, just to reiterate it:
B
You deploy Conjur within OpenShift, and your application just needs to follow a little bit of configuration based on the docs that we've provided on this website. We have example manifests, we have this example repo, which is publicly available on GitHub, and you can actually get up and running with this relatively easily. I think that's about it from the demo side of things.
C
That was great. You totally blew my mind with the rotation there — that was new to me. So really, thank you very much for the demo and for explaining how it all works.
C
I'm looking to see if there are any questions in the Q&A, and right now mostly it's a lot of "this is new to me," so I'm trying to figure out how to phrase this question that someone's asking: what is the best practice around deployment, and whether or not the whole Conjur solution can run inside of OpenShift.
B
Definitely. So I guess what you want from your high-availability setup is some disaster recovery, right? If a node goes down within your OpenShift cluster, you want it to be the case that you don't lose all your data — that would be problematic. So what you want is, as you deploy your Conjur cluster, for example, that at least your standbys are distributed across different nodes, so that if one node goes down, one of the standbys can essentially become the master.
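Spreading the standbys across nodes can be expressed with pod anti-affinity. A minimal sketch, assuming the standby pods carry a `role: conjur-standby` label (the label name is an assumption):

```yaml
# Fragment of a pod template spec for Conjur standby pods.
# Requires each standby to be scheduled on a different node.
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              role: conjur-standby
          topologyKey: kubernetes.io/hostname
```

With the `hostname` topology key, the scheduler refuses to place two standbys on the same node, which is exactly the disaster-recovery property described above.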
C
B
So the sidecar doesn't need to call a specific follower. You can have ten followers, and you just need to have a load balancer in front of them, and the Conjur deploy scripts that are available on GitHub actually show you how to put all that together. Essentially, what happens is you need to point the application pods, or the sidecar, to the service URL or path within OpenShift, and the load balancing will do the whole round-robin from follower to follower.
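That service-in-front-of-the-followers arrangement is just an ordinary Kubernetes/OpenShift Service. A sketch, with the selector label and port assumed:

```yaml
# Load-balancing Service in front of the follower pods.
apiVersion: v1
kind: Service
metadata:
  name: conjur-follower
spec:
  selector:
    role: conjur-follower   # assumed label on follower pods
  ports:
    - name: https
      port: 443
      targetPort: 443
```

Application pods then target a stable address like `https://conjur-follower.<project>.svc` and never need to know which follower actually answers.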
C
A
It's important to say that, because there can be multiple followers, this is for scaling. If there are many containers you need to support as they come up, you may need many followers, and it's important that it's very flexible: you can scale very easily, and you don't need to call a specific follower. This is a very easy and scalable architecture.
A
B
Okay, I don't know the number off the top of my head, but that's something that we can definitely find out, and probably I can check that.
A
C
B
I think this is the last slide — I'm just looking through the deck right now. There isn't anything specific with details about Conjur, but people can definitely head over to conjur.org, or alternatively there is developer.conjur.net, which goes into, I guess, detail about the integration.
A
C
I can add that. Alright, well, I think that's all the questions that we had in the chat. If there's anything else either of you would like to add — we're definitely going to have to get some follow-up on this, perhaps get one of the folks that's actually using it in production to give a presentation on how that's working for them.
C
That's what we like to do once we've got the basic information here, and have you guys back again soon. You're coming, or we'll have some sort of presence, at Red Hat Summit or one of the upcoming KubeCons, so we'll get to meet you all in person. We're really actually excited about this stuff, because it's really cool, and yeah, I think it's gonna really help a lot of people.
C
So thanks, guys. I will post this up on blog.openshift.com and on our YouTube channel shortly, and we'll, you know, get the information out to you. So thanks for taking the time.