From YouTube: OCB: Cloud Native CI/CD Pipelines on OpenShift with Tekton - Marc Boorshtein (Tremolo Security)
Description
Join DevNation for a weekly hour-long live chat show for all things Kubernetes, Java, and Linux. We deliver the latest developer news you need and interview guests with specialized tech expertise. We also feature ways for you to be involved, like live Q&A and special quizzes, polls, and contests.
A: Welcome to OpenShift Commons. Today we have with us Marc Boorshtein from Tremolo Security, and I've asked him here to talk a bit about Tekton. He'll give a good intro into Tekton CD, and if you don't know Marc, you've really been missing the boat. Marc's been a long-time OpenShift Commons member, been on stage, and done lots of briefings on all kinds of topics, so we're really grateful for him sharing his knowledge around Tekton today, and I'm grateful he's coming on such an auspicious day in the American election cycle, shall we say. So we're grateful that he's here, and that you're all with us today, too. The way we're going to run this is: if you have questions, ask them in the chat wherever you are, whether you're streaming on BlueJeans, Facebook, Twitch, or whatever, and we'll bring them to Marc in the chat here and get him to answer them for you. With that, I'm going to let Marc introduce himself and the topic, and we'll just roll on here. At the end, when he's done with his demos and everything, we'll have a live Q&A and conversation as well. So take it away, Marc, and again, thank you very much for everything you do.
B: Thank you, Diane, for giving me a chance to come on and talk. This has been a lot of fun. What we're going to do is talk about Tekton CD. Real quick: my name is Marc Boorshtein, and I'm the CTO of Tremolo Security. We do open source identity management software; if you're looking specifically at the OpenShift and Kubernetes world, we do OpenID Connect and authentication. We also do automation. I'm also the co-author of a book that'll be released,
B: I think, Friday, November 6: Kubernetes and Docker - An Enterprise Guide. The research I did for the chapter around building a platform used Tekton CD as the pipeline system, so we're actually going to be walking through some of the results of that research.
B: So I hope we're going to have a lot of fun. Let's start with a little context on how Tekton CD is different from Jenkins in the OpenShift world. Jenkins has been very well integrated since, I think, the beginning, since OpenShift 3. Red Hat's always had a really great way of making Jenkins a cloud-native system, and if you're used to that, there are differences between the way the Jenkins system worked and the way Tekton CD works. First off, in Tekton CD, every task is a container.
B: Every discrete unit is a container. This brings a lot of power, but it also means you have a little bit more work to do when you define your pipelines: because every task is a container, and containers are ephemeral, you need to keep that in mind as you build out your pipeline. The big upside to that, though, is that tasks can be built to be reusable. In fact, Tekton has a website dedicated to reusable tasks, and anything you can do in a pod in Kubernetes,
B: you can do in a task, which makes it extremely flexible. In Jenkins, your entire pipeline runs in one container. That makes for a simpler implementation, but it means the container that runs the pipeline needs to contain everything the pipeline needs, so you end up having a lot of one-off containers just to run pipelines.
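As a rough sketch of the "every task is a container" idea (the names here are illustrative, not from the demo), a minimal Tekton Task using the v1beta1 API that was current around this time might look like:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: echo-hello
spec:
  steps:
    # Each step in the Task runs as its own container inside the Task's pod
    - name: say-hello
      image: registry.access.redhat.com/ubi8/ubi-minimal
      script: |
        #!/usr/bin/env bash
        echo "hello from a Tekton step"
```

Because each step is just a container spec, anything you can run in a pod, you can run as a step.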
B: Tekton is an operator that watches all instances of its objects across namespaces, whereas with Jenkins you tended to have your own Jenkins for each application. Multi-tenant Jenkins is pretty hard to do correctly, and since different applications need different plugins, it was just a lot easier to manage your own Jenkins instance. With Tekton CD, you just have one set of containers that actually does the work of orchestration.
B: And of course, you don't have to worry about running the pods; that's what Kubernetes is there for. Your pipeline objects in Tekton are a collection of tasks, whereas if you're using the Jenkins pipeline object from OpenShift, that's basically a single object with a big script in it. So again, more to manage with Tekton, but a lot more flexibility. And the last difference I'll touch on here is webhook triggers: with Jenkins, Jenkins is a web app,
B: so a webhook is just another URL in the web app, whereas with Tekton your triggers actually have to run as their own persistent container waiting for those webhooks. So there's a lot of flexibility in Tekton that really outweighs the legacy Jenkins implementation, but there's just a lot to manage as well. We're going to touch on all those pieces.
B: So let's talk about building a pipeline. We're going to build a really simple pipeline for an application running inside of a GitOps mentality. The idea is that we're not going to touch Kubernetes or OpenShift directly at all: we're not going to use the oc command, except maybe for viewing some logs, and we're not going to create manifests manually. We're going to do everything inside Git.
B: That leads to some fun challenges that we'll go through, but first, let's start with our pipeline. Your smallest unit of work is going to be a task. Now, a task doesn't have to be a single step; it could be a series of steps, but you want to think of it as a single unit of work. What is the one thing it's going to do, with some discrete inputs and some discrete outputs? Our example is a very simple Python microservice, a hello world; the service itself doesn't really matter
B: for this conversation. We're going to take that Python code and generate a unique image tag for it. I'm a big fan of putting date and time stamps into my tags, so I have a better way to track an image at a glance without having to go look up
B: some metadata. We're going to build a container, push it into our registry, and then, finally, we're going to run a task that checks out a Git repo, specifically the one with the manifests that run our service, which Argo CD is watching, patch it with the new tag, and push it back in. That's three distinct units of work, so we're going to have three distinct tasks. Something to keep in mind as you're designing your tasks is that, because these are ephemeral containers, they don't have any permanent storage.
B: So if we're going to generate our tag up here in task 1, we need to store it someplace that's available in task 2 and task 3. We do that by creating a workspace inside of our pipeline. That's what binds these containers together and provides a kind of scratch pad where we can put files and environment variables, things of that nature, that can be securely transferred between containers.
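To sketch how a workspace binds tasks together (the task and workspace names here are hypothetical, not the demo's), a Pipeline can declare one workspace and hand it to each task:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-hello
spec:
  workspaces:
    - name: scratch                  # shared scratch pad for all tasks
  tasks:
    - name: generate-tag
      taskRef:
        name: generate-image-tag     # hypothetical task that writes the tag
      workspaces:
        - name: output               # the task's workspace name...
          workspace: scratch         # ...bound to the pipeline's workspace
    - name: build-and-push
      runAfter: [generate-tag]       # ordering: read the tag after it exists
      taskRef:
        name: build-image
      workspaces:
        - name: output
          workspace: scratch
```

Whatever task 1 writes under its `output` workspace path is visible to task 2, even though each task runs in its own ephemeral pod.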
B: Now that those tasks have been built, we have an input and an output. The idea behind having these different types of objects is, again, that the tasks can be reusable. When we take a look at the tasks, and we will look at them real quick, one of the main things to keep in mind is that you want a task to be able to live on its own.
B: By having additional objects for input and output, you actually make it easier to reuse the tasks, and we bind all these things together with a pipeline. The pipeline references these objects: if you were to draw it as a graph, you'd have your pipeline with lines out to your inputs and your tasks, pulling everything together. The pipeline is also where you define your workspace, to be able to transfer information between the different tasks.
B: Once you have a pipeline, you want to run it, and that's a different object inside of Kubernetes, because everything is an object, right? All APIs are built around objects in K8s and OpenShift. So to run the pipeline, you create an object called a PipelineRun. But that's not really great; we don't want to do that manually every time we want to run a build.
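A PipelineRun created by hand (names again illustrative) might look like the following; `generateName` is used because every run is a fresh object, and a `volumeClaimTemplate` gives each run its own scratch-pad volume:

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: build-hello-run-   # each run gets a unique generated name
spec:
  pipelineRef:
    name: build-hello              # the Pipeline to execute
  workspaces:
    - name: scratch
      volumeClaimTemplate:         # fresh PVC per run backing the workspace
        spec:
          accessModes: [ReadWriteOnce]
          resources:
            requests:
              storage: 1Gi
```

This is exactly the object the triggers machinery described next stamps out for you automatically.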
B: That's where you use the Triggers project, and specifically an EventListener. An EventListener is a pod that runs, basically, a web server listening for webhook pushes. Whenever your GitLab or GitHub instance is ready to push your code, it's going to call a webhook, and that webhook is served by this EventListener. It's a persistent pod.
B: You need to tell the EventListener to generate the PipelineRun by using something called a TriggerBinding. There are a lot of objects here; getting this all working consistently brings a lot of power, but there is a lot to keep track of. So you create your EventListener, which listens for webhook pushes, and that creates your PipelineRun via a TriggerBinding.
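A minimal sketch of that wiring, using the Triggers v1alpha1 API current at the time (the binding, listener, and template names are illustrative, and the body fields assume a GitLab-style push payload):

```yaml
apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerBinding
metadata:
  name: gitlab-push-binding
spec:
  params:
    # Pull fields out of the webhook's JSON body into named params
    - name: git-url
      value: $(body.project.git_ssh_url)
    - name: git-revision
      value: $(body.checkout_sha)
---
apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
  name: gitlab-listener            # deployed as a persistent pod + service
spec:
  serviceAccountName: pipeline     # needs RBAC to create PipelineRuns
  triggers:
    - name: on-push
      bindings:
        - ref: gitlab-push-binding
      template:
        ref: build-hello-template  # TriggerTemplate that stamps out a PipelineRun
```

The TriggerTemplate (not shown) holds the PipelineRun skeleton and substitutes in the params the binding extracted.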
B: You're now able to automate the process of kicking off your builds, but you need to be able to reach it, right? It's a web server running inside of your cluster, so it needs some kind of Ingress or Route to expose it. So the next thing you need to do is create a Route object to expose your web application, your EventListener, so webhooks can reach it.
B: Then the next thing we need to do is integrate it with our code infrastructure. We have GitLab in this particular demo. What we want is for our developer to be able to push code, and that code gets pushed into GitLab. Maybe you're doing a merge, however you're handling it; we're going to be doing a merge in this particular workflow, and you want that to kick things off. Well, that means you have to tell GitLab where that Route is.
B: Now we have to start thinking about security, because anyone could kick off this pipeline, and that could actually cause some unexpected behavior if it's not done when expected, right? So we create a secret, and I created these little dots here to show where the secrets line up. We're going to create a secret for the EventListener; it's just big, random, nonsense data. We're going to put that in the EventListener, and we're going to put it in the URL that we configured in GitLab.
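One way that shared secret can be wired up (a sketch with hypothetical names; the interceptor syntax is from Triggers v1alpha1): the random token lives in a Kubernetes Secret, the same value is pasted into GitLab's webhook configuration, and the EventListener's GitLab interceptor rejects any request that doesn't carry it.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: gitlab-webhook-secret
type: Opaque
stringData:
  secretToken: "long-random-nonsense-value"   # same value configured in GitLab's webhook
---
# Referenced from the EventListener trigger, e.g.:
#   triggers:
#     - name: on-push
#       interceptors:
#         - gitlab:
#             secretRef:
#               secretName: gitlab-webhook-secret
#               secretKey: secretToken
```

With the interceptor in place, a random caller who finds the Route can't kick off builds.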
B: Let's talk a little bit about the GitOps model. In the non-GitOps model, when we're done running our pipeline, what we might do is run a patch command, a PATCH API request into OpenShift, that says: go update this Deployment or DeploymentConfig with the new image tag. Or we might run an API call against an ImageStream to have it import.
B: Well, we don't want to just let anybody do that. I mean, you could, in theory, go into GitLab and start pointing it at some random container, and that could cause all sorts of havoc. So we're going to create SSH keys inside of GitLab for our operations manifest repo, so that the repo can only be updated by our GitOps pipeline.
B: That's another secret that has to be generated and managed. And then, with the output of this pipeline, we're going to have an image, right? We're going to patch our repo, but we're also going to have a new Docker container.
B: Well, that image is going into a registry, and you're probably going to want some security on that registry; you don't want to just let anybody push an image into it. Now, in this particular pipeline, that's actually exactly what we're doing, because we didn't want to get too deep in the weeds, but in real life that's yet another secret you would want to manage. So you've got a handful of secrets here.
B: To do this at scale, you really need to understand how you automate the process. So, we've been talking about GitOps some, and we're not going to go too deep into the process of GitOps. The main idea behind GitOps is that you're using a Git repository as your source of truth. You're not running kubectl commands to update objects inside of the API; you are checking code into Git and then relying on the workflows in Git to manage that process.
B: In order to do that, we've got three repos. We've got an application code repo, and that's our main input for Tekton CD; that's where our microservice, our application, is going to go. We're going to have build code: this is the code specifically for Tekton, and these are your Tekton manifests. Now, what's important about this build code is that we're going to run it in its own namespace, outside of our application. The reason we do that is all those secrets.
B: I merge that into dev for the manifests, which then deploy into a dev environment. When we're happy with that, we run a merge from dev into production, and then our GitOps controller picks it up. Now, we're going to use Argo CD; we're not going to talk too much about Argo CD, because Argo deserves its own show.
B: So with all that, again, if you're feeling overwhelmed, that's probably a good thing, to be honest. This is a graphic I mentioned before; I co-authored a book, and this graphic is right out of it. It's the object map of all the different objects we created in order to make this all work together between GitLab, Argo CD, and OpenShift, and you can see there are a lot of them. So what's really important, again, I can't stress this enough: you get used to it
B: after the first couple of times, and then, to do it at scale, you've got to use automation. That's the only way you're going to be able to maintain your sanity through all of this. So with that said, let's go ahead and see how this all comes together.
B: So I'm going to go ahead and show all of my Firefox displays here.
Then
gonna
go
ahead
and
deploy
that
application
into
the
dev
environment,
watch
the
build
and
then
get
running
and
prod.
So,
let's
start
off
by
creating
a
new
project,
so
I'm
gonna
log
into
open
unison,
open
unison
is
my
company's
open
source
project
and
that's
going
to
be
really
the
engine
of
the
automation
and
the
first
thing
I'm
going
to
do
is
I'm
going
to
log
in
and
I'm
going
to
request
that
new
application
be
created.
B: So I'm going to call this python-test-15, and "demo". This opens up a request. The point is to avoid the email shuffle of "hey, can you do this for me? Can you create this?" or, you know, even a ServiceNow ticket; you might integrate with something like that for some automation. So I'm going to go ahead and log in as an approver.
B: Let's go ahead and confirm that approval. Now, before I move out of here, let's go over to OpenShift, and we can see that a lot is going on: we're provisioning things into GitLab, and we're provisioning things into OpenShift. This is actually going to take a minute to create all those objects, so we'll give it a second. While those objects are creating, I'm going to go ahead and show off our tasks here.
B: So, our tasks: we talked about them before, so let's get a little bit into the nitty-gritty of what they look like. The first one is actually going to be this generate-image task. Think of tasks as one-to-one with pods; they're sort of like a Deployment, in that they're a pod plus other stuff.
B: Our task in this instance is going to generate a tag based on the current timestamp, and we're going to save it someplace. So you can see here (I'm not entirely sure why the editor thinks this is all wrong) that our input is our Git resource.
B: We just have an image here; this git-commit image is just a simple image that I created that has all the tools I needed to do what I wanted. Again, it's a pod, right? You can do anything in Tekton that you can do inside of a pod.
B: What I'm not defining here, though, which you might be used to in a Deployment or a DeploymentConfig, is the mount points; those all get defined inside of the pipeline. And then I just run a script, really nothing crazy here; it's just a bash script that generates an image tag and stores it in this directory. If we look at the pipeline, we'll see there's a mount point for Tekton results that actually lets us save information into that workspace.
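A minimal sketch of that first task (the exact file isn't shown in the transcript, so the image and paths here are hypothetical): a timestamp-based tag is generated in a bash step and written into the shared workspace for later tasks to read.

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: generate-image-tag
spec:
  workspaces:
    - name: output              # the pipeline's shared scratch-pad workspace
  steps:
    - name: gen-tag
      image: registry.access.redhat.com/ubi8/ubi-minimal
      script: |
        #!/usr/bin/env bash
        # Timestamp-based tag, so the image is traceable at a glance
        TAG="$(date +%Y%m%d%H%M%S)"
        echo -n "$TAG" > "$(workspaces.output.path)/image-tag"
```

The build task can then read `image-tag` from the same workspace to know what to name the image.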
B: Podman would work just as well too. That's going to generate a Docker image without actually having to have a Docker daemon installed. So we pass the information in: here's our image URL, and there's our image URL that has our tag in it. It's going to go ahead and generate an image based on our Dockerfile, push it into our registry, and then, finally, we're going to patch our deployment. This is where the GitOps side comes in.
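The transcript doesn't name the build tool on this slide (Podman is only mentioned as an alternative), but Kaniko is one common way to do a daemonless image build in a Tekton step; a hedged sketch with hypothetical registry and workspace names:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: build-image
spec:
  params:
    - name: image-url            # full image URL, including the generated tag
  workspaces:
    - name: source               # the checked-out application code
  steps:
    - name: build-and-push
      image: gcr.io/kaniko-project/executor:v1.6.0
      args:
        - --dockerfile=$(workspaces.source.path)/Dockerfile
        - --context=$(workspaces.source.path)
        - --destination=$(params.image-url)   # build and push in one shot
```

Kaniko builds the image entirely in user space inside the step's container, which is why no Docker daemon is needed on the node.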
B: So again, just like our first task, what we're doing is running a pod. There is a push secret being mounted, okay, so the volume mount is there, just not the workspace. That push secret is an SSH key that we can use to talk to GitLab, pull source, and update it. The work itself is actually pretty straightforward.
B: We're getting our Git host and copying in our SSH keys. We need to copy our SSH keys to some place that's writable; otherwise, SSH is going to get really unhappy.
B: We need to make sure we're able to talk to Git, so we do an ssh-keyscan, and then, finally, we run our git checkout.
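The shape of that step is roughly as follows; a sketch only, since the actual file isn't shown (the image name, secret mount path, hostname, repo, and manifest path are all hypothetical):

```yaml
    - name: patch-manifests
      image: example.com/git-commit:latest     # hypothetical image with git + openssh
      script: |
        #!/usr/bin/env bash
        set -e
        # SSH needs its key and known_hosts somewhere writable
        mkdir -p ~/.ssh
        cp /pushsecret/ssh-privatekey ~/.ssh/id_rsa
        chmod 0600 ~/.ssh/id_rsa
        # Trust the Git host so the clone doesn't prompt interactively
        ssh-keyscan gitlab.example.com >> ~/.ssh/known_hosts
        git clone git@gitlab.example.com:demo/hello-operations.git /tmp/ops
        cd /tmp/ops
        # Swap the image tag in the deployment manifest, then push it back
        sed -i "s|image: .*$|image: $(params.image-url)|" deployments/hello.yaml
        git commit -am "new image $(params.image-url)"
        git push origin main
```

That final push is what the GitOps controller reacts to, which is the next step Marc describes.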
B: We patch, commit, and push. At that point, Argo CD is going to pick up the work and move forward. Keeping an eye on time here; yeah, we're good. So let's go ahead and just check; we should be done. Great. The first thing I'm going to do is log out here and log back in as our original user.
B: I've got a build project; I've got a dev project, that's our dev operations project; and I have our production project. Each of these is linked to GitLab repos. So my Argo CD build project over here is linked down here to my python-test-15 build project, so that's looking for updates to that code base.
B: So we push it into GitLab and give it a quick refresh here. Okay, great, we have our code, right? Nothing crazy there. The next thing we're going to want to do is deploy our operations code, so we're going to go over to our test-15 dev operations. This is our development project for our manifests; this is where our Deployment is going to go, our Service is going to go, our Ingress is going to go, stuff like that.
B: All right, so that is on its way, and if we give this a real quick refresh to see if it got picked up: it did. We can see that Argo CD has picked up the manifests that have been pushed into the environment, and we're pointing to this broken image that doesn't exist. So what we're going to see here in a moment is, oh, it's all right here: we've got this little broken heart. That's because the pod is in an image pull crash loop, because the image doesn't exist.
B: Now, before I go ahead and push this out, I do need to make one change. I think we have a few extra things here.
B: So we're going to push this to python-test-15-dev... python-test-15-operations, beautiful. So what we're going to do now is go ahead and push this in.
B: There we go; it's synced up, and we've created a bunch of objects, and now our pipeline is actually ready to go. We have our TriggerBindings, our templates, our tasks; everything is set up, and it's ready to go. We can now move our code into the system, into the pipeline. So I'm going to go ahead and fire up
B: our Tekton dashboard. Now, this dashboard is still in really early stages; it's not at all ready for prime time, especially given the fact that it does not yet support any authentication or anything. But you can see here that we have our hello pipeline, and we can take a look at it and create PipelineRuns, but we're not going to do that quite yet.
B: So the next step is to go ahead and merge our code from my personal workspace, where I've been doing my work for the application, into the main repo, which will then kick off the pipeline.
And
we
can
see,
the
pipeline
runs,
kicked
off
and
it's
creating
the
image,
so
it
already
ran
the
the
first
step,
which
was
the
create
image
tag
so
that
step
ran.
Now
it's
going
to
go
ahead
and
start
the
second
one.
B: which is the build process. So that's running, and we can see the logs; logs are fun. This one actually takes a minute. Diane, do we have any questions in the queue that I can maybe start hitting on?
A: Let me just see. Oh, Banash is just asking whether all of your steps and the walkthrough are available for them to run through themselves somewhere. Yes?
B: Yeah. So what we ran was: we built our container; the container went ahead and, you know, built; building a container, right, we all know how to build containers; we pushed it out; and then our last step was updating our Git repo, so that went ahead and patched it. So I come over here to my applications, and here's production.
B: And the commit references: this commit is actually the commit from the application repo. That's how you can tie a change in your development repo to a change in your application; you're actually tying it all back. And you can see here that the changes were to update that tag, and for some reason that flag doesn't change, but now that that's in there, let's see if Argo has picked it up. It has, so we can see that Argo picked
B: it up and updated the Deployment inside of Kubernetes, which created a new pod that got rolled out. We now have this nice little green heart in dev, and our broken heart went away, so that fixed our loneliness. And so now we have our application running in development. This would be a great time to run whatever automated test cases, whatever your QA process is.
B: Now, the next step is that we want to go to production, and while this technically doesn't have anything to do with Tekton, I just think it's really cool, so I'm going to show it off anyway. Let's come to our production. We can see that nothing's going on here, because nothing is in our production namespace.
B: So we now have code in our production namespace, and there's Argo CD picking it up and pushing it into prod. So we now have an automated pipeline with Tekton and GitOps, and all those different objects were created automatically. In this particular instance, we go through this in the book: how we separate the workflow and how we design the workflow.
B: We talk about it from a multi-tenant standpoint, making sure that you're breaking it up across all three applications: in GitLab, in Argo, and in Tekton and Kubernetes. But the other thing that's really great about this approach is the audit trail. We're actually keeping track of everything. I'm going to go ahead and log in here again and show you the audit trail of all those objects getting created. So you now have the ability to say with confidence not just
B: when things were created, but why they were created. So if I go to our audit reports, change log for period, nope, like that: all those objects we talked about getting created, here they all are. If I get to the right workflow, here we go: here are all the objects being created inside of Kubernetes, inside of GitLab, inside of Argo CD, that make all these things work together.
B: I said at the beginning that we're an identity management company; that's kind of our take, our spin, on automation: making sure that everything comes together. So you can now audit from an application running in prod to where it is in dev, to why it was merged, all the way back down to why everything was created in the first place.
B: So yes, that's the whole demo. We've got some time for questions before we... actually, you know what, let me come back here, because I made a promise, and I want to keep that promise.
B: That was the demo, so connect with us, and here's a link to the book. We'll have a direct link inside of the show notes, I guess, and through November 15th the code "25kubernetes" (I think it only works in the US, though, unfortunately) gets you a 25% discount on the book. The book's got, you know, more than you'll ever want to know about Docker, to be honest, and then things like doing backups of your cluster, authentication, and deep dives into RBAC and pod security policies.
B: Falco is covered in there, for monitoring your containers, and then the demo you just saw is kind of the culmination of the last chapter, where we talk about building pipelines and building a platform.
B: So that's everything. Do we have any more questions?
A: Well, I'm not seeing any questions; I think everybody's going to buy the book is what's going to happen. Can you talk a little bit about using the open source side of Sysdig, Falco, a little bit? You know, how that helps you in your enterprise?
B: So I'll actually be honest: my co-author wrote the Falco chapter. He works at a big bank. It was interesting, the breakdown of the way we wrote the book: I wrote most of the security chapters,
B: and he wrote most of the kind of sysadmin ones, but Falco was the one area where he just had much better experience than I did. He manages Kubernetes clusters at one of the top five banks in the world, and his deep experience was just amazing in that area. So he's been a big proponent of Falco. I know he's had discussions with folks who are like, "Well, you know, we don't let people exec into pods," and it's like, well,
B: okay, that's great, but do you want to have that single point of failure in your system? Or would you rather have something that'll tell you when somebody's exec'd into a pod? And they're like, all right, that's actually a good point. So, as always with security, it's really important to have that defense in depth, where you've got multiple failure points covered, so that when one particular aspect fails,
B: you've got something else to back it up while you're figuring out what the issue is, and Falco's a big part of that.
A: Yeah. I keep having trouble with my video here, so I apologize. I love the Sysdig folks, and they've done some amazing stuff. I'm going to have to get the Falco folks in there; maybe we can get your co-author to join in a conversation around Falco and using that, and demo that in action as well, because there are so many different pieces.
A: It's always amazing to me how much of, I mean, all of our world is pretty much open source now, and you have an enterprise guide, but you know, Falco, and Kubernetes, and all of these things. It's pretty amazing to me, this new kind of world that we're living in. It's just awesome.
A: So we don't have any questions, which I think is amazing, because it was a pretty complicated topic, and I'm really grateful for the tour de force on how to use it and how to deploy all this stuff. It's pretty amazing.
A: So tell me, what's next, then, for you guys? What's your next book? I mean, I know it took a lot of time to write this one, but if you could have another six months and no work, what is the book that you would write next?
B: So I actually really want to expand upon the stuff that we covered here. I don't really feel like we did it justice in the last chapter of the book. You know, we don't really touch on things like security scanning; we don't touch on things like automating builds based on different events.
B: So, as an example: having something that's looking for CVEs that get patched, and then kicking off a build based on that. We don't really cover that. This is kind of the starting point, but there's so much more to get into there.
B: We start to touch on the enterprise side of the business, management around clusters, but I would really love to do a deep dive into discussions around multi-tenancy, at both the cluster layer and at the namespace or project layer. A lot of people are doing really interesting things right now with hierarchical namespaces, and using that to help map an organizational structure into the technology and make the technology work.
B: And pod security policies are going to be gone; by 1.22, I think, is when pod security policies go away. They've been deprecated, so kind of covering how you would replace pod security policies, because there's a hodgepodge of tools out there right now, but I haven't really found one that is a drop-in replacement.
B: So there's a lot of stuff around that that I would want to do as well. It's a huge topic. This book is 650 pages deep, I think, and we're just scratching the surface.
A: Yeah, we do have a couple of questions. One is asking: he keeps hearing that Tekton is still in preview, and is there an ETA for the GA release? And Diane Fedema is asking: why do we need Argo CD in addition to Tekton? What does Argo do for us that Tekton cannot?
B: Those are great questions. The first one I'll actually defer to Red Hat on. There's Tekton, which is its own project, part of the Knative kind of ecosystem, and then there's Red Hat's Tekton operator, OpenShift Pipelines, where Red Hat is migrating off of Jenkins as its official build technology.
B: I don't work for Red Hat, so I'll defer to someone from Red Hat to answer that one. The second question, why do we need Argo if we have Tekton, is a great question, because they do two different things. Tekton is a build technology.
B: Its primary goal in life is to take source code and create artifacts. It's the integration side of CI/CD. A lot of times the waters get really muddied between CI and CD, but honestly, at the end of the day, Tekton is not designed to be a deployment tool. Could you deploy directly using it? You could, and I am absolutely 100% guilty of writing CI pipelines that deploy code, but that's not really what it's designed to do when you think about it from a higher level.
B: The reason why GitOps becomes so powerful, and why using something like Argo for CD in conjunction with Tekton for CI works, is that it allows you to map your enterprise's processes into technology a lot more easily. So imagine we're not talking about Kubernetes; we're talking about just software in general, and you work in a large enterprise, and you have to go through a change board before you can push things into production.
B: We've all gone through this, right? We have to write up a ticket, we put it in front of the change board, somebody has to approve it, and then you run your deployment plan. Well, you could do that with a pipeline, but is the pipeline going to sit there? Are you going to create a pipeline object? How are you going to audit that? Once you're done with your change control process, what's the artifact that you put into your ticket
B: that proves what you've done is completed? Now, if you use the GitOps model, where everything's in Git, Git is now your source of record. You just put the link to the commit into your change control ticket as the proof that everything's done, and you can now backtrack through Git: the merge from dev, the push to dev, which links via commit back to the application commit and merge request. And you know, you can squash everything, so it's all in there.
B: You now have a record all the way back. So you could do it directly with Tekton, but it's square peg, round hole; it's not really what it's designed to do. You could smash a nail in with the butt end of a screwdriver; it's not really the best way to do it, though.
A
Yeah, and we do want to keep our compliance and risk and audit people happy, so being able to document that is, I think, really key. I think some of us who work inside the vendor houses, doing build stuff and creating POCs.
A
We kind of sometimes cheat, actually not sometimes, mostly cheat, and build stuff, and then we forget about the realities of working in big banks and enterprises, with heavy lifting around compliance and audit practices. So yeah, it really does save people's butts, basically, to have Argo in the mix.
A
I think one of the things that I love, Mark, about what you always bring us back to here is what the reality is for the enterprises that are trying to deploy all of this, and it's kind of interesting. So what have you seen? Your co-author is at this big ol' bank; what kind of adoption have you seen of this, the movement away from Jenkins to Tekton CD? Is it like this tsunami?
B
You know, it's gradual. I think a lot of times on the talking circuit we tend to gravitate to the new shiny.
B
It's not because Jenkins is bad or there's something wrong with Jenkins. I've known plenty of shops that are like, no, we're happy with Jenkins, we're just going to keep using it. Mazel tov, right? You don't have to go chase the new shiny if you don't need to. Where Tekton becomes really powerful is, one, it gives you the ability to leverage Kubernetes' built-in multi-tenant capabilities, which is something that Jenkins is not that great at.
B
Multi-tenant Jenkins is really, really hard to do correctly, so you're piggybacking off of everything Kubernetes gives you, between RBAC and pod security policies and secret management and namespaces, and being able to construct an environment where only certain people get access to that. It also makes it really easy to set up a development workspace. I'm not going to be developing these tasks in that build namespace, right? That would be bad. I want to do it inside of my own personal namespace.
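As a rough sketch of that per-developer setup (the namespace and user names here are hypothetical, and this only generates the manifests; you would apply them with kubectl), a namespace-scoped Role can confine a developer to Tekton resources in their own workspace:

```shell
#!/bin/sh
set -e

# Generate a Role/RoleBinding pair that scopes one developer to the
# Tekton resources in a personal namespace. Apply with:
#   kubectl apply -f dev-workspace-rbac.yaml
cat > dev-workspace-rbac.yaml <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tekton-developer
  namespace: marc-dev            # personal workspace, not the build namespace
rules:
  - apiGroups: ["tekton.dev"]
    resources: ["tasks", "taskruns", "pipelines", "pipelineruns"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tekton-developer-binding
  namespace: marc-dev
subjects:
  - kind: User
    name: marc                   # hypothetical user from your identity provider
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: tekton-developer
  apiGroup: rbac.authorization.k8s.io
EOF

echo "wrote $(wc -l < dev-workspace-rbac.yaml) lines of RBAC manifests"
```

This is the piggybacking being described: the isolation comes from stock Kubernetes RBAC and namespaces, not from anything Tekton-specific.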
B
It gives you a lot more flexibility. But that said, it also adds a lot of complexity, and it's really important to have an effective automation strategy; if you do this in a bespoke way, you're leaving yourself open to a lot of pain.
A
Bespoke is great if you're making a craft beer, or if you're, I don't know, tailoring clothes or something like that, but if you're doing enterprise deployments and pipelines, bespoke is probably not the way. And the last question we have here is: would you advise integrating Argo CD with a Jenkins PL? I'm not quite sure what PL stands for, but pipeline, I guess.
B
Yeah, I don't see why not. They're two different technologies, right? Argo is making sure that your manifests in Git are your objects in Kubernetes; Jenkins is a CI tool. So yes, just like with Tekton, you can use it for CD, but it's not advisable. It's great for generating artifacts; that's what Tekton does, or that's what Jenkins does. So whether you want to rip out Jenkins and use Tekton, or you want...
B
You know, rip out Tekton from this presentation and use Jenkins, it's a perfectly valid approach. It's really up to what y'all are best at. Jenkins is not going anywhere anytime soon, so it is still great technology.
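One hedged sketch of that division of labor (the file names, image tag, and repo layout are all hypothetical): whichever CI tool you pick, Jenkins or Tekton, its final step can simply commit a manifest change to the GitOps repo, and Argo CD, watching that repo, does the actual rollout.

```shell
#!/bin/sh
set -e

# Stand-in for the GitOps manifests repo that Argo CD watches.
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email "ci@example.com"
git config user.name "CI"

echo "image: registry.example.com/myapp:1.0.0" > deployment.yaml
git add deployment.yaml
git commit -q -m "Initial manifest"

# Final CI step: bump the image tag the build just produced, and commit.
# Argo CD, not the CI tool, is what actually deploys this change.
NEW_TAG="1.0.1"
sed -i.bak "s|myapp:.*|myapp:${NEW_TAG}|" deployment.yaml && rm deployment.yaml.bak
git add deployment.yaml
git commit -q -m "ci: promote myapp ${NEW_TAG}"

cat deployment.yaml
```

The CI side never talks to the cluster at all; its only deployment-facing output is a Git commit, which keeps the two tools cleanly separated.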
A
Listening for what the bright, shiny new objects are, explaining how it all works, and hopefully getting some of you to give us your feedback. Take a look and read this book; definitely take advantage of it and give Mark some feedback. And I'm betting that that final chapter isn't as lightweight as you think it is.
A
I'll look forward to another episode of this, and I'm totally grateful for you taking the time today to share this. There's a lot of interest in this topic; I'm sure we'll have you back, and we look forward to seeing you. Did you get a talk, or are you participating in KubeCon coming up in November?
B
Yeah, so I'm going to be doing a lightning talk at the security day, I think it's called, or the security conference the day before: a lightning talk on why you should be using OpenID Connect and not certificates when authenticating to your cluster. That was a lot of fun to put together.
A
Oh, cool, yeah, I love lightning talks. First of all, because you don't have to take forever and you're off the stage as quickly as possible, which is great, but it also really makes people be very succinct, so it's wonderful. And as well, we'll be doing a lot of talks. I don't think we have a Tekton talk, but we have a lot of great talks coming at the OpenShift Commons Gathering, which is also on day zero, so please do join us there.
A
We front-loaded that with a lot of OpenShift 4.6 release content. We've got Clayton and Derek Carr, and Michael Barrett reprising their "two ferns" conversation from the San Diego North American KubeCon, when we were on the boat. So we've gone from being in person, to being on a boat, to being virtual.
A
I can't imagine what 2021 is going to bring. So, all right.
A
With that, I'm going to thank you and wish everybody well. Be safe, be healthy. Tomorrow is the LATAM OpenShift Commons en vivo; we'll talk to you tomorrow in Spanish, at 2 p.m. Pacific time. I think we're talking about MTA. What is that, a migrating tool?
A
Migration tools, applications? I don't even know, and I'm going to be listening to it in Spanish, so it's going to be really great. So anyways, take care, everybody, and we'll talk to you all soon. Thanks.