From YouTube: KubeCon Office Hours (Ep 4): Implementing GitOps
Description
Join Christian Hernandez, GitOps Extraordinaire, for a journey through how to achieve GitOps in any number of ways. We'll be joined by special guests Siamak Sadeghianfar and Aubrey Muhlach
A
Good morning, good afternoon, good evening, wherever you're hailing from. Welcome to a very special episode of OpenShift TV. This is the KubeCon Office Hours: Implementing GitOps, run by the wonderful Josh Berkus from the Red Hat Open Source Program Office. Thank you, Josh, I'll hand it off to you now.
B
Thank you, Chris. So for this hour, which is KubeCon-adjacent, accompanying KubeCon: a lot of people are looking at implementing GitOps-based workflows, and a lot of tools are being discussed at KubeCon for GitOps-based workflows. But you know, I work a lot with customers and users in the open source realm, and I find that a lot of people aren't quite sure what GitOps means in real life. And so we have here the Red Hat GitOps crew to answer your questions.
B
In addition, this is kind of a joint event with the GitOps Happy Hour. Now, it's still morning where I am, so I'm drinking espresso here rather than maybe an Aviation or a Sidecar; I'll hold that until later on. But in the meantime we will learn all about GitOps. The team will have a short presentation for folks.
B
They have a couple of demos queued up that they can show you, but please go ahead and ask your questions in chat at any time, and I will queue those up and make sure to ask them on air to the crew so that they can answer them. And if we do use your question on air, and you live somewhere that we can ship to, you are entitled to an OpenShift t-shirt, and I will give you information on how to claim that later on.
B
So with that, first, folks, if you can introduce yourselves. Siamak, you want to start?
C
Sure thing; I had muted myself. I'm Siamak Sadeghianfar, product manager at Red Hat, and I run our DevOps and GitOps portfolio on top of OpenShift, working very closely with some of the gentlemen on this call. Excited to talk to you about some of the activities that we're driving, specifically on GitOps.
D
Yeah, so my name is Christian Hernandez, technical marketing manager at Red Hat and GitOps enthusiast, right? So I'm very, very excited about getting the minds together in this special-edition, kind of mashed-up KubeCon Office Hours slash GitOps Happy Hour. So yeah, I'm excited to take some of the conversations we've been having internally, kind of just bring them out, and make sure we answer all of your questions.
E
Hey, hi, I'm Shoubhik Bose. I'm a CI/CD architect at Red Hat. I work with a bunch of tooling to ensure developers are successful on OpenShift, and I'm very excited to talk to you all.
B
Okay, and why don't you take it away with the presentation. In the meantime, again, attendees, please ask your questions in chat.
C
All right: implementing GitOps. So I wanted to talk a little bit about what GitOps is and how we see GitOps in the ecosystem of Kubernetes, before jumping into the more exciting stuff that my colleagues are going to walk you through, with some demos and some interactive conversations hopefully down the line. So, before I get to GitOps and where it comes from: CI/CD is something a lot of you are hopefully familiar with already.
C
A lot of the teams that I talk to have already established a CI process in their organization, especially if they are building applications; they are from the app side of the house, application teams. CI generally refers to the series of steps that an application goes through.
C
Changes go through those steps from the code until you release an application, and continuous delivery is the more advanced form of that, which expands it all the way through deployment to various environments. There is a lot of ambiguity in how people refer to CI and CD when they talk about these concepts. CD really applies to the entire process, right? So sometimes when people talk about CD they refer to the right end of the spectrum, just the deployment, and sometimes they refer to it as the entirety of the process.
C
So for the sake of this call today, what I mean by CI is that left part of the spectrum, which focuses on releasing an application, and CD focuses on the right-hand side, just the deployment, so that we have the same terminology for the rest of the session. The reason I make this separation is that when it comes to technology and tools that help teams set up these kinds of processes, you generally need different types of requirements, different types of capabilities, within these two phases. And although a lot of tools are used across the entire CI/CD spectrum (Jenkins being the most famous one of those), they usually lack features for one side of it.
C
There are tools that are more focused on CD and help you do more advanced CD processes, and tools that are more focused on CI. So how does GitOps come into the picture? GitOps is really one way of implementing CI/CD; it's one form of continuous delivery. It's a very developer-centric approach to continuous delivery, because it builds on a process that has been used for a long time within software development teams for developing software.
C
It's built on the very simple concept of treating Git as the single source of truth for everything related to your infrastructure and applications. So everything, absolutely everything, that describes your application and infrastructure needs to go into this Git repository as the single source of truth, and you drive your operations through Git processes. So if you want to make a change in your environment, or a deployment, or infrastructure, that generally is represented as a commit in one of those Git repositories.
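As a concrete sketch of this idea: in Argo CD terms (the tool that comes up later in the session), "sync everything in that repo to the cluster" is typically expressed as an Application manifest like the one below. The repo URL, paths, and names here are placeholders, not from the session:

```yaml
# Hypothetical Argo CD Application: the Git repo is the single source of truth,
# and this resource tells the controller to keep the cluster matched to it.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: taxi-dev
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/org/gitops-config.git  # placeholder repo
    targetRevision: main
    path: environments/dev/apps/taxi    # illustrative layout
  destination:
    server: https://kubernetes.default.svc
    namespace: taxi-dev
```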
C
It usually comes as a pull request that gets reviewed by the team and other members, and gets discussed before it gets to the repo. And once it's in the repo, there are processes in place that make sure that change is reflected in your application or infrastructure. Like I said, this is nothing new, really. This is how we develop: we have our source code in Git repositories, usually, and we have feature branches and pull requests, and Red Hat is heavily focused on open source projects.
C
Every single open source project that I'm aware of is following this exact same process. The idea of GitOps is expanding this to operations: driving operations using the exact same familiar concepts. And why would we do that? Why would we implement continuous delivery through GitOps? Because there are a couple of benefits that are very hard to achieve when you are relying on other ways of implementing continuous delivery. The first one is really visibility and audit.
You often have an array of different types of tools that you rely on, for security, for automation, for deployment, and other things, and it's very difficult to have visibility across the entire thing. GitOps relies on Git as the single place where you have visibility across everything, so every change that goes into some environment has to be represented as a change in the Git repository. So you rely on the traceability that a Git provider gives you.
It also enhances security, really, because GitOps is not only about the state of Git being reflected to the cluster; it also looks the other way around. You want to make sure that the cluster doesn't deviate from what you have described and declared in your Git repository. So you make sure that if a cluster deviates for whatever reason (because somebody has manually gone and changed something on the cluster, or a rogue change happens, or maybe a system is compromised and the cluster changes), you catch it.
C
The deployment changes, the image that is deployed changes, and then you would immediately get notified and know that through your CD process, because you know there is a drift from the Git repository, and you can react to it. So there are a lot of benefits in relying on proven processes, and implementing continuous delivery through GitOps enables the value that Git-based processes provide to be applied to your operations side as well.
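The "detect the drift and react to it" behavior described here maps, in Argo CD terms, to a sync policy like the following. These field names are Argo CD's; whether you enable auto-correction or only get notified is the configuration choice Siamak mentions:

```yaml
# Sketch: automated sync with drift correction, inside an Argo CD Application spec.
syncPolicy:
  automated:
    prune: true     # delete cluster resources that were removed from the Git repo
    selfHeal: true  # revert manual changes made directly on the cluster
```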
C
So what would that look like if you're implementing CD through GitOps? The CI part is what you would expect: you have a source repository and you have a CI process in place. I'm oversimplifying here, obviously; there are a lot of stages in between until you have released your application. Once you've done that, there is another repository (I'm making an assumption here).
C
This could be the exact same repository. It contains all the configuration of that application and the infrastructure that the application runs on, and you have a CD process in place that syncs everything in that repo to the cluster, making sure that the cluster comes to the state that you have desired in your Git repository. So far, this doesn't look that different from how people have been doing CI/CD, perhaps even using Jenkins, right?
C
This is quite similar from 30,000 feet up, but if you look deeper, what GitOps adds is really this GitOps loop: you don't only deploy or sync the state of Git to the cluster and forget about it; rather, you constantly monitor the state of the cluster and identify drifts. So if something changes on the cluster, or changes in the Git repo, both ways, you would know that the system is out of sync, detect the drift, and be able to take action on it and automatically correct it.
C
You can automatically correct it if you want to be always in sync, or perhaps notify an admin to look into it, and really make sure that you have full control over the state of your clusters. This is also extremely valuable for multi-cluster environments, because in the CD process it doesn't really matter if you have one cluster as a target, or 10, or 50 of them; you have the same configuration pushed to these environments.
C
I realize that I've talked a lot about application delivery, but I've also talked to a lot of teams that are interested in employing GitOps practices for only infrastructure configuration, which focuses on the lower part of this diagram: they only have configs that relate to the Kubernetes or OpenShift cluster itself, how you configure the cluster, and they have a CD process that makes sure the clusters are always in sync.
C
On OpenShift, there are three distinct offerings that really map to this GitOps process, and they enable different parts of a GitOps-based application delivery. There's OpenShift Builds, which automates building images on the platform, so you don't have to have local tools constantly installed, and you build your images in the exact same environment that the application runs in, similar to the production environment.
C
There's OpenShift Pipelines, based on Tekton pipelines, which enables CI in a Kubernetes-native way; it really builds on top of Kubernetes concepts for implementing CI. And there is OpenShift GitOps, based on Argo CD, which enables that full GitOps loop, making sure that you can sync Git repos directly to the cluster and always stay in sync within that model.
C
How do these offerings map to the model that I mentioned? OpenShift Pipelines and OpenShift Builds focus really on the CI, and OpenShift GitOps on the lower part, enabling you with the GitOps process, with the syncing to the cluster. All three together give you a base, a foundation, for implementing the end-to-end GitOps process.
C
So on the CI side you have a very extensible way to build pipelines, and on the CD part the focus is really on a state-of-the-art sync process. It isn't a workflow-orchestration kind of tool, but it does a great job of making sure that your clusters are all in sync with your Git repos at all times, identifying drift and correcting it for you if you configure it as such. And with that, I'm going to ask my colleague to talk to us a little bit about Advanced Cluster Management for Kubernetes at Red Hat.
F
Yeah, thank you so much, appreciate that. So yeah, to continue on with the story here: obviously, you know, having this pipeline building through CI/CD, having OpenShift, right, we also add another layer to this, which is Advanced Cluster Management for Kubernetes. And this tool really helps you not only manage the lifecycle of all of your clusters; it does much more than that.
F
We can integrate from the application deployment engine directly into Git, to make sure that all of your application deployments are driven by and following those same best practices. So you can just directly integrate: pull your code directly, make changes, update them, and continuously keep that process going within your applications.
F
Of course, we can also do policy-driven governance, risk, and compliance on top of all those containers, and multi-cluster observability with Grafana. So we can actually graph out all of your clusters, to bring visibility into where there might be some issues with your application, if you wanted to look at and troubleshoot problems. So Advanced Cluster Management for Kubernetes really is a great tool that helps bring it all together, by extracting some of that management directly from the OpenShift platform console into this product.
F
It really helps you, like I said, not only visualize it, but it also allows you to build your application. Next slide. And so, you know, I just wanted to quickly graph out and show you how this process works. So obviously we talked a little bit about having multi-clusters, and deploying those multi-clusters across multiple different environments. So the idea here is to have, you know, Advanced Cluster Management for Kubernetes.
F
The subscriptions can point directly to Git, or they can be to Helm charts, or anything in between, right? But taking Git as an example: we subscribe to that specific channel, which contains my code, and then we can take that and propagate it across all of the clusters that are being managed from an ACM perspective, making it very easy, very simple, to maintain.
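As a rough sketch of the channel/subscription model being described here: in ACM, the application lifecycle APIs live under `apps.open-cluster-management.io`, and a pairing like the one below points a subscription at a Git channel and fans it out to managed clusters. The names, namespace, repo URL, and placement rule are all illustrative, not from the session:

```yaml
# Hypothetical ACM Channel pointing at a Git repo, plus a Subscription
# that propagates its contents to a set of managed clusters.
apiVersion: apps.open-cluster-management.io/v1
kind: Channel
metadata:
  name: app-channel
  namespace: app-ns
spec:
  type: Git
  pathname: https://example.com/org/app-repo.git  # placeholder repo
---
apiVersion: apps.open-cluster-management.io/v1
kind: Subscription
metadata:
  name: app-subscription
  namespace: app-ns
spec:
  channel: app-ns/app-channel
  placement:
    placementRef:
      kind: PlacementRule
      name: dev-clusters   # selects which managed clusters receive the app
```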
F
It also makes it super simple to make changes. So if you wanted to go and push a new change, perhaps to a dev cluster to test it out before you go into production, the process is very easy. You can just go back, edit your application, deploy it, or even deploy a new cluster to test out that application in dev or QA.
F
So everything is managed directly from Advanced Cluster Management for Kubernetes, in a subscription model, subscribing to that channel from within Git. Next slide. The same thing goes for local subscription flows: if you wanted to do that, again, it's all driven in the same exact fashion, so Advanced Cluster Management for Kubernetes can run in a completely isolated environment, and you can actually push from a local Git repo.
D
Cool. Do we want to break for questions, or... yeah?
B
We actually have our first question. Lily wants to know about GitHub Actions: how does GitHub Actions integrate with all of the tools that you have here, if at all?
D
Yeah, I haven't worked with GitHub Actions directly. From what I understand, it's a CI/CD product, like a CI product, kind of like Jenkins. I don't know, Siamak or Bose, if you guys have worked with it?
C
Sure, I can talk about it a little bit. GitHub is actually a very good partner of ours, and keep an eye on GitHub Universe, which happens, I think, in two or three weeks, so there's going to be some more news about this as well. GitHub has CI functionality added to the long array of developer-workflow capabilities they have around the repository, so workflows and CI are one part of that.
C
You have this YAML syntax to build CI functions and execute them as part of the events that are generated in that Git repo, and GitHub Actions are similar to plugins: units that you can put in your workflow to compose different types of things. So there are GitHub Actions for, let's say, deploying an application on Kubernetes, or scanning your image, or running a Maven build, perhaps, for an application. And we do have customers doing exactly that.
There are already a number of GitHub Actions that you can put in your GitHub workflow, and as that workflow executes, you can interact with an OpenShift cluster: deploy an application, get a deployment status, roll out a new image, and so on. And we are actually working to expand the number of those actions, so they can automate a lot more of those kinds of activities directly, as predefined units.
C
So
perhaps
not
only
just
run
a
command
against
openshift
but
be
able
to
roll
out
a
new
deployment
by
just
providing
the
name
of
the
deployment,
and
you
build
the
image
through
the
github
workflow
and
pass
that
the
name
of
the
image
to
to
the
github
action
that
can
integrate
with
openshift
and
that
redeploys
the
image
on
openshift.
So
we
do.
We
are
collaborating
with
github
to
identify
some
of
these
actions
that
we
that
that
gets
requested.
C
A lot of those requests come from the GitHub community, and we're bringing more of those into the ecosystem of GitHub Actions available for OpenShift.
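A minimal sketch of the kind of workflow being described, using the Red Hat-maintained `oc-login` action. The secret names, deployment name, and image reference are placeholders, and this is an illustration of the pattern, not an example shown in the session:

```yaml
# Hypothetical GitHub Actions workflow: log in to an OpenShift cluster
# and roll out a newly built image on every push to main.
name: deploy
on:
  push:
    branches: [main]
jobs:
  rollout:
    runs-on: ubuntu-latest
    steps:
      - uses: redhat-actions/oc-login@v1
        with:
          openshift_server_url: ${{ secrets.OPENSHIFT_SERVER }}  # placeholder secret
          openshift_token: ${{ secrets.OPENSHIFT_TOKEN }}        # placeholder secret
      - name: Roll out new image
        run: oc set image deployment/taxi taxi=quay.io/example/taxi:${{ github.sha }}
```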
B
Okay, so one more question. Martin asked about CentOS and OKD. I guess Martin is asking whether or not the tools that you've talked about can be run with that stack. If not, he can follow up with a follow-up question.
C
Yeah, they can. That's really the DNA of Red Hat, right? I wouldn't talk about anything that does not have an upstream, available as open source, and not only on OKD: the pieces that I'm talking about are really available for Kubernetes, so you can run them on minikube or any Kubernetes flavor, including OKD. OpenShift Pipelines is really supported Tekton pipelines, so those pipelines can run on OKD or any Kubernetes.
B
All flavors of Kubernetes, in essence, including OKD. Cool. mparm wants to know if anybody's working on (maybe it already exists) a plugin to handle your secrets together with Argo CD, so that new secrets automatically get checked into a secure secrets repository like Vault or Keycloak.
E
So there are a couple of different solutions that we are currently exploring, and one of the things you will see in my demo shortly is that we have a new CLI for you to bootstrap everything GitOps. In the world of GitOps, you need stuff to land in your Git, and then it gets applied to your cluster and all that. Which means, if there is a secret that you need, it has to be represented in Git, and it has to be, of course, encrypted or referenced.
E
One of the things we've been looking at is Sealed Secrets; the other we've been using is the SOPS solution. So we're kind of trying out both at this point, but the demo you will see in a few minutes is one where we have a CLI that depends on Sealed Secrets. Christian, do you want to add to that?
D
Yeah, well, just to expand on that, and I think Bose did a pretty good job with it, because I'm a fan of Sealed Secrets: there's kind of a gray area when it comes to secrets, right? Because there's something like Vault that manages secrets, but it's not necessarily GitOps, because your secrets aren't stored in Git; the secrets are actually stored somewhere else.
D
So it's technically not GitOps, because it's not stored in Git. I'm personally a fan of Sealed Secrets, only because, one, you're encrypting it, right, and two, usually when you're using GitOps you're using your own internal Git repository, so you're not really storing your secrets on github.com; you're storing them in your own internal Git systems. So there's less risk.
D
So I'm a fan of Sealed Secrets, not to say that the other solutions aren't valid, because Vault is a secret-management system, right? So you kind of have to weigh some of these things: do you want something to manage your secrets, or do you want to manage your secrets yourself, except you encrypt them? So it's...
D
An exploration sort of process, you know.
C
And I do agree with Christian. A lot of the teams I talk to ask this question, and when I ask, "So what are you doing right now?", the answer is exactly what you described: they're just putting their secrets in their internal Git repos. So it usually comes up as a very sensitive area, where people look for something really elegant to solve the problem, but in the meantime all the secrets are in the Git repo in plain sight, really.
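For reference, the Sealed Secrets flow the panel keeps coming back to looks roughly like this: `kubeseal` encrypts against the in-cluster controller's public key, so only the controller can decrypt the result, and the sealed manifest is safe to commit. Resource names and the literal value below are placeholders:

```shell
# Sketch: turn a plain Secret into a SealedSecret that can live in Git.
kubectl create secret generic db-creds \
  --from-literal=password=s3cr3t \
  --dry-run=client -o yaml > db-creds.yaml

# kubeseal encrypts with the controller's public key;
# the plain db-creds.yaml should never be committed.
kubeseal --format yaml < db-creds.yaml > db-creds-sealed.yaml

git add db-creds-sealed.yaml   # only the sealed version goes into the repo
```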
E
Yeah. So, welcome to the demo of the GitOps Application Manager.
E
A few things that I'll touch upon here. First of all, you know: how do you actually execute GitOps, or CI/CD, or even the stuff we were talking about, which is maintaining your secrets in Git so that you can have truly everything in Git. I'll touch upon those topics in this demo, and what I'll primarily focus on is what the promised land looks like with respect to GitOps, and then also how some of the work that we are doing is helping us get there.
E
So, a very basic question: what would you, as a developer, have with you to begin with? What you would have is your source code, of course, because that's the stuff that you built. That's your business application that you now want to have deployed on some Kubernetes environment, or some environment.
E
Now, what do you need for that? You would need deployment manifests, because that's what actually takes your code and ensures it can run on a cluster. And since we talked about maintaining them in Git, you need some kind of PR checks or branch verification, so you'll need pipeline manifests as well. Which means you need your deployment manifests and you need pipelines, and with this, you're good to go. But then the next question is: how do you lay all that out?
E
Because there could be a ton of different ways to do that. There could be amazing ways to do that, and there could be not-so-amazing ways to do that. Which is why we are working on a CLI that enables you to structure all this in an opinionated way, and ensures you don't have to worry about figuring out the best way to lay it out. Rather, you can focus only on: hey, I've got an application, and I know I need CI/CD, and I know I need GitOps.
E
I know I need secrets encrypted. Now, what is the best way to maintain all that and have it applied on the cluster? So what I'm going to do is quickly go over what the CLI looks like. The CLI is called GitOps Application Manager. It effectively generates manifests for your pipelines, it generates manifests for your CD, or GitOps, and it also does the job of getting your secrets encrypted before you can push them to Git.
E
Which means you should not end up in a situation where you push something to Git and realize it wasn't encrypted. So, a quick look at what the commands look like. You have a bunch of different commands, but they're effectively all about bootstrapping your CI and your CD, and creating a new environment in GitOps, and in a while I'll get to that.
E
What's the difference between an environment, an application, and a service? And then you can also directly manage your Git webhook from this CLI. One very clear distinction between why the CLI exists and what you can do without it: the job of the CLI is not to deploy things on the cluster for you. There are a bunch of CLIs for that.
E
Everything should get applied from Git, and when I say everything, I mean everything. So, you need pipelines, sure; that itself should also be maintained in your Git. There's a lot of content here, but the good news is the CLI makes this all boring, which means you don't need to learn Tekton, or anything Kubernetes-specific, to be able to generate your pipeline manifests.
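The bootstrap step being described looks roughly like this with the `kam` CLI. The flag names here are indicative and the URLs are placeholders; since this is a sketch rather than the exact invocation from the demo, check `kam bootstrap --help` for the current set of options:

```shell
# Sketch: bootstrap an opinionated GitOps repo layout (pipeline manifests,
# Argo CD manifests, and encrypted secrets) from an application's source repo.
kam bootstrap \
  --service-repo-url https://example.com/org/taxi.git \
  --gitops-repo-url  https://example.com/org/taxi-gitops.git \
  --image-repo quay.io/example/taxi \
  --output ./taxi-gitops
```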
E
You just need to express an intent of what you need to do. And then, I said that you need to deploy stuff to your cluster in the end, which means you're using Argo CD for this.
E
The CLI also generates that for you, and you can see that there is a folder under config which is called argocd. A few minutes back, I think you asked a question around secrets; thank you for asking that, and I'm so happy to show you that I've got a bunch of secrets here in Git, and they're all encrypted.
E
I didn't have to do much for this. All I had to do, effectively, is have Sealed Secrets installed on OpenShift as an operator, and then the GitOps Application Manager CLI automatically knew it was there, and it went ahead and generated manifests where I have an encrypted secret, so I didn't have to bother about encrypting it myself, or figuring out what that even means.
E
So with that, I'm going to quickly give you a walkthrough of what a typical GitOps repository should have. It should, of course, have your application's manifests, which means: I have my source code, but I do not know how to deploy it on OpenShift or Kubernetes.
E
All of that should be living in your Git as well, so the CLI bootstraps it for you. In this case, you can see it assumes that you would have dev and stage environments at the minimum; you can change these names to whatever you want them to be. Inside each of these directories you have a layout that indicates the stuff you want to deploy, and this layout is an opinion that the CLI gives you.
E
You have an environment; inside that you specify an application, and inside that application you specify which services you want to deploy. One of them is called the taxi service, and there are a couple of Kubernetes manifests inside it. Now, once again: did I hand-craft this myself? No, all I did was use the CLI for that.
E
So, if you remember the whole GitOps flow of doing things: you don't make any changes to the cluster, you only make changes to your Git repo, you ensure that all your governance lives in your repo, and when you merge things in Git, you want to ensure that the stuff gets reflected on your cluster. So that's what I'm going to do here. And just so you know I'm not bluffing, I'm going to have my OpenShift console open.
E
Please let me know if my screen is too big or too small; I can adjust that. Then I'm going to go ahead and modify a manifest that I need to get redeployed on my cluster. So I'm going to go to the deployment manifest and say: hey, there's one replica defined here, and I'm going to make it four.
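The entire change in this demo is a one-field edit to the deployment manifest in the GitOps repo, something like the following (the deployment name is illustrative):

```yaml
# The edit proposed in the pull request: scale the taxi service from 1 to 4.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: taxi
spec:
  replicas: 4   # was: 1
```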
E
And I'm going to propose a change for that in Git, and as soon as that branch got created, or the pull request got created, you can see that I have a pipeline running here already.
E
I did not trigger that manually; as soon as I did that, it triggered a Tekton pipeline. I'll go back to the same old question: did I write these manifests myself? No, the CLI did that for me. It says "succeeded". What it did is it ensured that I didn't make a change that was illegal, which means the change is legal in the Kubernetes world; the change is something that could potentially go in safely. And I'm going to go ahead and merge this, because I see it succeeded.
E
Before I do the merge, I'm going to show you that here's the previous state of the deployment, with one replica, which I showed you and modified to four, and I made a pull request for it. I didn't want to merge to master directly. Why? Because I wanted somebody on my team to tell me: hey, this looks good, let's take the change in and get it applied on your cluster. So: I'm bumping up replicas to handle the holiday traffic.
E
Let me have a nice commit message, and I'm gonna go ahead and commit it, and let's see if things get pulled in while that happens. So very shortly, you should see that I've bumped up the number of replicas from one to four; the number of pods on the left should quickly change. Yeah, as you can see, the moment I merged my PR, which had changes to my deployment environment, the changes got pulled in, and I've got four replicas of my application running to handle the holiday traffic.
E
So what did I just accomplish? As a quick summary: I opened a PR, it ran a CI pipeline which is Kubernetes-native, then I merged it, and finally the stuff changed automatically. Which means I did not have to apply this change on the cluster manually, or handle some ticketing system to keep an eye on it. All I did was go into Git and modify it.
E
So if somebody comes by and says, hey, find me the person who went and modified this, all you would have to do is go to your commit history and see who made this change. I could totally go and revert this commit and say: hey, I'm done with four replicas, I can just go back to one, and that should be good. So now, the problem that we've been solving is: hey, this is all cool, but I've got thousands of applications, thousands of services.
E
You don't come to the cluster as such; you go directly to your Git manifests, tweak your pipeline, and ensure that Argo CD syncs that in for you. The same thing is happening for Argo itself, which means, if you want to modify your Argo configuration itself, which is doing all the heavy lifting of continuously deploying your stuff from Git into your cluster, you would also do that in Git and ensure everything gets synced onto your cluster. And at the same time, you can see:
E
I have got three different copies of the same application running in three different environments, as I've said: a dev environment, a stage environment, and a prod environment. So, with Argo CD, I'm going to ensure that everything that's on the cluster is going to remain synchronized.
E
Tekton Pipelines is going to ensure that I have Kubernetes-native pipelines. kam is going to ensure that all this niceness that you see here is bootstrapped for you and managed in Git, without you having to figure out how to do that.
E
With that, I'll stop; I'll pause and check if you have any questions.
B
Okay, you want to stop sharing there for a minute?
B
Cool. So we've actually had some questions come in, and a lot of them have to do with secrets and encryption, but before those: Martin had a follow-up question asking, what's the actual difference, for these tools, for Argo and OCM and other things in the upstream, between those tools and the enterprise versions that are available from Red Hat?
D
Yeah, I guess I'll just start. Really, the only difference is that you get support from us, since we use upstream code a lot, right? You know, Red Hat has always been the open source company. So really, the difference is that you can call us. But I'll let Shoubhik and Siamak expand on that.
E
Yeah, I think you're spot on, Christian. You know, we have teams which work upstream in the Tekton community; we have teams which work upstream in the Argo CD community. So a hundred percent of the code that we write is on upstream itself. The only bit that we do on top is to ensure that, hey, there is an amazing OpenShift Pipelines operator that ensures Tekton is well managed on your cluster and is lifecycled across versions of OpenShift.
E
We are doing the same thing with Argo CD as well, and of course, on top of that, we provide guidance to our users and customers to say, hey, this is the best way to do that, and most of that guidance is something that we also contribute upstream as well. But that's pretty much the only difference: we want to ensure that across different versions of OpenShift there's a good way to lifecycle these technologies that we endorse or recommend you run on the cluster. Siamak?
C
No, I think you're exactly right. The gist of it is really that our focus is to drive these technologies upstream, and everything we do goes there. That includes all the fixes, bug fixes, patches, and those kinds of things, and our focus is to make sure that they run really well on
C
OpenShift, and provide a great experience for users that want to build applications and deploy and manage them on top of OpenShift. And I think this is a great role that we can play within these communities as well, because a community, if you look at Argo CD or Tekton itself, for example, is very focused on that particular tool.
C
But we can look at the use cases that our customers have, and those usually touch many of these tools together, because customers are out there to fulfill a purpose: they have a goal in place and they're using a combination of these tools to achieve it. So it gives us a way to look horizontally across all these tools and make sure they work well with each other, to achieve that purpose and that goal.
B
I do want to mention one exception to what people said before, because we did use Advanced Cluster Management and mentioned it in the first presentation. Advanced Cluster Management has not been 100% open source upstream yet. We are in the process of making all of that code available as an independent upstream project, but it's not done yet, and it'll be happening over the course of this year, pretty much.
F
Yeah, and on the flip side of that, we do have a Git repo available with all the code there. So obviously we're still working through the process to actually make it fully open source, but we do have a repo out there with most of the code for ACM.
B
Okay, so we then have questions both from m3atto and The Second Son about secrets and encryption. m3atto's has to do with storage of the secrets on the cluster: how are they being stored, so that they're actually encrypted and not just hashed?
B
You know, what solution to use there. The Second Son, and this might have a related answer, wants to know about automated key rotation for the keys used to encrypt the secrets, particularly in a multi-cluster situation.
D
Yeah, I'll start that, just because... and by the way, Chris, thanks for posting the link to my previous stream about secrets; that's in the chat. So first and foremost, and I think Chris put a good link in the chat, you can encrypt the etcd database, right? And not only can you encrypt the actual etcd database, you can actually encrypt the disks that it's on.
D
So
your
you
know:
you're
you're,
adding
a
layer
and
you're
adding
another
layer
on
top
of
that,
and
then
you
actually
used
sealed
secrets
right
instead
of
storing
them
in
in
base
64.,
and
so
so,
then
you
have
kind
of
like
a
triple
layer
right
like
it
just
security.
All
it
is
is
just
layers
right
of
like
how
many
barrier
of
entries
that
you
have
so
yeah.
D
So that's kind of one of the solutions: you're encrypting at every step. You're encrypting the disks, you have etcd encrypted at rest, and then you're using either Sealed Secrets or Vault or something to encrypt those secrets. To expand a bit on the rotation:
D
Sealed
secrets
does
rotate
the
encrypting
the
encrypting
key
right,
but
it
doesn't
get
rid
of
any
keys
right
so,
like
it'll
rotate,
the
key,
but
the
old
key
is
still
valid
right.
It'll
keep
you
know
rotating
the
key
rotating
the
key
anytime,
you
encrypt
it.
It's
actually
up
to
the
end
user,
to
one
re-encrypt,
your
secrets,
with
the
new
key
before
deleting
the
old,
older
keys.
So
that's
one
and
two
make
sure
you're
you're,
you're
rotating
the
username
and
passwords.
D
This is something that we went over in depth in my previous stream, so I won't go too deep here, but really that goes back to the conversation of managing your secrets versus just encrypting them, and there's really no tool that does both. So if you're using Sealed Secrets, you kind of have to self-manage the rotation and re-encryption.
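On OpenShift, the etcd-at-rest encryption layer mentioned above is enabled declaratively on the cluster's APIServer resource; a minimal sketch of the relevant field (assuming a recent OpenShift 4.x cluster; see the OpenShift etcd-encryption docs before applying):

```yaml
apiVersion: config.openshift.io/v1
kind: APIServer
metadata:
  name: cluster
spec:
  encryption:
    # aescbc encrypts resources such as Secrets and ConfigMaps
    # at rest in etcd; disk-level encryption is a separate layer,
    # and Sealed Secrets / Vault sit on top as a third layer.
    type: aescbc
```

Note that this only protects data at rest in etcd; it does not make it safe to commit plain Secrets to Git, which is what Sealed Secrets addresses.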
B
This question was off of the demo. The question that F Glazer has is about the PR decorations: do those run a customized CLI to check the files?
E
No, so it actually did a dry run to ensure that everything works there. So it's not a customized CLI; it effectively does a few things, which is this: it ensures that all the manifests that you have in your Git repo are still valid after you proposed a change.
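A pull-request check like that can be sketched as a Tekton Task that server-side dry-runs the proposed manifests. This is a sketch, not the exact pipeline from the demo; the task name, image, and manifest path are hypothetical:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: validate-manifests            # hypothetical name
spec:
  workspaces:
    - name: source                    # the cloned Git repo under review
  steps:
    - name: dry-run
      image: quay.io/openshift/origin-cli:latest   # any image with kubectl/oc
      workingDir: $(workspaces.source.path)
      script: |
        # --dry-run=server sends the manifests to the API server for full
        # validation (schema, admission) without persisting anything.
        kubectl apply --dry-run=server -f manifests/
```

Wired into a PR-triggered pipeline, a failed dry run fails the check before anything is merged, so Argo CD never sees an invalid manifest.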
D
Yeah, cool. I think we have one more demo, unless there are more questions. I think we've got the ACM one; is there any? Yeah.
F
Okay, great, awesome. All right, so let me go ahead and share my screen here real quick, and a quick verification that you can see it. Awesome, all right. So what we saw earlier, obviously, was it working and interacting from a Git perspective with OpenShift. And so what I wanted to do is abstract it one more layer, with Advanced Cluster Management for Kubernetes.
F
We have several other use cases that we can talk about, but I'm just going to focus today on the application lifecycle. In the application lifecycle, what this allows us to do is build applications using what we call an application builder. You can see here, towards the right, that we have the actual application YAML as we're building our application.
F
You can see that we can assign a very specific namespace to this application, so we're deploying this application right on top of OpenShift. As a matter of fact, Advanced Cluster Management for Kubernetes runs on top of OpenShift as an operator, and you can see here that we have different repo types that we can select from.
F
So if I select my Git repo, all I have to do is provide things like a URL, username, access token, branch, and path: pretty straightforward information when I'm building my application. So say I want to build an application with my Git repo: what this is going to do automatically is go to the specific Git repo, which I'll show you in a minute, and pull all my information based on the branch that I select.
F
In this case I'm going to select my hugo branch, and I'm not going to select a path; I don't have any path available. But you notice that we had that very easily as a dropdown to consume from within ACM. So this just abstracts it one more layer, enabling your operators to perhaps deploy these clusters directly, and once changes are actually made within the Git repo, you can merge them directly into ACM.
F
We also have integrations with other tool sets within Red Hat, such as Ansible Tower. With Ansible Tower, we can integrate and run playbooks to do even more interactions with other products, like, for example, opening a change ticket with ServiceNow or configuring an F5 load balancer, things of that nature.
F
We can also deploy these to specific clusters, or you can deploy to clusters based on labels, so we can create specific labels that only exist on specific clusters and deploy there. We can also set a deployment window for when we want this application deployed. So perhaps, if you're doing testing for this application, you can have a very specific window in which you're allowed to do the testing, or even deploy a change into production. You can select it, of course, based on days of the week, location, times, etc.
F
In this case, I'm just going to do "always active," and so, while this application is building, I'm going to go ahead and navigate to this other application that might seem very familiar to some of you. This application is just a Pac-Man application that we have built. Yes, it is the Pac-Man game. You can see here that I have a subscription, as we talked about earlier in the slide deck, directly to my Git repo. Now notice:
F
If I go ahead and click here, it gives me the URL for the Git repo, as well as the branch and the path that we use for it. So I basically have direct access to my Git repo; I can make any changes here, so I can go ahead and edit any of this and actually push it back into my application, and ACM will automatically pick up this change within the resource topology and push that change.
F
You can also, of course, using labels, leverage the power of deploying into multiple clusters. So right now I'm deploying these applications into two specific clusters.
F
I also have an Ansible job that I ran to do a pre-hook and a post-hook for two very specific things: one for ServiceNow and another one for an F5 load balancer. And you can see here, at the center of it all is my subscription, which again is the way we interact with Git. You can see, of course, all of that resource YAML right here for the actual deployment and how the deployment was done.
F
You can quickly go and edit that if you wanted to, alter it and change it. So there are several ways that you can interact and work within Advanced Cluster Management for Kubernetes to actually deploy your applications.
F
In this case, I get a route for this specific application. You'll notice that I can go in and actually play my Pac-Man game, pretty straightforward. Notice here that I deployed it into two different regions: I have this deployed into Amazon us-east, and then I have one deploying on Amazon us-west.
F
So, since this is load balanced, if I refresh, it will switch clusters. I can also look at the specific route for each individual application, so you can see within each individual cluster, and you're able to actually see all of these resources right from here. So it's a very simple, very easy way to visualize and manage your application directly from Advanced Cluster Management for Kubernetes, while consuming it from your Git resources.
F
As we saw earlier, I can come in here and make a change to my spec within my application. Notice here that this is just a basic YAML deployment that has things like the channel, my subscription type, and my placement rule; the placement rule is where I'm going to actually place this application. And you can make these changes and consume it all directly from Advanced Cluster Management for Kubernetes.
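The channel, subscription, and placement rule just mentioned map to ACM resources roughly as follows. This is a minimal sketch; the names, namespace, repo URL, and cluster label are all hypothetical, not the ones from the demo:

```yaml
# Channel points at the Git repo; Subscription pulls from it;
# PlacementRule selects target clusters by label.
apiVersion: apps.open-cluster-management.io/v1
kind: Channel
metadata:
  name: pacman-repo                 # hypothetical
  namespace: pacman
spec:
  type: Git
  pathname: https://github.com/example/pacman-app.git   # hypothetical repo
---
apiVersion: apps.open-cluster-management.io/v1
kind: Subscription
metadata:
  name: pacman-sub
  namespace: pacman
  annotations:
    apps.open-cluster-management.io/git-branch: main
spec:
  channel: pacman/pacman-repo       # <namespace>/<channel-name>
  placement:
    placementRef:
      kind: PlacementRule
      name: pacman-clusters
---
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: pacman-clusters
  namespace: pacman
spec:
  clusterSelector:
    matchLabels:
      environment: dev              # hypothetical label on managed clusters
```

Editing the label selector is how the demo steers the same application to different sets of managed clusters.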
F
So it's really easy, very simple, very straightforward to manage all of this. You can of course manage all of your secrets for your specific clouds within ACM as well; so for all of your clusters, whether they might be on-prem or on other clouds, you can manage all of those contents and resources there directly. So it's a pretty easy, pretty straightforward way of consuming your Git repos within Advanced Cluster Management for Kubernetes.
D
Yeah, I don't see any questions coming in, and by the way, great timing, Jimmy; we have five minutes left to wrap up and answer any questions. It's almost like we timed this perfectly.
D
I love it when a plan comes together. So there are no questions coming in, but I do want to point out that in both demos, neither Shubik nor Jimmy touched the cluster, and I think that's very important to bring to light. That's just the general GitOps ideology and practice.
D
I always refer to a talk I saw a long, long time ago, when Kubernetes was just a name. Kelsey Hightower, you may have heard of him, was trying to explain Kubernetes and the design of Kubernetes, and he described it as: Kubernetes is a system designed as if I took your SSH keys away. And I think with GitOps, the idea is like, okay:
D
Well, now I'm taking away kubectl from you too. So it's going that extra mile, going a step further: okay, well, I'm taking away not only your SSH keys, but also your kubectl and your oc CLI. Kind of a bold statement, but that's the idea. Neither Shubik nor Jimmy actually dropped into the command line, or even interacted with the OpenShift console.
D
They were driving everything through Git repos and through practices and processes. So, just bringing that to light, that's the power of GitOps. Whether you're managing your application with CI/CD, or you're managing multiple clusters and trying to deploy your applications across multiple clusters, you're driving everything with practices and tool sets. So I don't know if any other questions came up, Josh, while I was pontificating there.
B
Oh, one last question, and then we will have to wind up. Andrea wants to know: the OpenShift documentation around CI/CD is all written in a command-and-control format.
B
That is, you know, send this command, send that command, instead of declaratively, which is more your GitOps-centric approach, where you define the file and then it gets automatically synced. And Andrea wants to know whether or not anybody's working on the documentation to incorporate more of the declarative approach.
C
Absolutely. This is definitely something that we have on our radar. We have recognized, since a couple of releases back, that our documentation really does not reflect how declarative the platform itself is. We have spent about two years making the entire platform declarative.
C
It is exactly as asked: it touches every single page of our documentation. But we are incrementally, across various teams within OpenShift engineering, prioritizing that and updating the docs, until we get to the point that the entire configuration is also documented in terms of how it can be managed declaratively. It's definitely on our radar; hopefully we progress faster than that.
B
Okay, well, thank you very much. We are at the end of our office hours happy hour. If you asked a question during this, I have posted a link in chat for how you can claim your t-shirt, and we will need to wind up now. If you missed part of this, it's going to be available on YouTube. And team, when's the next happy hour?
D
Every other week, every other Thursday. So it'll be not this Thursday, not next Thursday, but the upcoming one.
B
Awesome, and so you can ask more questions there. Thanks, everyone, great time; thanks for the questions, and thanks so much to the panel for supplying so much information.