From YouTube: KubeCon EU GitOps Office Hour
B
Yep, howdy. I'm Josh Berkus, here as Red Hat's community architect for everything cloud native, and I asked some of our GitOps folks (Tekton Pipelines, Red Hat DevOps) to come in and talk to us, including Natale. Hey.
B
And, Siamak?
D
I'm Siamak Sadeghianfar, product manager for OpenShift at Red Hat. I look after our CI/CD capabilities.
B
Yep, and we're here primarily for questions, folks. We have, you know, a few things like a demo or whatever, but we're really here to talk to you, answer your questions, and discuss things with you, so feel free to ask. And as a little incentive to step forward...
B
We have some of these newly designed GitOps t-shirts, which we will be giving away to people who participate in this office hour session. Yes.
B
Yep. So to kick things off, I actually have a couple of questions of my own.
B
A lot of the GitOps tooling that I'm familiar with ties into GitHub or GitLab, but not all of it does. Tekton, I believe, uses a built-in Git that you run on your cluster, correct?
D
You could say that. I kind of know where you're going with that, and I do like this question, because GitOps is really focused on the process, really on the workflow, right?
D
We had this conversation in the coffee break this morning with Natale as well: when you look at DevOps, the main thing within DevOps is the cultural bit, the collaboration, the people, although it also has workflows and processes and practices and technologies associated with it.
D
But the gist of it is that culture change. When it comes to GitOps, it has cultural aspects to it, but the gist of it is really the process. And the name, I think, is perhaps a little bit misleading, because the name already has the name of a tool in it.
D
So it feels like you have already narrowed down and picked a tool for this process, but essentially GitOps talks about the workflow, the way of working, and Git, being the prominent version control system, just happened to end up in the name. The practice could nevertheless be applied to whatever version control system the team might be using, and nothing really would change.
E
Yes, sorry, if I could add something to complement what Siamak said: one of the major things that shifted when we speak about GitOps is that everyone used to store their assets somewhere, whether it be a folder or, you know, a spreadsheet or whatever.
E
You basically have some place where you keep your reference configuration, your variables, all of those items, or whatever scripts you want to use to deploy your infrastructure and your applications. But one of the main things that is really important now with GitOps is the ability not only to deploy what is declared in your repos, but also to continuously check whether what is deployed is actually in sync with the desired state you have declared. That's what we call...
E
...the synchronization loop: you are pulling information from your Git repo to see the desired state (what did I want to deploy?), you are also pulling information from your target systems to see what is actually deployed, and then you can see if there is any drift, anything you need to remediate to make things as they were desired.
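That loop is what a GitOps controller such as Argo CD (one of the tools discussed later in this session) implements. A minimal sketch of a desired-state declaration with automated drift remediation; the application name and repo URL are illustrative:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app                # illustrative name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/org/config.git   # hypothetical repo holding the desired state
    targetRevision: main
    path: manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true      # delete resources that were removed from Git
      selfHeal: true   # revert manual changes on the cluster, i.e. remediate drift
```

With `selfHeal` enabled, the controller continuously compares the live state against the repo and reverts out-of-band changes, which is exactly the remediation step described above.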
E
So it's not just a matter of storing your data in Git, cloning the repo, and at some point in time deploying it from there, because everybody has been doing that for a while. There are also some new practices that come with it, enabled by the technologies that happened to be created over the last years, things like Kubernetes and such.
E
So that was a long complementary section; I hope it helps.
B
So, and I'm going to paraphrase him a little bit here: we have pipelines and such for application building and testing and deployment via GitOps. But what about GitOps for operations and system administration? I'm actually a little bit familiar with that, because we do a bunch of it in Kubernetes.
D
I can start talking a little bit about that. I see a lot of similarities between GitOps for operations and GitOps for app delivery; the principles are essentially the same. It's just that within the appdev space, this has really been the way appdev has worked for as long as I remember. This is, we call it...
D
We give it a new name, but having the source of the application in version control as the source of truth, everyone collaborating around that, all the changes going in through branching and the different workflows that people use: that's well understood, and most teams you talk to today have this process in place.
D
So it is really nothing new from that aspect. Even on the operations side, if you go a couple of years back, even pre-Kubernetes, automation of infrastructure really gained a lot of traction when tools like Chef and Puppet came along, and people had a way to describe what they wanted to happen...
D
...on an infrastructure. With Ansible and Salt, and Terraform, a lot of tooling was created around that space, and when IT ops or application ops, the different operational groups, dealt with these technologies, they didn't keep them on their own laptop or on a web server. They versioned them; they put them in some version control. Maybe using them was not automatic: they wanted to make a change on a fleet of virtual machines, or on their cloud infrastructure, or on Kubernetes...
D
...and it was a person running an Ansible playbook, the latest version of it or a particular version of it, from that version control, from Git let's say, against that infrastructure to make that happen. So even that part is not really new for the operational teams. What GitOps for operations really means is that there is another level of automation happening, in that instead of a person...
D
...taking those manifests, in whatever format they are, from version control and applying them to the infrastructure, that happens automatically: something sits in the middle and does that. So, coming back to your question, specifically from a tooling perspective, I think the ideas are not really changing between app delivery and operations, and the tooling is the same.
D
They are exactly the same, just used for different purposes. And what I also see in the conversations we have with our customers is that the approach to infrastructure has started to look a lot more like the app delivery process. On the app side you maybe have a CI pipeline in place that builds the code and deploys, and so on. More and more I see that on the operations side too, for changes to the configuration of the cluster.
D
Perhaps there is a CI process in place for that, a pipeline that gets kicked off. And with the advancement that has happened around Kubernetes provisioning, or specifically OpenShift provisioning, with managed Kubernetes on cloud infrastructure, you can treat your clusters as ephemeral, in a sense short-lived: maybe spin one up during the CI process, test the change sets that an ops person has created against that cluster, validate that the change has happened correctly, and then throw the cluster away, right?
D
What I'm trying to get at is that these two ways of working are converging at a rapid pace, the way I see it, and becoming quite similar, with the same set of tooling being used. But there is a lot of variety in the tools that exist within the GitOps space that can augment this process; some of them are maybe a little more geared toward appdev, some a little more toward operations, but the function of those tools...
E
Yeah, sure, if I can add to that: maybe one of the evolutions driving this is that we see more and more of this infra equipment, routers and all of those things, becoming API-driven, focusing a lot on bringing the capabilities that allow them to be configured in an automated way. Some even go as far as embracing the Kubernetes state of mind, where you are basically declaring a file that describes...
E
..."here's how I want my configuration to look", and then this configuration can be automatically applied to the equipment, whether it be storage or routers or firewalls or whatever. And one of the things we see more and more is that if this capability is not built into the tool, and all this automation cannot be implemented within the tool itself...
E
...those vendors are starting to build their own operators, embracing the operator model, where they basically build that logic into the operator.
E
The operator understands, in declarative terms, what we want it to achieve, and it has in its logic everything needed to interact with the APIs provided by the equipment on the ops side. That's basically a way things can be handled on the ops side in an automated way while still embracing the GitOps approach, where everything is declared or described as code. And that leads us into another question: how does GitOps work when we consider databases or stateful applications? That's basically one of the best examples.
E
All you have to do is create a YAML file that says "I want you to do a backup of this thing", and then the operator will trigger the operational side of things, which is doing the actual backup of your database, or things like scaling up and down, or auto-remediation and such. So it's basically coming together: as Siamak said, we are seeing practices that were more in the app world becoming embraced on the ops side of things as well.
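A minimal sketch of that idea; the `Backup` kind, API group, and field names below are hypothetical, since each real database operator defines its own CRDs:

```yaml
# Hypothetical custom resource: committing this file to Git asks an
# operator to take the backup, rather than a human running a script.
apiVersion: example.com/v1
kind: Backup
metadata:
  name: orders-db-nightly
spec:
  database: orders-db                 # target database instance (illustrative)
  destination: s3://backups/orders    # illustrative storage location
  schedule: "0 2 * * *"               # optional recurring schedule
```

The point is that the backup becomes a declared, versioned piece of desired state like everything else in the repo.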
C
And about the stateful app and the database: I think it's a very good point, because think about database migration, right, where we need to migrate the schema. When we use GitOps, we can delegate this to an operator, to a CR, as a lifecycle concern, or we can take advantage of the GitOps tool's hooks. For instance, for OpenShift GitOps we use the Argo CD project, and Argo CD has hooks, so we can work with the hooks.
C
We have pre-sync and post-sync hooks, so before the syncing process of Argo, which in this case will monitor the change, before it takes action it can execute some script, and this script can be the database migration. For instance, if you use a tool like Flyway, you can invoke Flyway to migrate the schema, and then you can apply your change.
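A sketch of what such a PreSync hook can look like; the image tag, database URL, and names are illustrative, while the `argocd.argoproj.io/hook` annotation is how Argo CD recognizes hook resources:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: flyway-migrate
  annotations:
    argocd.argoproj.io/hook: PreSync                      # run before the sync applies new manifests
    argocd.argoproj.io/hook-delete-policy: HookSucceeded  # clean up the Job once it succeeds
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: flyway
          image: flyway/flyway:latest    # Flyway container image; tag is illustrative
          args: ["migrate"]
          env:
            - name: FLYWAY_URL           # JDBC URL of the target database (illustrative)
              value: jdbc:postgresql://orders-db:5432/orders
```

If the migration Job fails, the sync stops before the new application manifests are applied, which keeps the schema and the app version consistent.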
C
So it is consistent. And I think we are working with Christian, who I see in the chat, on a Katacoda lab on learn.openshift.com about GitOps and sync waves. In the meanwhile, I can also share in the chat the new Katacoda lab that we have about GitOps, if you want to try GitOps. Chris, maybe you can send it in the chat room. So yeah, if you want to learn GitOps and also see how it works...
C
...you can use this lab, and we are also working on the hooks part, so you can understand how to work with stateful apps using the operator, but also taking advantage of hooks and sync waves, which are the tooling in Argo to control the stateful app lifecycle.
B
I'm going to put some stuff in the chat about this, actually. One thing I'll add is that Git-based database migrations are, believe it or not, not a new thing. I'm going to paste in the chat a project called Sqitch, for SQL databases, that dates back to when Git itself was only two years old.
B
So this is something people have been thinking about; we just haven't necessarily put all the tools together. But we have a different question here from Andrew Sullivan. Presumably you'd answer this in the context of Tekton:
B
Are there guidelines and suggestions around repo usage, when we opt for one repo per cluster, per app, or per team? What's a good approach?
D
Like any good question in the IT space, the answer is "it depends", really, right? With every single customer I talk to, this is one of the first questions that gets asked, and I don't think there is a single right answer, because of the trade-offs in each of the strategies. This applies both to how you do the configuration of the cluster, if you're using Argo CD to sync that, and to delivering the application.
D
How do we design that repo? There are a couple of things that affect that decision. First of all is access control, right, and there are two things that influence that. One is the desired access control structure within the organization. You see some orgs where they want the platform ops team to own the configuration of the cluster, and application teams are given a namespace, or a number of namespaces...
D
...and they are allowed to deploy apps into those namespaces. That gives one indication of what kind of access control units there are. There are organizations that give a cluster, a small cluster, to every dev team, and that dev team is admin on the cluster: not cluster-admin, but they can install operators, for example, and modify certain configs at the cluster level, while upgrades of the cluster and the config of the nodes, the machines, are all owned by the platform owner.
D
So there are these levels of access control structure that affect how you should slice your config in a way that can actually enforce that access. And from the other side, the Git providers that you use also support different ways of controlling access. Some of them support, for example, controlling access within a single repo, so certain folders are accessible to certain roles; some don't, and you can only have it at the repo level; some at the branch level.
D
These two play an extremely important role in what is most sensible for an organization. With this trade-off, as you would expect, the simplest setup is one repo that has everything; the other extreme is that you have many repos, one for every slice of access that you want. The overhead of managing these repos varies a lot with how much control you want over those configs and how much overhead you accept for managing the repos.
D
I don't know if I clearly answered your question, but it's a choice that needs to be made by really analyzing what a team, what an organization, expects their teams to be able to do.
D
I don't know if others have any other point of view on this. This is a very contentious area and a very common question that comes up.
D
Yes, like one repo that would represent the entire cluster, all the namespaces. If there is Elastic deployed there for everyone, that's also in that repo; if my payroll application is in a namespace on that cluster, that is also in that repo. That's one side of the extreme, and it fits certain...
D
...cases. With that approach you don't have any overhead of repo management, right? You have a single repo; that makes it easy. But you are creating a different bottleneck, because every PR across the entire cluster, from the various teams sending PRs, has to be reviewed by a limited number of people, a small group that is usually the platform owner, so they become the bottleneck and the gatekeeper...
D
...for what kind of changes can go into this cluster. Depending on the volume of changes from the other teams, how many teams there are, how many apps are on that cluster with PRs coming in from them, this can also run into a bottleneck where it becomes like the Kubernetes repo: you have tens or hundreds of pull requests...
D
...waiting to be reviewed to go into the repo before any deployment can happen. And if you look at the grand scheme of things: why are we doing this at all? Why would an organization do GitOps?
D
You want to be able to roll out changes faster and be able to look at what has happened. So if you run into that bottleneck, you have kind of neutralized the value you would be getting from this process. For some organizations that method would work, as long as it doesn't become a bottleneck; for others, if the volume of change is high, it...
E
...would become a bottleneck, yeah. And so, as Siamak said, this is a fast-moving space, so things are evolving a lot, and the practices are evolving at the same time as the tools.
E
If we take a very simple example: for some time, if you wanted to deploy an application that has many microservices, then because of some limitations of the tooling you had, for instance, to put all of your code within the same repo, in different folders, so you didn't have to manage that overhead and could still trigger the changes. But what happens is that if you have ten microservices and you do a push on the repo...
E
...you have to start parsing exactly what happened and see which folder has changed, in order to avoid triggering things that haven't really moved. Because a single push triggered a whole pipeline, everything starts to be rebuilt, even where there were no changes. So things evolved, and people started looking at: okay...
E
...if I want to avoid those things, let me separate my application code from my deployment resources, to avoid what Josh said: I made a big push and then somehow everything gets triggered.
E
So now you are separating your application, but then again people started to realize that they might want to have all of their applications or microservices managed in separate repos and still be able to trigger those changes in the GitOps way, and things started to happen on the tooling side to cope with that. One of the examples is what we call ApplicationSets in Argo CD, which is basically something that allows you to manage a meta-set controlling many applications.
E
Everything can reside within its own repo, but at some point you are governing all of them with this notion of an ApplicationSet. That allows you to handle your ten microservices as part of one bigger application, with everything still residing in its own repository, so you have more granular control over each repo's lifecycle.
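A sketch of that pattern; the repo URLs and service names are illustrative. A list generator stamps out one Argo CD Application per microservice, each pointing at its own repository:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: shop-services
  namespace: argocd
spec:
  generators:
    - list:
        elements:                  # one entry per microservice repo (illustrative)
          - name: cart
            repo: https://example.com/shop/cart-config.git
          - name: payments
            repo: https://example.com/shop/payments-config.git
  template:                        # the Application stamped out for each element
    metadata:
      name: '{{name}}'
    spec:
      project: default
      source:
        repoURL: '{{repo}}'
        targetRevision: main
        path: manifests
      destination:
        server: https://kubernetes.default.svc
        namespace: '{{name}}'
```

Adding a new microservice then means adding one list element (or, with a Git generator, just creating the repo), rather than hand-writing another Application.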
E
So it's still evolving, and I think, as this gets embraced more and more, some general practices are going to start to emerge as recognized patterns.
E
And if there are things missing from the tooling, then the tooling is also going to evolve to cope with those patterns. So it's basically, you know, how continuous improvement works, all the way.
B
Okay, we have other questions.
B
Oh, I think this one is actually very simple. We've got somebody who's asking about the Argo operator, which mentions the ability to back up and restore an Argo CD cluster. He thought that meant we were backing up and restoring a database, and I wasn't aware that Argo had a database.
A
Right. But with Argo you can do a quote-unquote backup by creating a new ArgoCDExport resource. It's documented right here on the OperatorHub page; I'll copy it and drop it in the chat. But yeah, under usage and backup it shows you the ways in which it can do things.
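For reference, at the time of this session the export resource from the Argo CD operator looked roughly like this; the instance name is illustrative, and the operator documentation covers the storage options:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ArgoCDExport
metadata:
  name: example-argocdexport
spec:
  argocd: example-argocd   # name of the ArgoCD instance to export (illustrative)
```

The operator then runs an export job that captures the instance's data, which is what the "backup" in the OperatorHub description refers to.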
E
And I believe Argo also uses Redis as a way to store information. I don't know exactly what goes in there, but it's one of the intermediates; it's not a stateful database, it's just to improve things like caching and being able to compare things faster, like key-values. If you want to compare things between the desired state and something you pulled two minutes ago, it accelerates those operations.
B
So we have another question, about management practices and GitOps. Andrew, who's the asker, mentioned ITIL specifically, but this would also apply to a number of other process management regulations.
B
How do you reconcile that with GitOps and agile practice, right? Because these are sort of, you know, certifications of cascading processes that you're supposed to follow, and they often require a heavy paperwork process from step to step. Is there a way to actually implement GitOps practice in an environment where you are required to comply with ITIL or other such process regulations?
D
I would say yes, because the GitOps workflow does not impose any particular pace on the practitioner, but rather enables the infrastructure to deliver at any pace that is needed, right? Following ITIL obviously puts some constraints on the process and slows down the workflow because of the requirements of that process. But the way it maps to GitOps is that, if I simplify ITIL, there are certain milestones for an application to be delivered.
D
Someone authorized approves a change, and those actions happen in a declarative manner: a sync of whatever needed to go to the next stage goes to those clusters. And from that point, again, you have a pause in the process to fulfill what ITIL requires you to do. So what would GitOps contribute?
D
Really, the value in that scenario might not be that you are deploying multiple times a day into your environment, but rather the visibility that you get into the process: every change is auditable and can be discussed, and especially when failures happen, you have quite a straightforward way to backtrack, to find the change set within the time frame in which a failure happened in production that might have caused it, and to narrow your analysis down to changes that you can easily get at through the GitOps process. That is how I see those two worlds living alongside each other.
A
There are some interesting comments happening in chat, and I want to touch on those real quick if we can. You know, there's some confusion about backing up Argo CD versus backing up the application data, right? Like the application data that ends up landing on a PVC, for example.
A
That backup process needs to happen outside of Argo, or you could potentially bring in an operator to do that, maybe in your Argo deployment of your app, but again, you're backing up application data, not Argo data, if that makes sense. Right? All the Argo data lives in the Git repo; you just redeploy if you need to. And then, as far as ITIL goes, however you want to say it...
D
Yeah, that's definitely an interesting way to look at it: some of those roles and concepts take a different form and piggyback on the Git provider, and on what you get out of Git essentially. That does resonate with me, but I'm not an ITIL practitioner, so I'm not sure my opinion should be relied upon too much in this context.
A
It's
a
good
point,
though,
right
like
this,
could
bring
folks
forward
into
a
new
development
paradigm
to
where
they
can
move
faster
and
that's
really.
What
we
all
want
to
do
essentially
is
push
features
faster,
make
customers
happier.
B
Okay, different question, and this is sort of for Jafar: can we deploy OpenShift itself using a GitOps workflow?
E
Yeah, that's actually a very interesting question, because one of the major evolutions between OpenShift 3 and OpenShift 4, if some of you remember OpenShift 3, is that we used to deploy the solution using Ansible playbooks, and basically all the intelligence, all the orchestration of how the different components were deployed, was triggered and orchestrated by the playbooks.
E
Now it's declarative: if I want to connect to an LDAP server, or use OpenID Connect, or "here's the YAML file describing how my registry should be configured to have HTTPS", and such things. All of the 30-plus operators that compose the platform have specific configuration files, and we also have a wrapper file that basically describes the installation of the cluster, where you can say how many masters you want, how many workers...
E
What
are
the
placement
rules?
What
are
the
types
of
machines
that
you
want
to
use,
of
course,
depending
on
the
target,
it's
going
to
be
different,
etc,
and
once
you
have
those
files,
then
you
can
start
deploying
them
in
give
up
fashion
because,
for
instance,
if
you
look
at
acm
the
advanced
cluster
management
tool,
that's
basically
what
it
does.
D
There is an operator, on OperatorHub and in OpenShift as well, that is really the core of what enables that functionality in ACM, and this cluster management piece is called Hive. So if you have access to an OpenShift cluster, you can try this out already: install the Hive operator, and then you have a declarative way to even provision clusters. The reason Hive exists is really to enable the GitOps workflow for provisioning and lifecycle management of clusters. It actually would be good to have...
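As a rough sketch, a Hive ClusterDeployment looks something like the following; the names, domain, and referenced secrets are illustrative, and the exact fields depend on the Hive version and target platform:

```yaml
apiVersion: hive.openshift.io/v1
kind: ClusterDeployment
metadata:
  name: demo-cluster
spec:
  clusterName: demo-cluster
  baseDomain: example.com              # DNS domain for the new cluster (illustrative)
  platform:
    aws:
      region: us-east-1
      credentialsSecretRef:
        name: aws-creds                # cloud credentials stored as a Secret
  provisioning:
    installConfigSecretRef:
      name: demo-install-config        # install-config.yaml wrapped in a Secret
    imageSetRef:
      name: openshift-v4-imageset      # which OpenShift release to install
```

Because the cluster itself is now just another declared resource, it can live in a Git repo and be synced like any other manifest, which is the point made above.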
C
Yeah, and also what Jafar mentioned about authentication, right? If we have multiple OpenShift clusters, we want to sync the same authentication system across them, maybe controlled by ACM. Let's say we have one Active Directory that will be used for access to multiple OpenShift clusters, maybe with different groups; if we want to track and control that, we can also use the GitOps approach for this, even if the installation itself was not done the GitOps way.
D
This actually, I guess, fascinates me a lot: how much of this way of thinking is enabled because of Kubernetes. These patterns, this very way of working, are not something I would say is new; it's just that they have reappeared in different terms and shapes because of Kubernetes.
D
That's
how
this
I
interpreted,
and
it
started
with
it's
similar
to
like,
if
you
think
about
microservices
that
distributed
computing
has
been
there
for
a
very
long
time,
but
microservices
kind
of
like
rejuvenalized
that
that
that
patterns,
so
kubernetes
really
like
started
with
this
declarative
way
of
looking
at
infrastructure,
even
though
the
concepts
were
there
and
now
like
we
fast
forward
five
six
years,
we
talk
about
provisioning,
kubernetes
itself
in
declarative
manner
and
the
cloud
office
infrastructure
underneath
it
also
in
a
declarative
manner,
and
also
the
managed
services
on
the
cloud
services
also
as
a
declarative
manner.
D
Maybe I want to use an SQS queue in my application: how can I provision that declaratively from Kubernetes? That concept has now propagated in both directions, I think, downwards and upwards, and I attribute a lot of that to Kubernetes, to be honest, which brought life into this way of thinking in many other layers of the infrastructure.
B
Yeah, I'll say, from having tried to do some of this six or seven years ago: prior to Kubernetes being a way to do infrastructure, we had major missing-stair problems, as in there were always critical pieces of your infrastructure and application stack that could not be declaratively managed.
B
In fact, my first deployment of Kubernetes, at version 0.4 of Kubernetes, was specifically so I could do declarative management of a data analytics application, because it wasn't really possible otherwise. I needed to be able to swap out the version of the application and do the database migration at the same time, and having it containerized, on that very primitive version of Kubernetes, enabled me to do that. So I think mainly what it did was this: this is what we always wanted to do, and now it's possible.
D
It's exactly that; that's how I interpret this as well. The concepts were there, but the technology was not there to support them in an accessible way, and Kubernetes made them very accessible. It's the same reason I say DevOps really took off once containers and Kubernetes made it accessible.
D
It's the same pattern, the same way of working. I think that's one of the fascinating things about Kubernetes: it has been an enabler for that way of thinking and that automation in many different areas. Perhaps up front the vision wasn't that these patterns would become so pervasive, but in practice, over the last couple of years, they have been reused and applied to really every layer of the infrastructure and the applications, as you can see.
A
Yeah, and I said this years ago, right? To me, the holy grail of DevOps is GitOps. I make a change via Git; it's tracked, it has a hash, it's, you know, auditable; then an approval process takes place, CI puts everything into place, and off you go. To me, adopting GitOps kind of helps you jump over that culture hurdle.
D
Yeah, to a degree; I'd partially agree. There are some adjustments that need to be made, and I see this happening mostly bottom-up, to be frank. It starts with single teams moving in that direction rather than as a big packaged program; I've seen a lot of DevOps initiatives, but I haven't seen any GitOps initiative. It...
D
...starts with smaller teams and propagates, because of the value it provides and also how accessible it is. But it does require a change in mindset about how we work.
E
Yeah, and also one of the things I like about this GitOps approach and the way things have evolved: think about how we used to automate infrastructure, for instance provisioning.
E
You
had
some
some
scripts
that
you
triggered,
and
then
you
had
your
your
infra
ready
somewhere,
but
as
weeks
go
by
some
people
start
to
directly
connect
to
their
instances,
make
changes,
make
changes,
change,
environment
variables
and
such
things
manually.
And
then
you
have
a
disconnect
between
what
you
have
provisioned
and
what
is
actually
in
real
life,
and
you
are
unable
to
reproduce
things
because
of
those
people
that
have
tempered
with
the
with
your
intro
and
what?
What
is
good,
I
believe
with
get
ups.
Is
that?
E
...because you have this notion of syncing the real state with what you wanted and preventing drift, it really tackles this problem of "somebody tampered with the environment variables for my application and then everything stopped working": if you have set things up the right way, it's going to be automatically remediated, and you shouldn't have that problem.
E
Yeah, we talked about the GitOps approach for cluster provisioning. But if somebody wants to jump in...
D
I can talk a little bit more specifically about it as well, because that behavior is actually the OpenShift GitOps behavior: you install the operator and you get a global instance of Argo CD pre-provisioned and pre-configured for you. This is a very, very common pattern, where every cluster has one instance of Argo CD that has elevated privileges. It's not cluster-admin, but it has more privileges than regular users have, so that it can perform operations on the cluster and configure it to the degree that the platform owners allow. This instance is usually owned by the platform team, and you might have additional Argo CD instances for the app teams.
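That platform/app-team split is typically expressed through the operator's `ArgoCD` custom resource: the operator provisions the cluster-scoped instance itself, and app teams can declare additional namespace-scoped instances. A minimal sketch, assuming the argocd-operator's `v1alpha1` API; the name and namespace here are hypothetical:

```yaml
# A team-scoped Argo CD instance, separate from the platform-owned one
# the operator creates; it manages resources only in its own namespace.
apiVersion: argoproj.io/v1alpha1
kind: ArgoCD
metadata:
  name: team-a-argocd   # hypothetical instance name
  namespace: team-a     # hypothetical app-team namespace
spec:
  server:
    route:
      enabled: true     # expose the Argo CD UI via an OpenShift Route
```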
D
A
Awesome. So Andrew asked a question: we have Argo, it's out there, but there are other tools that do GitOps. What were some of the considerations? Why did we choose Argo versus Flux, for example? Or is there a good reason to maybe use Flux instead of Argo in some people's cases? Because we have an operator for Flux, I believe.
D
Yes, there is an operator built by Weaveworks that is on OperatorHub, and there are other tools: there is Fleet from Rancher, and Google has Config Sync, which is perhaps a little more specific to Google infrastructure, but it follows the same model. There are other tools in this space as well, and the problem they solve is extremely similar; they all more or less do the same thing.
D
They sync what you have in Git to your clusters; that's what all of them do. The differences are more in the features and nuances from a user and developer perspective, and then there are differences in the communities around these projects, from a governance or continuity perspective, if you will. So if you are a platform owner, or...
D
...someone who drives and owns some slice of the infrastructure in your organization, then the business continuity of the technologies you choose plays a role; that becomes an important factor, and you see differences in those areas. So at Red Hat, for OpenShift, we kept hearing from customers about the challenges of both configuring clusters in a consistent manner and delivering applications in multi-cluster environments, because we could already see that the number of clusters is on the rise.
D
We have a lot of customers that used to have two clusters; now they have six clusters, ten clusters. And you can see that in the survey the CNCF does as well: last year's survey compares, I think, 2017 to 2020, and you can see that the number of respondents running five to ten clusters, or ten to twenty and more, is increasing across all groups.
D
So we kept hearing about the challenges of how to evolve this consistently, and we started looking at the various open source projects in this space. This is what we do at Red Hat: we support community projects and productize the ones that are more ready for enterprise consumption. We actually talked with most of these projects and reviewed them, and Argo really sticks out, starting with the maturity of the project itself: it's an extremely popular project used by many, many organizations.
D
There are about a hundred references on the Argo CD GitHub repo, public customers using it in production, and it has a very active community. You see quite a large number of contributors working around the project, commenting and bringing their use cases, and that's an indicator of a healthy community.
D
From our perspective at Red Hat, the worst thing that can happen to a community is that a single vendor drives it based on their own view of that space and their own requirements, which is also a threat to the existence of a project if that one vendor stops contributing. We see that Argo CD has a very, very vivid community with a lot of vendors involved, and we also really like the Kubernetes-native nature of it.
D
So because of that, we started getting closer and closer to the community, contributing more, and joined the community, with Argo CD as really the major GitOps tool in the portfolio. We're really happy to see that it's progressing in maturity within the CNCF as well, so we're hoping to see quite soon that Argo CD becomes one of the graduated projects in the CNCF.
A
That's amazing, awesome. So we have a few minutes left, like two or three, and I need to hop on over to the next office hour, talking about OpenShift 4.7 questions. Anything else you want to wrap up with?
D
I think, if anyone is interested: we talk a lot about GitOps and OpenShift, and I know it might sound like it's not that accessible. You might be thinking, how am I going to try this if I want to look at it? I know Argo CD, but what does OpenShift GitOps look like? We do have these interactive tutorials that, I believe, Natalie and the team put together.
D
Yeah, exactly, the Katacoda tutorials. I definitely encourage you, if you're interested, to try out OpenShift Pipelines and OpenShift GitOps, together or as separate labs. At that link you get an environment in your browser and a really nicely organized tutorial that walks you through getting your hands a little bit dirty with these technologies and with the process itself, so you get a feel for it. And once you do that, if you have any feedback, please do reach out to us. We love to talk GitOps.
D
A
Awesome. Well folks, as I mentioned, we have another office hour coming up here in a few minutes, so unless there are any last-minute questions, we're going to wrap up here. Thank you Siamak, Jafar, Natalie; you're all my favorites when it comes to GitOps. I'm shocked Christian didn't magically appear out of nowhere.
A
Here, see, Siamak's, here's yours, Jafar, and Christian's is right here: a whole bevy of blog posts for everybody to get you up to speed on this stuff. So yeah, super important. All right folks, thank you very much for tuning in. Stick around if you're on YouTube or the Twitch channel; we'll be coming up with a Q&A session about OpenShift, so feel free to hang out for a little bit.