From YouTube: Getting Started with Continuous Delivery and GitOps
Description
In this webinar you will learn the concepts behind continuous delivery, continuous deployment, and GitOps. You will also learn how to set up a DevOps pipeline that deploys your application to several environments on Kubernetes.
A: Hi everyone, welcome to our webinar session. We're excited you were able to join us. We're going to give people just another 30 seconds or so to hop in, and then we will get things kicked off.

A: All right, let's go ahead and get started. Once again, thanks everyone for joining us; we're happy you decided to spend some time with us this morning or afternoon. I wanted to start off with some brief intros and some housekeeping items. First off, I wanted to introduce our presenter today: this is Sander Breenen. He is a Customer Success Manager based out of our EMEA region, so we're happy to have him with us today. In terms of housekeeping items: if any questions come up throughout the presentation, feel free to put those in the Q&A portion of your Zoom window. We'll be able to get those answered either by typing them back to you through the Q&A, or Sander will have some time at the end to answer a few questions as well. Last thing: this webinar is being recorded, so we will be sending a recording out to you all in the next day or two. And with that, I will turn the time over to Sander.
B: Thank you, Taylor, and thanks all for joining in on this webinar on continuous delivery and GitOps. Let's get started with it immediately. So, let's get started with a story I want to tell you. One day at my previous company (I've been working for GitLab for almost a year now), a customer called in with a problem on production: their production system didn't come up after they ran an upgrade based on a runbook we had created for them. They panicked, of course.

B: At that time, when we contacted this organization, they were in the process of restoring this production instance. They were restoring a production backup into their test instance and then trying to promote this test instance to a production instance. So, in a way, they tried to take an existing instance, promote that to a new production, and then clean out the other one.

B: We investigated this problem, and it turned out that they had followed the runbook but had forgotten one step in it. Once we applied that step, everything was fine. The lesson learned in this story is that continuous delivery is very important, because if this organization had automated this runbook, they couldn't have forgotten this single step, and the upgrade wouldn't have failed. So one of the biggest challenges for organizations today is to deliver software in a safe and reliable way.
B: There is hardly any process in your organization that doesn't rely on software. Think about it: can you imagine any process or any service in your organization that is not relying on software? I can't, and therefore software delivery is one of the key ways to distinguish yourself from other organizations. But many organizations still struggle with this.

B: This is, of course, a comic, not a real situation, but it does illustrate what we often see in organizations: a CI/CD pipeline that is hardly automated and heavily reliant on runbooks, manual procedures, and last-minute fixes. With manual procedures and runbooks, it's extremely difficult to keep track of all the commands you're executing in order to get a running system. And it's not only the commands you entered beforehand to get to a working system, which you then need to put in the runbook; you also have to keep track of where you are in the runbook during a production deployment.
B
That's
also
the
reason
why,
in
my
previous
example,
the
production
of
great
fails
now
we
didn't
end
up
in
a
50
person
conference
call
in
the
ends,
but
if
we,
but
we
have
to
have
and
apply
manual
changes
in
order
to
bring
production
back
to
a
working
state
with
continuous
delivery
and
githubs,
you
want
to
avoid
exactly
these
problems
and
start
deploying
with
the
confidence
that
it
always
succeeds.
B
Hence
the
one
book
that
we
just
discussed
to
deliver
software
safely.
You
need
to
add
quality
and
Security
checks
in
the
delivery
process
and
to
to
deliver
software
quickly.
The
deployment
needs
to
start
as
soon
as
possible
after
Security
checks
have
passed,
preferably
automatically,
and
that
is
best
managed
via
a
deployment
pipeline.
B: This is where development and operations are connected together. To truly deliver or deploy an application immediately after security checks have passed, you need to connect the deployment pipeline with the development pipeline. That's what you see in this image. This is where DevOps comes into play, of course. DevOps is a concatenation of development and operations.

B: As you probably all know, development teams are no longer responsible only for the software change; they also need to know something about the deployment and how to deploy the change to production. In order to do that safely and securely, you need quality and security built into your pipeline, of course, but you also need a secure way for this pipeline to access a production system.
B: What it requires, therefore, is that you see your infrastructure as a code base: a code base that requires testing, quality assurance, and security checks. When you look at your infrastructure as a code base, then it makes sense to apply the same CI/CD pipeline that you would use in development to your infrastructure code as well.
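As a minimal sketch of that idea, a GitLab CI configuration can run the same kind of checks on infrastructure code as on application code. The include path is GitLab's bundled IaC scanning template; the stage layout and image tag are assumptions, not the presenter's exact setup:

```yaml
# .gitlab-ci.yml — illustrative: treat infrastructure code like any code base
include:
  # GitLab's bundled SAST template for infrastructure-as-code (Terraform, Ansible, ...)
  - template: Security/SAST-IaC.gitlab-ci.yml

stages:
  - test        # IaC scanning jobs from the template land here
  - validate
  - deploy

validate:
  stage: validate
  image: hashicorp/terraform:light   # assumed image
  script:
    - terraform init
    - terraform validate             # syntax/consistency check before any deploy
```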
B
Let's
illustrate
that
with
another
example,
so
in
another
organization,
I
work
with
they
heavily
scripted
their
deployment
process
using
ansible.
So
the
only
thing
that
they
had
to
do
was
simply
start.
The
municipal
Playbook
and
a
production
update
was
running,
so
they
automated
their
one
books
are
the
same
part
of
the
up
of
the
of
the
part
of
the
procedure
was
to
test
the
update
first
by
running
the
Playbook
on
a
test
environment
due
to
some
restrictions.
B: Many people are already using SAST and DAST scanners on their applications, but we now also have SAST scanners for infrastructure as code, so for Terraform or Ansible scripts; I'll demo that later too. And now we want to deploy quickly as well: with the GitLab Terraform integration, you can build a Terraform pipeline that automatically runs after you merge a merge request, and it will deploy your infrastructure automatically into one of the cloud environments.
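A sketch of what that integration can look like: GitLab ships a Terraform CI template that provides hidden init/validate/plan/apply jobs backed by GitLab-managed Terraform state. The template path and hidden job names are real; the directory layout and environment name are assumptions:

```yaml
# .gitlab-ci.yml — sketch of a Terraform pipeline using GitLab's template
include:
  - template: Terraform/Base.gitlab-ci.yml

variables:
  TF_ROOT: ${CI_PROJECT_DIR}/infra      # assumed directory holding *.tf files
  TF_STATE_NAME: production             # name of the GitLab-managed state

stages:
  - validate
  - build     # `terraform plan` runs here
  - deploy    # `terraform apply` runs here

deploy:
  extends: .terraform:deploy            # hidden job from the template
  environment:
    name: production
  rules:
    # apply only after the merge request lands on the default branch
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
```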
B: The infrastructure team typically uses Terraform to deploy to any of the large cloud environments (Azure, Google, or Amazon), and they typically build a Kubernetes environment. That's also what I use in the demo: a Kubernetes engine on one of these platforms, where applications can land. Application development teams can then provide Helm charts or Kubernetes manifest files, which are in fact a sort of configuration around these Kubernetes workloads, and they use those to deploy the applications on that infrastructure.
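For reference, a minimal Kubernetes manifest of the kind an app team might hand over; every name and the image path below are made up for illustration:

```yaml
# hello-world.yaml — illustrative Deployment manifest
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: registry.example.com/hello-world:1.0.9   # assumed registry path
          ports:
            - containerPort: 8080
```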
B: As you can see, these applications use pods, and a pod is sort of a small service within the application that delivers a certain functionality, and it comes in different sizes. So there can be, per application, different resource demands on your cluster infrastructure. This therefore requires close collaboration between infrastructure teams and app development teams to make sure that the infrastructure is capable of hosting your application's needs.
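Those per-pod resource demands are expressed as requests and limits on each container in the manifest; the values below are placeholders to show the shape, not recommendations:

```yaml
# fragment of a container spec — placeholder numbers; the app team
# declares what each pod needs, the infra team sizes the cluster to match
resources:
  requests:
    cpu: 250m        # the scheduler reserves a quarter core for this pod
    memory: 128Mi
  limits:
    cpu: 500m        # the container is throttled above half a core
    memory: 256Mi    # and killed if it exceeds this memory
```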
B: So, let's look at a more detailed workflow. Here we again have a cloud operator that is responsible for the cloud environments, the deployment targets in this case, and we have an app development team that's building an application. For the deployment target, let's assume that it's a Kubernetes cluster running in one of the big cloud environments. The current state of that Kubernetes cluster is already stored in GitLab.

B: After pushing these changes to the central Git server, you create a merge request. In that merge request, you can also include the cloud operator to review the changes made and make sure that the deployment target is capable of running this application, and if a change is needed, the cloud operator can also make a change in his infrastructure-as-code repository.
B: That change goes through a Terraform pipeline, and the infrastructure change must pass security scanning. When security scanning is successful, it will go through the Terraform pipeline to be deployed on the deployment targets. You can optionally place a policy manager in between: Sentinel, from HashiCorp (Terraform is also from HashiCorp), which is an extra sort of gate in your pipeline to secure your environment.

B: In the Sentinel policy manager, you can define, per infrastructure, per namespace, or whatever you like, who is allowed to deploy where. This is not part of the demo, but it's optionally available and integrated. Once that's deployed, the merge request that the app development team has created can also be merged, so that the test-and-deploy pipeline can start.
B: So this CI pipeline can run and check the application, and after that, security scanning can kick in to check whether the application is safe and secure to deploy. With that, we can use a GitOps architecture to detect changes in the application and then automatically deploy them, through the GitLab agent's configuration, into the deployment targets. Of course, with every deployment there are secrets (passwords, keys, certificates, etc.) that need to be deployed together with your application or your infrastructure.
B: The manifests live in a repository that's deployed to the deployment targets, whereas the Hello World app delivers a Docker image. That Docker image is then sort of promoted to production through this world-greetings-env project, which generates the Kubernetes manifest files, and through a GitOps process that watches these manifest files, it will detect the changes and deploy those changes into the cloud environments. And then we have closed the circle.
B: I'll start with the Hello World app. This is the application that an app developer is working on, and it's a simple microservice that's going to give you a nice "Hello, World" line. I always change the background here to make a change that's really visible. Let's open the IDE and start with making this change.

B: Then I create a new branch (it's a bit difficult to see with this dark theme), and then I commit. So now I've created a branch, and that branch then automatically gets turned into a merge request. Let's assume that we want to include operations here.
B: With that, I can simply go forward with creating the merge request. So now we have the merge request open. I have included operations in this, so I can later check this box to mark that operations is finished, and I have a running pipeline here. This pipeline is now doing a version check, then it's building the Docker image and running a few tests (container scanning, secret detection, and the SAST test), and after that it's deploying a review app.
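A review-app job of this kind is typically tied to the merge request branch via a dynamic environment. The job below is a simplified sketch, not the demo's exact pipeline; the deploy/teardown scripts and URL scheme are assumptions:

```yaml
# sketch: one review environment per branch, cleaned up on merge/close
deploy_review:
  stage: review
  script:
    - ./deploy.sh "review-$CI_COMMIT_REF_SLUG"     # hypothetical deploy script
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    url: https://$CI_COMMIT_REF_SLUG.example.com   # assumed URL scheme
    on_stop: stop_review
  rules:
    - if: $CI_MERGE_REQUEST_IID                    # only in MR pipelines

stop_review:
  stage: review
  script:
    - ./teardown.sh "review-$CI_COMMIT_REF_SLUG"   # hypothetical teardown script
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    action: stop
  when: manual
```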
B: As part of the merge request, the container build is already finished, which means I already have the orange app here: the review app now contains the new version with the orange background. In the merge request, I said that we needed something changed by operations.

B: The operations project is the only project that's running on a separate, self-managed environment. All the other projects are running on gitlab.com, on our SaaS platform, but for the demo environment we decided to use a separate self-managed environment, which is in fact the same thing, only self-managed. Here you see the Terraform files that will create a Google Kubernetes environment.
B: Here you see how it works: this is simply a Terraform file that can be deployed through the Terraform integration.

B: In this one, I also include a GitLab agent immediately, so I immediately deploy a GitLab agent through the Terraform Kubernetes integration, and this GitLab agent automatically reports back to the cluster management project in our cloud environment. That then connects the Kubernetes agent to the Kubernetes cluster.
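The agent's behaviour is driven by a config file in the cluster-management project. The `.gitlab/agents/<agent-name>/config.yaml` path convention is GitLab's; the agent name and project paths below are illustrative assumptions:

```yaml
# .gitlab/agents/demo-agent/config.yaml  (agent name is made up)
# Grant other projects' CI jobs access to this agent's Kubernetes context,
# so their pipelines can deploy through the agent.
ci_access:
  projects:
    - id: my-group/hello-world      # assumed project path
  groups:
    - id: my-group                  # or grant a whole group at once
```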
B: Normally I always make a merge request first, then test and validate, and then you merge into production. That's the idea of GitOps: you start your production pipeline through a git push, and typically a merge request, where you secure your main branch, where you add secure variables so that only certain people can see them, and where the production environment is protected.

B: This has triggered a pipeline, as you can see. This pipeline is now running and, as you can see, it also includes a static analysis (SAST) test on the Terraform files. It will check the Terraform code: whether the syntax is correct, whether anything in the code goes against certain code-quality best practices, and whether there are any secrets or other things you could have done wrong in your Terraform code. Then it runs the validate and plan stages.
B: Sorry, so the Terraform state is stored here. You can see that the latest pipeline has passed; it was updated seven hours ago. Here you can, for example, lock the state so that only GitLab can access it, copy the init command for it, or even download the state file, and you can even remove it. That is more for troubleshooting.

B: If you're really in trouble, your state isn't in sync with the cloud environment anymore, and it's impossible to get it in sync again, then what you can do is delete the state files and make Terraform create new state files for you. That's the last resort, if you're really in total trouble. You can also configure Terraform through CI/CD variables.
B: CI/CD variables are very important, for example to mask secrets, like this API token (this agent token was also masked), or to protect certain variables for a certain environment. Of course, if you protect a variable, it's only available on protected branches; if you protect your branches well, then that variable is only used on those branches. And the variables that are prefixed with TF_VAR_ are used within the Terraform files, where they are substituted automatically.
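The TF_VAR_ convention comes from Terraform itself: any environment variable named `TF_VAR_<name>` becomes the Terraform input variable `<name>`. So a masked, protected CI/CD variable can feed a Terraform variable without ever appearing in the repository. The variable names below are examples, not the demo's actual settings:

```yaml
# Conceptually, a masked + protected variable set in the GitLab UI:
#   TF_VAR_agent_token = <secret>      (never inline a real secret like this)
# and a harmless one can live in .gitlab-ci.yml directly:
variables:
  TF_VAR_cluster_name: demo-cluster    # example, non-secret

# On the Terraform side these arrive as ordinary input variables:
#   variable "agent_token"  { sensitive = true }
#   variable "cluster_name" {}
```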
B: And once that's done, you will see in the Google Cloud console that there is a new workload for the GitLab agent. So here's the GitLab agent; it's running at the moment and it's still on version 15.20.

B: But when the deployment has finished, you should see that the Google Cloud environment terminates that pod and restarts it with a new version, which should then be 15.40 in this case. So that's the job. Now, if I go to the workloads for the agent again, you should see that it's deploying 15.40.
B: Back in the Hello World app, this pipeline is now done, and it also deployed a review app, as I mentioned. If I go to environments here, you can see that I now have a review app, and in the most recent review I deployed an orange one. So this one is now orange, whereas if I go to the production environment, which is deployed through the environment project, production is still very bright blue.

B: This will now first stop the review app deployed here in the existing pipeline, and it has also created a new pipeline on the main branch. At the moment, that's starting again with the version check, the build of the Docker image, and testing, and then you see that rather than creating a review app, it deploys a staging environment and then goes on to a production environment. So this pipeline looks a little bit different from the merge request pipeline.
B: Hold on; so once that's finished, and while we wait for it, in the meantime I'm going to show you what happens after it. Once this pipeline is finished, I can promote that to production, and what then happens is that this goes to the world-greetings-env project, the project that builds the manifests we need. So here are the manifests, stored for production and for staging. At the moment the production manifest is still on version 1.0.9 of the Hello World app.

B: Once I've promoted it and run the correct pipelines, you will see that this is then upgraded to a new version label, and once that's done, GitOps picks it up automatically. So that's the idea of this demo. I already showed you the Google cluster environment.
B: In that environment, I automatically deployed a GitLab agent, and this agent is configured so that, once it has started, it automatically calls back to the cluster management project on gitlab.com. You can see here that this GitLab agent has registered a Google Kubernetes cluster; its last contact was four minutes ago. It's now, as I recently deployed, on 15.40, and it's configured in this way to connect a Kubernetes agent to GitLab.

B: Here you can see GitOps at work. I have configured a GitOps project on this agent, and this is the manifest project: it's looking at the world-greetings-env project and detecting changes to the manifest files, so the manifest directory and everything within it. If there's any change in that manifest directory in the world-greetings-env project, then this agent automatically picks it up.
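That manifest-watching behaviour is the pull-based GitOps section of the agent's config file. The `gitops.manifest_projects` schema matches the GitLab agent of this era; the agent name, project path, and glob are illustrative:

```yaml
# .gitlab/agents/demo-agent/config.yaml — GitOps section (sketch)
# The agent polls this project and applies any manifest change it sees.
gitops:
  manifest_projects:
    - id: my-group/world-greetings-env        # assumed project holding the manifests
      paths:
        - glob: 'manifests/production/**/*.yaml'   # watch only this directory
```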
B: In this manifest-tracking project, you have both the production and the staging manifests. I can imagine that, to secure and separate production and staging, you would want a separate agent for your production environment and a separate agent for your staging environment, and separate GitOps configurations as well.

B: So, let's go back to the pipeline, which should now be finished; it's blocked now. What I can do now, manually, is go and promote the image to the latest version. What this will do is tag the image in the container registry so that it has a "latest" tag, which is then linked to the latest version.
B: Now, once that's done and I've promoted this to production, I would typically also release this, so I create a release. This is release 1.0.10.

B: I can add a tag message here, set a descriptive title, even add some release notes, a link to a runbook if you want that, or any other release assets. But what is most important is that this will sort of fix your release: it wraps up the source code for this release and assigns a tag to this release version, so the release version is tagged in the repository.
B: This can, of course, be automated or scheduled; there are different ways in which you can automatically start this pipeline. But in order for me to tell the story and show you how things work, I need to do things manually, so that the pipelines don't overtake my story.

B: So at the moment we still have this blue background on both production and staging, and while the pipeline is running here, it's triggering a deployment pipeline. What has it done? It has checked the "latest" tag on the container from the Hello World app. Once it detected a new version, it triggered a child pipeline. This child pipeline then starts constructing the production and staging manifests, and it first updates staging and then, when that's successful, does the same thing on production.
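Triggering a child pipeline from a parent job looks roughly like this in GitLab CI; the stage names, helper script, and child pipeline file are assumptions used for illustration:

```yaml
# parent .gitlab-ci.yml — sketch: detect a new image version, then hand
# off to a child pipeline that rewrites the staging/production manifests
check_version:
  stage: detect
  script:
    - ./check-latest-tag.sh    # hypothetical: compares registry tag vs. manifest

update_manifests:
  stage: deploy
  trigger:
    include: ci/update-manifests.yml   # assumed child pipeline definition
    strategy: depend                   # parent waits for the child's result
```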
B: This takes a bit, but if we go to the repository, this has created a new commit on the main branch to reflect the changes. So if I look at the staging manifest now, you see that it's now on version 1.0.10, which we just released.

B: It will do this for production as well. This will take a little bit, a couple of minutes, but then the production environment will also turn to an orange background. As I said, I have done all of this with manual steps now, so that I can show you in between that the environments are still different.

B: Once it's finished, the environments are changed, but in a real live environment I would definitely make sure that this is automated.
B: Let's see in the workloads whether it has already changed; it's about to change. You see here that it's already on 1.0.10, and we are now in the middle of terminating the old pod and starting the new one. This means that if I go to the production environment, it should be orange. Yes! It's always nice to see that a demo really works and doesn't blow up. Cool. Now, of course, every one of you also wants to do this and wants to see how it works.
B: The cluster management project is in fact an existing project template in GitLab, and the infrastructure is in this case loosely based on our infra standards initiative. I took a slightly different one, but it's the same thing and it will definitely work the same way, except that I've added this agent configuration to it, so that you immediately have an agent available in your cluster instead of deploying it afterwards. A good resource to get started is this blog post, "The Ultimate Guide to GitOps with GitLab".

B: This blog post is in fact a series of eight or nine posts, I'm not sure, so you can follow it step by step to get started with GitOps and continuous delivery in GitLab. We'll make sure that you also get these URLs after this webinar. And with that, I would like to thank you all. I know that Taylor is about to launch a poll, and after that we can take some questions. So thank you all for listening in and bearing with me.
A: Thank you, Sander, that was great. Yes, I just launched that poll.

A: So the first question here: in the development project, the review app is deployed as part of the pipeline, whereas production and staging are deployed in the env project. Is the review app also deployed via the GitOps configuration?
B: It isn't, no. This demo consists of, in fact, two ways of deploying something into a Kubernetes environment. The GitOps part automatically detects changes to Kubernetes manifests that you store somewhere (in this case I took a separate project for that, but you can also combine it), whereas the review app, and also the staging app, although I didn't show that on purpose, are deployed automatically through Auto DevOps.

B: So we have our Auto DevOps templates that you can use, and they will give you a very quick release pipeline by including only one template. This template automatically deploys a staging and a production environment for you, which works over a Kubernetes context that points to the GitLab agent in the cluster management project. So I pointed the Hello World app to that cluster management project, which is where the agent lives, and therefore this review app job has access to this Kubernetes context and can deploy to Kubernetes.
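That one-template setup is roughly: include the Auto DevOps template and point the Kubernetes context at the agent. The template name and KUBE_CONTEXT variable are GitLab's; the project path and agent name below are illustrative:

```yaml
# .gitlab-ci.yml — sketch: Auto DevOps deploying via an agent-backed context
include:
  - template: Auto-DevOps.gitlab-ci.yml

variables:
  # context format is <agent project path>:<agent name>; values assumed
  KUBE_CONTEXT: my-group/cluster-management:demo-agent
```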
A: Great. Next question: in the detailed workflow you have a separate cloud operations team, but with DevOps, isn't the idea to have one team with both developers and operations?
B: Yes and no. DevOps is, of course, the concatenation of development and operations, and it's often also seen as simply adding operations into a development team. But if you read the DevOps Handbook, that's definitely not the case.

B: It requires a lot of different skills and also continuous awareness; you have to monitor continuously, for example. Of course, you can put a lot of that into the hands of the development teams when it's tied to the application, such as application monitoring, but for infrastructure monitoring you would definitely want a separate cloud infrastructure team.
B: Yes. As you saw in the demo, the GitOps configuration with the agent is monitoring the manifests, the Kubernetes manifests, so it is in fact monitoring the application code. GitOps at its basis is in fact a push-based operation: you push a change into a git repository, a centrally managed repository, and based on the merge request trigger, it will trigger a pipeline and then deploy the changes.

B: If you already have Terraform available in GitLab, then what you can do is start building this pipeline, and the resource I just showed you, the Google cloud environment project, gives you examples of how to do that. You create a pipeline that will run terraform validate and terraform apply inside of your GitLab pipeline, and once you do that, your git push, your merge...
A
I
think
so
all
right,
one
last
question
here:
do
we
have
to
create
four
separate
projects
for
this,
or
can
this
also
be
combined
in
one
single
project.
B
You
can
definitely
combine
things
for
this
to
work
now
at
the
moment.
It's
still
requires
that
all
the
projects
are
in
the
public
space
so
that
projects
can
really
see
each
other
with
separation
of
Duties
and
Minds.
I
have
really
separated
all
the
different.
B
How
you
say
it's
the
different
jobs,
let's
say
different
functions
in
different
projects.
Also,
it
makes
it
a
bit
easier
to
see
where
we
are
in
the
process,
but
you
can
definitely
combine
a
few
things.
For
example,
add
the
manifests
into
the
application
project,
for
example,
or
the
cluster
management
stuff
into
the
cloud
environments
and
project-
that's
definitely
possible
yeah.
You
can
even
do
everything
in
one
project.
If
you
want
to.
A
Great,
thank
you.
Those
are
all
the
questions
I
saw,
so
we
can.
We
can
go
ahead
and
wrap
up
at
this
point.
I
wanted
to
thank
everyone
again
for
for
joining
us
today,
and
and
thank
you
sander
for
that
presentation.
Keep
your
eyes
peeled
for
for
additional
sessions
like
this
in
the
future,
and
with
that
we'll
we'll
say
goodbye
for
today
thanks
everyone.