From YouTube: GitLab CD Deep Dive
Description
Cristiano Casella, Technical Account Manager, provides a general overview of GitLab CD and some things to know about deploying to different environments.
A: What's up, party people, and thank you so much for joining us for another week of the CS Skills Exchange. We are very excited to have you all with us today. This is a continuation of a previous session that we did on the CI deep dive; we are continuing the conversation with the CD deep dive. A special shout-out to Cristiano for all of his support and help in building this session, and he will be our presenter today. I've got the agenda in the chat, so please add your questions there.
B: You're able to see my screen and hear my voice? Great. So, in the first part of this presentation we talked about CI, and today we will focus more on the last part of the deployment lifecycle: the CD. I'll start by talking about a famous parallel that we are always challenged on in the field; I hope a lot of you can appreciate the image.
So, we usually find customers that have an existing Jenkins installation, and sometimes, you know, it's not that easy to approach a migration from Jenkins to GitLab pipelines. So what do we have to keep in mind when talking about these kinds of things? I'm not talking today about a direct tool or syntax conversion; I'm talking more about the strategy, but I'm happy to do a follow-up if you want to talk about real tools and conversion. I mean, the first point is that Jenkins is in some cases similar to our product.
We start from a Git or SVN clone, and it is able to set up a list of jobs, and maybe approvals in the middle, one important thing that we need. Obviously, it's also able to define stages, like we do. What is important to keep in mind is that usually you will never find two Jenkins installations that are similar, because Jenkins, like other applications, is based on plug-ins. There is a wide library that you are able to search on the Jenkins website.
You can install a lot of different tools that deeply change Jenkins' behavior. You could use a tool that extends the privilege management, you could use a tool that extends the environments, you could use tools that let you easily deploy your code to a specific platform like Kubernetes, or you could use a tool to manage the release of your software, looking at naming, tag versions, and these kinds of things. So it's important, when you are talking about a migration, to make a complete assessment of the real state of the installation. Usually I ask the customer if I can have a sort of screencast or similar, to understand exactly what they are using inside Jenkins, and how.
So, as you know, we are able to manage parallel jobs thanks to our stages, and we are also able to have a complex triggering system between the different jobs, managing their relationships. For this kind of reason, what I usually suggest to a customer approaching a migration is: don't make a simple one-to-one translation, it's not about that. If you have been using Jenkins for a while, maybe the idea behind your pipeline is a little bit outdated, and I'm pretty sure that you have got some experience with that kind of feedback.
Obviously, if you have a custom script, like a bash file or similar, it's really easy to migrate: you can just copy your script, add it to the project or to a dedicated project, and include the script inside your pipeline. If you are talking about an integration with a platform that we are not supporting, or a tool that we are not supporting, it could be more challenging. Remember that, with our scripts, we are able to use external APIs.
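The "copy your script and call it from the pipeline" approach can be sketched in a `.gitlab-ci.yml` job; `deploy.sh` is a hypothetical script name committed to the repository:

```yaml
# .gitlab-ci.yml — minimal sketch, assuming a deploy.sh script in the repo
deploy:
  stage: deploy
  script:
    - chmod +x ./deploy.sh
    - ./deploy.sh          # the script migrated from Jenkins, unchanged
  only:
    - master
```

The same script could also live in a dedicated project and be fetched or included from there.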
A lot of customers look at the plug-in-based logic as an advantage, but keep in mind that in the end it will go like with any other plug-in-based application. WordPress is totally the best example that I have in mind: you start with just the core version, you start adding plug-ins, and at the end you have a long list of plug-ins that are not tested against each other, and nobody is giving you any kind of warranty about compatibility between the different plug-in versions, and especially about unexpected behavior.
B
So
having
a
lot
of
clients,
it
means
to
have
a
lot
of
configuration.
A
lot
of
time
spent
to
ensure
that
everything
is
working
and
more
planning.
You
are
going
to
add
and
less
confident
you
will
be
at
the
next
update.
This
is
totally
the
logic
that
is
behind
our
tool.
As
you
know,
and
this
is
something
that
we
we
should
always
put
on
the
table
talking
about
Jenkins
versus
kit
lab.
It is true that we don't have plug-ins, but you know that we have many other ways to integrate with third-party applications, and GitLab Managed Apps is a perfect instance. Actually, we let the customer easily install even the more complex parts, like Prometheus or Elasticsearch, for instance, all the parts you get in Kubernetes. So it's not the same, but remember that we have a definitely good level of API and a good level of integration.
Kubernetes. I think this is one of the most requested topics when talking with customers. Actually, we integrate with Kubernetes in a lot of different ways: we are able to deploy GitLab on Kubernetes, we are able to deploy just runners on Kubernetes, we can deploy our application inside Kubernetes, or also deploy a Kubernetes cluster from the GitLab integration, and obviously we can deploy all the core infrastructure services with our managed apps. But what do we need to know before proceeding with this kind of integration?
So, let's have a look at the different scenarios. Kubernetes is obviously a complex architecture, with many layers to debug. If we go back to the typical monolithic application sitting on a single Linux server, usually the developer was able to connect to the machine and have a look at the server logs or application logs. This is not possible, you know, in a microservices architecture.
Usually you have more network layers that are virtualized, you have a complex storage management, and especially you have all the microservices that need to interact with each other, usually through an API layer. So it's totally a more complex situation. Just think of a single component that is scaled to five different instances: if you want an overview to trace a user's activity, you need to collect the logs from the whole platform, not just from one single service that we are able to access.
So you need a lot of satellite services that let you trace networking, storage, application logs, and whatever else. It's really important that the customer recognizes the kind of skill set that is required. It's true that we have our wizard to deploy different kinds of Kubernetes integrations, but remember that a wizard will not help you when something breaks. Sometimes I hear of customers approaching Kubernetes just with our integration, without knowing much about maintaining a Kubernetes production environment.
Sometimes we talk about things like disaster recovery and similar. Remember that the most secure solution might not be the best, because if you ship a solution that is totally secure but too complex for the customer's knowledge, they probably will not be able to roll back or perform a disaster recovery when needed.
GitLab on Kubernetes: you are able to install GitLab on Kubernetes using our official Helm chart. Let's have a look at the common questions.
There is a lot of discussion about Helm 2 versus Helm 3. Maybe you know that the main difference in Helm 3 is that the Tiller component has been removed, so now Helm talks directly with the Kubernetes API and no longer passes through the middle component that was called Tiller. There is a lot of confusion about the versions; actually, the GitLab Helm chart works with both major versions.
It works with two and it works with three, and you have the reference about which versions are supported, but our documentation, and a good part of our tests, are still focused on version two. So it could happen that, looking at our documentation, you find a command that works in version two and does not work in version three.
Don't worry: you can use the suggestions from the kubectl or Helm client that you are using, and you are usually able to easily find what you need to adapt. In general, when approaching Helm, remember that the Helm chart is public, so you can have a look at our project, and there is always an override file that defines every single configuration value that the Helm chart sets. It's really similar to our gitlab.rb configuration file for an Omnibus GitLab installation: there is a default configuration with every single entry commented.
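As a minimal sketch of that override mechanism, a custom values file can be passed to the chart at install time; the domain and email below are placeholders:

```yaml
# values.yaml — hypothetical overrides for the GitLab Helm chart,
# applied with: helm upgrade --install gitlab gitlab/gitlab -f values.yaml
global:
  hosts:
    domain: example.com        # base domain for the gitlab/registry hosts
certmanager-issuer:
  email: admin@example.com     # Let's Encrypt registration address
```

Any value not overridden here keeps the commented default from the chart, just like an untouched entry in gitlab.rb.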
One instance is GitLab Pages. So, before choosing with a customer which is the best deployment approach, remember about that, and check the compatibility list that is mentioned in the installation guidelines, so that you are sure that every component the customer needs is available for the chosen deployment method.
The Helm installation includes horizontal auto-scaling policies, so the deployment that you are doing with Helm is a sort of HA deployment. Why do I say "a sort of"? Because you still have to manage the storage part. So, if your Kubernetes cluster has a storage provider that is really HA, it's fine; the auto-scaling will take care just of the load.
So if you have a pod dedicated, for instance, to GitLab Shell, the GitLab registry, or Sidekiq, and it reaches its target load or memory consumption, the Kubernetes scheduler will scale the pod horizontally. But it's just about that. Obviously you can customize the policy, and remember also that this cannot cover some specific configurations. For instance, when you are deploying on virtual machines with the large Sidekiq setup, you are able to define for each Sidekiq node which kind of jobs should be executed; that is not currently the case with the Helm chart.
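The load-based scaling described above is standard Kubernetes horizontal pod autoscaling; a generic sketch (the Deployment name is hypothetical) looks like this:

```yaml
# HorizontalPodAutoscaler — scales replicas on CPU load only
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: gitlab-sidekiq        # hypothetical target deployment
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: gitlab-sidekiq
  minReplicas: 1
  maxReplicas: 5              # the scheduler adds pods up to this cap
  targetCPUUtilizationPercentage: 75
```

Note that every new replica is identical; this is why per-node job routing, as in the large Omnibus Sidekiq setup, is a different kind of configuration.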
Remember also that our chart includes some sub-charts. For instance, we are using components like the NGINX ingress, the registry, Postgres, Redis and so on that are not developed by us, so we are not writing and maintaining a new chart for them; we are just including the official chart at a specific version. Sometimes you may need to understand some very specific configuration regarding these; usually, for instance, it happens with the NGINX ingress.
It's possible to migrate an existing Omnibus installation to the Helm installation. So if you have a customer that has an old setup, or wants to go for a cloud provider instance, don't worry: this is something that we can achieve. A complex and large setup, as mentioned, might not be supported.
Tracking your chart configuration in a Git repo is strongly suggested. Kubernetes deployed on different providers can have different behaviors; OpenShift is the perfect instance of that. OpenShift basically puts an authentication layer in front of the Kubernetes API; our integration tries to talk directly with the Kubernetes API, and something may not work as expected. So pay attention to the Kubernetes version that we are talking about, and pay attention to the provider that we are talking about.
Another instance is that, as maybe some of you remember, in Google Cloud you are required to add a specific annotation to the ingress, because otherwise you will not get the external IP address for the client connections. So don't take Kubernetes as a generic object that always gives the same kind of answers; you need to investigate which kind of Kubernetes we are talking about, for a reason. We had a lot of trouble passing from version 1.14 to 1.16, because a lot of things in the API were deprecated, or the format of the requests was just updated. And obviously we support backups; on Kubernetes they are managed in a different way. It is not a cron job on the local machine anymore, it is a Kubernetes scheduler task, but it's totally something that we can do, so don't worry about that.
I wanted to add this point because, thinking about Kubernetes, customers often have a lot of confusion. They know that we integrate with Kubernetes, but sometimes it is not clear what the different roles of the Kubernetes integrations are. So I wanted to clarify every single scenario that we can have between GitLab and Kubernetes. The first one that I mentioned was GitLab on Kubernetes; now we will have a look at the others.
This is important because it is the perfect instance for infrastructure as code, which I'm going to look at later in the presentation. A lot of customers are asking: how can I deploy my infrastructure, how can I use, for instance, Terraform or similar? This is the perfect instance: you have a Helm chart describing your application, you can use GitLab to manage the Helm chart and to deploy your application inside your cloud provider, and you have everything stored in version control.
Runners on Kubernetes, with a new pod for each new job. First of all, we actually have a Helm chart that lets you deploy runners inside Kubernetes. The common error that I see is a customer trying to apply auto-scaling to the runners, but this is not the way it should work. Basically, what we deploy with our runner is a listener: a container that connects to our infrastructure, watching the queue. It will connect to your Kubernetes cluster and, for each new job that is created, it will spin up a new pod.
So if you want to scale up your runner with Kubernetes, it's not about scaling up the first pod that you see after the deployment; there is a specific configuration in the Helm chart, the concurrent parameter, that lets you decide how many runner pods can be spun up at the same time. This is how the Kubernetes scaling for runners should be managed.
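In the gitlab-runner Helm chart's values file, that looks roughly like this; the instance URL and token are placeholders:

```yaml
# values.yaml for the gitlab/gitlab-runner chart — minimal sketch
gitlabUrl: https://gitlab.example.com/   # placeholder instance URL
runnerRegistrationToken: "REDACTED"      # placeholder registration token
concurrent: 10     # up to 10 job pods may be running at the same time
runners:
  privileged: false
```

The single listener pod stays at one replica; `concurrent` is the knob that controls how many job pods it will schedule in parallel.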
Another common error is regarding the certificate. You should pass a certificate to the runners only if you are using a self-signed certificate that is not released by a public certification authority. Deploying on Kubernetes: this is the most interesting case, obviously, deploying your application on Kubernetes. What does it mean? When GitLab creates the cluster, a GitLab service account with cluster-admin privileges is created in the default namespace to manage the newly created cluster. This is one of the most common questions customers have when looking at our wizard.
We are asking for cluster-admin, and the customer thinks that we are using cluster-admin for everything. No, it's not about that. Our cluster-admin is required to manage all the secrets that we use during the deployment. So when I define my application in my own namespace, I'm not going to use the cluster-admin role that I passed to the wizard; that account will take ownership of creating a specific service account for that specific job, and this gives the customer all the security regarding cross-deployments.
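For an existing cluster, the service account handed to the wizard is typically created with a manifest along these lines; the names follow a common convention and are otherwise arbitrary:

```yaml
# Service account + binding handed to the GitLab cluster integration
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gitlab-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin      # used by GitLab to create per-project
subjects:                  # namespaces, service accounts, and secrets
  - kind: ServiceAccount
    name: gitlab
    namespace: default
```

The per-project service accounts that GitLab then creates are the ones actually used for deployments, scoped to their own namespaces.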
You know that we have GitLab Managed Apps. GitLab Managed Apps is a set of Helm charts, with the full configuration, that we manage, and that lets you easily install a set of applications like, for instance, an ingress, a certificate manager, and so on. Remember that you are able to customize them: there is a specific file that you can add to your project that manages the overrides for that specific application. You can also centralize the configuration for the whole managed apps layer inside a specific project.
As mentioned, a dedicated service account will be created for the specific namespace. If you want to jump into the middle regarding the different managed apps, you are able to do that. For instance: I want to deploy, with Helm, a service that is really fun for me, but I don't want to use a dedicated Tiller deployment, or I don't want to use another authentication layer. I can just download the certificate that our deployment created and use the same configuration to interact with the Helm deployment, so the GitLab integration will see my deployment, and I will be able to see and check what GitLab made with this integration. And obviously the same goes for the values of the GitLab Managed Apps: say I deployed the ingress with GitLab, and I don't know, for instance, how many nodes I am able to scale up to with the concurrent parameters that I mentioned before. I can just use the Helm client to retrieve the actual configuration.
If you want to go beyond this and make a full customization, for instance to guard some secrets and so on, you can do it, but you need to create a custom pipeline. Remember that also in a custom pipeline you are able to use our integrations. For instance, if you want to retrieve the Kubernetes metrics, the deployment dashboard, and similar, you just have to follow our naming convention for the service, and in that case, also with a custom pipeline, you are able to have all the information in our integration panel for Kubernetes.
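A custom deployment job that still shows up in GitLab's environment and Kubernetes panels can be sketched like this; the image and manifest path are placeholders:

```yaml
# .gitlab-ci.yml — custom deploy job tied to a GitLab environment
deploy-production:
  stage: deploy
  image: bitnami/kubectl:latest    # placeholder kubectl image
  script:
    - kubectl apply -f k8s/        # placeholder manifest directory
  environment:
    name: production     # ties the job to GitLab's environment and
  only:                  # deployment tracking
    - master
```

The `environment` keyword is what connects a hand-rolled job back to GitLab's deployment views.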
Another important point is that Kubernetes includes liveness and readiness checks. You should always check which kind of health check Kubernetes is going to make on your application. The common instance with Auto DevOps is that it expects a web application listening on port 5000; a lot of failures that we also saw in the demo systems were just applications listening on a different port, like the default HTTP port.
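Those checks are declared per container; a minimal sketch, assuming a web app on port 5000 (the names are placeholders):

```yaml
# Container spec fragment — liveness and readiness probes on port 5000
containers:
  - name: web                      # hypothetical container name
    image: registry.example.com/app:latest
    ports:
      - containerPort: 5000
    livenessProbe:                 # restart the container if this fails
      httpGet:
        path: /
        port: 5000
    readinessProbe:                # withhold traffic until this passes
      httpGet:
        path: /
        port: 5000
```

An app listening on port 80 instead of the probed port would fail both checks and never become ready, which matches the failure mode described above.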
So double check every time that you are following the standard that Kubernetes expects. Inside our Auto DevOps configuration we are able, as you know, to create the Docker image for you, and the engine behind that is based on Heroku buildpacks. Heroku buildpacks are a great tool to standardize your code and to have an image that quickly runs your entire system, but pay attention to the security layer. Usually Heroku is not updating their images that often, so they could include some vulnerabilities. Especially with a customer that is using our security scans, it's easy for the container scanning to not pass the test, showing some high or critical vulnerabilities, and without defining a custom Dockerfile you are usually not able to fix them. So the suggestion is: start with the buildpacks to ship your application and ensure that your code follows the buildpack standards, but for production environments use the Dockerfile override.
Infrastructure as code. This is one of the common topics when talking about continuous deployment. A lot of companies now have a large setup for their infrastructure, usually a hybrid one: they are using part of their own data center and part of a different cloud provider, and it's always important to have a way to manage that. Infrastructure as code is the solution for all of this trouble.
Basically, you can use a Git repo to store the configuration for your cloud provider or your data center, depending on your environment, obviously. Infrastructure as code makes it easy to define, and ensure, that your infrastructure is running as defined. You have an easy catalog of what has been deployed; you are able to move fast and scale faster.
So we have change management, and obviously we have standardization. Basically, we are taking the common advantages of regular software development and we are copying those advantages into infrastructure management. This is why infrastructure as code is getting this kind of audience in the market. You have a lot of tools that are able to achieve this goal, with some small differences between them; I just would like to make a quick overview of them.
Puppet. Puppet is a tool that is based on a client-server environment. Basically, you have a catalog where you define, keyed on the host name, which kind of configuration should be set up on the client. The client fetches the new version of the catalog every five to ten minutes and applies the configuration. This tool is getting a little bit outdated, because it's more oriented to the normal virtual machine or bare metal environment than to microservices.
The catalog can be kept in a Git repo, and the clients download the updated catalog from there, so you have the whole lifecycle of the deployment on the infrastructure managed by the Git logic. Ansible. Ansible and Puppet are similar. The difference is that Puppet requires configuration of an agent that is installed on the local machine; Ansible has a similar catalog, which they call a playbook, and it uses a common connection protocol like SSH to connect to the target environment, make the assessment and apply the configuration. But they are really similar when used in an infrastructure-as-code environment.
Terraform. Terraform is a little bit different from these. Terraform is based on a set of providers, like, for instance, Google Cloud, Kubernetes, AWS, and similar, each giving you a list of a sort of API. Basically, you add your credentials to the Terraform configuration, you have a list of actions and objects that you can manage from that provider, and you use a set of modules to define the configuration for each object that is available from the provider. About that, keep in mind that from version 13.0 we started integrating Terraform more with our interface.
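A GitLab pipeline driving Terraform can be sketched like this; the image tag and the validate/plan/apply split are one common arrangement, not the only one, and cloud credentials are assumed to come in as CI/CD variables:

```yaml
# .gitlab-ci.yml — minimal Terraform pipeline sketch
stages: [validate, plan, apply]

image:
  name: hashicorp/terraform:light
  entrypoint: [""]

validate:
  stage: validate
  script:
    - terraform init
    - terraform validate

plan:
  stage: plan
  script:
    - terraform init
    - terraform plan -out=tfplan
  artifacts:
    paths: [tfplan]          # hand the saved plan to the apply job

apply:
  stage: apply
  when: manual               # human approval before touching infrastructure
  script:
    - terraform init
    - terraform apply -auto-approve tfplan
```

Saving the plan as an artifact means the approved plan, not a freshly computed one, is what gets applied.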
For instance, you are now able to see the result of the terraform plan inside the merge request screen. And remember also that we have a strong relationship with HashiCorp, the company behind Terraform, and we are working a lot to improve the synergy between our tools, so it's totally something that is interesting. Another point that could be interesting for you is the demo systems engineering platform: the whole platform is managed by GitLab and Terraform. You have access to our repo, and you can also find a repo dedicated to Ansible.
So we can use a pipeline to deploy our configuration to the deployment environment, but it is not like deploying software: there are some crucial differences between deploying software and deploying infrastructure. Not everything can be rolled back. If you destroy a resource like a virtual machine, maybe the drive containing the data is not available anymore, so you should pay more attention regarding that. And remember that inside our pipeline we are able to define if a job can be stopped or not, if a job can make a specific number of retries and similar, or if it is allowed to fail or not. This is totally important when talking about infrastructure as code. Pay attention also to stopping a pipeline during a deployment: you can leave the infrastructure in an unexpected state, and you could have trouble trying again with the same infrastructure.
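The job controls mentioned above (stoppability, retries, allowed failure) are plain `.gitlab-ci.yml` keywords; a sketch, with `provision.sh` as a hypothetical provisioning script:

```yaml
# Safety knobs for an infrastructure job
provision:
  stage: deploy
  interruptible: false     # never cancel this job mid-deployment
  retry:
    max: 2                 # retry up to twice, but only on
    when: runner_system_failure   # runner/system failures, not script errors
  allow_failure: false     # a failed provision blocks the pipeline
  script:
    - ./provision.sh       # hypothetical provisioning script
```

Restricting `retry` to system failures avoids blindly re-running a script that already half-modified the infrastructure.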
Maybe you removed some resources and the new ones were not uploaded yet. So, in case of a wrong deployment, you know, an infrastructure-as-code environment also requires a remediation plan or similar. And obviously, don't use infrastructure as code if you don't know how to do it manually, because also in this case it will be a complex situation if you need to investigate, or if you don't have a background on the common troubles of infrastructure deployments.
Closing this presentation, I added a couple of exercises at the end. It is something that you can totally try or not, it is up to you; no feedback or follow-up will be scheduled. It is a couple of exercises that I added here; if you have trouble following them, you can totally ask me for support and we can review what you are trying to do.
A: I'll type it up in a minute, but Cristiano, I was looking at our stages in GitLab, and I believe that CD covers both the Release and Configure stages. As a matter of fact, I believe it covers Release even more than Configure, because it has continuous delivery right in the description. I didn't see any talk about feature flags, or release orchestration, or release.
B: To give you some context behind the content of the slides: we made a quick poll, Chris ran something similar a couple of months ago, looking for the common questions and topics that were interesting regarding the whole presentation, the CI and CD parts, and what you mentioned wasn't in that list of requests. Actually, Kubernetes was one of the most requested things, like Terraform and similar.
If we have other topics to cover, I'm totally open to scheduling a new session. As you can imagine, the environment is really large; our tool covers a lot of different things in CI and CD, and so we needed to pick just some topics. But I'm totally open to looking at a new session, just let me know, it's not a problem.
C: Aligned with what John is saying: when you adopt CD it's very easy, and obviously GitLab gives emphasis to Kubernetes, because GitLab plays very well with it, right? I would like to see in a future presentation the more traditional infrastructure, which is not so easy, because I don't think we talk a whole lot about it. For example, I flew out to Brazil to a traditional bank, and the scenario that was given to me had nothing to do with Kubernetes.
It was very much like: okay, I have a Windows environment, so I need to put runners on Windows, and then how do I execute PowerShell? How do I remotely connect to my production environment, versus Linux? What characteristics do I need to make sure I fulfill these requirements of deploying to Linux environments? What about databases? I would like to see a CD deep dive that is more focused on those scenarios that are more traditional, on the more legacy customers, because that is what I feel.
B: I can totally understand, and this is why I added Puppet as the first tool for the kind of scenario that you mentioned; I do think Puppet is the perfect solution. Basically, you are able, for instance, to define that you have your front-end server, Windows or Linux, it's the same, the tool is cross-platform, and you want to ensure that a specific tag from your code, or just the latest version of a label from your Git repo, is deployed.
What you can do, using GitLab to pilot Puppet, is to say: okay, when I publish this version, ensure that the server is releasing the new version in production. Or, for instance, you can force the pull of the configuration from a specific Git ref every ten minutes, or one hour, whatever time interval you want. Another thing that you could do, for instance, is to attach a monitoring profile to the infrastructure as code. For instance, I'm getting back from my monitoring system that my front end is showing some kind of 500 errors. The monitoring system can react: in monitoring-system terms we are in an alert state, and the alert state can trigger an action. The action could trigger, for instance, a GitLab pipeline that is going to deploy the specific service again.
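One way to wire that up is an Alertmanager-style webhook pointing at a GitLab pipeline trigger; the instance URL, project ID, and token below are all placeholders:

```yaml
# alertmanager.yml fragment — hypothetical receiver that fires a GitLab
# pipeline trigger when the front-end 500-error alert starts firing
receivers:
  - name: redeploy-frontend
    webhook_configs:
      - url: "https://gitlab.example.com/api/v4/projects/42/trigger/pipeline?token=TRIGGER_TOKEN&ref=master"

route:
  receiver: redeploy-frontend
```

The triggered pipeline then re-runs the deployment job for the affected service.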
And remember that, talking about infrastructure as code, we are not able to manage just the single application that you are coding: using Terraform, for instance, we can update your DNS configuration, your switches, your storage, or everything that can be managed by any single kind of integration. So what you mentioned is a really common situation: really few customers are actually able to go for Kubernetes without any pain, and this is where infrastructure as code is more important, because when you have a customer using Kubernetes from a cloud provider, you may not have that kind of infrastructure as code, because 90 percent of the infrastructure is transparent.
That specific VLAN or deployment has been changed after a specific issue: we go back to change management, and they want to ensure that all the validations are made. You know, we can run all the tests; for instance, usually these kinds of tools have a lint test, because just a syntax error can break your playbook. And our approval stages fit perfectly in this kind of situation, because you can have the data center owner approve that you are going, for instance, to take three or four bare-metal nodes for the new deployment, and similar.
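The lint-plus-approval flow can be sketched as a pipeline, here with Ansible as the assumed tool and `site.yml` as a placeholder playbook:

```yaml
# .gitlab-ci.yml — sketch: lint first, then a manual approval gate
stages: [lint, deploy]

lint:
  stage: lint
  image: cytopia/ansible-lint:latest   # placeholder lint image
  script:
    - ansible-lint site.yml    # a syntax error fails the pipeline here

deploy:
  stage: deploy
  when: manual       # the data center owner clicks "play" to approve
  script:
    - ansible-playbook -i inventory site.yml
```

The manual `when: manual` job is the lightweight approval stage described above; protected environments can restrict who is allowed to run it.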
So it's true that our tool doesn't have an easy plug-in for each of these, but the reason why we don't have that kind of thing for the legacy world is just because the legacy world has a really wide variety of environments. There are a lot of tools that we are able to integrate with, but each has a different story behind it. I put on the table just three tools, Puppet, Terraform and Ansible, but there are also Chef and CFEngine, and it's probably a longer list.
So, in general, my suggestion is: go back to the origin of how they are managing the release manually. Are they copying the artifacts? Are they launching a script? What is the level of automation for the actual release? And if they have a tool that is managing the platform, we are totally able to integrate with that tool, and to migrate from, you know, the tool they use to a real infrastructure as code, if the customer is not doing that already.