From YouTube: 2022-12-14 Introduction to GitLab deployments
Description: Discussing the deployment and release process
B: So, welcome. This is going to be our, hopefully, very high-level overview of how deployments and releases are working. This is December 2022, so this is kind of current as of now. Within Delivery, we are looking after deploying to GitLab.com and also releasing to our self-managed users. They are connected, but running on quite different cadences. So, deployments to .com:
B: We are tracking that with MTTP, sort of in real terms of how frequently changes are landing. On .com we're usually getting around four or five deployments a day, so that one is running on a sort of hours schedule, and then everything that has landed on .com gets packaged up for the monthly self-managed release. So: same changes, different package, different release cadence. And then between those two we also have the scheduled security release, and we also have patch releases.
B: So, what this looks like... we've just, I've just started working on it, it's a little bit rough and ready, but coming soon... is I've added a... sorry, I'm really not getting on well with Zoom today; I don't know why it looks different to me.
B: I've just started working on improving things in the handbook. We previously had everything on the Releases handbook page; I've added a new page, Deployments and Releases, to give this level of overview of roughly what we are doing, for when you don't need to be the person that actually presses the button: how the deployments and releases work. And then from there we can drill down into the specifics of the processes. So, excellent: deployments and releases.
B: What is roughly happening here is there are sort of three phases, right. We have, with mostly the stage groups responsible for this bit, the part where (sorry) code is written: changes, features, bug fixes, whatever it is. They're being reviewed, and then eventually they get merged into whatever the project's default branch is, and then we sort of pull those up on a scheduled cadence.
B: This was determined by the release managers; at the moment it's every, maybe, every three hours that a package gets created. So basically everything that hasn't previously been deployed gets wrapped up into a new package. The packaging pipelines are run by Distribution, so Distribution provide us with a package, and that then triggers our deployment process.
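As a rough sketch of that cadence (not the real tooling; every name below is invented), the scheduled packaging step only needs to ask whether anything has landed since the last packaged commit:

```python
import subprocess

def undeployed_commits(last_packaged_sha: str, branch: str = "origin/master") -> list[str]:
    """Commits on the default branch that no previous package contains."""
    out = subprocess.run(
        ["git", "rev-list", f"{last_packaged_sha}..{branch}"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.split()  # newest first

def maybe_cut_package(last_packaged_sha: str) -> None:
    """Run every ~3 hours: wrap everything not previously deployed into a new package."""
    pending = undeployed_commits(last_packaged_sha)
    if not pending:
        return  # nothing new landed; skip this cycle
    # Hand-off to the packaging pipeline (owned by Distribution), stubbed as a print.
    print(f"trigger packaging pipeline at {pending[0]} ({len(pending)} new commits)")
```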
A: Okay, first questions. Yes, go for it. So, my understanding of this is that essentially every time a deployment comes around, a new package is created, and the diff is a list of commits to the default branch? Essentially, yeah.
A: Commits that landed within a certain window. And you can definitely say: okay, between this package and the new package, these 20 new commits are in. You don't know what those are, right? I mean, you do, but it could be anything that is committed in that period of time.
B: That's right, exactly, yeah. So we do know specifically what they are, but from the deployment process's point of view it is treated a bit like a black box. We are not expecting to evaluate "this particular commit was more risky than that one, and therefore the process changes". It's automated, as close as we can get to continuous deployment.
B: So, on step four, we basically have a package that automatically triggers off our deployment process, which I'll show you a little bit more in a second, but largely it is: if we are able to do the deployment, it begins deploying, and that then feeds into... we attempt to deploy this to gitlab.com.
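A minimal sketch of that hand-off, with all names invented: the arrival of a new package from Distribution is itself the trigger, and the pipeline starts with no human in the loop unless something blocks it (the blocking checks come up later in the conversation).

```python
def deployment_blocked() -> bool:
    # Placeholder for the real checks: change locks, active incidents, etc.
    return False

def start_deployment_pipeline(package: str) -> None:
    print(f"deploying {package}, starting with staging canary")

def on_package_built(package: str) -> None:
    """Hypothetical event handler: each new package automatically
    kicks off a deployment toward gitlab.com."""
    if deployment_blocked():
        print(f"{package} queued until the block clears")
    else:
        start_deployment_pipeline(package)

on_package_built("gitlab-ee-2022-12-14-package")  # illustrative name only
```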
B: So that is a regular cadence going on: new packages with any new changes get created, they hopefully get deployed to .com, and the cycle repeats. Then, at a set point in the month, we trigger the release process. That will happen this month: we'll be kicking it off at the end of this week, and most of the prep work will take place next week, ahead of the 22nd, basically trying to pull it together.
A: Okay, I have a question. Let's say you have created a deployment that is running on gitlab.com today, yeah. Is there any difference between that package that was deployed to gitlab.com and a self-managed package? Or is it literally that at some point you say: okay, we deployed a package to gitlab.com three hours ago, that's going to be the release candidate?
B: So, it is a different package, and that is down to the time period. A new package that's heading to .com this morning, for example, will only have about four hours of changes in it, because it will be built from the previous package. A package going to self-managed will have a month's worth of changes, because it will be going from the previous self-managed package, which was the 22nd of the previous month. So the size of the package is different. I think... I don't know.
B: I'm not certain about this, actually, but I think it must be a different pipeline, because I think the packages we are creating for .com are intended for .com. I don't know that we create... actually, that might not be true, because we build more on Dev.
A: Just for the sake of the argument, right: you build new packages for gitlab.com every few hours, and then on Friday... you know, this is not, like... all of the changes that happened through the week are incorporated into the final package, plus the latest diff from a few hours ago. So the package on Friday is everything that changed in the entire week, correct?
B: Yes, yes, sure, yeah, you're totally right. Let me think about how they end up being different in that case.
B: Yeah, that's right. I think what the big difference will probably be... actually, yeah, it's a great question, because I don't know the exact differences. I think it's probably around tagging. So it's probably about what we're actually saying the version is, right? So on .com it's...
B: ...you know, the previous version was a few hours ago and we have a new version. Self-managed, obviously, is tagging a much, much larger change set. So I think that's probably the big difference, but actually, yeah, I don't know the exact differences.
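To make the tagging contrast concrete (a hedged illustration only; the formats below are invented, not GitLab's real versioning scheme), an auto-deploy package is naturally identified by a timestamp, while the monthly release gets one semantic tag covering a month of changes:

```python
from datetime import datetime, timezone

def auto_deploy_version(now: datetime) -> str:
    # .com: the previous version was only hours old, so a timestamped
    # identifier (format invented here) distinguishes packages.
    return now.strftime("15.7.%Y%m%d%H%M")

def self_managed_tag(major: int, minor: int) -> str:
    # Self-managed: one tag per monthly release, a much larger change set.
    return f"v{major}.{minor}.0"

print(auto_deploy_version(datetime.now(timezone.utc)))  # e.g. 15.7.202212141030
print(self_managed_tag(15, 7))                          # v15.7.0
```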
A: Yeah, because you could... like, just conceptually, and again I don't know... let's say you created a release candidate and tagged it as such. You should be able to deploy it to gitlab.com right at that point, because it just then happens to be the tagged release that is running on gitlab.com, because...
B: Yeah, that's right, that's right. No, you're right, yeah. And at the moment .com is ahead of self-managed, so the ordering is in effect now, but yeah, I think that's correct.
B: Right, yeah, let me talk through that, because that's definitely... you will probably be slightly aware of it, although not super aware (it's more aimed at the engineers), but you will probably be slightly aware of the things that come up. I can certainly go through that, for sure.
B: Well, let me go through that, actually. What happens is we basically start the prep for the monthly release; this month we will begin that on the 16th. The challenge we have is that we need to be able to package a stable set of changes, right? On .com we can recover reasonably straightforwardly, so it's a lot less risky: new changes come in, we deploy them to .com, and if needed we could roll back, or, more commonly, we tend to roll forwards and we fix from there.
B: The challenge for self-managed is that we take a point in time, we cut, and we say: all of these changes are a stable release candidate. So what we do to, firstly, guarantee the 22nd, but also, secondly, to try and guarantee the stability, is we have sort of steps where we almost guarantee changes will make it. The first one of those will happen on this Friday.
B: At the end of the day, the release managers will announce the candidate commit for self-managed, and what that basically is, is sort of our insurance policy. We're basically able to say: everything up until this point will definitely go into the 22nd, but hopefully more, because that's quite a few days; in this case it'll be the 16th, so quite a few days ahead of the 22nd. But if things really don't go well, we know we have a point where we can say: this has been running on .com,
B: it is stable, and we will cut from there. So that will be happening on the Friday. It will be a commit that matches up with a deployment that the release managers have made to .com that they're happy with.
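The "insurance policy" reduces to a simple selection rule. A sketch, assuming a hypothetical record of .com deployments (the record shape is a guess for illustration, not the real release-tools data model):

```python
def pick_candidate_commit(com_deployments: list[dict]) -> str:
    """Newest commit known to have run stably on .com: the guaranteed
    floor for the monthly release, which may still move forward later."""
    stable = [d for d in com_deployments if d["healthy"]]
    return max(stable, key=lambda d: d["finished_at"])["sha"]

print(pick_candidate_commit([
    {"sha": "abc123", "finished_at": 1, "healthy": True},
    {"sha": "def456", "finished_at": 2, "healthy": True},
    {"sha": "bad789", "finished_at": 3, "healthy": False},  # skipped
]))  # -> def456
```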
B: That's right, yeah. So that's where we pick the set of changes. Then, what will happen from the 16th onwards is that on the Monday, so the 19th, the release managers will still be going through deployments, and we will still, hopefully, be achieving successful, stable deployments. If we do, the guaranteed commit starts to move. So we might get to the end of Monday having had, you know, three more deployments, and everything looks good: .com is stable.
B: We haven't seen any incidents, we haven't got any failing tests. Great: all of those extra changes can then also be considered part of the monthly release. And I believe that this month the actual cutoff will be... I believe it is probably the 19th. So we have a date where we basically say: right,
B: this is the commit that we're going to tag from. The reason we do that a few days before the 22nd is that once we do the tag, we're basically at step six. First of all we check we can generate a passing release candidate (we have tests and things that run for that), and then we deploy it to the pre environment.
B: We check we can deploy it, we check pre remains online, we don't have any downtime on the deployment, and we run tests there. If there are any problems, additional changes will be brought into the self-managed release, but in an ideal month that all just goes completely smoothly. Great: we can tag from there, and that is the self-managed release for the 22nd.
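Compressed into a sketch, the sequence just described looks roughly like this; every step is a stub standing in for a real pipeline stage:

```python
def cut_self_managed_release(candidate_sha: str) -> None:
    rc = f"rc-{candidate_sha[:8]}"  # invented naming
    print(f"1. generate release candidate {rc} and check it builds and passes tests")
    print(f"2. deploy {rc} to the pre environment")
    print("3. verify pre stays online (no downtime) and run tests against it")
    # Any failure above pulls additional fixes into the release and repeats;
    # in the ideal month it runs straight through.
    print(f"4. tag the final self-managed release from {rc} for the 22nd")

cut_self_managed_release("def456789abc")
```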
B: One big note: we have two really big unknowns around preparing the monthly release. The first one is .com stability. It could be something internal, like a change we deploy not being as stable as we need it to be, or it could be external, where, for unknown reasons, something happens that prevents us deploying for a period of time.
B: So that's where the sort of early steps of prep begin to come in. The second unknown is that at the point we do the release candidate commit and we basically say "this is the change set", there is a chance that it isn't a stable change set, in which case we have to make additional changes. So sometimes it's a case of having too many changes; we sometimes have people coming and saying: we've accidentally picked up something that's part of a set of changes.
A: They don't really fully understand what's going on, but you do need... So, what happens sometimes, at least from a product perspective, is: you really want to ship this feature, and then more than one change needs to actually get merged, right? One of them makes it, but not all of them make it, but...
A: ...half of it doesn't make it, and then, you know, there's this: will it go? Will it not go? Nobody knows. Or something is wrong, at which point people get things like: hey, this thing is not behaving as it should, what will we do? Right? And then there's manual involvement from the...
B: Yeah, exactly. So I think it's definitely a difficult one, because there are certainly a lot of unknowns in that prep window, and I think the challenges come from the fact that, above all else, we guarantee the 22nd. So there are definite times where, I think, people...
B
You
know
there
are
changes
that
like
ideally,
would
make
it
in,
but
from
a
Time
perspective.
Just
simply
can't
so
that's
always
the
the.
A: Yeah, another question. Just looking at your plot (I'm a fairly visual person, so I appreciate it), the numbering is not strictly correct, right? It's like... I mean, it is, sort of, but there's a loop in here, right?
B: Are we able to deploy them? If so, great, let's deploy. And then there's kind of a secondary process on the release, which again is actually also circular, but it's on a monthly cadence, and they basically just feed: deployments feed into releases. But yeah, sure, it's not a linear view. Okay.
A: I get it, but okay.
B: Yeah, but I think the key thing here is that we do sort of almost have three independent processes, right? The developer, or the stage group, life cycle is its own cycle of "create changes and merge changes", and that is actually separated from the deployment cycle, where the deployment cycle is: are there changes? Can I deploy them? And then, again, releases are the same, but there will always be changes, because it's a month. So again it's: are there changes?
A: I get that, and just in terms of terminology (and this is just me trying to get my head around it): you say that there's the deployment cycle and the release cycle, but in a way it feels like the...
B: That's correct, yeah. I wonder if the other big difference we have is almost around, maybe not requirements, but, I guess, maybe constraints. One of the things around .com is that we can recover more easily; we can detect problems more easily and recover more easily; we have much greater use of things like feature flags; so we have more deployment strategies. The challenge, I think, with self-managed is that it sort of needs to be a ready-to-go package. There are certain assumptions around that: around how backwards-compatible changes work, you know, you have the kind of multiple milestones to roll certain changes out or remove things, and also just the challenge of recovering. So I think those are the bits that make it, I think, a little bit tricky, because they're almost a totally different group of considerations. So, yeah.
A: Essentially, and it's not meant to be nitpicky, but to me it's like there are just different ways to deploy GitLab. One is deploying GitLab on .com, which has its own set of constraints and intricacies. Then there is the one of deploying to self-managed, with different intricacies, yeah. And then there is another one coming up, which is deploying to GitLab Dedicated, which...
B: That... like... so, I remember how it worked there, but it was something along the lines of: Distribution gave us the capability to deploy and release, and Delivery are basically the ones who take...
B: ...turn it into... yeah, like, turn it into a reality. Yes, but yes, I think that's...
A: We need, you know, we need to build, I don't know, like, an arcane package format for gitlab.com, because we're changing the Linux distribution that we run, right? So we need your package thing to not only spit out a Debian package but, you know, an XZ package thing; can you do that? And then they would say, like: well, that's crazy, but they would know how to actually make that a reality, not Delivery.
B: We give the package requirements to Distribution, who create the package and let Delivery know "here is the package that you're using", basically, and we just pick up that package and see it safely through to its destination.
B: Exactly, exactly. So, in terms of thinking about this and trying to move towards automation, what we try to do is not worry too much about what is in the package. There have been times in the past where, for example, with things like rollbacks, some of the nuance can fall to people going: well, let me look inside the package and actually see if it's safe. We move away from that as much as we can.
A: Yes, okay, cool, I think I get that. So I'd say my assumption right now is that picking a commit from the default branch, right, is an understood thing; there's not very much you have to worry about from that end, as you pick something at a specific moment in time, and that's okay. But the challenge comes in assessing the stability of that change set, right?
B: Yes, yes, that's largely true. Now, the challenge I get at, from a product point of view, comes from the...
B: We are sort of expecting that you're just working with a single repo, and that's always been our kind of major difference between what we end up trying to do, or what we end up needing to do, and how the product fits with that. So...
A: A single repo: you package that up into something and you deploy it, exactly. And, you know, we don't... so it's not that we don't have a monorepo, but we ship a monolith, like a big 900-megabyte (I don't want to say it's crossed the gigabyte mark by now) package, yeah. But we package everything in there, right? Everything that you need to run. Many things, which means you have to pull from 15 different repositories.
B: I think that's it for us, because what that makes it difficult for us to have is a sort of defined set of drop-in points into the product. So, if I want to see this, where would I drop in?
B: So that's where we end up building a lot of our custom tooling: to actually bring all of that information together so that, from a Delivery perspective, we can look at things from our package view, which is what we care about (this one package is going to .com), and ignore the fact that actually, inside this package, we are pulling changes from eight projects or something like that.
A: I see, okay. So, for example, for your package on gitlab.com you don't need Postgres included, because gitlab.com has its own way of managing Postgres, via Patroni. So the Postgres part of the package that would be built for self-managed is useless for gitlab.com; you don't really need that.
B: We don't use that, yeah. And I was thinking more like, for example, release managers want to know "this package is on this environment". They don't necessarily care that the package doesn't have Gitaly changes, or that it does have Pages changes; that doesn't really matter from a release manager perspective. They care about this unit of a package, but I don't think we really have a way of representing that in the product at the moment.
B: ...making it get through. So we can see: on the 19th we will announce the candidate commit; on the 20th we're going to create the test release candidate and see how that works. So basically this just walks through the steps, so it's relatively straightforward to organize those things.
B: This is where I'm never, ever going to find that... I think... that topic... and here we go: too many jobs. So that's the release process: really straightforward, just a load of steps that we go through. Now, on deployments, we have a lot of interesting pieces that sit around deployments.
B: So if I dig into the deployments page, this is where we get more involved. Those three sort of steps (a new package is created, we check if we can deploy it, and then we deploy it): even this isn't the super-detailed view, but it is the next level down. We have a layer that sits above this, which is the decision of "can we deploy?". What we're checking for there is: do we have a production change lock in place? Is the weekend change lock in place? Do we have an S1 incident running? There are certain overarching things like that that would just automatically block the entire deployment process. Assuming things are healthy enough for us to deploy, we would then kick off this process, yeah.
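That gating layer reduces to a handful of boolean checks. A sketch with the three blockers just named; the structure is invented, not the real tooling:

```python
from dataclasses import dataclass

@dataclass
class DeployGate:
    production_change_lock: bool  # scheduled-in-advance lock
    weekend_change_lock: bool     # recurring Friday-to-Monday window
    active_s1_incident: bool      # severity-1 incident in progress

    def can_deploy(self) -> bool:
        # Any single one of these blocks the entire deployment process.
        return not (self.production_change_lock
                    or self.weekend_change_lock
                    or self.active_s1_incident)

print(DeployGate(False, False, False).can_deploy())  # True: kick off the pipeline
print(DeployGate(False, True, False).can_deploy())   # False: blocked for the weekend
```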
B: We are reasonably close. For the blocking on incidents, or on change requests of a certain severity, we are using labels to drive that functionality. For the weekend change lock, that's custom tooling; actually, the Release stage group do have the deploy freeze feature, which we could switch over to, and that is comparable.
B: We just haven't prioritized the work to do that switch, but we could. The one we don't have is any way of handling something like a production change lock, and that is a slightly different type of change lock to the weekend change lock. The weekend change lock is always between set dates:
B: it's always between a certain time on Friday and a certain time on Monday, and it can be overridden given certain criteria. The production change lock is different, because those are scheduled in advance, and they are arbitrary dates, because they depend on need. We don't have anything yet that matches the use case we have for those. Okay.
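The distinction is easy to state in code: the weekend lock is a pure function of the clock, while a production change lock has to be looked up in a schedule. A sketch (the exact cutoff hours are invented):

```python
from datetime import datetime, time

def in_weekend_change_lock(now: datetime) -> bool:
    """Recurring lock: roughly a set time on Friday to a set time on Monday."""
    wd = now.weekday()  # Mon=0 .. Sun=6
    if wd == 4:                        # Friday evening onwards
        return now.time() >= time(23, 0)
    if wd in (5, 6):                   # all of Saturday and Sunday
        return True
    if wd == 0:                        # Monday until the morning cutoff
        return now.time() < time(6, 0)
    return False

def in_production_change_lock(now: datetime, scheduled: list[tuple]) -> bool:
    """Scheduled-in-advance locks: arbitrary (start, end) windows."""
    return any(start <= now < end for start, end in scheduled)
```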
A: But the problem you're trying to solve, or have solved, right (it is in existence), is that you need to be able to conditionally schedule, or you need to check whether deployments can run, correct? So, yeah, in an ideal world, if you have a deployment system on GitLab, you would be able to say: okay, here are the conditions under which the deployments can happen, right? And then you can define rules, right? Let's say, well...
B: Right, exactly, yeah, and have ways for us to... All of our sort of change-lock-type processes have a way of being overridden if needed, so we'd also want some way to do that stuff as well. Okay.
A
We
have
on
our
roadmap
for
forget,
love,
I,.
B: ...believe we have elements going towards it, yeah. There are things around... I don't know exactly what it's called, but there are ways of authorizing deployments, either, maybe, authorized in advance or authorizing a particular one. So those things get quite close. So, yeah, I believe that we are... like, we have some issues that describe things; I think they're being considered, yeah, yeah.
B: No, that's great, cool. So, once we have basically determined that we have a package and we are able to go ahead and do our deployment, we then attempt to start deploying. Now, at each of these stages (as I talk through the environments), each environment also has the ability to be locked as a sort of isolated thing, so the pipeline can also be paused at stages by locking certain environments. What that means is that it allows us to be running multiple deployment pipelines in parallel, but not overlapping on the same environment. So if deployment A is in progress and it's currently deploying to the production canary environment, pipeline B, which started after it, won't also be deploying to the same place; it will get queued up behind it. One of the ways we've got MTTP down is that pretty much every deployment is near-constantly deploying, if we can achieve that. It's like a constant conveyor belt.
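The environment-locking behaviour described here (parallel pipelines that never overlap on one environment) is essentially a mutex per environment. A sketch with threading primitives; the environment names follow the conversation, everything else is invented:

```python
import threading

# One lock per environment. A pipeline holds only the lock for the
# environment it is currently deploying to, so pipeline B can be busy on
# staging canary while pipeline A occupies production canary.
ENV_LOCKS = {env: threading.Lock() for env in
             ("staging-canary", "production-canary", "staging", "production")}

def deploy_to(env: str, package: str) -> None:
    with ENV_LOCKS[env]:  # queues here if another pipeline holds the environment
        print(f"deploying {package} to {env}")

def pipeline(package: str) -> None:
    # The conveyor belt: each pipeline walks the environments in order.
    for env in ("staging-canary", "production-canary", "staging", "production"):
        deploy_to(env, package)
```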
B: No, we're not quite there. We have a couple of pieces where we make sure... the reason for that is all around the mixed-version tests: we have some extra controls to manage environments, to allow specific testing of mixed versions. Because of that, we will always have two (one always, but often two) versions running in production, because the way we deploy to our clusters is that we do two and then the other two.
B: So we have four of them, so it will be the previous version and the new version, and then everything is done, and then the next deployment will come through. We haven't got to a stage, though I expect we will in the future (it will make testing a bit more complicated), where, theoretically, you could have a different version on each of those four clusters, and it could be a more granular rollout that allows things to go through. We haven't got that right now; it is A and B. Okay.
B: So, I'll try not to go into too much detail, but shout if you do have more questions, just because some of this you really won't need to know. Basically, we have a number of environments, right? So, a package: we begin deploying. Now, staging canary is our key environment; that's where pretty much the bulk of our testing takes place. It's the first environment that we deploy to, and so that's the kind of essential one.
B: Alongside that, we also deploy to the staging ref environment, which is built from the reference architecture. That is not a blocking piece of the deployment pipeline, but it gets the same package: in parallel to staging canary, we also deploy staging ref. So, staging canary: we deploy the package and, assuming that all goes well, we run QA tests.
B: These are testing the functionality of the package, but they also contain the mixed-version testing. This is where it starts to get a little bit complicated: these tests also hit staging, and this gives us a way of recreating the production rollout environment. In production we have a production canary and we have the production main fleet; they share a database, and we've recreated the same setup on staging. The idea of the mixed-version testing is basically to ask: is staging canary working as expected when we have this new version on it, but also, is staging still working correctly while staging canary has the new version on it? That allows us to check for mixed-version problems, which we have seen in the past in production. Previously we've had cases where we put the new version on staging can... oh, sorry, on production canary, and, because it shares the database, users of production were getting unexpected behavior because of that canary.
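A hedged sketch of that check: with version N+1 on the canary fleet and version N still on the main fleet, sharing one database, QA has to pass against both halves.

```python
def mixed_version_check(qa_suite, canary_url: str, main_url: str) -> bool:
    """While staging canary runs N+1 and staging still runs N (one shared
    database underneath), both must behave correctly."""
    canary_ok = qa_suite(canary_url)  # is the new version itself healthy?
    main_ok = qa_suite(main_url)      # did N+1 (e.g. its migrations) break N?
    return canary_ok and main_ok

# qa_suite stands in for whatever actually exercises an environment.
print(mixed_version_check(lambda url: True, "staging-canary", "staging"))  # True
```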
B: So, what will happen on this pipeline (I'll repeat this at the end), but basically what happens on the pipeline is: we deploy to staging canary and we run tests; if those pass, we automatically deploy to production canary and we run tests; and if those pass... we also have additional health metrics coming from production canary, since the production canary environment has a small amount of real traffic.
B: They have a manual promote button, and that then triggers the deploy to staging. So we always begin with staging running ahead of production; we keep the two versions in parallel, with staging slightly ahead, and then, some 30 minutes later, the production deployment also triggers, and that runs through.
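Putting that ordering together as a sketch: promotion is the one manual step, the rest chains automatically, and staging is kicked off roughly 30 minutes ahead of production. All function parameters are stand-ins:

```python
import time

def run_deployment_pipeline(package, deploy, run_qa, promote_approved):
    deploy("staging-canary", package)
    assert run_qa("staging-canary")       # a failure halts the conveyor belt
    deploy("production-canary", package)
    assert run_qa("production-canary")
    while not promote_approved():         # bake on real canary traffic until
        time.sleep(60)                    # a release manager presses promote
    deploy("staging", package)            # main fleets: staging first...
    time.sleep(30 * 60)                   # ...production ~30 minutes later
    deploy("production", package)
```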
B: What that means is that the canaries get upgraded first and then the main fleets: staging canary, production canary, then staging and production. Staging and production will stay... we keep those two versions the same. So if the deployment to production failed for whatever reason, we would roll staging back to match the versions up; if we deployed to staging and it succeeded but we didn't do the production one, we would also roll back. We carefully manage the versioning.
B: It does two things. The main one is that it gives us an hour's baking time between production canary and production, so that's a slightly longer window for our baking time. It also means that if anything really drastic happened on staging, we have a chance of stopping on production. In actual fact, they end up more staggered than they look, because production is so much bigger than staging: staging actually takes only about 30 minutes to deploy, and production takes about 90 minutes. So it actually gives us a window.
B: I don't think we've ever actually needed it, though, interestingly, because of the way staging canary and production canary work: if it is a problem that is going to show up in the staging deployment, we've almost certainly caught it ahead of that time. But it's just another added safeguard, so that, you know, we do run the change through every test environment ahead of our production deploy.
B: That's it, yeah, exactly. For most things that the developers do (maybe not 100%, but pretty much everything), it should be visible on staging canary, yeah. One thing that we do test quite uniquely on staging is our infrastructure changes. If, for example, the database has been upgraded, it would be different between staging and production. So that's kind of an additional test area.
B: Cool, thank you. So that is where the complexity comes in, right? This is what the release managers spend most of their time coordinating and managing. What will normally be happening (you see above here: these are the times we actually are creating new branches and new packages) is that they come in every three hours, or three to four hours, based on the release managers' schedule. Now, what we have here is a pipeline that takes around five to six hours.
B: So what would commonly be happening is that the release managers would be deploying: let's say there's a deployment that's going on to staging canary; they may have the package ahead of it currently on production canary, or baking; they may have another one that's currently rolling out to production. They are coordinating all of those pieces, and if any of them fail, or if we have any problems on production, they take action and basically decide: what do I need to do?
B: That is the challenge, for sure. At the moment it is reasonably well automated, and I think we can see a decent path to automating further. Right now, the only manual step is this, box six, and that's actually not a required manual step from the technical point of view.
B: It's an agreement of ours: do people kind of clock in, and therefore deployments are running, or do you clock out and that pauses them, or do you have some other way of control? We haven't reached a great way of actually solving that, so right now, given this is literally just a button,
B: it's a fairly low overhead. But all the rest of it is automated, and the environments all check for, basically, availability: am I already in use? Am I healthy? Can I allow this deployment to go through? And they pass through the gates. I think where we're going to have the really big challenges is due to the complexity of this and the fact...
B: I hope I was actually audible. Oh yeah, yeah, absolutely. Nice, cool. So, one other thing I've mentioned, which we also have a great opportunity for but haven't taken yet (we have the pieces, and Quality are also thinking about it, but we haven't yet done it): at the moment we rely quite heavily on QA tests. We do the deployment, we then run tests, and if the tests pass, we do the next deployment.
B: It's about an hour for staging canary, it's about an hour for production canary, and it's actually a couple of hours, probably, for production. So we do quite a lot of deployment work; then the tests can be about 20 minutes, and they may fail, and basically what they could be telling us is: yeah, this thing didn't deploy correctly, or, you know, something is fundamentally wrong in the package. So what we've been talking about and thinking about for a while (and Quality are thinking...
B: ...the same) is how to improve environment health checks as we do the rollout of the deployment. We have better checks on production, but they are more telling us "is the production environment healthy?"; they're not re-evaluating the package. In order to do that, we would want something similar on our test environments, so that, as we roll these out, we can actually be monitoring the health of the environment: has it been affected by this new package?
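The health-check idea being described would look something like comparing a few golden signals before and during the rollout, and halting if the new package moved them. Thresholds and metric names below are invented:

```python
def rollout_is_healthy(before: dict, during: dict) -> bool:
    """Crude sketch: flag the rollout if error rate or latency regressed
    meaningfully once the new package started serving."""
    error_ok = during["error_rate"] <= before["error_rate"] * 1.5 + 0.001
    latency_ok = during["p99_latency_ms"] <= before["p99_latency_ms"] * 1.2
    return error_ok and latency_ok

print(rollout_is_healthy(
    {"error_rate": 0.002, "p99_latency_ms": 450},
    {"error_rate": 0.011, "p99_latency_ms": 470},
))  # False: pause the pipeline rather than waiting for QA to fail later
```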
A: Yeah, well, this is maybe timely, because all of those things always happen at the same time, right? I have a meeting later on with Wyoming; there is some concern about mandatory upgrade paths, and there's another MR that I saw, which is not merged yet, that would allow product managers and engineering managers, or a combination of the two, to waive compatibility requirements for certain changes.
A: So, let's say you know that the front end on some minor sub-page is going to be broken until this other thing is shipped (I don't fully understand that yet), and my worry would be that this would show up in deployment pipelines as increased errors, right, as a "something is wrong" signal. So how do we know that that is right?
B: As part of the packaging QA job. But for most code changes that's a manual trigger, so it's sort of like: if you do a real big front-endy change or something like that, you would... you know, you're definitely recommended to run them, and I think people generally do run them, but for the majority of changes they don't get run. So this is the first time that we actually run those. So there's a very, very good chance that, yeah...
B: And we have a bit of an unfortunate delay there, because what sort of happens is that it's a little bit of a round robin. The release managers will see... not always, but the worst case is that release managers are the first to say "oh, these tests are failing"; they would go to the Quality on-call engineer.
B: The on-call engineer would go: oh, okay, interesting, that looks like a real failure, let me go and find someone. They either use dev escalation or, I think more commonly, they know which stage group to go directly to, and they ask a developer, basically: is this a real failure? And make a decision from there. So, certainly, it can be quite a long debug process, yeah.
B: And if it's helpful to dig into more of these things, then we can also certainly go through the more practical side of what the release managers are seeing and doing, in amongst either this process or the release process, if you want to know more of the kind of hands-on details, yeah.
A: ...five seconds of work. I think there are a couple of things that I'd like to learn, and you touched on some of them: how time-intensive are some of those boxes; then, who owns some of those boxes, and who does what where? I think that is interesting to me. And then, lastly, maybe we can try and figure out what parts here require...
A: ...custom tooling. And I think we talked about it yesterday: digging into that may be quite useful, because it may surface opportunities for us, in what to do and where we believe the value is. And, given that you have a long sort of process, we may need to choose an area, a level of abstraction, but...
B: Okay, absolutely, yeah, that sounds great. I definitely don't think there's anybody who has the whole thing in their head either, because we have a sort of layer beneath this, which is the release managers, and then we have the layer beneath that, which is how the tools actually work. So there are good levels of complexity, I think. Certainly for you, you don't need to go too many layers down for this. But yeah, I think that sounds like a great next step.
A: Excellent. Well, then, thank you so much for your time; I learned a ton today. I think this is quite the accomplishment as well, to set all of that up, so I'm pretty impressed, and I know that there's a lot of thought that goes into all of those parts too, because there's risk involved, exactly.
B: Okay, thank you, Amy. I'll...