From YouTube: 2020-10-14 AMA about GitLab releases
A
Okay, let's get started. So hello, everyone, welcome to this Ask Me Anything about releases and deployments with the Delivery team. I'm Amy Phillips, Engineering Manager with Delivery, and we're also joined by quite a lot, hopefully most, of the Delivery team as well.
B
Thanks, thanks for organizing this. If no one else has a question: I think "releases and deployments" here is not about Distribution; this is about releasing to GitLab.com.
B
Cool, thanks for that context, appreciate it.
B
Maybe the question is: in some far future, if you merge something as a maintainer, is it going to get released to GitLab.com immediately? Or are we going to say: look, when the sun is over the Pacific we have very few team members who are awake, so we're just going to hold those releases? Or are we going to say: look, because few team members are awake at that time, there will be few merges, so it's not a problem?
A
That's
a
great
question,
so
I
think
certainly
the
way
we're
thinking
about
releases
right
now
is
we
are
working
through.
So
at
the
moment,
release
managers
play
a
very
important
role
in
releasing
it's
a
manual.
There
are
manual
steps
in
the
process
and
they
are
crucially
doing
the
kind
of
manual
promotion
out
to
production
and
we're
working
at
the
moment
to
put
in
place
kind
of
assisted
deployment
which
takes
away
a
lot
of
those
manual
steps.
A
But
I
sort
of
provide
to
release
managers
with
lots,
more
information,
but
there'll
still
be
a
manual
promotion
to
to
production
and
eventually
we'll
get
to
the
stage
where
that
becomes
automated.
Once
we
have
kind
of
full
confidence
in
the
process,
at
least
in
the
very
like
I
mean
in
in
the
future
that
we
can
see
so
far
that
will
take
place.
It
will
be
an
automated
deployment,
but
during
a
window
like
an
approved
window,
the
approved
window
will
be
when
we
have
a
release
manager
online.
Who
can
be
responding
if
something
goes
wrong?
A
So
I
think
we
are
probably
a
very
long
way
away
from
wanting
to
like
exp
either
expand
the
release
managers
to
all
regent
or
release
when
we
don't
have
a
release
manager
available
at
some
point
things
will,
I
guess,
become
so
reliable
that
that
would
be
safe.
We
could,
I
guess,
switch
to
like
page
of
duty
if
it
goes
wrong,
but
that's
a
very,
very
long
way
out.
So
I
think
the
focus
at
the
moment
would
be
on
moving
to
automated
deployments,
but
during
approved
windows.
A
So
we
have
release
managers
we
pair
up
and
we
have
a
emea
time
zone
and
also
a
america's
time
zone.
So
we
we
can
deploy
through
quite
a
lot
of
the
day
and
then
it's
just
the
apac
region.
We
don't
currently
cover
but,
as
you
say,
right
now,
there's
actually
not
so
many
changes
coming
in
in
that
time.
So
it's
a
little
bit
less
impact.
B
Thanks
for
that-
and
maybe
I
have
this
assumption-
that
it's
possible
to
release
individual
changes,
but
I
could
even
imagine
that
we
have
like
at
peak
hours,
we
have
kind
of
more
merges
happening
than
that
we
can
do
releases.
Suppose
we
have,
I
don't
know
50
merges
on
a
day,
then
you
have
only
half
an
hour
per
release
to
make
the
release
even
takes
longer
than
that.
How
should
I
think
about
that?.
A
Yeah,
absolutely,
I
think,
that's
I
think,
seeing
the
number
of
changes
that
come
in
and
even
even
with
the
fastest
pipeline.
I
think
we
will
struggle
because
of
the
size
we
are
and
the
number
of
changes.
I
don't
think
we
could
ever
be
in
a
place
where
one
change
would
have
its
own
deployment.
I
think
we'll
always
have
batches,
but
we
are
working
to
make
the
releases
as
fast
as
possible
so
that
that
batch
size
does
come
down
at
the
moment.
It's
taking.
A
It
takes
around
six
hours
to
run
the
full
pipeline
yeah.
We
have
a
lot
of
changes
like
we
have
to
go
through
staging,
so
we
deploy
to
staging
and
we
have
to
deploy
to
canary,
and
then
we
wait
on
canary
for
an
hour
with
just
a
limited
set
of
traffic
to
detect
errors,
and
if
that
goes
well,
we
start
deploying
to
production
and
that
takes
about
two
hours.
So
there's
lots
of
there's
lots
of
room
to
improve
that.
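As a rough sketch of that staged flow in Python (stage names and the staging duration are assumptions; the one-hour canary bake and roughly two-hour production rollout are the figures mentioned above):

    # Illustrative stages; only the canary and production durations come
    # from the conversation, the rest is assumed for the example.
    PIPELINE = [
        ("staging", 1.5),     # assumed deploy-and-verify time on staging
        ("canary", 1.0),      # stated: an hour on a limited set of traffic
        ("production", 2.0),  # stated: production takes about two hours
    ]

    def run_release(package, check_health):
        """Promote one package through each stage, halting on errors."""
        for stage, hours in PIPELINE:
            print(f"deploying {package} to {stage} (~{hours}h)")
            if not check_health(stage):
                print(f"errors detected on {stage}, halting promotion")
                return False
        return True

    run_release("13.5.202010141200", check_health=lambda stage: True)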
A
That will give us huge gains. It's much, much faster to deploy onto Kubernetes than it is onto our virtual machines, so we'll get huge gains there as well. But we're never going to have a deployment pipeline that's 20 minutes; I think that's a really big goal. So I think what we'll focus on instead is this: at the moment we usually manage a couple of deployments to GitLab.com each day, and that's giving us an MTTP, a mean time to production,
A
I
should
say
of
around
19
hours,
so
that
means
an
mr
being
merged
in
19
hours
it
will
be
on
gitlab.com,
so
we've
just
changed
that
target
we're
going
to
reduce
that
down
to
12
hours.
So
that
should
mean
that
you
know
all
mrs
get
merged
within
12
hours.
You'll
hit
production
once
we
get
to
our
target.
So
that's
our
current
focus
and
then
we'll
see
where
we
can
reduce
that
further.
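Purely as an illustration of the metric, an MTTP figure like that could be computed from merge and deploy timestamps along these lines (the data here is invented):

    from datetime import datetime

    # Invented (merged_at, deployed_at) pairs for two merge requests.
    mrs = [
        (datetime(2020, 10, 13, 9, 0), datetime(2020, 10, 14, 4, 0)),   # 19h
        (datetime(2020, 10, 13, 14, 0), datetime(2020, 10, 14, 4, 0)),  # 14h
    ]

    hours = [(deployed - merged).total_seconds() / 3600 for merged, deployed in mrs]
    mttp = sum(hours) / len(hours)
    print(f"MTTP: {mttp:.1f}h against a 12h target")  # MTTP: 16.5h here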
C
I can add some color to that, if you're okay with that, Amy. Yeah, so you're absolutely right there, Sid. Our current time to actually be ready to deploy is around four hours, and that is time that we don't fully control. So, for example, it might take an hour and a half for a full test suite to finish inside of GitLab. So I would add to Amy's comment about never getting a merged commit straight to production:
C
That
will
also
need
to
be
reduced
significantly
right
now.
It
takes
around
an
hour
hour
and
a
half
to
build
container
images
right
because
they're
big,
like
it,
takes
a
lot
of
time
to
get
them
done
so
for
us
to
be
actually
ready
to
deploy.
It
takes
a
significant
chunk
of
time
that
that
we
need
to
address
first,
and
at
this
point
in
time,
I'm
not
really
sure
whether
that
is
the
most
important
part
right
right
now.
B
That helps. Would you... is it okay for me to give some context about merge trains for a few minutes? Thanks for that. I recognize that none of this is relevant to the work you're doing today; you're doing great work, keep it going, this has nothing to do with that. I think with merge trains, maybe to give some context: merge trains originated because I was talking to one of the biggest startups in the world, and they said: look, we have one iOS app.
B
We
got
hundreds
of
developers
working
on
hundreds
of
changes
a
day
and
our
test
suite
takes
more
time
to
run
than
that.
We
have
time
in
between
changes
and
it
decouples
that
problem.
You
can
have
a
very
long,
transit
time.
Suppose
your
test
chain
or,
like
your
in
in
your
case,
your
release-
train,
takes
like
the
six
hours
that
you
say
you
could
still
kind
of
have.
I
don't
know
half
an
hour
in
between
releases
and
that's
how
that
works
with
being
half
an
hour
on
staging.
B
You test them like that. So theoretically there could be a future where, if you have a change in hand, it takes a full workday to get that online, but we can do that every two minutes; the time between two changes is only two minutes, so we can deploy individual changes. I haven't found good words for this; I'm using "transit time" and "interval time", like between metros or trains. I'm not sure that's the right nomenclature, but I wanted to offer what we learned with merge trains there, and maybe in some far future...
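A small illustration of the transit time versus interval time idea Sid describes, using the numbers from the conversation (the scheduling code itself is hypothetical):

    TRANSIT_HOURS = 6.0   # one release takes this long end to end
    INTERVAL_HOURS = 0.5  # a new release "train" departs this often

    def schedule(n_releases):
        """Print when each overlapping release departs and goes live."""
        for i in range(n_releases):
            departs = i * INTERVAL_HOURS
            arrives = departs + TRANSIT_HOURS
            print(f"release {i}: departs t+{departs:.1f}h, live t+{arrives:.1f}h")

    # Each change still takes six hours to reach production (transit time),
    # but a new batch can leave every thirty minutes (interval time).
    schedule(4)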
A
Yeah
definitely
thanks
for
that
context,
and
this
will
all
become
more
interesting
as
well
as
we
move
into
kubernetes,
because
that
also
gives
us
great
tools
to
change
our.
It
won't
be
so
much
just
a
pipeline
that
we
run
through
it
will
be.
It
gives
us
lots
more
options
to
spin
things
up
with
new
versions
and
run
percentage
of
traffic,
and,
if
they're
good,
we
switch
more
traffic
into
that
cluster
and
take
another
cluster
down.
So
we
get
a
quite.
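A toy sketch of that kind of percentage-based traffic shifting between clusters (the step sizes and health check are invented, not GitLab's actual rollout):

    ROLLOUT_STEPS = [1, 5, 25, 50, 100]  # percent of traffic on the new cluster

    def shift_traffic(set_weights, healthy):
        """Move traffic to the new cluster step by step, backing out on errors."""
        for pct in ROLLOUT_STEPS:
            set_weights(new=pct, old=100 - pct)
            if not healthy():
                set_weights(new=0, old=100)  # take the new cluster back down
                return False
        return True  # the old cluster can now be drained and removed

    ok = shift_traffic(
        set_weights=lambda new, old: print(f"new={new}% old={old}%"),
        healthy=lambda: True,
    )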
B
Yeah
not
super
interesting
and
that's
something
that,
for
example,
a
spinnaker
supports
really
well
having
multiple
changes
out
there
at
the
same
point.
At
the
same
time,
I
think
in
gitlab
releases.
That's
still
that's
not
natural.
Yet,
as
far
as
I
understood
the
latest
state,
I
think
so
that's
something
we
should
start
thinking
about
on
the
product
side.
Maybe.
C
We
have
some
conversation
about
this
about
being
able
to
run
your
change
set,
plus
the
the
current
master
on
a
subset
of
production.
B
Thanks,
I
I
kind
of
want
to
propose
the
four
stages
of
canaryian.
You
have
no
canary
canary.
You
have
one
canary
environment
where
you
can
test
one
change.
You
have
multiple
ones
where
you
can
have
your
contest,
change,
a
change
b,
change
c,
no
change
so
multiple
ones
and
then
the
the
final
level.
The
ultimate
level
like
the
lab
tearing,
is
the
canary
train,
similar
to
a
merge
strain.
B
You
have
one
canary
environment
that
is
the
like.
The
no
changes,
the
one
is
a
and
one
is
a
plus
b
one
is
a
plus
b,
plus
c
etc.
Just
like
you
have
a
merge
train
and
that
will
allow
you
to
have
all
of
the
changes
go
from
like
one
percent
deployment
to
100
deployment
as
you
go
along,
because
that
is
not
possible.
B
If
you
have
multiple
canary
environments
that
each
only
support
one
change
compared
to
the
current
state-
and
I
think
that's
as
I'm,
I'm
pretty
sure-
that's
not
supported
in
our
product
yet-
and
I
think
spinnaker
does
a
better
job,
I'm
not
sure
spinnaker
does
multiple
ones
or
what
did
you
do
with
canary
train?
I
think
our
product
does.
One
canary
and
they
do
multiple
canaries
and
we
probably
need
to
go
to
a
canary
train.
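The canary train Sid proposes is essentially one environment per cumulative prefix of the pending changes; a minimal illustration (hypothetical code, not a GitLab feature):

    def canary_train(changes):
        """One environment per prefix of the train: [], [A], [A, B], ..."""
        return [changes[:i] for i in range(len(changes) + 1)]

    for env, change_set in enumerate(canary_train(["A", "B", "C"])):
        print(f"canary env {env}: {'+'.join(change_set) or 'no changes'}")
    # canary env 0: no changes
    # canary env 1: A
    # canary env 2: A+B
    # canary env 3: A+B+C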
A
Yeah
interesting
idea,
yeah
that
would
it
would
certainly
allow
a
lot
more
thorough
testing.
So
yeah,
that's
a
really
good
one.
A
Thank
you
for
breaking
silence.
That's
a
great
question.
D
I'm
not
familiar
with
the
openshift
work,
so
I
don't
know,
don't
worry
about
it
specifically
being
open
shift.
Let
me
let
me
rephrase
openshift
to
actually
just
say
operator
the
concept
of
the
operator
itself
right
now:
you're,
currently
making
use
of
the
helm,
charts
and
thus
helm
files
to
orchestrate
those
environments.
C
I have a question about this. Will the operator allow us to inject logic so that we can... because one of the problems that we have now, as we are working on assisted deployment and assisted rollback, is actually checking the status of the production environment during the deployment. With the Omnibus we have the fleet, which is broken down into services, and each service is broken down into batches, so it's easy to check at the end of each batch. This is a problem for the Helm chart in the current state, because basically we have less control there: we just trigger the K8s workload and Helm does the magic. So the next check is either it completed the deployment or it decided to roll back, but we have no say in between, right? We cannot stop. So will this be possible with the...
D
Operator
I
lost
track
of
the
unmute,
but
for
a
second
that's
a
really
interesting
question.
I
don't
believe
it's
specifically
not
possible,
but
it's
not
specifically
possible
yet
either
it's
very
early
in
that
particular
function
of
design.
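For context, the batch-by-batch checking described above for the Omnibus fleet follows a pattern roughly like this (a hypothetical sketch, not the actual deployer):

    def rolling_deploy(nodes, batch_size, deploy, check_status):
        """Deploy in batches, checking environment health after each one."""
        for i in range(0, len(nodes), batch_size):
            batch = nodes[i:i + batch_size]
            for node in batch:
                deploy(node)
            if not check_status():  # the pause Helm alone does not offer
                raise RuntimeError(f"unhealthy after batch {batch}, stopping")

    rolling_deploy(
        nodes=[f"web-{n}" for n in range(6)],
        batch_size=2,
        deploy=lambda node: print(f"deploying {node}"),
        check_status=lambda: True,
    )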
A
Great, and thanks for asking that. Are there any other questions?
A
No
okay,
in
which
case
thanks
everyone
for
coming
along
thanks,
sid
jason,
for
questions,
and
I
think
that
was
a
really
interesting
conversation.
So
thanks
a
lot
bye,
everyone.