From YouTube: 2020-08-12 AMA about GitLab Releases
Description
AMA with the Delivery Team
A
Fantastic, let's get started. Thanks for coming along, everyone. This is the ask me anything about GitLab releases and deployment. I'm Amy Phillips, engineering manager for Delivery, and we have quite a lot of the Delivery team online as well. So I'm really looking forward to answering your questions today. Jackie, thank you so much for the first question: would you like to verbalize it?
B
For sure. Recently we delivered deploy freeze functionality, and I noticed inside one of your repositories you have change locks as a feature that you all are using. So I'm interested in starting a dialogue about helping improve deploy freezes for your use case.
A
Yeah, fantastic. Definitely, the short answer is yes, absolutely, we'll be prioritizing the change. Jeff, do you want to take this? Actually, maybe you've got a better view of the specifics and where this fits in for us.
C
We have a project called changelock that generates a CI image that includes a script that we run, and it allows you to specify pretty flexible change locks. I remember when we were working on this feature.
C
The flexibility we wanted was to use a cron-like syntax and also to use absolute dates. I haven't caught up with the feature recently to see whether those two formats are supported. The other big need for us would be to have some way to override the change lock if it fails, and currently we do that with a CI variable.
C
If all of those requirements could be met with the feature, then we could definitely deprecate this, but I'm not sure if they are.
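For illustration, a minimal sketch of the kind of check described above, assuming freeze windows given either as cron-like start/end expressions or as absolute dates, plus a CI variable override. The variable name CHANGELOCK_OVERRIDE and the window format are assumptions for the example, not the actual changelock script.

```python
# Hypothetical sketch of a change-lock check: cron-like windows, absolute
# windows, and an override variable. Not the real changelock script.
import os
from datetime import datetime, timezone

from croniter import croniter  # pip install croniter


def in_cron_window(start_cron: str, end_cron: str, now: datetime) -> bool:
    # Find the most recent window start, then the first end after that
    # start; we are frozen if "now" falls between the two.
    last_start = croniter(start_cron, now).get_prev(datetime)
    window_end = croniter(end_cron, last_start).get_next(datetime)
    return last_start <= now < window_end


def deploy_frozen(cron_windows, absolute_windows, now=None) -> bool:
    # CI variable override, as mentioned above (name is illustrative).
    if os.environ.get("CHANGELOCK_OVERRIDE") == "true":
        return False
    now = now or datetime.now(timezone.utc)
    if any(in_cron_window(s, e, now) for s, e in cron_windows):
        return True
    return any(start <= now < end for start, end in absolute_windows)


# Example: freeze every Friday 18:00 through Monday 06:00 UTC.
print(deploy_frozen([("0 18 * * 5", "0 6 * * 1")], []))
```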
B
What is the team working towards this quarter?
A
That's a great question. We have two key results that we're working towards this quarter. One of them is around removing as many of the manual decision points in the deployment process as we can. At the moment we are deploying to GitLab.com at least once a day.
A
In recent weeks we've been going beyond that, which is fantastic, but it very much relies on the release managers making manual decisions and pushing things through, which is not really the longer-term scalable solution that we want. So there are a few pieces involved in that.
A
At the moment we have two really big decision points. One is: should we promote this particular build to production? The other is, while it is being promoted out to production, monitoring that, making sure that everything continues to look healthy, and taking a human decision.
A
If things start to look unhealthy, someone has to make a decision to either halt the deployment and investigate, or find some way to revert it. So we're working to automate those two via metrics, so that we actually end up with a scalable release pipeline that's able to decide that a build looks healthy.
A
You know, it has the right metrics and passed the right tests, so it gets gradually promoted through to production. And while it's going out, we also have a system monitoring the health of GitLab and deciding: should we be pausing or rolling back this deployment?
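As a rough illustration of that automation, a sketch of a metrics-driven gate follows; the metric source, thresholds, and names are assumptions for the example rather than GitLab's actual release tooling.

```python
# Hypothetical sketch of a deployment health gate: watch an error-rate
# metric while a deploy rolls out and decide promote/pause/rollback.
import time
from enum import Enum
from typing import Callable


class Decision(Enum):
    PROMOTE = "promote"
    PAUSE = "pause"
    ROLLBACK = "rollback"


def watch_deployment(fetch_error_ratio: Callable[[], float],
                     duration_s: int = 600, check_every_s: int = 30,
                     pause_at: float = 0.01,
                     rollback_at: float = 0.05) -> Decision:
    deadline = time.time() + duration_s
    while time.time() < deadline:
        ratio = fetch_error_ratio()    # e.g. 5xx responses / all responses
        if ratio >= rollback_at:
            return Decision.ROLLBACK   # clearly unhealthy: revert
        if ratio >= pause_at:
            return Decision.PAUSE      # borderline: hold for a human
        time.sleep(check_every_s)
    return Decision.PROMOTE            # healthy for the whole window


# Example with a stubbed metric source:
print(watch_deployment(lambda: 0.002, duration_s=1, check_every_s=1))
```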
A
So that's one big one, and that will all feed into MTTP, mean time to production, which is the metric we're tracking within the team. The other key result we're working on is continuing our Kubernetes migration, working on unmodified Helm charts so that we can dogfood things ahead of customers, and we'll be working to migrate all the stateless services over this quarter.
D
How is that going, getting rid of that dependency so we can go full steam ahead with migrating those to the Helm charts?
A
Yeah, absolutely. We really appreciate everyone's efforts to remove these blockers as we come across them. We are progressing well at the moment, but there's definitely lots of work to do to actually migrate these things and get them safely running and deployed.
D
Cool. What's the next service that you're looking forward to migrating to the Helm chart?
C
I can speak to that, since I'm working on it. The next service that we're migrating is websockets, which is used for the interactive terminal when you connect to a Kubernetes cluster. It's not used a lot, but we have a little bit of websockets traffic for that, and also Git HTTPS.
C
So those two, Git HTTPS being the one that, you know, has a lot more traffic than websockets. We obviously picked something that doesn't have any NFS dependencies. It's the first front-end service we're migrating other than the registry, so it's kind of exciting to see how this works in Kubernetes. It's already in staging, and we're planning to move it to canary very soon. We're working on a couple of blocking issues, but nothing that is going to take a long time to fix.
D
Can you talk through those blocking issues?
C
Sure, yeah. Probably the remaining blocking issue that we have is that logging is a little bit different in Cloud Native than it is when you're running on VMs. When you're running on virtual machines, the application logs into a bunch of different log files (Rails does), and then we have Fluentd, which is pointed at these different logs and has a bunch of rules.
C
In Cloud Native, all of the logs go to standard out, so you have a bunch of logs coming from different log files that are all interspersed together. Currently we don't have a way to turn off the unstructured logs completely.
C
We have the option in Rails to do either or both: you can turn on JSON logging, or you can have both JSON logs and unstructured logs. What we need to do is turn off the unstructured logs, so we're making a change in the application to turn those logs off so that our Elasticsearch cluster doesn't get overwhelmed with logs. That's really the last remaining thing that we have for Git HTTPS.
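The application here is Rails, but as a sketch of the same idea in Python terms: attach only a JSON formatter to stdout and drop any plain-text handlers, so nothing unstructured reaches the log pipeline.

```python
# Illustrative only: structured-JSON-to-stdout logging with the
# unstructured default handler removed, mirroring the Rails change above.
import json
import logging
import sys


class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "time": self.formatTime(record),
            "severity": record.levelname,
            "message": record.getMessage(),
        })


handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())

root = logging.getLogger()
root.handlers = [handler]  # drop any unstructured handlers entirely
root.setLevel(logging.INFO)

root.info("structured only")  # {"time": "...", "severity": "INFO", ...}
```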
E
Yep, I know I ask this regularly, but is there anything that you need my team to prioritize in the next two weeks? I ask because we're closing in on the release date for this cycle, and we need to prep for the first week of anything that's big. So is there anything you can call out for Distribution to work on?
A
Skarbek, does this one maybe fit?
We've got a few things I think are ready for review, but I'm not aware of any huge blockers we need prioritized, unless either of you disagrees.
F
I think the one thing I can think of off the top of my head is the situation where we currently split our front-end fleet: you know, we've got the Git service, the API, and the web services. Currently I don't believe our Helm chart supports this. I believe you already have an issue where you're working on that, though I can't recall it off the top of my head.
E
Yep, and that is exactly the one I'm talking about. Precisely, there are two open items that we are aware of and that we are prioritizing into 13.4: the service maps, that is, the ability to have different groups of the web fleets, and the refactoring of logging.
E
We have a community item that has a methodology to wrap all of the logs into structured JSON. The downside of that is that we're still producing a ton of logs as an application that are not structured in any way. So that ties back to what was discussed earlier about turning off non-structured logs where possible, and convincing people that we maybe need to stop generating 17 logs in a single Ruby process.
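A sketch of what such a wrapping approach could look like: any line that is not already JSON gets enveloped in a JSON object before it reaches the log pipeline. This is illustrative, not the community implementation being referred to.

```python
# Illustrative wrapper: pass structured lines through, envelope the rest.
import json
import sys


def wrap_line(line: str) -> str:
    line = line.rstrip("\n")
    try:
        json.loads(line)  # already structured JSON: pass through untouched
        return line
    except ValueError:
        return json.dumps({"message": line, "wrapped": True})


for raw in sys.stdin:
    print(wrap_line(raw))
```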
D
Maybe a bit further out, but when do you think we'll just do a deploy on every merge into our main branch? Like, any time something is merged, as long as we haven't turned deploys off, it just gets released, and it doesn't need a human to look at the stats, but we'll roll it back automatically if the metrics are off.
A
Yes, that's definitely the dream. I think for certain types of deployment, the sort of daily auto-deploys, we're not too far away from that, so we're hoping to have at least the key pieces in place in the next few months. They'll be deploying within a window, so we still want to make sure that we have the right people around to oversee things, and if we did roll back, we would put things back into a known state.
A
So we won't be fully continuously deploying everything, but certainly within the next few months we'll be at a stage where the release-tools bot will be able to make a decision and push out to production without human intervention. In terms of really big things, I think we're probably on the cusp there, and we have some interesting windows around patch releases and the monthly releases and things like that.
A
So we'll probably look to be easing some of those. We've recently been working to run auto-deploys alongside the monthly security releases, which has been a huge change for us and really unblocks these daily auto-deploys. Lots of that sort of work is going along alongside this and will feed into it as well.
D
Cool, thanks for that. Another question: an assumption of mine is that when we move to Helm charts, it's going to be quicker to make a small fix in GitLab, because we don't need a new omnibus package.
D
Is that the case? If we move to Helm charts across everything and everything is on Kubernetes, do we no longer need our hot-patch process? Or maybe that's still the case, because the hot-patch process is super fast and this still takes time. I see Jason wiggling in his chair over whether this is true or not. I wonder what people think.
C
I'll start with this one. Yeah, we see a huge improvement in deployment times using Cloud Native. The reason for this is that we're not dealing with the 900-megabyte omnibus package: bringing it down over the network, installing it, blowing it out on disk. We're dealing with a much leaner image, and we also don't have to deal with rolling through a static set of virtual machines and draining them from HAProxy, which is just a very slow process.
C
We just let Kubernetes do all the work, so we're seeing a very large improvement, more than a three or four times improvement in deploy times, which is fantastic. For this reason alone we'll be quicker to deploy to production. We still have to build images, of course, instead of the omnibus package; right now we're doing both, and the images build faster than the omnibus package.
C
I think so, so that's a bit of a savings there. I think it looks good. Whether or not we're going to have no omnibus package at all for GitLab.com remains to be seen; maybe we'll still run omnibus, for the sake of dogfooding omnibus, on a very small subset of our front end. That could be a decision.
D
C
Yeah, and there'll still be a lot of savings even if we do that. If we only have a few virtual machines, the time to deploy to those, done in parallel with Kubernetes, will still save us a bunch of time. So that's a possibility, but we haven't decided yet. I'll pass it to Jason if he has anything to add.
E
If there are no other component changes, we only have to rebuild the one container, and when you deploy that new version, you effectively only have to restart the portion of the fleet that actually needs to be restarted: those containers, and anything related to them in the event that some configuration item changed. So that bonus is still there.
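To make that concrete, a tiny sketch of the selective-restart idea, with made-up component names: compare desired image tags against what is running and only roll the deployments that actually changed.

```python
# Illustrative: only components whose image tag changed need a restart.
running = {"webservice": "v13.2.1", "gitaly": "v13.2.1", "registry": "v2.9"}
desired = {"webservice": "v13.2.2", "gitaly": "v13.2.1", "registry": "v2.9"}

to_restart = [name for name, tag in desired.items()
              if running.get(name) != tag]
print(to_restart)  # ['webservice'] -> only the changed component rolls
```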
A
No? Okay! Well, thank you so much to everyone who had questions, and also thanks to Tim and Jackie for adding some awesome notes as we've been talking. Thanks for joining us. Hopefully this has been interesting, and we'll see you next month.