From YouTube: 2023-08-09 AMA about GitLab releases
B
Okay, so we are at time, and we have people here and we have an agenda, so I'll get things started. Welcome, everyone. This is the monthly AMA about GitLab releases and deployments. Today is August 9th, 2023. We're joined by some members of the delivery group; we're a little bit shorter than some months here because we have lots of people taking well-deserved PTO, but we will get through all your questions. Lauren, thank you so much.
A
Sure. So this came up sometime when I was having dinner, and I was wondering: what's the difference between a GitLab release on gitlab.com and a GitLab release for self-managed customers? Do they manage their own releases? Are they bumping it up in their own instance? Are there nudges, or is it automated?
B
A little bit about that. Great question, excellent. So they are different in terms of the installation. On gitlab.com, we at GitLab manage all of the infrastructure and all of the GitLab versions, so we do all of the operating for the users.
B
So
we
are
deploying
our
latest
like
code,
our
changes
to
github.com
multiple
times
each
working
day,
so
people
are
right
on
the
kind
of
the
bleeding
edge
of
of
get
up
and
then,
once
a
month
we
publish
a
scheduled
monthly
release
to
everyone,
who's
self-managed,
so
self-managed
people
have
their
own
infrastructure.
They
are
managing
everything
themselves.
We
just
provide
them
with
a
package
that
they
can
then
install
and
receive
GitHub.
So
some
of
those
we
call
it
self-managed,
some
of
those
will
be
in
Cloud
providers.
B
So
they
are
related
processes.
I'll,
give
you
a
link
on
how
we
generate.
They
are
related
in
terms
of
how
we
build
the
packages,
but
for
self-managed
users
they
they
have
the
freedom.
Basically,
so
we
have
a
maintenance
policy
which
I'll
talk.
You
through
a
little
bit
in
your
second
question,
so
we
have
a
maintenance
policy
around
the
support
that
we
provide
to
customers
and
users
on
self-managed.
But
it's
up
to
them
to
actually
go
through.
The
process
of.
There
is
a
new
version
and
I
will
install
that.
A
I have a follow-up question there. Would that mean that our customers then have their own engineering teams that are managing this updating of packages?
B
Most
likely,
yes,
I,
think
I
I,
don't
know
if
everybody
burst,
but
I
think
it's
a
reasonable
assumption
that,
yes,
most
likely
they
do.
A
Cool
very
interesting,
I
didn't
I,
didn't
know
that
and
then
the
second
question
I
have
is
related
to
gitlab
security
releases
and
the
the
latest
one.
We
have
three
minor
versions
that
were
included
with
that.
So
I
was
wondering
what
why.
B
That
why
do
we
yeah?
Why
yeah
yeah
great
question?
Thank
you
for
asking
us.
So
this
is
related
to
our
maintenance
policy,
so
I've,
given
you
the
the
link
there,
and
so
basically,
we
are
supporting
security
fixes
in
the
current
gitlab
version
and
also
the
previous
two.
So
this
is
so
bug
fixes
are
only
in
the
current
version
and
we
recognize
that
not
everybody
is
able
to
keep
up
with
our
pace
of
release.
So
we
don't
want
to
leave
people
vulnerable,
so
security
fixes
go
out
to
to
older
versions
as
well.
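As a rough illustration of that maintenance policy, here is a minimal Python sketch (not GitLab's actual tooling) of which minor versions receive a security fix; the version numbers and the simple minor-decrement logic are assumptions for illustration, since real versioning also crosses major boundaries.

```python
def versions_receiving_security_fix(current: str) -> list[str]:
    """Current minor version plus the previous two, per the policy above.

    Simplification: assumes all three versions share a major number
    (boundaries like 16.0 -> 15.11 are not handled).
    """
    major, minor = (int(part) for part in current.split("."))
    return [f"{major}.{minor - i}" for i in range(3) if minor - i >= 0]

print(versions_receiving_security_fix("16.2"))  # ['16.2', '16.1', '16.0']
```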
B
Thank you for your questions, great questions. Great, so McKelly, you have question three.
B
A disclosure for the full audience: Kelly does know the direction, as an engineering manager in this group. So let me give a little bit of an overview. We have several areas that, as a delivery group, we are responsible for and that we care about. This includes the deployments to gitlab.com, and it includes the package releases for our self-managed users, but it is also about helping us get ahead and be prepared for the future of GitLab. So, this quarter:
B
We
are
working
on
a
few
different
projects.
We
have
one
project
which
is
around
adapting
the
release
process.
This
has
two
parts
to
it.
The
first
part
and
the
most
sort
of
like
critical
timeline
part,
is
preparing
for
the
monthly
release
date
to
change
so
previously
at
gitlab
forever.
We
have
always
put
out
the
self-managed
release
on
the
22nd
of
the
month
that
will
change
on
16.6,
which
is
November's
release,
and
from
that
point
onwards
we
will
be
putting
out
the
self-managed
release
on
the
third
Thursday
of
the
month.
B
So
I'll
be
moving
date,
but
it
maintained
pretty
similar
kind
of
consistency
and
from
a
get
that
point
of
view,
it
is
a
much
much
better
working
practice.
At
the
moment
we
have
some
months
where
the
22nd
is
the
Saturday
or
a
Sunday,
and
that
requires
quite
a
lot
of
people
to
work
on
their
weekends
to
handle
the
release.
So
we
will
be
moving
to
the
third
Thursday.
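Since the new date is defined by rule rather than by a fixed day number, it is easy to compute; a quick standard-library Python sketch (illustrative, not GitLab's release tooling):

```python
import calendar
from datetime import date

def third_thursday(year: int, month: int) -> date:
    # calendar weekdays: Monday == 0 ... Thursday == 3
    first_weekday, _ = calendar.monthrange(year, month)
    days_to_first_thursday = (calendar.THURSDAY - first_weekday) % 7
    return date(year, month, 1 + days_to_first_thursday + 14)

print(third_thursday(2023, 11))  # 2023-11-16, the 16.6 release day
```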
B
Then
the
second
part
of
the
adapting
release
process
is
around
our
security
releases.
So
we
currently
have
one
scheduled
security
per
month,
and
that
has
a
couple
of
challenges.
One
is
that
sometimes
a
month
can
be
quite
a
long
time
if
we
discover
a
serious
vulnerability
on
you
know
the
day
after
a
security
release.
We
don't
really
want
to
wait
almost
a
full
month
before
we
fix
it,
and
we
also
have
got
quite
a
lot
of
work.
I
guess
it's
quite
an
intensive
security
release
process
to
actually
prepare
a
security
release.
B
So
we've
been
doing
some
work.
This
quarter.
Sorry
we've
been
doing
work
in
Q2
to
improve,
prove
their
security
release
process
and
make
it
easier
to
prepare.
This
will
be
the
follow-on
and
it
will
move
us
to
multiple
scheduled
security
releases
per
month,
so
we're
aiming
initially
for
two
and
that
will
sure
it
will
close
up
the
gap
between
the
number
of
security
releases.
It
will
also
mean
each
security
release
should
be
smaller
in
terms
of
the
number
of
changes.
B
So
that's
going
to
be
one
piece,
so
let
me
just
put
a
link
in
for
that
okr.
So
and
then
the
other
piece
is
I,
don't
think
we
haven't
updated.
Okr
am
I
right,
McKelly.
B
Well, we had the OKR prepared, and then we changed it this week and we're putting the pieces in place. But the second piece is around getting the delivery group prepared for the future, and that is around cells. We know that as a company we are shifting focus to cells; there are lots of teams shifting focus to cells, and in delivery that will be quite a big change for us, because we want to deploy packages to cells and manage rollouts across the fleet of cells. So, this quarter:
B
Currently, when Dedicated rolls out packages, there can be some downtime. There isn't always, but there can be, and that's definitely not ideal; it's difficult for customers when we give downtime. So we'll be looking into the options we have for how we can roll out packages to Dedicated without any downtime. The significance of Dedicated is that it will give us a very similar tech stack to what we're expecting cells will be running on, and hopefully everything we learn this quarter we will be able to apply over the next months and years to our cells work.
B
Okay,
okay,
so
we
will
have
updated
our
KRS
for
this
second
part
very
soon,
but
but
yeah
quite
an
exciting
time
for
delivery
to
be
like
moving
sort
of
beyond.com
as
it
is
it
right
now
and
into
dedicated
and
then
into
cells.
B
Excellent,
so
I'll
move
on
to
a
question:
four
Olivia
yeah
yeah.
D
One
of
my
question
is
basically,
since
we
are
doing
continuous
deployment2.com
I
was
wondering
stay
from
an
SRA
perspective.
What
are
the
triggers
and
what
could?
What
are
the
special
and
what
could
trigger
a
rollback
and
automatic
rollback?
I
assume
rollbacks
are
automated
as
well
as
deployments
in
case
of
failure.
B
So
this
is
a
fantastic
question,
because
they're
actually
not
automatic,
so
I
will
give
you
a
little
context
first
on
rollbacks
at
gitlab,
so
we
do
have
rollbacks
at
the
moment
they're
manually
triggered
and
that
will
usually
always
be
the
result
of
an
incident
in
the
incident
we'll
decide.
The
best
course
of
action
is
to
roll
back
now.
The
rollback
itself
is
very,
very
quick.
B
Within 20 minutes we will have the full .com fleet back on the previous version. But they're not automatic right now, and the reason they're not is post-deploy migrations. We have a stage in our deployments where we run post-deploy migrations, and at that point the database gets updated and we make any schema changes. We don't have a mechanism for rolling back beyond that point, so what we haven't got right now is a way of always guaranteeing we can roll back.
B
We
can
usually
roll
back,
but
we
can't
always
roll
back,
so
we
haven't
actually
got
I,
guess
enough:
a
rollback
ability
for
us
to
have
invested
in
automatic
rollbacks.
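A minimal sketch of the constraint just described, with hypothetical names: a rollback is only guaranteed safe while the deployment's post-deploy (schema-changing) migrations have not yet run. This is illustrative, not the actual deployer logic.

```python
from dataclasses import dataclass

@dataclass
class Deployment:
    version: str
    post_deploy_migrations_ran: bool

def can_roll_back(deployment: Deployment) -> bool:
    # Once schema changes are applied there is no mechanism to revert them,
    # so the previous application version can no longer be guaranteed to work.
    return not deployment.post_deploy_migrations_ran

current = Deployment("16.3.202308090", post_deploy_migrations_ran=False)
print(can_roll_back(current))  # True: still safe to repoint the fleet
```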
B
However,
your
question
is
still
very
very
interesting
because
I
think
it
ties
really
closely
with
what
the
delivery
system
team
was
working
on
last
quarter,
which
was
to
sort
of
start
moving
us
to
this
path.
So
McKelly
do
you
want
to
maybe
talk
us
through
what
your
team
was
was
working
on
and
kind
of
what
thresholds
you
were
sort
of,
considering
using
sure.
C
So
last
one
last
quarter,
we
started
to
look
at
capability
to
manage
our
traffic
dynamically
right,
because
in
our
current
infrastructure
we
have
static
configuration
also
about
Canada.
We
always
received
okay.
We
have
no
capabilities
to
steer
traffic
away
from
a
faulty
deployment
right,
so
they
probably
wanted
to
build.
C
It
started
starting
to
have
more
Dynamic
way
to
manage
the
traffic
that
is
incoming
within
our
environments
and
be
able
to
ship
the
traffic
to
New
deployments
or
all
deployments
and
with
the
meaning
of
a
rollback
in
case
of
need,
so
to
look
at
these
from
a
different
perspective,
we're
looking
at
implementing
different
deployment
strategies
and
with
different
deployment
strategies
that
they
were
like
aided
by
these
routing
capabilities
on
steering
traffic.
C
Just
to
make
an
example
of
uploading
deployment
right,
so
the
idea
would
have
been
to
have
a
new,
a
new
version
of
gitlab
being
deployed
in
a
cluster
and
environment.
Let's
not
be
does
not
be
very
exactly
that
for
the
current
setup
we
have
and
deciding
when
to
switch
traffic
to
the
new
version
of
gitlab
That
Could
set
traffic
to
our
customers
and
in
case
of
problems,
having
like
a
quick,
immediate
rollback
to
the
previous
version,
where
this
bug
was
not
presented.
C
So
their
goal
would
have
been
to
actually
reduce
any
kind
of
incident
that
was
related
to
to
a
deployment
like
all
this
kind
of
deployment
that
maybe
are
introducing
about
the
SMD
code
in
previous
environments
and
have
a
quick
way
to
react
to
those
and
minimal
impact
to
our
customers.
Using.Com.
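To make the traffic-steering idea concrete, here is a toy Python sketch of a weighted router that shifts a share of requests to a new deployment and snaps back on failure; all names are hypothetical, and in reality this logic lives in the load-balancing layer, not in application code.

```python
import random

def error_rate_too_high() -> bool:
    """Stand-in for a real SLI/health check; always healthy here."""
    return False

class WeightedRouter:
    def __init__(self) -> None:
        self.weights = {"old": 1.0, "new": 0.0}

    def shift(self, new_share: float) -> None:
        self.weights = {"old": 1.0 - new_share, "new": new_share}

    def rollback(self) -> None:
        # Immediate rollback: all traffic back to the known-good version.
        self.shift(0.0)

    def route(self) -> str:
        names = list(self.weights)
        return random.choices(names, weights=[self.weights[n] for n in names])[0]

router = WeightedRouter()
router.shift(0.1)  # canary step: 10% of requests hit the new version
if error_rate_too_high():
    router.rollback()
print(router.route())  # mostly "old" at a 10% canary weight
```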
D
Sorry,
I
think
it
makes
a
lot
of
sense.
Thank
you.
Thank
you
very
much,
but
that
brings
to
me
another
question.
Since
we
kind
of
traffic
the
the
we
can
select
where
traffic
goes,
how
do
we
handle
deployment?
Basically
right
now.
D
This
means
that,
and
at
one
stage
on
the
entire
fleet,
we
don't
have
the
same
exact
same
version
of
gitlab
everywhere
in
the
cluster
right,
so
basically,
from
a
purely
random
perspective
of
the
load,
balancer
one
one
could
lend
to
a
a
dedicated
instance
of
the
cluster
or
whatever
and
on
another
card,
refreshing
whatever
it
could
land
on
another
instance
with
a
slightly
different
features.
Right.
Yes,.
C
Yes,
so
I
mean
in
general
in
this
case
that
you
need
to
use
always
feature
Flags
right
and
be
able
to
also
click
feature
Flags
when
deployments
are
fully
done
in
theory
right,
we
have
actually
an
effort
going
on
in
development
itself
to
be
a
better
use
of
feature
flag.
While
we
do
our
deployments,
that
would
have
prevented
us.
Maybe
it
brought
some
outages
in
back
in
the
last
weeks
in
general.
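A tiny illustration of why feature flags matter here: during a rolling deploy, old and new pods serve traffic at once, so new behavior should be gated on a flag rather than on whichever code version happens to answer. The flag name and the in-memory flag store are invented; in reality flags live in a shared service.

```python
FLAGS = {"new_diff_renderer": False}  # hypothetical flag; code ships dark

def render_diff(diff: str) -> str:
    # Old pods lack this branch entirely; keeping the flag off means old and
    # new pods behave identically while both are serving traffic.
    if FLAGS["new_diff_renderer"]:
        return f"[new renderer] {diff}"
    return f"[old renderer] {diff}"

# Flip only once the deployment has fully rolled out across the fleet:
FLAGS["new_diff_renderer"] = True
print(render_diff("diff --git a/file b/file"))
```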
C
We
deploy
right
now
is
that
we
use
a
horizontal
product
or
scalar
that
we
Auto
scale
the
number
of
PODS
run
in
the
new
version
that
are
starting
gradually
to
sell
traffic
to
customers
and
then
scaling
down
the
number
of
odds
we
served
in
the
previous
version.
So
right
now,
it's
kind
of
a
it's
kind
of
a
rolling
update
using
an
auto
scaler,
but
with
a
different
with
some
differences
with
standard
kubernetes
update.
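A toy sketch of that rolling-update-by-autoscaler behavior: pods of the new version scale up while the old version scales down, so both versions serve traffic during the transition. The pod counts and step sizes are invented.

```python
def rolling_update(total_pods: int, steps: int):
    """Yield intermediate fleet states as capacity shifts between versions."""
    for step in range(steps + 1):
        new = round(total_pods * step / steps)
        yield {"old_pods": total_pods - new, "new_pods": new}

for state in rolling_update(total_pods=10, steps=5):
    print(state)  # e.g. {'old_pods': 8, 'new_pods': 2} mid-rollout
```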
C
So
this
is
allow
us
to
have
a
zero
long
time
deployment
there,
but
it's
still
not
allow
Islam
allowing
us
to
quickly
roll
back
to
our
previous
version
for
the
way
that
is
this
is
currently
designed
and
to
quickly
roll
back
to
a
previous
version.
Is
not
only
the
control
on
the
pods
and
the
cluster
itself
is
also
controlling
where
the
traffic
should
be
routed,
and
this
means
like
in
introducing
different
capabilities
are
in
our
infrastructure
to
be
able
to
understand
deployment
in
spaces
and
deciding
how
this
should
be
steered.
B
For
Christmas
and
I've
just
added
the
link
there
it's
about
4E,
which
has
the
kind
of
overview
of
our
deployments
thanks:
okay,
Sam,
your
question.
E
Yeah,
so
we've
done
a
lot
of
work
on
on
security
releases
and
made
a
lot
of
difference,
we're
going
into
into
Q3
with
some
of
the
stuff
working
on
more
changes
in
security
releases.
So
what's
happening
what's
new
we
may
or
may
not
have
some
people
on
the
call
who've
been
working
on
that
directly.
B
The credit is with us on the call.
F
Sure
so
yeah
we've
made
some
big
improvements
and
so
just
a
little
bit
of
context
for
everyone.
One
of
the
most
interesting
problems
we
deal
with
in
delivery
is
we're
an
open
source.
F
We
have
an
open
source
code
base,
but
when
we
have
a
security
vulnerability,
we
can't
just
make
changes
in
the
open
source
code
and
then
wait
for
that
to
get
built
into
a
package
and
deployed
and
released
to
customers.
We
need
a
way
to
keep
that
vulnerability
private
until
we've
released
it
essentially,
so
we
have
a
private
mirror
of
our
public
open
source
repository
and
changes
go
into
that
and
we
have
to
keep
the
two
in
sync.
F
But
at
some
point,
when
we're
making
these
security
changes,
it
can
no
longer
stay
in
sync
with
the
open
source
code.
So
every
time
we
do
a
security
release,
we
kind
of
go
through
this
process
of
merging.
In
all
these
security
changes,
waiting
for
them
to
be
built
into
a
package
get
deployed
to
gitlab.com.
We
create
releases
for
self-managed
users
and
put
those
out
for
them,
and
then
we
finally
merge
the
code
back
into
the
public
open
source
project
where
people
can
see.
F
You
know
what
we
did
if
they
were
curious
and
that
sort
of
like
breaking
of
the
mirroring
between
the
open
source
and
security
code
and
then
trying
to
get
it
back
in
sync
together
as
cause
those
problems
for
quite
some
time,
purely
due
to
the
amount
of
traffic
that
we
see
by
gitlab
developers.
You
know
pushing
code
constantly
to
our
open
source
code
base,
so
there's
there's
sometimes
there's
conflicts
that
occur.
Sometimes
it's
just
a
matter
of.
F
We have a special project called merge train that is doing this for us, and it also keeps things in sync during the security release, so that we don't accidentally lose any changes that are not security related. Then, long term, we're going to dogfood our own product and use merge requests to do this: we'll simply create a merge request from the security repository into the open source repository when everything is all said and done, and we can merge it.
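A sketch of the sync problem in its simplest form, using plain git via Python's subprocess; the repository path and remote name are assumptions, and GitLab's actual automation lives in its release tooling and the merge train project mentioned above.

```python
import subprocess

def commits_ahead(repo_path: str, upstream: str, branch: str = "master") -> int:
    """Count commits on the local branch that the upstream branch lacks."""
    result = subprocess.run(
        ["git", "-C", repo_path, "rev-list", "--count",
         f"{upstream}/{branch}..{branch}"],
        capture_output=True, text=True, check=True,
    )
    return int(result.stdout.strip())

# If the security mirror is ahead of the public repository, security fixes
# are pending release and the mirrors have (intentionally) diverged.
if commits_ahead("/src/gitlab-security", upstream="public") > 0:
    print("mirrors diverged: security fixes pending release")
```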
F
We
know
that
they're
already
safe,
because
they've
they've
been
approved,
reviewed
on
the
security
repository,
so
I've
been
making
improvements
there
and
then,
in
addition
to
that,
we,
as
you
can
kind
of
tell
just
by
describing
this
process,
there's
a
lot
involved
in
you
know
kind
of
orchestrating
the
security
release
and
making
sure
everything
goes
correctly.
F
So
that's
that's
the
job
that
release
managers
have
every
month
they
spend
a
week
kind
of
just
keeping
everything
organized
and
keeping
everything
moving
and
making
sure
that
the
security
release
happens
as
planned,
and
that
is
much
about
the
fixes
that
are
being
written
can
get
into
as
possible.
F
That's
always
been
one
of
the
most
difficult
processes
for
delivery
and
for
the
release
managers
every
month,
so
we've
been
working
to
improve
that
process
by
increasing
automation,
we've
been
using
pipelines
to
automate
many
of
the
tasks
that
the
release
managers
do
manually
in
hopes
that
eventually
release
managers
can
kind
of
take
a
hands-off
approach
and
just
say:
I
want
to
cut
a
release
and
not
really
have
to
do
anything
all
from
there.
F
So
over
the
last
few
months
we
use
a
we
use
a
an
issue
template
that
has
a
bunch
of
checkbox
tasks
for
every
month.
When
we
do
a
security
release,
we
have
to
work
through
these
like
80
tasks,
in
order
for
it
to
happen
over
the
course
of
a
week
and
we've
cut
out
around
30
or
40
percent
of
those
tasks
and
automated
them,
and
that
saved
a
bunch
of
time
and
a
bunch
of
headache
for
release
managers.
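As a toy version of that checklist-driven process, this snippet parses markdown checkboxes from a release issue body and reports how much manual work remains; the task names are invented for illustration.

```python
import re

issue_body = """
- [x] Tag the security release
- [x] Deploy packages to gitlab.com
- [ ] Publish packages for self-managed users
- [ ] Sync the security mirror back to the public repository
"""

done = len(re.findall(r"- \[x\]", issue_body))
total = len(re.findall(r"- \[[ x]\]", issue_body))
print(f"{done}/{total} tasks complete")  # 2/4 tasks complete
```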
F
And
we
continue
to
kind
of
head
down
that
path,
especially
since
our
goal
is
to
be
able
to
do
these
more
often
than
once
a
month
in
the
future.
B
All
right,
awesome,
so
I
think
this
is
a
really.
It
was
a
really
exciting
project
for
us
because
prepare
we
prepare
a
lot
of
releases,
we
prepare
the
monthly
release,
we
prepare
patch
releases
and
we
also
prepare
security
releases
and
also
critical
security
releases
and
they
all
use
a
very
similar
approach.
B
So
this
was
our
first
sort
of
project
to
add
automation,
make
use
of
pipelines
to
sort
of
take
away
some
of
the
manual
steps
and
hopefully
put
us
on
this
sort
of
track
towards
automating
our
releases
so
and
it
will
have
lots
of
exciting
kind
of
projects
similar
to
this
one
in
the
in
the
future.
And
hopefully
the
outcome
of
this
is
releases
just
become
a
lot
easier.
B
They
become
a
lot
faster
and
hopefully
then
that
starts
to
really
get
us
into
a
place
where
preparing
a
release
can
be
more
of
a
self-serve
thing.
So
you
don't
need
to
depend
on
a
release,
manager,
who's,
fully
trained
up
and
has
kind
of
specialist
permissions
to
go
through
all
the
steps
for
you.
But
if
we
can
automate
as
well,
then
it
can
start
to
become
something
that
we
can
potentially
give
to
other
teams.
B
Fantastic,
does
anybody
else
have
a
question
that
they
would
like
to
verbalize.