From YouTube: 2021-02-10 AMA about GitLab releases
A
Okay, we're at the hour, so welcome to this AMA with the Delivery team. We're here today with our monthly AMA to answer any questions people have about deploying GitLab or releasing our monthly packages as well.
A
This time around we've put together some slides to try and give a little bit of background on the things we're working through right now. We're really interested in scaling, and in how coding at scale feeds into how we can keep up our cadence of deploying and help us as we grow. You'll see in the slides that we are not only deploying more frequently these days, but we're also dealing with more traffic than we have been in the past, which is great, and also more changes. As we have more changes, even though we're deploying more frequently, the number of changes in each deployment is also growing, which adds some risk, but it's also good encouragement for us to move faster: the more frequently we can deploy, the more we can get these deployments down to a smaller size.
A
Those are the things we are working towards. Please feel free to add questions to the agenda as we go through, but I'll kick off with Michelle's question. She's not on the call, so I'll read this one out. Michelle asked: how do we prevent deploying something to an environment that will break another environment? For example, running a migration to drop a table would work fine on canary and pass all the tests, but break production until the change has made it to production.
B
Yep, sure, thanks. For reference, I think it is useful to explain a bit how our deployment process goes, particularly between canary and production. When updating canary, we first execute the regular migrations, then we update the canary fleet. Then we do other things, such as executing the QA, and we let canary sit for a bit before promoting to prod. When we promote to prod, we update the prod fleet, and then we execute the post-deployment migrations. For the particular example of dropping a table: dropping a table is only allowed in post-deployment migrations, after the prod fleet has been updated. We don't drop tables on canary, because that is risky: that would be done with the regular migrations, which run before the canary fleet is updated. It is a bit complicated, but that is the whole process in a nutshell.

Another sensitive operation would be dropping a column. Dropping a column is not as easy as just saying "I'm going to drop a column". Our database documentation actually has great examples of the sensitive operations that can cause problems from a database and deploy perspective. Writing code that can work in different states is also something to look out for from an engineer's perspective. By that I basically mean that when we are updating our production fleet, the nodes are updated at the same time, but some nodes can finish earlier than others, which can also lead to unexpected failures. That is something to take into consideration.
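The ordering B describes is the key to the answer. As a minimal sketch, with hypothetical step names standing in for the real tooling:

```python
# Sketch of the canary -> production ordering described above.
# Step names are illustrative, not GitLab's actual pipeline jobs.

def deploy(run):
    """Run one full deployment; `run` is called with each step name in order."""
    # Canary phase: regular (additive) migrations run first, then the fleet.
    run("regular_migrations")      # e.g. add a column; safe for old code
    run("update_canary_fleet")
    run("qa_against_canary")       # let canary bake before promoting
    # Production phase: fleet first, destructive migrations strictly last.
    run("update_production_fleet")
    run("post_deploy_migrations")  # e.g. drop a table; only safe here

steps = []
deploy(steps.append)
# Destructive migrations come after every node runs the new code,
# so no running version still references the dropped table.
assert steps.index("post_deploy_migrations") > steps.index("update_production_fleet")
```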
C
Thanks. Are we keeping the architecture components list up to date? Because I think I'm seeing stuff that isn't in the overall architecture list, like the PlantUML diagrams.
A
That's a great question. I would hope so; we should be. We'll certainly take a look, see if we have any gaps in there, and make sure it's part of our process. Yep.
C
Amy, you referred to us having more and more changes in GitLab, so we want to deploy more frequently. What's the expectation? Where do you want to be, say, halfway through the year, end of quarter, or end of year, whatever goals you've set, and how are we going to do that?
A
Absolutely, yeah. At the moment we are doing well against our MTTP target of 12 hours. Rollbacks are the next big enabler for us to trend that number down, for two reasons. One, they will allow us to recover from incidents more quickly, which frees us up to do more deployments; but they also allow us to take a little more risk with our deployments, because we can recover more quickly. So the big goal will be to get below our 12-hour MTTP target and look to reduce it further, which will hopefully get us back in a good place with the number of changes, and then smaller deployments. Alongside that, the migration to Kubernetes is going well. We have some big targets there: to complete the stateless Kubernetes migration and then start on the stateful ones.
C
Cool, thanks. Maybe this is super obvious to you, but are we ever going to get to a state where we deploy every change? Or is that just too much work, because we have too many changes and it wouldn't be efficient? Or is that an aim?
A
It's definitely the goal, yeah. I don't think we'll get there this year, but that's absolutely the goal, just in terms of fast feedback, and the concept that the smaller the batch, the safer the deployment. The ultimate small batch is one change. So yes, absolutely the biggest goal, but not this year's goal.
A
One question I had that I was curious about, which I was chatting with Myra about earlier: it was interesting to me, as I put these slides together, to come across the incident I referenced in there. One of our recent database migrations failed, not because there was anything wrong with the code — it's on slide two. There was a database migration that failed, not because there was anything wrong with the code, but because it timed out: it couldn't get the locks it needed, because there were so many other things going on with the database. I was curious, as I was asking earlier: is that the only incident we're aware of that had a similar kind of traffic-related problem?
B
I think it's in the gigabytes now. The operations these migrations were doing were actually quite simple: we were just adding a column. I think both of them were adding a column, but it couldn't be done, because to add a column you need to acquire a lock, and there were other transactions running at the same time — long transactions that were also acquiring locks — so the migration failed.
B
Now, what is curious is that this operation wouldn't have been a problem six months ago, or perhaps eight months ago, but it is a problem now. To my knowledge — and I think some database team members are here — we are still exploring where these long-running transactions are coming from, because we are not sure yet. That is a corrective action from one of those incidents.
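The failure mode B describes — DDL queueing behind long transactions — is commonly handled by attempting the statement under a short lock timeout and retrying. Purely as an illustration of that idea (the helper name and behaviour here are hypothetical, not GitLab's actual implementation):

```python
# Toy model of retrying a DDL statement under a short lock timeout, so a
# busy table with long-running transactions doesn't block the migration
# indefinitely. Real implementations set e.g. Postgres's lock_timeout.

class LockTimeout(Exception):
    """Raised when the lock could not be acquired within the timeout."""

def with_lock_retries(operation, attempts=3):
    """Try `operation`; on lock timeout, retry a bounded number of times."""
    for attempt in range(1, attempts + 1):
        try:
            return operation()
        except LockTimeout:
            if attempt == attempts:
                # Give up: failing fast beats queueing behind other locks,
                # which would block all traffic waiting behind the DDL.
                raise

# Simulate a lock that only becomes free on the third attempt.
calls = {"n": 0}
def add_column():
    calls["n"] += 1
    if calls["n"] < 3:
        raise LockTimeout("could not acquire ACCESS EXCLUSIVE lock")
    return "column added"

result = with_lock_retries(add_column)
```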
D
Yeah, exactly. What we're seeing increasingly is those longer transactions, and it could be as simple as one of them preventing us from getting the locks and causing this trouble. So we're figuring out which of these transactions are causing it, and what patterns we need to change. What we can see is that with increasing data size, some of these patterns just take much longer than they did before; I think that is what we're seeing and what we also need to address. But first of all we need somewhat better monitoring, I think, for transaction length and for where the transactions are coming from, and then we can chase them down.
A
Nice, that's great, thanks for sharing, and I appreciate your efforts here. It's been interesting for us, as we go into our rollbacks work, to look closer and closer at database migrations. Those are the things that have so far prevented us from getting rollbacks for free: how do we actually manage rollback safely around changes to the database? So I definitely appreciate all your efforts on this stuff.
D
We're also working on database testing, so that we are in better shape with testing migrations before we deploy them — before we even merge them, so much earlier in the cycle. I think this is going to help us with a lot of the problems we recently had with migrations.
D
However, there are still problems, like the lock issues we just talked about, that are related to the production traffic coming in. That is something the database testing we're currently working on is not going to help with, because this traffic only happens in the production environment, and we will have to find other ways to deal with that.
E
When we were investigating the state of the art of deployment rollback in the industry, there was this interesting concept of having a shadow canary, or even a shadow production. The problem is that it is really expensive. It would help us in this type of situation, though. The point is that you can have settings in your application, like an environment variable, that tell the application to never reply to any incoming request. Some of these companies basically run a proxy in front of the application and send the traffic to both real production and shadow production, so that both process exactly the same requests, and they have a dedicated database that is kept in sync. So they can test things like this: they have an environment that never replies back to the client, but where they can run, for instance, a migration in advance and see how the migration behaves under real production load. But yeah, it's just doubling the cost of the production environment.
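The shadow-production idea can be sketched very small. This is a toy model of the proxy E describes, with made-up names; the point is only that the shadow sees identical traffic while clients only ever see production's response:

```python
# Toy model of traffic mirroring: the proxy hands each request to both
# fleets, returns only real production's response, and records (but never
# returns) the shadow's, so shadow failures cannot affect clients.

def make_mirror(production, shadow):
    shadow_log = []
    def handle(request):
        try:
            shadow_log.append(shadow(request))   # observed, never returned
        except Exception as exc:
            shadow_log.append(exc)               # shadow crash is invisible to the client
        return production(request)
    return handle, shadow_log

prod = lambda req: f"prod:{req}"
next_release = lambda req: f"shadow:{req}"       # e.g. running a migration in advance
handle, log = make_mirror(prod, next_release)

assert handle("GET /x") == "prod:GET /x"         # client only sees production
assert log == ["shadow:GET /x"]                  # shadow processed the same request
```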
D
You can also do it at the database level only. We would be able to capture the traffic that's hitting the database and then replay it against another database, so that you don't have to replicate the whole environment. That's definitely an option here.
A
Nice, yeah, that sounds excellent. I've definitely seen places with a fully separate deployment pipeline for migrations: a separate migrations environment to test against, a kind of production-like environment, with deployment happening separately, which I suppose also makes rollbacks easier.
A
Something I added into the slides, but only very briefly, was the expand and contract pattern. Alessia, do you want to talk to us a little bit about it? I know you wrote some examples out, which I've got linked in there, but maybe just give us a little overview of why this matters and what we might do to adopt it.
E
Sure. This is basically the extended answer to Michelle's first question. The point is that when you have an environment and you want zero-downtime upgrades — which is a feature we provide to our customers and which is what we do on GitLab.com — basically everything that introduces a breaking change must go through three phases. It's called expand and contract, but the phases... let me try to remember.
E
Yes, it is expand, migrate and contract. The database is the easier case because, with post-deployment migrations, we found a way to have those three steps in a single package, with a single deployment. I'll use a database migration as the example, because it's easier and more common, so easier to understand. In a database migration, the expand phase is the regular migration, so we run it before the deployment.
E
Then we migrate the fleet, so every machine that is running version n starts running version n+1. Then, when we are sure that the whole fleet is running version n+1 — the new version of the application — and it is working correctly, we can contract, which means, for example, removing a column.
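Those three phases can be sketched for a concrete case, renaming a column, where the invariant is that each phase must be safe while old and new application versions run side by side. The table model below is purely illustrative:

```python
# Sketch of expand -> migrate -> contract for renaming "name" to "full_name"
# without downtime. A dict stands in for the real table.

table = {"rows": [{"name": "alice"}], "columns": {"name"}}

def expand(table):
    # Phase 1 (regular migration, before the deploy): add the new column and
    # backfill it. Old code still reads "name" and is unaffected.
    table["columns"].add("full_name")
    for row in table["rows"]:
        row["full_name"] = row["name"]

def migrate_fleet(app_version):
    # Phase 2: roll every node from version n to n+1, which reads and
    # writes "full_name". Both columns exist throughout the rollout.
    return "n+1"

def contract(table):
    # Phase 3 (post-deployment migration): only now is dropping the old
    # column safe, because no running node references it any more.
    table["columns"].discard("name")
    for row in table["rows"]:
        row.pop("name", None)

expand(table)
version = migrate_fleet("n")
contract(table)
assert table["columns"] == {"full_name"}
```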
E
So the database case is easy because we can do this in one single package. When the change is not database related, it requires basically three milestones, and that's the real problem here, because we want to give zero-downtime upgrades to our customers. At GitLab.com we could do this in three days, or even less: just three consecutive production deployments. But because of the on-premise installations, we need to split it over three milestones. That's basically it.
E
Every time we want to change something that will not be able to run at the same time as the old version, we have to think about how we can split this change across three releases. Another example, which doesn't rely on the database, is something that often happens: we change markdown parsing. This is a common source of incidents.
E
We add a feature, or we change something in our markdown engine, and as soon as the canary machines start processing new data, they save pre-processed markdown to the cache in the new format. When a production machine that is still running the old version picks it up, it doesn't know what to do with it, and it crashes.
E
What we do in this case to avoid that is: we first implement the reading, so we make sure we ship something that can read the new format without writing it. Then, in the next release, when we know that the reading part works, we can ship the writing. Then, in the third part, we remove the backward-compatibility functionality. So after the third release, everything works with the new implementation, and that's it, basically.
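The read-before-write sequencing above can be sketched for the markdown-cache case. The cache format here is invented for illustration; only the release ordering reflects what was described:

```python
# Sketch of the three-release markdown-cache change: release 1 ships a
# reader that tolerates both formats, release 2 starts writing the new
# format, release 3 drops the old-format path.

def read_cached(entry):
    # Release-1 reader: tolerate both formats, so entries written by a
    # newer canary node don't crash nodes still on this version.
    if isinstance(entry, dict) and entry.get("v") == 2:
        return entry["html"]       # hypothetical new format
    return entry                   # old format: the cached HTML string itself

def write_cached_v2(html):
    # Only shipped in release 2, once every running reader understands v2.
    return {"v": 2, "html": html}

old_entry = "<p>hi</p>"                     # written by the old version
new_entry = write_cached_v2("<p>hi</p>")    # written after release 2
assert read_cached(old_entry) == read_cached(new_entry) == "<p>hi</p>"
```

Release 3 would then delete the `isinstance` branch, leaving only the v2 path.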
A
Nice, thanks for going through that. James, do you want to verbalize your question rather than typing it?
F
Sure, sure, yeah, and I'll type it in as well. Thank you for explaining that; it really clarifies some of the things I've been trying to understand about how we were handling some of our stuff. Based on what you just explained, one thing comes to mind that I'm just kind of curious about.
F
Has it ever come up... say we have three version numbers, x.y.z, and that's what the customer sees. And, like you just said, we can only do something for the customer this month, then this month, then this month. But have we ever thought about having, you know, a ".alpha" — a version that only exists internally — for instance for what you just described?
E
Okay, thanks for your question, James. So we had this conversation around how we can speed up the adoption of new features without breaking them into three milestones, because that's basically the problem. I proposed something; there's a conversation in an issue, so I will find the link so I can add it here. Basically, this is already complex, so adding extra complexity on top of it doesn't really help. But the point is that, yes, we can do it, depending on the type of breaking change.
E
In order to speed this up for our customers, we could leverage post-deployment migrations. Feature flags are stored in the database, and a post-deployment migration is a synchronization point in time: it tells us the whole production fleet is upgraded now. This means that you could ship the feature disabled behind a feature flag, wait till the end of the deployment, and have a post-deployment migration automatically enable the flag. We've never done this, but in theory it could work.
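As E says, this is only a theoretical proposal; a minimal sketch of the mechanism, with made-up names, might look like:

```python
# Sketch of shipping a feature dark and flipping it on from a
# post-deployment migration, which only runs once the entire production
# fleet is on the new version. All names here are hypothetical.

feature_flags = {"new_markdown_engine": False}

def application_code():
    # The new code path exists on every node but stays dormant
    # until the flag flips, so mixed-version fleets behave uniformly.
    if feature_flags["new_markdown_engine"]:
        return "new behaviour"
    return "old behaviour"

def post_deploy_migration():
    # Acts as the synchronization point: by the time this executes,
    # every node is already running the release that contains the new path.
    feature_flags["new_markdown_engine"] = True

assert application_code() == "old behaviour"   # during the rollout
post_deploy_migration()
assert application_code() == "new behaviour"   # after the whole fleet is upgraded
```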
A
Okay, fantastic. Thanks so much for coming along, everyone, and thanks for all the questions. Hopefully we'll see you all next month. Thank you.