From YouTube: 2021-05-12 AMA about GitLab releases
Description
AMA with the Delivery Team
A
Okay, I'll get started, so hello everyone, and welcome to the May Delivery team AMA. Please feel free to add any questions you have to the agenda, and we'll also have time at the end if you want to just verbalize them. So, Nick, you have the first question.
B
Yeah, so we're working on a lightweight training for development engineers to increase awareness of version compatibility issues and how they affect upgrades for GitLab.com and self-hosted.
C
I can take this one, Amy; I just started writing an answer in the agenda. We recently created a handbook page named "Coding at scale"; there's a link in the agenda right now. It's a description of the problems and a collection of resources, so there are more resources on top of the one that you linked. We have something about user uploads, Sidekiq multi-version compatibility, and migrations.
C
Now, this is coding at scale, so it covers both delivery and scalability topics, but I think these are both important things to create awareness around.
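
For context on the multi-version compatibility problem mentioned above: during a rolling deploy, old and new application versions run side by side, so a queued job enqueued by one version can be executed by the other. A minimal sketch of the resulting handler pattern, written in Python with entirely hypothetical names (GitLab's own code is Ruby and Sidekiq), might look like this:

    def cleanup_uploads(user_id):
        # Hypothetical helper standing in for the real work.
        print(f"cleaning uploads for user {user_id}")

    def notify_user(user_id):
        # Hypothetical helper standing in for the real work.
        print(f"notifying user {user_id}")

    def perform(args):
        # Accept both the old payload shape (a bare list) and the new one
        # (a dict), because jobs from either version may still be in flight.
        if isinstance(args, dict):
            # New-style payload, enqueued by the upgraded version.
            user_id, notify = args["user_id"], args.get("notify", False)
        else:
            # Old-style payload from a not-yet-upgraded node; keep the old
            # default behaviour so in-flight jobs still succeed.
            user_id, notify = args[0], False
        cleanup_uploads(user_id)
        if notify:
            notify_user(user_id)

    perform([42])                             # enqueued by the old version
    perform({"user_id": 42, "notify": True})  # enqueued by the new version

Dropping the old-payload branch is only safe one release after every enqueuer has been upgraded.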
C
There's also a follow-up on this: I wrote some of those pages, not all of them, but I think we are lacking a feedback loop from developers in this area. Oftentimes we are aware of a problem, some of which are very high-level or high-complexity; we try to write something, but then usually only the engineers who end up helping in an incident related to it read it and start to understand it.
A
And I just want to add that I'm really happy to see this, so thanks so much, Nick, for creating the issue and pushing this forward. One thing we have been talking about quite a lot around rollbacks is this sort of version compatibility and how it all fits in with post-deployment migrations.
A
It's such a relevant topic for us right now, and actually, if there are things we can review in the training and documentation as well, we'll certainly read through these things; but if there's anything that would be useful for us to add, let us know and we can help out there.
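
On the point above about how version compatibility fits in with post-deployment migrations: regular migrations run while the old code is still serving traffic, so they must be backwards compatible, whereas post-deployment migrations run only once the whole fleet is on the new code. A rough sketch of that ordering, with hypothetical names rather than GitLab's actual deploy tooling:

    def run_regular_migrations(version):
        # Schema changes the OLD code can tolerate (e.g. additive columns).
        print(f"{version}: regular migrations (post-deployment ones skipped)")

    def rolling_deploy(version):
        # Old and new application code briefly coexist across the fleet.
        print(f"{version}: rolling out application code node by node")

    def run_post_deployment_migrations(version):
        # Cleanup or destructive schema changes, safe only once no node runs
        # the old code. After this step a package rollback is no longer
        # guaranteed safe, because the old code may predate the new schema.
        print(f"{version}: post-deployment migrations")

    def deploy(version):
        run_regular_migrations(version)
        rolling_deploy(version)
        run_post_deployment_migrations(version)

    deploy("14.0")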
D
Yeah, just saying I think this is great. One problem I do find working with self-managed customers is their upgrade cadence and the way they run GitLab; their environments or setups are actually quite different from what we use for SaaS or test environments. So I think this is super helpful, and thank you so much for creating it.
E
Wow, so we are looking to stand up a monthly user group, virtually, and given that we have a wonderful monthly release, we absolutely want to make sure it's part of the content that we're talking about. So I was looking to see if there was a good place that I should be keeping an eye on as we build out the monthly agendas. I saw a blog post from March, but what I'm looking for is something that speaks to the value: so we've got this new monthly release.
E
What does that really mean to you, Mr. User, or maybe a potential prospect or upsell customer, to help them understand what it really entails? And then I did find the site that looks like it updates monthly on the 22nd. So I was just looking to see if there are any other resources that maybe I should be keeping an eye on, and I see that Nick dropped in a very handy tool, so thank you for that.
B
Yeah, I'm not sure if it addresses the part of your question about speaking to the value, but I just wanted to link that for anyone who wasn't aware of it.
A
I was going to say yes, and there should be a section in there as well about what's new and sharing that; but the blog post is the main way we generally enable these things. The product team creates the release post each month, and then we publish that on release day.
E
Sounds great. Any other training resources, or GitLab team members you think would be interested? This would also be a platform for anybody who wants to talk about the monthly updates.
C
Maybe something worth mentioning is that the next milestone will be 14.0, a major release: breaking changes and things like that. We are putting some effort into communicating breaking changes up front (mostly breaking changes), because things are going to change daily for our SaaS users, and only on the 22nd, with the new package, for on-premise installations. So I think you could reach out to Orit, because she's trying to organize communication around this; the more we can amplify the message, the better.
A
I'll put it in here: there's also an issue I'll add in; I'll find the link afterwards and ping you. But just a little bit on why the breaking changes will be coming through throughout the month: the breaking changes will all be targeting the 14.0 release on the 22nd of June, but as they go into GitLab.com on our daily deployments, those land as soon as we can make them. So the changes drop in on GitLab.com around our deployment cadence and then make it out to self-managed on the 22nd. But yeah, Eric is leading the way on coordinating this stuff.
C
Basically, we had always been running dry runs in production, and today we ran a real one, and the checks were more restrictive than we expected, so we had trouble starting the process. Other than that, the real surprise was that we removed canary from the fleet, to have a simulation as close as possible to a real incident, and production wasn't able to handle the load of the deployment without canary; so likely we are running it under-provisioned. That is the big finding.
A
So there is some traffic, but I would assume the majority of the load, at least, is internal. So yeah, overall, the rollback went really successfully. As was just said, it was interesting at the start: we had to set this one up as a kind of test case. What that meant was that, in the real world, we'll be rolling back when we have an incident: a deployment will have completed, we'll have discovered something has gone wrong, and then we would initiate a rollback to the previous package. In this case we didn't have an incident; it was a test scenario, so what that meant was we cancelled the previous deployment to allow us to roll back. It's a little bit of a different scenario, which made the start a little bit trickier, because the deployment was technically still ongoing; it hadn't completed.
A
So
we've
got
some
improvements
there
on
documentation
to
make
that
a
bit
easier
and
then
yeah.
We
will
look
at
how
we
can
also
increase
capacity
across
the
fleet
so
that
we
can
actually
run
these
things
at
the
same
time,
but
it
was
great
great
test
and
the
rollback
was
successful
and
it
did
what
we
expected
it
to
do.
A
We ran QA tests afterwards and they all passed, so overall we're really happy. We'll be making some changes and then looking to run another test, and then the next step will be to unleash this on incidents, so we'll be starting to check out rollbacks for incidents. It's worth noting we can't roll back for every type of incident.
A
There are a couple of constraints: we only roll back to the previous package, so it does rely on us detecting a problem before we do a follow-up deployment, which then gives us an opportunity to roll back; and we also can't roll back post-deployment migrations. So those are the things we'll be starting to monitor and work out: when can we roll back, and how can we have more opportunities to roll back?
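
Those two constraints can be expressed schematically; the Python below uses hypothetical names rather than the real release tooling. A deployment can only be rolled back while it is still the newest one (a follow-up deployment removes the opportunity) and only if its post-deployment migrations have not yet run:

    from dataclasses import dataclass

    @dataclass
    class Deployment:
        package: str                      # illustrative package identifier
        post_deploy_migrations_ran: bool

    def can_roll_back(history):
        # `history` is the ordered list of deployments; history[-1] is the
        # current one. Once a follow-up deployment lands, the faulty package
        # is no longer history[-1], so the rollback opportunity has passed.
        if len(history) < 2:
            return (False, "no previous package to roll back to")
        if history[-1].post_deploy_migrations_ran:
            return (False, "post-deployment migrations cannot be rolled back")
        return (True, f"roll back to {history[-2].package}")

    print(can_roll_back([
        Deployment("13.12-previous", post_deploy_migrations_ran=True),
        Deployment("13.12-current", post_deploy_migrations_ran=False),
    ]))
    # -> (True, 'roll back to 13.12-previous')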
A
That's a really good question. We haven't got one at the moment; we have got our agenda document, which has quite thorough notes from where we sort of debriefed, so we probably won't create a separate one. (Thanks, Henry.) Yes, we probably won't create a separate one: we kind of discussed it in real time and put in some follow-up actions, but please feel free to add comments in that document as well if you have any other questions. Perfect, awesome, thank you. Thank you. And, Stan?
A
Yeah, absolutely, and certainly we track this: we track the mean time to production, and at the moment... oh, we haven't got the metrics; they're running a bit delayed. I think at the moment our target is 24 hours.
A
I think we're a little bit below that at the moment; we raised the target at the end of March. There are two things that really impact this. One of them is incidents: if we have incidents, we are often blocked on deployments, which is why we've raised the target to 24 hours for now. As we all know, over the first few months of the year we've seen an increased number of incidents, and that has a big knock-on effect on deployments. The other one, which we see particularly when we're responding to incidents (anyone feel free to dive in, but this is my perception), is the time to get something approved, merged, and run through those pipelines before we even begin the deployment process.
C
Yeah, some more details on this. Basically, if there's no incident, you can expect that your change will be on GitLab.com in less than 24 hours; 12 hours is a good average. But as soon as something blocks us, whatever it is (a delayed post-deployment migration taking too long, or any type of incident or near-miss incident, because we also block deployments if something starts going wrong), then it takes around eight hours to get out of that state, which is a lot, because basically everything is blocked. We need to identify the problem and fix it; then we have a pipeline to run, because it's a merge request, so it has to be reviewed and go through a complete pipeline, which takes roughly one to two hours.
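
As a concrete illustration of the metric under discussion: mean time to production is the average delay between a change being merged and the deployment that ships it to GitLab.com. A toy calculation with made-up timestamps (not real data):

    from datetime import datetime, timedelta

    # (merged_at, deployed_at) pairs; illustrative values only.
    changes = [
        (datetime(2021, 5, 10, 8, 0),  datetime(2021, 5, 10, 19, 30)),
        (datetime(2021, 5, 10, 21, 0), datetime(2021, 5, 11, 10, 15)),
        (datetime(2021, 5, 11, 9, 0),  datetime(2021, 5, 11, 20, 45)),
    ]

    lead_times = [deployed - merged for merged, deployed in changes]
    mttp = sum(lead_times, timedelta()) / len(lead_times)

    print(f"mean time to production: {mttp}")    # about 12 hours here
    print(f"within 24h target: {mttp <= timedelta(hours=24)}")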
A
There are a couple of things taking place to improve this. Quality are doing a super job of working through ways to make tests run faster, reducing flaky tests, and making it so that we, as GitLab, can actually run the relevant tests against a change before we merge things in. Those are all super helpful.
A
We also just recently had Graham join the team. He's based in APAC, so over the next few months we'll be training Graham up on release management duties, and that expands our window: we'll suddenly be available to deploy during potentially all hours. At the moment we're limited to deployments in EMEA and the Americas, which is another reason that if you merge at, say, 08:00 UTC, you're probably going to have to wait 12 or 14 hours for our next deployment window.
A
So we'll see these numbers start to come down, but yeah, there are certainly lots of improvements we would love to make here.
A
We have Graham in Australia, and it's just him, so we won't have full coverage, because he's also on call; we'll have a sort of skeleton cover over there. But as we look to expand things over the next year or so, I'm sure we'll look to expand this out.
D
All right, thank you. So, just raising awareness, nothing really actionable, but support has an epic set up to prepare for 14.0 deprecations, breaking changes, and upgrade problems, and I just think it's a great opportunity to collaborate. We'll get customers creating tickets, we'll surface the issues that affect customers, and then we can all work together to find solutions and document workarounds or mitigations.
A
Fantastic. Has anybody else got any questions, any follow-up comments, or anything else they'd like to bring up? No? Okay, in that case, thank you so much once again; I really appreciate the questions and discussions and all the links and things everyone shared. Thanks so much for joining us today, and I hope you have a great rest of your day.