From YouTube: 2022-12-14 AMA about GitLab releases
Description
Delivery Group monthly AMA about GitLab deployments and releases
A: Everyone, I'll just wait a few more seconds before we start, to let people join the call.
A: Hey, I'll get started and welcome everyone. This is the 14th of December 2022, and this is our monthly AMA with the Delivery team. I've got a couple of questions on the agenda already, so thanks for adding those. Leo, would you like to verbalize your questions if you're on the call?
B: Yes, thank you. I'm not sure if this is the right call for this type of question, but by all means feel free to let me know. I have a request from a customer to have a call with them to review some of the features, or some of the concerns they have, with using our release features versus the tool they're currently using: HEAT, I think that's the name, or HEAT Software. They have a comparison matrix that they generated.
A: Great question. We're probably not the right team to answer this if it's specifically around how a feature works; what we do isn't actually a feature. The Delivery team deploys and releases changes to GitLab users, and we're not necessarily always using all the features of GitLab ourselves. However, Chris, are you just listening in? Because I actually believe we do have the right person on this call.
C: What we could do is create an issue and discuss in more detail what you're hoping to get out of the call and what the customer is asking for, and then we can route it depending on the specifics. I'm not actually familiar with HEAT at the moment, so that's something I could look into as well, but yeah, let's start an issue, and then feel free to tag me on it.
D: I'll keep the theme of strange questions rolling. I'm having a surprisingly difficult time narrowing down what our most critical GitLab repos are: what supports the base functionality of GitLab as a product versus what we would consider maybe a feature. I'm thinking here of something like label functionality going down. That's not great for the customer experience, but it's also not something that necessarily stops us using the tool. How would you all see it, from the perspective of understanding how everything filters in and gets deployed?
A: Yeah, that's a super interesting question, so we can certainly take a look. My quick answer would be: because we are running as a monolith, I don't think we actually have a way of saying that if all the code that makes labeling happen just isn't there, for whatever reason, things will be fine, as a monolith.
A: There is a kind of assumption that everything will be there. Our deployments package up changes from a number of repos and then we treat that as a single unit: it's either working or it's not. Where this gets a little bit more nuanced, in terms of how "working" it is, comes down to how much testing goes into each package. There are certainly levels of criticality that Quality are helping to guide, in terms of how many of these features get tested before we deploy.
A: So there certainly are cases where we have things that are considered less important; they may not have all the tests in place for a deployment. It could be that we deploy something, it's not looking exactly as it should, and a fix gets added in. But in terms of actual elements of GitLab: there are some satellite projects which we don't package up, for example the container registry, KAS, and various other satellite areas.
A: Things like that are related, but they are deployed a little bit differently, so it could be that we don't pull changes from those. But for things that are really integral to the deployment package, more like Gitaly and Pages, as well as the actual Omnibus and GitLab repos: if any of those were missing, the deployment wouldn't work for us. Now, that doesn't mean the product wouldn't be usable, but that's a slightly different measure. So I think, as a monolith, we should probably assume it won't work. But maybe the other element to your question is more of a product one, in terms of which things should be treated as critical from a user-experience point of view: which are the pages or features that a user would always expect to have available. Quality should probably have opinions there, as well as the people who plan test coverage for these things.
D: Okay, that's super helpful. I've linked the MR there. I know it's a strange question; it's mostly just that, if we want to take a risk-based approach, and start locking down some of these repositories and providing more change management to meet our compliance and regulatory obligations, I want to be thoughtful about not applying broad sweeping changes everywhere. I want to really highlight those most critical areas. So if anybody here has any thoughts, please feel free to weigh in on that MR.
A: So I have a question. I know we don't have the key engineer here, but maybe we could get the high-level view. One of the interesting projects we're working on this quarter inside Delivery is around improving the observability of the GitLab deployment pipeline. We have our MTTP (mean time to production) performance indicator, which shows how long it typically takes a change to reach production, but we don't have really good insight into what contributes to that number, so sometimes it's a little bit higher.
A: One of the really interesting projects taking place this quarter is to try and actually improve that. Would you be able to give us a little bit of an overview? I guess what I'm most interested in is how this fits in alongside the GitLab product, and what we might end up with in the future in terms of what we're trying to achieve.
E: Yes, that's a good question. What we are trying to build is an observability foundation within pipelines.
E: Now, as you said, our MTTP runs from when the merge request merges to when it hits production; it's an end-to-end number that measures the entire chain. We don't have visibility into it, because we don't currently have the instrumentation or tooling to understand each one of these pipelines, jobs, or stages.
E: How much time is each part taking? Our MTTP is composed of some active parts, where we actually run jobs and deployments, but there is also some waiting, for example when packaging is happening over in Distribution and so on. So we also don't understand whether some of those stages, jobs, or parts of the pipeline start to take more time because of a problem, or maybe because we've extended our end-to-end testing or QA too much, and so on.
E: How do we spend the time in our MTTP? How do we chart the parts we can optimize, start to understand whether we have trends, and whether a trend is constantly increasing and so on: in which part, and how we can improve that part. All of this functionality, right now, we are building within the Delivery team, because it's something we actually need ourselves, so that we can decrease our MTTP and so on.
E: But this would probably be a great feature that we would like to see within the product, right? If we have this problem, a lot of our customers are going to have the same problem, especially the ones with more complex pipelines to deliver their software, and having this functionality within the product itself would help them.
E: So it would probably be a great addition, maybe within our Monitor and Observability product stack, and we will try to give back on that part and share our learnings. We are already using some of the functionality offered by the Monitor and Observability group, but we definitely need to start to see how we can monitor and observe one of the main features of our product, which is pipelines.
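The per-stage breakdown E describes can be approximated today with the GitLab REST API, whose pipeline-jobs endpoint reports each job's stage and duration. A minimal sketch, assuming a hypothetical project ID and token (this is illustrative, not the Delivery team's actual tooling):

```python
# Sketch: break one pipeline's run time down by CI stage.
# PROJECT_ID and TOKEN below are hypothetical placeholders.
import json
import urllib.request
from collections import defaultdict

GITLAB = "https://gitlab.com/api/v4"
PROJECT_ID = 12345                     # hypothetical project
TOKEN = "<personal-access-token>"      # hypothetical token


def fetch_jobs(project_id, pipeline_id):
    """Fetch the jobs of one pipeline via the REST API."""
    url = (f"{GITLAB}/projects/{project_id}"
           f"/pipelines/{pipeline_id}/jobs?per_page=100")
    req = urllib.request.Request(url, headers={"PRIVATE-TOKEN": TOKEN})
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)


def aggregate_by_stage(jobs):
    """Sum per-job run time (seconds) grouped by CI stage.
    Jobs that never ran report duration None and are skipped."""
    totals = defaultdict(float)
    for job in jobs:
        if job.get("duration") is not None:
            totals[job["stage"]] += job["duration"]
    return dict(totals)
```

Run over every auto-deploy pipeline in a date range, the same aggregation would show which stages (packaging, QA, deploy) dominate MTTP and whether any stage's share is trending upward.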
C: Yeah, with several holidays coming up, I'm just curious whether the Delivery team does anything in particular to prepare for them, or during and after them. Do our deployments typically change during that period?
A: Yes, that's a super question. Yes, we absolutely do change things over the holiday period. This year it's running from the 26th right through to the 2nd. We have generally low availability across Infrastructure; even more significantly than within Delivery, we have quite low availability in the Reliability teams, so we don't want to create unnecessary risk by making lots of changes over this time. So we do have a hard production change lock coming in over that time, and that will prevent deployments.
A: It will also prevent feature flag changes, and it will prevent the kind of general changes that often happen on production, so we'll have a sort of full lockdown. Now, one thing we do know is that halting deployments for a long period of time actually adds risk, because it means the first deployment we do afterwards has an awful lot of changes in it.
A: So we are also planning a couple of deployments that will take place during this production change lock, and they will be carefully coordinated and managed with approvals, so that we can reduce our deployment pressure a little bit over that time.
G: What are your plans to prepare for that? What are we hoping to accomplish there?
A: Yeah, it's a really great question, and it's certainly something we've considered. There's always a risk of us needing to respond to an incident, or a security vulnerability, or something like that, so we do have people available who are working over those times. We also have an escalation policy if needed, and we already have approvals for a couple of planned deployments that will take place just to reduce the pressure in the event that there are significant problems.
A: We could also do additional work if needed: releases are not covered by the production change lock, so we can still put out patch releases or security releases if we need to. I believe across Development there is also a similar kind of escalation process that will be in place, so that people can be gathered and coordinated on these things if needed.
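The lock policy described above (a hard window from December 26 through January 2, with patch and security releases exempt and a few pre-approved deployments allowed through) can be sketched as a simple guard. The dates and function below are illustrative only, not the actual infrastructure tooling:

```python
# Sketch of the holiday production change lock described above.
# The window dates and exemption rules here are illustrative.
from datetime import date

LOCK_START = date(2022, 12, 26)
LOCK_END = date(2023, 1, 2)


def change_allowed(day, is_security_release=False, has_approval=False):
    """Ordinary production changes (deployments, feature flag toggles)
    are blocked inside the lock window; security and patch releases are
    exempt, and explicitly approved deployments may still proceed."""
    in_lock = LOCK_START <= day <= LOCK_END
    return (not in_lock) or is_security_release or has_approval
```

A real implementation would also record who approved an exempted change, but the core decision is just this window check plus the exemptions.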
F: I heard through the grapevine that the Delivery team might be doing a fast boot sometime next year. If that's the case, I'm kind of curious as to what you all might work on, and I'm also curious how you plan to showcase how productive it was, or whether or not something was successful during the fast boot, I guess.
A: That is a great question, so I will give a first answer, but please jump in, people. We have put together a proposal for a fast boot that we would like to have, and we have a few motivations for doing it right now.
A: One is that over this past year we have had quite a lot of team changes: the team has grown and it has split in two, but we've also shifted what we're working on. Up until the last couple of months, we have very much been the team who handles deployments, handles releases, and does all the work required to get these changes out. Now we want to line up with the product direction that we want to take and the platforms direction we want to be moving in.
A: We really want to start shifting so that we are not the people who are always holding all the tools and taking all the actions; instead it becomes more of a self-managed, sorry, a self-serve process, where developers or other people who want information can pull it themselves without having to ask Delivery for everything.
A: So, in line with that, we have put together a proposal so that we can come together and start building out some of these foundations.
A: The idea we have is to spend a few days together and try to build out a sort of MVP, a very, very first iteration, that would allow us to deploy experimental changes to GitLab.com.
A: There are a few interesting elements from a Delivery point of view. Packaging is a significant one: at the moment, all changes get packaged together and we deploy them together, so we need a way of creating and managing an experimental package. We would also need to think about what the environment looks like; we currently have our production Canary environment and our production environment.
A: This would be something different, and it would be a much more contained and safe environment. We wouldn't want traffic just to land on it randomly; we very much need to manage that. And observability as well, because the point of having something experimental would be to monitor it and see how it actually behaves. So those are some interesting elements that we would hope to come together and have a go at building out.
A: No, great question, Chris. A fast boot is a bit of a GitLab concept: bring people together, co-located, for a short period of time, and try to basically kick off a project or a concept together. Delivery did one in the very, very early days of the team: several of the people in the Delivery team came together to do the first version of auto-deploy, to put all the pieces together.
A: We think this one has a similar kind of complexity that runs across both of the Delivery teams, and it would be well suited to figuring out as a group together.
G: I think Matt did a great article about how he did profiling on a Redis cluster. Is that still possible on Kubernetes? And then the second question is: are we moving other things in there as well?
F: Yeah, so Scalability certainly has done a lot of work around profiling Redis and such, and a lot of the tooling they use should be interchangeable, whether it's investigating problems inside of Kubernetes or inside of virtual machines. If I recall correctly, what they were troubleshooting was inside of virtual machines; that said, the tooling they're leveraging should still work inside Kubernetes land.
F: As far as my next steps in migrating things into Kubernetes, I think we've decided to put a pause on that at the moment. If I recall correctly, there are some upcoming changes in the direction Gitaly has chosen to take with their method of managing redundancy within the Gitaly service itself, storage redundancy and such, so we have chosen not to migrate that particular workload over into Kubernetes. That also impacts Praefect, because it seems like Praefect might be changing quite greatly.
F: The only other item I can think of that is pretty much not inside of Kubernetes is Postgres. I don't see that moving into Kubernetes: that's a massive system, we have a dedicated team for it, and these are big servers.
F: If we moved that to Kubernetes, that would be something we would want to raise with the team that manages those systems. I think we've only migrated one or two Redis clusters into Kubernetes; I don't think we've got all of them in there, but I think Scalability is tackling that and deciding the next steps for Redis in general.
F: I guess another one we could think of is our front door, HAProxy. Currently, the decision was made to try to upgrade HAProxy in place, so that we get the latest version and we're not hitting end-of-life support on the version we currently run. I think because of that work we are pausing on trying to migrate our front-door services into Kubernetes as well, so that'll be something the Reliability team will be tackling.
A: And just to add to that: I would dispute your word "pause" a little bit; "scale back", maybe. All of the key services around the application are pretty much migrated. All of the stateless services are migrated, and I think Redis at the moment looks like the last really significant one that would get great benefits from migrating, so yeah, Scalability have done some great work.
A: I believe they're continuing to look at how to migrate the rest of the Redises. Beyond that, we have a lot of little services that are very much down to how GitLab.com is set up and running, but probably a lot less useful for general users, so they may or may not come, but they're not considered part of our GitLab.com migration.
A: And that means we can move on to the next sort of exciting stuff, because the whole point of doing the migration was to set us up for deployments that are faster and more flexible.
A: So this is actually a great point for a handover from Delivery to Reliability, who are now taking on more of the hands-on operating of the Kubernetes clusters, doing more of the maintenance, and getting those things to where we want them to be for the future. Delivery will now be starting to build out what that means for our deployment strategies: how can we make more use of this flexibility that we have? So there'll be lots of exciting stuff coming from that.
A: Okay, fantastic. Thanks so much, everyone, thanks for all the questions, and thanks for joining us. I hope you'll have a great rest of your year. We'll see you next year. Take care, bye.