From YouTube: 2022-06-08 AMA about GitLab releases
A
Okay, so welcome everyone. This is the June 8th, 2022 GitLab releases and deployments AMA. We are here today with a few people from the delivery team, or delivery group, as I suppose we should start calling ourselves. We are rapidly heading into splitting into our two teams, orchestration and system. But does anybody have any questions? In fact, there's nothing on the agenda, but does anyone have a question they would like to verbalize?
B
I have a question I would like to verbalize.

A
Please go ahead.

B
Can you describe in some detail what the team split is going to encompass for delivery, what the end goals are, and so on?
A
I'll talk first, and I'm happy for others to jump in as we continue. So, for the years that delivery has existed, we have been a single team: the delivery team, responsible for deployments and releases. In that work we've had quite a broad domain: we have put together deployment pipelines, we've put together release processes, but we've also been involved in setting up Kubernetes clusters and migrating services. So, to split that down into domains that are sort of team-sized, we are going to attempt to split into two teams.
A
We will have the delivery orchestration team, which I'll be managing. Alessio will be one of the engineers, plus Myra and Graham, and Matt, who's just joined us. We'll be focused on how we actually orchestrate deployment and release tasks. Practically, day to day, this is going to include things like: how do we get security fixes into security releases in an automated, painless way?
A
How do we take a change that somebody has made? I suppose I'm simplifying it down to: if you have a thing and you want to deploy it to a place, orchestration will give you the framework and the tools to be able to do that. And then, on the other side, we will have the delivery system team. Okay, do you want to talk that one through? I'll take over typing.
C
Understanding also how we can move forward to the goal of having self-serve deployments from the teams and everything else. I think what we are missing right now is a focus on building the platform that we'll be able to deploy all the applications on top of. So we'll be more focused, probably, on the Kubernetes side, but there will be some overlap with the orchestration team, right?
A
We're still trying to work out the exact boundaries, but we've been talking a little bit about this recently. If we look at delivery's domain and tooling today, it's quite difficult to see the edges, because it's been built by one team and owned by one team, so that makes sense. But if we look at where we want to get to, it hopefully starts to get a little bit clearer. One of our big goals for the delivery group is to be able to allow more teams to self-serve deployments.
A
At the moment, pretty much all of the changes come through delivery. For some people that works really well: we have the auto-deploys and they go out. For other people it's a bit difficult; they don't necessarily have to be grouped into auto-deploys, but at the moment, for our sake, that's the way it works. So we'll be looking to see if we can set things up to let as many people self-serve as want to self-serve. I think, if we start to look at it in that sense, then we become more of a provider.
A
An enabler. So orchestration is enabling other teams to have a means to deploy, and system is perhaps giving us the means by which to deploy: metrics, and a uniform way to tap into the things around deployments, like tracking them and monitoring them, as well as the simple act of code hitting a server.
A
So we've got lots to define still on this. We will certainly be working together, at least through this quarter; we have shared OKRs for this quarter, and I expect that over the next few months we'll have a lot of things with shared ownership, where we'll work together with each other.
B
I recently saw a proposal about blue-green deployments that kind of just flashed through the issue tracker. I'm curious: is this something that we are pursuing, and if so, where are we with it? Is it happening? I guess I don't know.
A
We probably wouldn't want to do it just for the sake of doing it, and the main reason for that is that we already have a lot of complexity, particularly around our deployments. We run multiple clusters on Kubernetes, and not all of the same type: we have our one regional cluster and we have our three zonal clusters. Just throwing blue-green in amongst that, for the sake of it, would add a lot of complexity.
A
The couple of big benefits that I've seen in blue-green: one is the speed at which you can reverse off something, so you're not really even technically rolling back. For those people who maybe don't know blue-green: you're running on a version, you deploy the new version alongside it, and you flip the traffic over. So, rather than having to speed up a rollback pipeline to recover, we would just put the traffic back on the previous running one. So it could be an interesting one.
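The flip described above can be sketched as a toy model. This is a minimal sketch with hypothetical names, not GitLab's actual tooling: both environments stay deployed, so rollback is just pointing traffic back at the previous one.

```python
# Toy model of blue-green switching: both environments stay running,
# so "rollback" is an instant traffic flip rather than a redeploy.
class BlueGreen:
    def __init__(self, blue_version, green_version):
        self.environments = {"blue": blue_version, "green": green_version}
        self.live = "blue"  # environment currently receiving traffic

    @property
    def idle(self):
        return "green" if self.live == "blue" else "blue"

    def deploy(self, version):
        # The new version goes to the idle environment, then traffic flips.
        self.environments[self.idle] = version
        self.live = self.idle

    def rollback(self):
        # The previous version is still running in the now-idle environment.
        self.live = self.idle

    def serving(self):
        return self.environments[self.live]
```

For example, after `deploy("14.1")` a `rollback()` immediately serves the old version again, with no pipeline to re-run; the trade-off, as discussed below, is tracking twice as many running pieces.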
A
It may make things easier as we get to self-serve: it may make it easier for a stage group team to be able to reverse something out without running a rollback. But it certainly does add complexity. You need to keep track of a lot more pieces and a lot more versions, so we would need to consider it quite carefully.
A
I think the question is: does it bring enough value, sitting alongside our current canaries, which give us our rolling deployments, to justify the extra operability? As well as the actual physical cost: running twice as many clusters is not a trivial material cost either. So it's an interesting idea, but we're certainly not going to just implement it next month, I think.
A
Does anyone else want to throw in anything? What are other people's thoughts about blue-green? Has anyone seen this being used somewhere as an interesting practice?
E
Yes. I work in applied ML, so we build models on our side to solve problems around GitLab. Where we are right now, we're not at any stage where we need this, but at past companies, when we've had models in production, you do tests around things like model drift and data drift. But you're really never going to get a sense of whether your model is operating better, or well, or as expected, unless you're putting it through traffic that's real.
E
It was like: yeah, we're getting improvements, or we're getting reductions, or this entirely broke the flow for every user that got it (that one time was awkward). But you get these sorts of senses of it. It's easy to take a small batch of users and get them onto the upgraded path for your ML models, which really provides more information than you could probably get any other way.
F
We had interesting thoughts on this about the ability to package custom versions of the application: the current content of whatever is running in production plus, let's say, a single merge request, and give them a share of production traffic. So, something like proving that your code is worth production by running it on production before it can even become part of the package itself.
F
So this is something that, depending on what we plan to do, can be interesting. It poses a few challenges, like database migrations, breaking changes and things like that, but I definitely see a situation like the one you described where, basically, you want to say: I have an idea, which is something like A/B testing. I have an idea, I don't...
F
So that's a great question, Amy. I think the two biggest challenges on this, at our scale, are, first, the pipeline timing: building that type of artifact in a timely way, while also having all the tests and everything, and being able to deploy it.
F
It's a challenge, and so I'm kind of wondering if we want to bundle a couple of changes together, so we kind of have rounds. Let's say we have three testing windows during 24 hours, and you can join a testing window with your merge request, and so we bundle things together. But then the problem is that you don't know what is affecting what, right? In an ideal world, we would like to be able to just give someone a box, or a pod, or a part of a deployment, with its own metrics.
F
That's the perfect idea, and this goes into the second problem, which is: if we can safely identify changes that are, I would say, patchable on top of an existing image, then that's easier, because at that point we can build this very quickly. We can just pick the production image, apply the patch on top of it, and re-tag, and so we have something which is ready but changed. The remaining challenges are with dependency upgrades and database migrations. So we need...
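The pick-the-production-image, patch, and re-tag idea could look something like this minimal Dockerfile sketch. The registry path, image tag, patch file, and application directory here are all hypothetical, chosen only to illustrate the shape of the build:

```dockerfile
# Start from the exact image currently serving production traffic
# (illustrative name; assumes a "production-current" style tag exists).
FROM registry.example.com/gitlab/webservice:production-current

# Layer a single merge request on top as a source patch
# (assumes the patch tool is available in the base image).
COPY mr.patch /tmp/mr.patch
RUN cd /srv/gitlab && patch -p1 < /tmp/mr.patch
```

Because only a thin layer is added, the build avoids the full pipeline and the result can be tagged and pushed for a slice of traffic; as noted above, this only works for changes with no dependency upgrades or database migrations.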
F
We really need to think about what types of changes we can test in this type of environment. I think that, for that reason, all the, say, algorithmic optimizations, or things like that, should be good candidates for this type of change, as well as UI changes. So those types of changes should be okay, and probably even easy to build, package and test out.
A
Interesting, yeah. And I wonder, I mean, sometimes we hear about changes that are not easily controlled by feature flags. I guess this could be an alternative way to actually try and safely put those things into production, and have a way to roll them back out as well.
F
How does this behave on the same, not the same traffic, but the same... because if traffic is routed to that image, it's not routed to the main one, but at the same time of day, when we have this type of traffic, how do the two versions compare? Is it using more memory? Is it faster? I mean, this is something that we can't do today. So I think it's good; it's a great improvement.
B
So I have a follow-up question. Alessio, you used the terminology "package" quite often when you were describing potential solutions. Normally, for our current auto-deploy mechanism, we build everything and deploy all of it at the same, well, at the same time, in quotes, I should say. Are we talking about potentially wanting to segregate pieces of our application out into smaller packages? Like, the container registry would be its own item to be deployed, for example, which would remove it from some sort of alignment, right?
F
That's a great question. I think, and I may be wrong on this, but there is a product decision around having a single package that just installs and gives you everything, which is what generated the situation that is, say, painful for us when we want to deploy gitlab.com. But on that specific aspect there are improvements we can do on our own side. I'm thinking very simply: right now we always wait for the omnibus packages, which take longer to build compared to the CNG images, because we have the deploy box, which is running migrations.
F
That
is
a
vm,
but
there's
no
reason
that
why
we
can't
run
migrations
on
kubernetes
as
a
kubernetes
job
instead
of
on
a
vm,
it's
also
even
safer,
because
we
we
potentially
we
could
run
more
of
them
and
the
database
itself
will
lock
the
concurrent
migrations,
but
the
the
migration
that
gets
first
on
the
on
the
database
is
safer
because
you
can't
change
the
content
of
the
migrations
on
a
pod.
It's
an
atomic
unit.
It's
it's
ben!
It's
running
with
what
is
bundled.
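Running migrations as a Kubernetes Job, as described, might be sketched like this. The image name and command are illustrative assumptions, not the actual chart or tooling:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: post-deploy-migrations
spec:
  backoffLimit: 0            # do not retry a failed migration automatically
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrations
          # The pod runs exactly what is bundled in the image: an atomic unit.
          image: registry.example.com/gitlab/toolbox:some-version
          command: ["bundle", "exec", "rake", "db:migrate"]
```

A Job like this replaces the deploy-box VM step: the migration code cannot drift from the image it ships in, and the database's own locking serializes any concurrent runs.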
A
Awesome, great questions, excellent chat. We'll come to you so you don't run out of time: please, please donate your question.
D
Sure, a random question that came up as part of an MR I was looking at yesterday. At runtime, do we expect every type of deployment, omnibus or Docker or whatever, to have an installed and fully expanded node_modules that matches that version of the deployment? Or do we not necessarily expect that to exist at runtime in some deployments, or any?
F
There's the assumption as well of what will replace content on disk for you. I can briefly tell you about an incident that we had with Gitaly that was based on this assumption: that we have on disk what we think we have on disk. It was around 14.0.
F
So
very
briefly,
gisely
has
a
binary
helper
and
basically
it
forks
and
spawn
a
spawn
a
new
helper
and
they
communicate
over
binary
encoded
messages.
Okay,
so
what
happens
here
is
that
they
broke
the
api
for
14.0
as
a
major
release.
We
can
break
things
type
of
situation,
but
when
you
run
omnibus
omnibus,
replace
binaries
on
disk,
but
before
you
run
reconfigure,
the
you're
still
running
the
old
italy
versions
in
memory.
So
what
happened
was
that
the
binary
running
version
13?
A
Now, one thing I do want to say. Thanks so much for the great conversation today; thanks go to Chad for your questions, and to everyone else for sharing thoughts and opinions on this. Alessio, you mentioned optimizing post-deploy migrations by running them on Kubernetes. It doesn't have to be this week, but at some point could we get an issue created for that? Because I think that would be a really, really good thing for us to have planned in.