From YouTube: 2021-08-11 AMA about GitLab releases
A: Okay, fantastic, let's start — welcome, everyone. This is the August delivery team AMA, so we're here to answer any questions people have around how things get deployed to GitLab, how we do monthly releases, or anything else we're working on at the moment. So, Victor, thank you for having the first question — please go ahead.
B: Yeah, I have the first ten questions or so, actually. You might be aware that in Product there is a totally new category and product focus around making GitLab powerful in deployments, delivery, and environments.
C: Yeah, I'm not fast enough at typing, so I just wanted to paste an example pipeline, which I've pasted in the document here. It shows how, for instance, for Kubernetes we deploy across different availability zones to speed things up.
C: We just do these two at the same time. Let me show you: this is an example of a deployment to staging. As you can see, we start by building the chart and then go through the dry run, so we can see what Kubernetes — or rather what Helm — wants to apply to Kubernetes, and then we go on to the non-prod stage here.
C: The dry run doesn't apply anything, but exactly that happens in the next stage here, to non-prod — in this case to two availability zones at the same time.
B: And if it's successful, it goes over to the next one?
C: To review it — in this case, in the automatic auto-deploy pipelines, it just runs, and if it doesn't fail it automatically gets promoted; nobody is comparing things there. But for other jobs, where we commit a change, we have a stage where we just run the dry run and skip all the rest of the steps, so we can see what it would like to do. Then, when we merge, it creates a pipeline like this.
C: That would apply this. So up until the dry run, we see it; then we push a change, right? This is the pipeline, which runs up to the dry run, with all the diffs inside. We merge it, and then the rest of the pipeline runs with the rest of the changes.
C: Okay, so we go through staging, then staging QA, then to canary, which runs in just a single zone plus the regional cluster — let's say because canary doesn't get enough traffic to make it worth splitting across different clusters. If that is successful, we go over to production, but there we split it into two phases: alpha and beta.
C: We ended up with that after a while. Previously we really went through the different availability zones — b, c, and d — and also the regional cluster, but doing them serially took too much time. We wanted to speed up our deployments, so in phase alpha we deploy to the regional cluster and one zone, and in phase beta to the other two zones — just to speed things up and split it a little.
C: It's just what we ended up with for production, because it was the right compromise between deployment speed and safety across different regions and availability zones.
B: Okay, one question that came up when we put this set of questions together: we know that GitLab.com is deployed, I think, twice a day now, or something along those lines. Is it a scheduled job that takes...?
C: It's even more complicated. So we have auto-deployments — that's what deploys new versions of GitLab everywhere — and one part of that is Kubernetes, which we can see here. Another part is still the VM fleet, which happens in a different pipeline. But if you want to make configuration changes, which you can see here in this pipeline, it's the same kind of pipeline — except it's started when you merge something, whereas auto-deployments...
C: The nice thing is we could automate this further, because we have alerting, and we have things like production checks that verify whether an incident is running, and so on. But we still want to do this manually at this point.
B: Yeah, slowly the puzzle pieces come together, because Amy spoke about this maybe two weeks ago. So basically, if I understand correctly, GitLab.com is deployed...
B: But when you have a configuration change in your Kubernetes resources, for example, then specifically this pipeline is run — yes? That's what we're seeing now? Okay — what do you consider a configuration change?
C: Well, this would be something like — I don't know...
C: Yeah, so let's see — for instance, this was a change where we added some ports for the GitLab web servers. So this is something that we need to do in our k8s-workloads repository, where we maintain...
B: Yeah, this is the best pipeline you could show me, because I know this issue, I know this merge request — the whole team knows it. Actually, it allows access to those ports on staging and preprod, yeah.
C: So on the ops side, where you probably don't have access, we make this change, and then Helmfile creates — or generates — all the Kubernetes JSON files after a while and commits them, and then the pipeline runs, making the changes and applying them through the different environments.
C: So that's the big difference. The auto-deploy pipelines are triggered by a scheduled job — I think they're triggered four times a day right now, or even five. Sometimes we don't promote canary to production; at APAC times we didn't used to have as much release-management coverage, but I think we have some coverage there now, so we should normally be able to deploy five times a day, I think.
A: So, no — well, monthly releases are a very different process from auto-deploy. With the way the tooling generates the package names and the branches, that wouldn't happen. It's theoretically possible that a monthly release could be tagged wrong, but we have quite a clear process we go through for this step.
A: So theoretically it's possible, but it's one of the many reasons we have a defined process, trained release managers, and things like that — to avoid exactly those situations.
B: Could you walk me through that as well — how a monthly release is made, how it reaches production? Can...
A: Hang on — can I just tie this back to the agenda a little, so people can follow? Just to summarize on the config changes — and correct me if I'm wrong here — generally, config is anything that changes the setup of our clusters, or our infrastructure more broadly, but in this case our clusters. The releases are then applied on top of that — putting out the things that sit on top — so it's a higher level.
A: So do we already have this one? And so — sorry, I mean, let's get back to this.
B: For a moment — one question I'd have here: the base configuration file of GitLab — I think it's config.yml, or gitlab.rb, I don't know; I've never configured GitLab myself. Would you consider that part of the release, or part of your configuration?
A: So if it's not something that would go out to users in self-managed, then it's configuration. If it's something that would get packaged up and released under, you know, 15.0 or whatever for self-managed, then it's part of our releases. If it's something we're changing and configuring ourselves, then no — it's configuration.
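[Editor's note] The rule of thumb above reduces to a single predicate; this sketch just restates it, with the labels chosen for illustration.

```python
def change_kind(ships_to_self_managed: bool) -> str:
    """Rule of thumb from the answer above: anything packaged up for
    self-managed users (e.g. under a 15.0-style version) is part of the
    release; anything configured only for GitLab.com is configuration."""
    return "release" if ships_to_self_managed else "configuration"
```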
C: Well, let's put it another way: we have all the configuration values stored in Chef, in a repository, or stored in our k8s-workloads repository. Only if some defaults change in a new release — defaults we don't have values for in our configuration — would that be applied during the release.
B: Okay, yeah — I'm wondering whether the major release — the monthly one — is actually interesting to us or not, because I'm not sure.
A: Okay, just to summarize for other people who might be here: within our regular release cycle, every day we do auto-deploys; they go to GitLab.com, and we release changes as they come in. Then, once a month, we have a monthly release: everything that has been deployed to GitLab.com up until the release...
A: ...prep date gets bundled together and prepared into the monthly release, which then gets packaged and released for self-managed users. So they're linked, but they're slightly different release processes on different cadences. We've talked a little about auto-deploy: in auto-deploy, an MR package comes in, hits staging, hits canary, hits production. When we're doing the monthly releases...
A: What we do is, at a certain point, everything that has already been deployed to production gets tagged into a release. That gets deployed to our pre environment for testing; if successful, it goes to our release environment for testing, and then goes through to production. The actually tricky part of prepping the monthly release is cutting the auto-deploy pipeline — saying that this package, today, is the stable one for the monthly release — and, you know, we know we're going to pick up some bugs.
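[Editor's note] The "cut" described above — picking the package already on production as the monthly release candidate — can be sketched like this. The record shape and field names are assumptions for illustration, not the real tooling's data model.

```python
def pick_release_candidate(deployments: list) -> str:
    """Sketch of cutting the monthly release: choose the auto-deploy
    package most recently deployed to production at prep time."""
    prod = [d for d in deployments if d["env"] == "production"]
    return max(prod, key=lambda d: d["deployed_at"])["package"]
```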
E: Projects do not know about each other by default, and we do have a little bit of documentation about this. What we end up doing is: after the necessary clusters and infrastructure are set up via Terraform, and we've got the necessary secrets that enable our CI systems to talk to the clusters, we bring those authentication artifacts over into our CI, where they're managed as variables. Then, inside our k8s-workloads repository, for example, we have a configuration that we generate by hand that tells us, hey...
E: Each of them has different connection restrictions, so we have to make sure we're going through the necessary host, or that the job has the necessary access to that host. In this case, there's a dedicated runner that has the ability to talk to these clusters.
B: And here — when you bring over the auth tokens, do you do this automatically, for instance with Terraform setting up the environment variable in the other project, or do you do it...
E: Manually — that's all manual. We create the authentication tokens in Terraform as a service account, pull down the JSON file that Google provides for us, and put that into an environment variable manually — inside each of our pipelines, or rather inside the ops instance. And we do that for every project that needs to deploy to Kubernetes, because we have a few of them that do, so there are at least three repositories that have the same information repeated.
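[Editor's note] From the job's point of view, using such a hand-pasted credential might look like the sketch below; the variable name is hypothetical, and this only illustrates the "service-account JSON in a CI variable" pattern described above.

```python
import json
import os

def load_service_account(var_name: str = "SERVICE_ACCOUNT_JSON") -> dict:
    """Sketch: a CI job reads the service-account JSON that was pasted into
    a CI/CD variable by hand, as described above, and parses it so the job
    can authenticate against the cluster's cloud project."""
    raw = os.environ.get(var_name)
    if raw is None:
        raise RuntimeError(f"CI variable {var_name} is not set for this project")
    return json.loads(raw)
```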
E: So it's not only the k8s-workloads repo — we've got a few others, at least two: one where we're experimenting with the Tanka project from Grafana, and another project dedicated to bolstering, or providing other mechanisms for, monitoring our clusters.
B: Cool, okay, thank you very much. I'll go to my next question: what services actually know about our deployments? In the announcements in Slack, I've seen that Sentry has a link for all the announcements, so from this I guess that Sentry is set up with all the releases and knows when we release something. Are there other third-party tools that are aware of when we release something to staging and prod?
E: There are a few. When we do a deploy, we send an annotation to Grafana, so if you look at our dashboards you'll see a line created when the deployment starts and another line when it ends, and Grafana shades the section in between.
E: We also track our releases — I forget the name of the project; Robert might be able to tell me quickly. We track them so that our tooling can understand where we are and notify the appropriate merge requests, adding a label that says, hey, this merge request is on this environment. And then there's also Elasticsearch: we have an event log that says, hey, we've deployed to the API fleet, or we've completed deploying to the Git fleet, et cetera — that kind of thing.
B: Okay, yeah, that's enough — at least we have a rough understanding. I'll skip the next question; you've spoken about those parts and it's not that interesting to go into detail there. We had a bit of trouble phrasing the following one, but it's something like: do we treat configuration as an artifact?
E: So I don't know if what I'm about to convey actually answers your question, but I don't think so. We store our desired state inside our Git repositories — the stuff that's not secrets, rather. So, from a technical standpoint, we are version-controlling what we want in our clusters, to an extent. We don't store the final end result of what those manifests may look like after the fact, when the configuration is generated.
B: I know a little about how our secrets are managed, but I'd like to make sure we capture the information here: when are our secrets injected into the deployment?
E: We have a dedicated Helm chart that's part of the gitlab-com k8s-workloads repository, and it queries the few places that are necessary for pulling our secrets. Most of our secrets are stored in GKMS or inside our Chef vault, so depending on which secret object we need, that Helm chart reaches out and queries for the information using whatever API is available to it.
B: Okay — as I was typing, I probably missed parts of it. So, to recap: the secrets are stored in GKMS, we use gcloud commands in CI to read them, and these secrets are mostly used for Helm to access the cluster. Is that what you said, or something else?
E: We try to avoid writing the secret out in any way, shape, or form. We make sure CI is not going to accidentally print it to the screen — that kind of thing — just so we don't leak the secret anywhere.
A: Okay, we're at time. Are there any final questions people want to throw out there? We can also continue async if people want to add more. Thanks for all the answers — great — thanks for the questions, and Mark, welcome. #g_delivery is our Slack channel, so if you have questions, please feel free to come and chat with us. Otherwise, thanks very much, everyone — I hope you have a good rest of your Wednesday. Thanks, everyone. Thank you.