From YouTube: 2023-05-10 AMA about GitLab releases
B
So the purpose of this is for you all to ask us any questions you may have about deploying to gitlab.com or releasing packages to our self-managed users, and we're happy to talk through things we're working on, or maybe planning on working on, or also just the day-to-day. I know we've got a major milestone coming up, and we've been making changes to the patch release process, discussing the maintenance policy, as well as our other projects.
C
Sure, I'll ask about the backport releases, since that's the most top-of-mind right now. I'm curious about the experience Delivery has had with the backport changes. Has it been beneficial? Has it been, you know, kind of a pain? Has there been a lot of confusion around it?
B
Great question, thanks for asking us, Don. So this has been interesting. I'll tell you a little bit about the story of the project and where we're at now, because I think that's also relevant to lots of ongoing conversations. And then, if Myra wants to jump in and share some of the stuff about the internal pilot, I'll let her; otherwise I'll also talk about that a little bit. But the maintenance policy discussion is what opened this up.
B
It was quite a while ago now, maybe a year or a year and a half ago, and we recognized that we were getting increasing numbers of backport requests to get bug fixes into some of the older versions, particularly the versions that we were perhaps still putting out security fixes for without the bug fixes.
B
So Delivery was allocated some additional headcount in order to automate and support the workload that came with that, and we started doing the work to automate and improve the patch release process to support things. Unfortunately, things changed and the headcount didn't land for us, but we managed to make some great process changes, which is what we now see with the patch release process changes. This is really exciting for us, because what we've done is taken away
B
the sort of check where Delivery people or release managers have to actually merge in bug fixes, and just handed that back to developers. So it's sort of our first step towards self-managed, sorry, self-serve, deployment and release tasks. So that's super exciting for us. Where we are right now is that we are going ahead with adopting that as the official patch release process, but under the existing maintenance policy rather than an extension of it.
B
So we're not currently planning to extend the maintenance policy, because of the additional workload that may bring, which we can't currently support. For right now we will continue backporting, so continue to support bug fixes into the current versions only, but using this new patch release process, and then we'll review that as we go along. More interesting, though, is the stuff that Myra's doing with the internal pilot. Myra, do you want to talk a little bit about it, and maybe I can dig out some links?
D
For sure. So it started in 15.10, in which we were piloting this new process for patch releases. Before that, we used the "Pick into <version>" label, and that label allowed us to include those merges in the backports. With the new process, we basically allow maintainers to merge into the stable branches, and then release managers at some point review
D
what is pending for that version and then prepare a patch release. So we basically delegate a bit of the responsibility to developers, so they can choose what they want included. And when I say "they", I mean developers and product managers and the stage teams: they can choose what is going to be included in a package, and then the responsibility of creating the backports is left for release managers.
D
This has reduced a fair amount of the time it takes release managers to build patch releases. For example, in 15.10 I think we did five or six, which is way more than the usual two that we used to do with the old process. So that is better, and we are on our way to implementing this new process as the official patch release process. There is an epic for that which I plan to link, and I can also link the results of the internal pilot.
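As an illustration of the developer-side step described above, here is a minimal sketch using the python-gitlab library, assuming a gitlab.com token and the gitlab-org/gitlab project; the branch names, title, and token are assumptions for illustration, not part of the pilot's tooling.

```python
# Hypothetical sketch: a developer opens a backport merge request directly
# against a version's stable branch; release managers later review what has
# landed there and tag the patch release.
import gitlab

# Assumed instance, token, and project, for illustration only.
gl = gitlab.Gitlab("https://gitlab.com", private_token="YOUR_TOKEN")
project = gl.projects.get("gitlab-org/gitlab")

mr = project.mergerequests.create({
    "source_branch": "my-fix-for-15-10",   # hypothetical branch with the fix
    "target_branch": "15-10-stable-ee",    # the 15.10 stable branch
    "title": "Backport bug fix into 15.10",
})
print(f"Opened backport MR: {mr.web_url}")
```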
C
Has this been a positive thing that you've been experiencing?
B
Yeah, I think so. I think what we're seeing is that getting fixes into a patch release is much smoother. We've kind of separated out the responsibilities in a nicer way, so that release managers can just focus on tagging and publishing, and it's the stage group who decide, you know, which fixes go in and how. So I think that's working really well for us.
B
Overall it's been really positive, and from the links that Myra dropped in as well: so far I think there are a few neutral people, but hopefully that's a good neutral, and generally it seems to have worked out for people too. But we're still seeking feedback, so if people disagree with that, please let us know.
D
Overall, the experience for developers has also been positive, because we are asking them to create a merge request against a stable branch, and creating merge requests is basically an everyday task for GitLab developers.
D
We ran a survey a couple of weeks ago, and the sentiment across that survey was that this experience was a positive one for developers. So yeah, that was also a good data point.
E
Yeah, and when you're opening those merge requests, the pipelines run against those stable branches now for 15.10, which means you can really shorten the cycle time of "does my fix work, do I think it's going to be reliable for this build?" You know: yes, I can do all of that self-serve, without having to get, you know, release managers and Software Engineers in Test to run those pipelines
E
on my behalf. So it should enable much quicker feedback loops on the patches that engineers are putting in when creating those merge requests.
B
One group I would mention who are not on this call, so I should just represent them, is Quality. Quality are the people who so far have lost out a little bit from our change here: we're using an interim process for the testing, and at the moment that does add some workload for Quality. So we have a plan for that; we do need to rectify it so that it's all automated testing and we don't have to keep pinging SETs to do additional tests for us.
B
Great. Greg, do you want to verbalize your question?
F
Sure, yeah. So I'm wondering if there is a way to even roughly determine how many feature flags were added but never enabled anywhere, be it gitlab.com or self-managed. For context: the application security team is discussing bounty awards for vulnerabilities where enabling a default-disabled feature flag is required to exploit anything, and it came up that it may be the case
F
that we have feature flags that were developed but never actually enabled, so I'm trying to get an idea of how often that occurs and what percentage of feature flags might never get enabled.
C
So there are two sources of information, right? Feature flags are defined in YAML files in the GitLab repo, under config/feature_flags/development or something like that, so we know the list of all the feature flags that are available. Then we have to get the feature flags that are actually enabled in production, and that can be done either through ChatOps or, I think, there's an API as well that I'm about to link. And then I think we just compare those lists.
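As a rough sketch of that comparison, assuming you run it from a checkout of the gitlab repo and have an admin token for the instance (the path, URL, and token are assumptions): flags defined on disk but absent from the /features API response have no stored state at all.

```python
# Hypothetical sketch: compare feature flags defined in the repo's YAML
# files against the flags persisted on an instance via the features API.
from pathlib import Path

import requests

# One YAML file per flag in the codebase (path is an assumption).
defined = {p.stem for p in Path("config/feature_flags").glob("**/*.yml")}

# Flags that have ever been enabled or disabled on this instance; the
# /features endpoint requires an admin token.
resp = requests.get(
    "https://gitlab.example.com/api/v4/features",
    headers={"PRIVATE-TOKEN": "ADMIN_TOKEN"},
)
resp.raise_for_status()
persisted = {feature["name"] for feature in resp.json()}

never_toggled = defined - persisted
print(f"{len(never_toggled)} of {len(defined)} defined flags were never toggled")
```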
A
This doesn't answer the question of knowing whether something was never enabled in its history, though. Or maybe it does, but I'm not sure, because if I remember right, the state of a feature flag is stored in the database. So if we actually want to know
A
whether something was never enabled: I mean, you can delete a feature flag, but we more often either re-enable or disable it; it's really uncommon for us to delete it until we actually remove the feature flag definition itself. So maybe the opposite perspective is more interesting: look at what's in the database. Because if something is in the YAML file and it has no entry in the database, it means that no one ever either enabled or disabled it.
A
If you flip the value, it should set it to false, but I'm not sure about this; I do remember there was an option to disable it and an option to remove it. It's been a long time, so maybe that's no longer the case.
C
We have an issue tracker that tracks all the feature flag changes. So if you really want to go through the history of all this, whether a flag was ever flipped on, you could go parse that. But I think it's more interesting to know which ones are not even turned on right now; that's the first thing you probably want to know.
F
Yeah, I did look at the YAML and I did find a docs page, and it looks like right now, based on what's in the YAML, 75 are disabled. So we're really trying to gauge this because, essentially, it might not be a good idea to pay top dollar for bounties if exploiting anything requires somebody to enable a feature flag and there's any chance that that feature flag will never get enabled.
F
Yeah, exactly. And so right now there's basically a debate about it: we found a bug in the Debian package registry, we're basing the bounty award on the CVSS score, and it's basically a twenty-thousand-dollar difference in the bounty award based on that. That's the debate right now.
G
Thanks, yeah, sure. Actually, I was trying to understand what happens after we kickstart a deployment: where it's getting deployed, and how we scale on the DevOps side. I was working on a feature and trying to see the logs, where we are also getting the Pod information, so I kind of landed in this doc where we captured how we migrated from VMs to GKE, which is where Pods got involved. So my question is, and I'm not sure really whether it applies to this call or the Delivery team, but I'm just asking here since I'm sure that you can point me in the right direction where I can understand this better. That's the question, basically.
B
Oh, thanks for asking; that's very relevant to what we do. Does anyone want to have a stab at giving an overview, or finding some resources that give some context?
E
It might be an interesting opportunity for me to just ask a question and do some validation. I think one of the things we might be missing is a walkthrough of what happens to my change once it's merged to master. Would something like that, as a handbook page or a doc, be useful from your point of view to understand what happens to a change once it's merged to master? You know, a bit of a step-by-step, simplified-workflow kind of walkthrough or explainer.
G
Yeah, I think in that context the link which Alessio just posted does have the steps involved in the general delivery, so I kind of have an image of what happens to my changes once I merge to the master branch.
G
But what I'm wondering here is how, let's say at gitlab.com-level scale, where we are getting one million-plus logins, I just wanted to understand where it's coming from, from which machine. Maybe, yeah, I can understand that the question is not properly formed, because I don't have it properly formed in my head either, but I just want to see how we handle the scaling as well: what happens at that scale itself, or do we have any configuration for that? Or are there any resources that describe how we handle the actual machines where our gitlab.com is hosted or being deployed? Something like that. Okay.
A
For the stateless services, we completed the Kubernetes migration. I don't know the status of what was described in the first link that you found, but in general the idea is this: we have several clusters. When we deploy, well, first of all, we have two stages, so for each environment we have a canary stage and the main stage.
A
So let's talk about the main stage, which is probably the thing that you care the most about. The main stage is based on several clusters, zonal and multi-regional ones, I don't remember exactly, and basically we deploy the Helm charts cluster by cluster, okay? And for scaling, in each cluster we are using HPA (horizontal Pod autoscaling).
A
If you want to dig more into how these things are configured, I think you will be able to find it in the k8s-workloads repo; not sure, but I think that's the right one. Anyway, the configuration is in the repo itself. So maybe ask in, what's the name of the channel, infra lounge? If it wasn't renamed recently... yeah, infrastructure lounge. Maybe someone who actually worked on that piece can point you to where the configuration is, and then you can read it and ask further questions.
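For a concrete way to poke at the scaling side, here is a minimal sketch using the official kubernetes Python client to list the HPA settings in a cluster you have access to; the "gitlab" namespace is an assumption, and the real gitlab.com configuration lives in the repo mentioned above.

```python
# Hypothetical sketch: list the horizontal Pod autoscalers in a namespace
# to see the replica bounds and CPU target each workload scales against.
from kubernetes import client, config

config.load_kube_config()  # uses your current kubectl context
autoscaling = client.AutoscalingV1Api()

# The "gitlab" namespace is an assumption for illustration.
for hpa in autoscaling.list_namespaced_horizontal_pod_autoscaler("gitlab").items:
    spec, status = hpa.spec, hpa.status
    print(
        f"{hpa.metadata.name}: {status.current_replicas} replicas "
        f"(min {spec.min_replicas}, max {spec.max_replicas}, "
        f"target CPU {spec.target_cpu_utilization_percentage}%)"
    )
```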
B
Awesome, thanks for asking that, and thanks for the answers. And yeah, as Alessio mentioned, we have the infrastructure lounge, which is a sort of infrastructure-department-wide channel. So if you have specifics, or you get stuck on specifics, that's a great place to drop in and ask more; lots of people will be around to help.
B
Awesome. I wanted to ask a little bit about metrics. I know our delivery systems team have been doing amazing work on expanding our delivery metrics and adding loads of dashboards. Would some of you mind giving us a bit of an overview and sharing the most exciting links, so we can all enjoy that?
H
I'm going to take this one. I don't know if everybody here has access to our dashboards, but in general I will post the links here. We actually started to work on this already in Q4, when we started to build the capability to collect metrics from our pipelines, and traces in particular, because in particular we were using the use case of the deployment SLO: in some parts we had this number, our MTTP (mean time to production), where we didn't know where we were spending all of our time, and we evolved in that direction.
H
During Q1 we started to look at exactly how we can drill down into the deployment SLO, and now we can get more information on where we spend time in our pipelines, how much time we take for various jobs and various stages. But not only that: also understanding which are the major inefficiencies that we have in our development pipeline.
H
So we started to collect metrics around how many times we are retrying jobs and which projects those jobs belong to, and these also make it possible to understand how much time we're adding on top of our MTTP because of those failures. Maybe there is a job that is failing three or four times in a row, and then we retry and then it passes, and maybe it's just some flakiness, but those retries are on top of our MTTP.
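As an illustration of counting those retries for a single pipeline, here is a minimal sketch with python-gitlab, assuming a project path and pipeline ID; listing jobs with include_retried returns every run, so duplicate job names indicate retries that add to MTTP.

```python
# Hypothetical sketch: spot retried jobs in one pipeline by listing all job
# runs (including retried ones) and counting duplicate job names.
from collections import Counter

import gitlab

# Assumed instance, token, project, and pipeline ID, for illustration only.
gl = gitlab.Gitlab("https://gitlab.com", private_token="YOUR_TOKEN")
project = gl.projects.get("gitlab-org/gitlab")
pipeline = project.pipelines.get(123456789)

jobs = pipeline.jobs.list(include_retried=True, get_all=True)
runs_per_job = Counter(job.name for job in jobs)
for name, runs in sorted(runs_per_job.items()):
    if runs > 1:  # every extra run is a retry, pure overhead on top of MTTP
        print(f"{name}: retried {runs - 1} time(s)")
```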
H
That adds up to minutes and hours at the end of a week or a day, across each pipeline we are running. So we started to identify these kinds of bottlenecks, and this is going to be the first step, because now we know exactly where to look.
H
We know exactly which are the main components driving us to spend time in different places. And in addition to that, we started to see some trends: okay, sometimes a particular job or a particular project starts to take more and more time. At that point, we can start to address that part and say, okay, maybe we have something to look at.
H
Maybe our MTTP is actually increasing because of this factor, and so on. And on top of that, we are able to drill down into the various components of the deployment SLO: how much time we are spending on QA, how much time we're spending on deploying to canary, or deploying to our main stage, and so on.
H
So we can measure exactly these kinds of things. Now we can start to take action on the problems that we face, but also understand which are the parts that we have the most chance to improve, the ones that are going to give us the bigger benefits. Now, I'm going to look here and link a dashboard, but I'm not sure if everyone has access to it.
H
We are also planning to have a demo on this part soon, as soon as we have some documentation in place. We were also able to link things together in this dashboard, because we tried to build it in a way that lets us drill down from the single pipelines to the single failures that we have, and from there look at the various stages, projects, and jobs, so that release managers are able to understand how each pipeline went and which projects we saw failures on.
B
And related to that, we have also been doing some work to track release-manager workload, which is a set of metrics we haven't previously had, and that sits alongside the existing metrics that we have, MTTP and deployment blockers, which we've been tracking for some time.
B
Awesome, great work, systems team; super to see this stuff, and excited to start using it.
B
Thanks so much, Dan, and thanks everyone for joining us; great to have you all here. Thanks to everyone for your questions. We really appreciate you coming, taking an interest in things, and pushing us to keep improving. And yeah, enjoy the rest of your Wednesday. Take care, bye.