From YouTube: GitLab 14.3 Monthly Release Kickoff (Public Livestream)
A: As a reminder, everything we share here is planned work and still subject to change, so take it in that context. Whatever we share today should be taken with the perspective that we value velocity over predictability.
A: Third, you can follow along with us by going to the release kickoff page at about.gitlab.com/direction/kickoff. There you will see various issues and videos where you can deep dive into what we are working on, contribute to it, or give us comments to make it better. Now, in terms of the order of operations:
A: We will start off today with Ops, which will be led by James Heinbeck, a product leader in the Ops section, followed by Enablement, which will be led by Fabian, another product leader in the Enablement section. After that, David will cover the Dev section, and then Hillary will walk us through the Sec section. With that, take it away, James.
B: First up is an effort from the Pipeline Execution team, who found that a background process that cleans up stuck jobs was often timing out. This was resulting in an increase in pending jobs and longer wait times for users who kick off pipelines on the GitLab Runner cloud. So the team is going to dig into what's causing this timeout, clear out those stuck jobs, and decrease wait times all around.

The Pipeline Authoring team discovered that one of the slower endpoints they were looking at, which was eating up their monthly error budget, was the mirror pull API. Oftentimes when we have slow endpoints, it's really the heaviest users who are most impacted by that slowness. But in this case the team found that the majority of the endpoint's users were feeling that slowdown equally, so they're prioritizing researching and fixing this so that we can positively impact a large number of users of the endpoint.
B: So far, the team has been addressing this by manually purging those old pods, but they really want to get this resolved so that it's not a manual process and users aren't left waiting for pods to be cleared out before their jobs can start. That should ultimately result in less wall clock time for your pipelines.
B: So far, users have been dealing with this by using the API to manually clear their cache, or by writing their own automation to clear the cache, which works okay, but it's not great. So the team is going to make this less manual by implementing a TTL, or time to live, of 90 days for objects in the dependency proxy, so old items can be cleared out automatically.
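As a rough illustration of the cleanup policy described here (a sketch, not GitLab's actual implementation), a 90-day TTL check might look like:

```python
from datetime import datetime, timedelta, timezone

TTL_DAYS = 90  # the 90-day time-to-live described above

def expired(last_accessed_at, now):
    """True when a cached object's last access is older than the TTL."""
    return now - last_accessed_at > timedelta(days=TTL_DAYS)

def purge_stale(objects, now):
    """Keep only objects whose last access falls inside the TTL window."""
    return [o for o in objects if not expired(o["last_accessed_at"], now)]

# Example: one fresh and one stale cache entry.
now = datetime(2021, 9, 22, tzinfo=timezone.utc)
fresh = {"name": "alpine:latest", "last_accessed_at": now - timedelta(days=10)}
stale = {"name": "node:10", "last_accessed_at": now - timedelta(days=120)}
```

A background job would then run `purge_stale` periodically so users never have to clear the cache themselves.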
B: First up, the Pipeline Execution team found that the treatment of the add-to-merge-train button when a previous pipeline had failed just wasn't consistent with the experience in the rest of the GitLab interface. It's important for us and for users that the experience is consistent, and so in this case we're adding a modal to alert you that the action you are about to take is dangerous or could have unexpected results.
B: Next up, the Testing team is addressing an issue in the test summary MR widget for users of dark mode, which is my favorite theme. When a user who has applied that theme clicks into the details of a failed test, they're presented with a modal that looks like this, which has details about the test, the execution time, and the stack trace.
B: So you can easily debug that failing test. But we found that the system output, as you can see here, is just non-existent, so as a workaround users would have to copy-paste or switch their theme, and that's just not a great experience. So the team is going to change this so that the treatment of this code snippet matches everywhere else there's a code snippet in the interface, for users who have dark mode applied.
B: Next up, the Package team found that the existing GraphQL query endpoint for the dependency proxy wasn't doing a great job of describing what was available to users who were using that endpoint for discovery. This can make it hard for a team to quickly adopt the dependency proxy in their day-to-day development use cases.
B: So the team is going to create a GraphQL endpoint for the dependency proxy so users can more easily understand what information is available, like the group settings, which URLs are enabled or disabled, and what stored blobs are out there, along with their digests. This should make it easier for teams to understand and use this as a UI for the dependency proxy.
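As a hypothetical sketch of what querying such an endpoint could look like from a client (the field names are illustrative assumptions, not necessarily GitLab's real schema):

```python
import json
import textwrap

# Illustrative query shape; the real GitLab GraphQL fields may differ.
DEPENDENCY_PROXY_QUERY = textwrap.dedent("""
    query($fullPath: ID!) {
      group(fullPath: $fullPath) {
        dependencyProxySetting { enabled }
        dependencyProxyBlobs { nodes { fileName size } }
      }
    }
""").strip()

def build_request(group_path):
    """Build the JSON body a client would POST to /api/graphql."""
    return json.dumps({
        "query": DEPENDENCY_PROXY_QUERY,
        "variables": {"fullPath": group_path},
    })
```

A client would POST this body to the instance's `/api/graphql` endpoint with an auth token to discover the group's dependency proxy state.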
B: So far, the agent is able to register with a project, or with multiple projects, to enable deployment and monitoring from those projects into the cluster. But if multiple projects in a group want to access that cluster, each of those projects has to be registered, and this setup just isn't easy. So the team is going to enable a way to register at the group level, so that any of the projects within a group can access that cluster for deployments and monitoring.
B: The team also found that the agent listing page, which just lists the agents and lets the user click in to get details, wasn't a great experience for users who are constantly clicking in and out to see the status of agents. So they're going to bring some of those details forward to the listing page: things like the connection status, the Kubernetes version, and more. Bringing those details to the list page will make it easier and quicker to see which agents are associated with the project, whether they're connected or not and when they last were, as well as other details that administrators need at that high level.

And then, wrapping up adoption through usability, is the Monitor team, who is continuing their focus on incident management. Something the team heard from users of incident management is that triaging...
C: Thanks, James. These are very exciting new features and changes in this section, and I'm really looking forward to seeing them. All right, I'll walk you through the work that the Enablement section is doing in 14.3. As a quick reminder, the Enablement section is responsible for the features and tools our customers use to run and operate GitLab at any scale.
C: The first item I'd like to highlight is the work that the Global Search team is doing on the performance of the count controller. This is a bit abstract, but essentially what it means is that when you search in GitLab, we present you a little result badge here, and it sometimes takes up to 15 seconds to actually return. It's costly to operate, and our users have to wait.
C: The second thing I'd like to highlight is ongoing work in the Database team regarding the primary key overflow risk on some of our tables. When GitLab was very small, we assigned integer primary keys to some of our tables, and there are only so many integers available. It turns out that a great thing happened: GitLab continues to grow, and in some specific areas, for example our CI features, that means we are getting to the end of our available integers and need to swap them out for larger ones.
C: I'll show you the impact of the team's work in this little plot here. For example, for CI job artifacts we had used up to 75% of the available integers, and this cliff here is the Database team actually switching them over to big integers, which gives us a lot of scalability headroom to grow further and makes sure these tables remain stable. You can see another one here, which is CI stages, and we are working towards mitigating all of those tables in 14.3.
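To put rough numbers on that headroom: a signed 32-bit key tops out around 2.1 billion rows, while a signed 64-bit (bigint) key raises the ceiling by a factor of roughly four billion. A quick sketch:

```python
# Signed key space for 32-bit (Postgres integer) vs 64-bit (bigint) primary keys.
INT4_MAX = 2**31 - 1
INT8_MAX = 2**63 - 1

def percent_used(current_max_id, key_max):
    """Share of the available primary key space a table has consumed."""
    return 100.0 * current_max_id / key_max

# A table at ~75% of int4, like the CI job artifacts example above:
ids_consumed = int(0.75 * INT4_MAX)
```

After the switch to bigint, the same table sits at a vanishingly small fraction of the new key space, which is the "cliff" visible in the plot.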
C: The Memory team is also doing great work in making sure that our Ruby version stays current. We're currently on 2.7, and there's a major Ruby upgrade, version 3, that has a ton of really interesting features and also a lot of performance benefits. So we're making sure to upgrade so that you, and also GitLab.com, can benefit from running a recent Ruby version.
C: And lastly, a longer-term effort. Something we're experiencing now, specifically on GitLab.com but also on large instances, is that we need to improve the scalability of our database backend to support up to 10 million monthly active users. The first iteration of that is to decompose our database: we're going to take the tables related to a specific feature set, CI (continuous integration), and move them to another database cluster, thereby almost halving the size of the database.
C: This gives us a lot of headroom and scalability so that we can continue to grow and remain stable on GitLab.com. So those were the items for the first theme. The second is adoption through usability, and other groups in Enablement are doing really interesting work in that area. The first one is something that's also a great example of iteration: Global Search, where we're adding sort by popularity.
C: So when you search in GitLab, you can now sort not only by most relevant but also by popularity (upvotes). This is something we've already delivered for issues, and in 14.3 we're going to add the next iteration and do this for merge requests. So rather than waiting to ship everything at the same time, the team has done a great job of going feature by feature and rolling these out in an iterative fashion.
C: The next item has to do with Geo, our solution for disaster recovery and distributed teams. Our large customers are starting to use Gitaly Cluster for highly available Git storage, and we need to make sure that Geo and Gitaly Cluster work together seamlessly. It turns out that the way Geo renames repositories does not work well with Gitaly Cluster and can lead to failures and potential data loss.
C: Lastly, this is a new market, but I'm very excited about this one. We've talked about this before, in 14.2, but we still have a few finishing touches to get to our generally available release of the GitLab Operator, which will allow us to deploy GitLab to OpenShift. One of the outstanding items here is, for example, supporting all of the production-ready components in the GitLab chart, plus a few other items that need to be finished, and we continue to chip away at those to deliver the GA release of the Operator. And that's it from Enablement.
D: Thank you, Fabian, and let's dive into Dev and Sec. I'm David DeSanto, Senior Director of Product Management for the Dev and Sec sections. To start with Dev: Create is primarily focused on GitLab-hosted first, focusing on reliability, infradev, and scalability.
D: So our Source Code and Code Review teams are actively focused on the items being prioritized for the Gitaly team. They are also focused on GitLab-hosted first, though their work here will also benefit large-scale deployments on self-managed instances. They're focusing on improving synchronization between the repository storage and the Praefect database.
D: For the Plan stage, they are also heavily focused on adoption through usability, as well as GitLab-hosted first. The first thing here: we've been talking about work items and creating issue types, and the Project Management team is focused on adding support for those default issue types, which will then enable the Product Planning team to bring epics down to the project level.
D: And then, to wrap up Plan, again with a focus on GitLab-hosted first, they are focused on reliability and scalability, as well as the performance of GitLab.com, and of course that will then benefit the large-scale deployments customers have within a self-managed environment. Over to Manage: Manage is again focused on some adoption through usability, but definitely also on reliability. The first thing is a small improvement, but it has a big impact.
D: This is really key for customers who want to do things like sign commits and verify that the person making a commit is the actual user; being able to see that public key will make a big impact on that. The Compliance team is focused on continued work on propagation of merge request approval settings from the group level down to the project level. We talked about this in the last couple of kickoffs.
D: You can have a lot of audit data if you have a lot of compliance requirements, and that can take up a lot of storage and could potentially slow down your experience within GitLab. This is going to allow you to send audit logs to an external service, where they can be kept for long-term storage purposes.
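As a purely illustrative sketch of shipping an audit event to an external sink (the field names here are assumptions, not GitLab's actual audit event schema):

```python
import json
from datetime import datetime, timezone

def build_audit_event(author, action, target, details=None):
    """Serialize one audit event for shipping to an external log store."""
    return json.dumps({
        "author": author,
        "action": action,
        "target": target,
        "details": details or {},
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    })
```

A streaming setup would POST each such payload to the configured external endpoint instead of (or in addition to) keeping it in the instance database.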
D: Some requirements call for 7 to 10 years of audit events, which will then be able to be offloaded to a place where you can have better long-term storage. The Import team is focused on improving the infradev experience, with a focus on reliability, but another thing they're working on that I'm very excited about is the improvement of GitLab-to-GitLab migration.
D
This
first
part
here
is
being
able
to
generate
the
projects
that
are
needed
as
you
do
a
group
move
and
as
they
continue
to
improve
upon
that,
that'll
obviously
then
begin
to
include
the
actual
project
data.
As
well
and
then
lastly,
for
the
manage
stage,
we
added
a
new
group
called
workspace
since
the
last
kickoff
call.
This
group
is
doing
a
lot
of
work
to
consolidate
the
back
end
of
the
product
so
that
we
can
have
better
performance
and
scalability.
D: This first effort here, as you can see, is getting kicked off with the consolidation of groups and projects in the backend. This will allow the propagation of settings like the compliance settings we just showed; it's also going to enable faster performance of the product, as well as giving parity between the self-managed and GitLab.com experiences. So I'm very excited about this. It's a multi-milestone project, but we're getting it kicked off here with 14.3.
D: And lastly, the Ecosystem stage. Since the last kickoff call, Ecosystem was moved up to a stage. This has to do with the amount of investment we're now putting into our ecosystem capabilities. The Ecosystem stage is primarily focused on SaaS reliability for this milestone; however, there's a little bit of work being done that I wanted to highlight. The first is setting defaults on self-managed to match what we're using for GitLab.com.
D: Users will be able to leverage that design system for objects they're putting into their commits. Long term, it'll also have the benefit of higher performance, as we'll be able to use optimized components throughout the UI. Over to the Sec section: just pretend I'm Hillary as I present this to you, starting off with the Secure stage.
D: The Secure stage is very much focused on reliability in 14.3 and 14.4, throughout this quarter. The first example of this is the Static Analysis team, which is focusing on optimizations of the components that make up their scanners and of how those scanners interact with the GitLab instance itself. For those not familiar: the security scans run outside the default application, as part of the runner, and they interact with GitLab just like a third party through our ecosystem integrations would.
D: So what this looks like in our first MVC: when the scanner we've been talking to you about all year identifies something as a false positive, it's going to add a banner here telling you that it's been determined to be a false positive, and it gives you instructions on how to act on that and dismiss the vulnerability if you agree. As we continue to optimize that capability, it'll automatically dismiss it, as in some of those wireframes we showed you earlier in the year.
D: That one started on the dashboard; the next one... yeah, there you go, okay, great. So now that I'm back: the Static Analysis team is about ready to release the first version of our ability to auto-identify findings as a vulnerability or, sorry, as a false positive. We've been talking about this throughout the year.
D: We've been talking about the ability to have GitLab automatically dismiss a vulnerability. In the MVC, we're adding the banner: as the engine determines that a vulnerability is a false positive, you'll be alerted of that, and then it gives you instructions on how to work through that and dismiss it if you agree. Long term, we will be able to automatically dismiss those as they occur, but as a first MVC this is a really great step forward.
D: If you're beginning to get that value: the DAST team has been working on DAST on-demand scan scheduling for the last couple of milestones. Their focus is now taking it beyond "I want this to run next week on Monday" to being able to say "I want this to run every Monday at the beginning of the week", giving you the ability to have that recurrence.
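The recurrence described above boils down to computing the next matching weekday for each run. A minimal sketch, assuming a simple weekly rule:

```python
from datetime import date, timedelta

def next_weekly_run(today, weekday):
    """Next occurrence of `weekday` (0 = Monday) strictly after `today`."""
    days_ahead = (weekday - today.weekday() - 1) % 7 + 1
    return today + timedelta(days=days_ahead)
```

A scheduler would call this after each scan completes to enqueue the next one, so "every Monday" keeps rolling forward without the user re-creating the scan.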
D: However, for us to scan apps in production, we want to make sure that you are actually the owner of the app and didn't make a typo. So we're adding the ability to include a meta tag that the scanner can look for; when it sees it, it knows that the application is supposed to be scanned and has been approved, and it can run an active scan against it.
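A minimal sketch of that ownership check, assuming a hypothetical tag name (GitLab's actual tag name may differ):

```python
from html.parser import HTMLParser

class MetaTagFinder(HTMLParser):
    """Collects <meta name=... content=...> pairs from an HTML page."""
    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            d = dict(attrs)
            if "name" in d:
                self.meta[d["name"]] = d.get("content", "")

def site_is_verified(html, expected_token, tag_name="scan-site-verification"):
    """True if the page carries the verification meta tag the scanner expects."""
    parser = MetaTagFinder()
    parser.feed(html)
    return parser.meta.get(tag_name) == expected_token

page = '<html><head><meta name="scan-site-verification" content="abc123"></head></html>'
```

The scanner fetches the target's homepage, looks for the tag, and only proceeds to an active scan when the token matches what the project configured.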
D: To finish up Secure, and back to SaaS reliability: the Threat Insights team will be spending the next couple of milestones focused on refactoring our store reports service.
D: This is the component that takes the results from all the scanners and brings them into the UI; they'll be working on improving it and allowing it to scale better. And then finally, and this was probably touched on earlier when talking about the decomposition work as part of the database working group, the Threat Insights team is also working on a bunch of refactoring based on how to optimize for database sharding and database optimization. And then, to finish up with Protect: the Protect team is focused on application security testing leadership and adoption through usability.
D: This is again a nice big step forward for usability. Over the next couple of releases, you'll be able to go beyond the static rules you have today, where you have a set count and it applies only to high and critical. You'll be able to do things such as decide which scanners should have the policy and the approval attached to them, how many vulnerabilities need to be found to trigger it, and, of course, set the severity yourself as well.
D: There is some really great information in these issues, and I highly recommend you check them out. But that's everything from the Dev and Sec sections, by both myself and Hillary, kind of the same person today. With that, back over to you.
A: Thank you, David. I do want to take a couple of minutes and talk a little bit about our focus. As you may have noticed, we are focusing a lot more on performance, reliability, and really security, and the question that typically comes up at this point is: hey, what about features and capabilities? It's important to understand that all of these things provide really strong capabilities for our customers, and it's a false choice to think that features are more important than any of these foundational pieces.
A: I also want to highlight a few of the things the teams are already working on. For example, on performance: improving Git authentication so we can safely and quickly authenticate customers when using source control in our product. It's hard to argue that that's not good value to provide to our customers.
A: Similarly, if you look at Verify: improving performance so that we can scale to 20 million-plus builds a day, because we think we'll get there in 24 months on the GitLab.com platform, and large self-managed installations might have a really large number of builds a day as well. Security is always front and center and top of mind. There are also operational improvements around the use of Gitaly Cluster, which keeps the system highly available; making it easy makes it less error-prone to accidents or mistakes, and keeps your system up and running for all your users.
A: We've also instituted error budgets so that teams have accountability but, more importantly, so they can see, if they have errors or problems in their area of the code, how that impacts the user experience, and they can react to it quickly and immediately, so that we can provide highly available experiences to our customers. And then there are other things like A/B deployment of shared runners, which reduces the risk of deployments, again so that unplanned downtime doesn't show up in our customers' environments, and continuing to improve isolation and reliability.
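The error budgets mentioned above translate into concrete numbers of allowable failure minutes. A minimal sketch, assuming an example 99.95% target over a 28-day window (both numbers are illustrative, not GitLab's published SLOs):

```python
def error_budget_minutes(slo_percent, window_days=28):
    """Minutes of failure a team may 'spend' in a window under an SLO."""
    window_minutes = window_days * 24 * 60
    return window_minutes * (100 - slo_percent) / 100

def budget_remaining(slo_percent, failed_minutes, window_days=28):
    """Fraction of the error budget still unspent (negative when overspent)."""
    budget = error_budget_minutes(slo_percent, window_days)
    return (budget - failed_minutes) / budget
```

A team that has burned half its budget mid-window can see that at a glance and shift effort from features to reliability before the budget runs out.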