From YouTube: GitLab 12.10 Kick Off - Enablement:Memory
Description
12.10 Kickoff for the Memory group
…the costs to what we charge on these different types of groups, and they'll give us a little more flexibility in how we manage our public, private, and free services as it relates to CI minutes on GitLab.com. So that's a more short-term project we're working on, and we also have some larger themes that we're working on across the group as well.
The first is that we are continuing to make progress on Puma. Puma will be available as a generally available service in 12.9, and we are planning to continue to improve and iterate on Puma in 12.10, in preparation for it being opt-out in 13.0 in May. There are a few things we're working on here: things like documentation and a blog post to get the word out about our experience on GitLab.com. Spoiler alert on the blog post: it's been very helpful.
We were able to reduce our memory consumption by about 30 to 40% without having any impact on our latency, so we are very excited about Puma in general. We're also doing a couple of things around checks and education here before we enable it by default for everyone in that 13.0 time frame. So that's what we've got going on for Puma: again, just driving it to completion so that more people can benefit from these improvements.
We're also working to improve GitLab's own development practices, so we can help catch some less-than-desirable performance characteristics during the development phase itself, before they ever reach an environment like staging, for example, or GitLab.com, let alone our self-managed customers. We have three of these we're working through, so I'll just quickly walk through them here.
The first is that we want to help improve some best practices as they relate to memory consumption, so that we aren't attempting to read the entirety of a single file into memory, but are instead focusing on reading line by line, which gives us more constant memory usage rather than a single big bulk load. We're also looking to improve the guidance and really nudge developers to use more efficient iterators, as you can see down below in some performance metrics.
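The file-reading guidance can be sketched in Ruby like this; this is an illustrative example of the pattern, not GitLab's actual code:

```ruby
require "tempfile"

# Build a small file to demonstrate with.
file = Tempfile.new("example")
file.write((1..100).map { |i| "line #{i}" }.join("\n"))
file.rewind

# Bulk load: the entire file becomes one string, so memory grows
# with file size.
whole = File.read(file.path)

# Line by line: File.foreach yields one line at a time, so memory
# usage stays roughly constant regardless of file size.
count = 0
File.foreach(file.path) { |_line| count += 1 }

puts count  # => 100
```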
This actually really does matter, and there are small changes you can make that have some pretty significant performance differences. For example, each_with_index is actually quite slow, 30% slower than simple index-based array access. So we are working to introduce these checks, and with RuboCop we can help nudge our developers into better choices as far as performance goes, again with some pretty simple changes here.
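As an illustrative sketch of that kind of change (exact timings vary by Ruby version and workload):

```ruby
arr = (1..10).to_a

# each_with_index yields two values per element, which adds
# allocation and block-dispatch overhead.
sum_slow = 0
arr.each_with_index { |value, index| sum_slow += value * index }

# A plain while loop with direct index-based access avoids that
# overhead while computing the same thing.
sum_fast = 0
i = 0
while i < arr.length
  sum_fast += arr[i] * i
  i += 1
end

puts sum_slow == sum_fast  # => true: same result, different cost
```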
Similarly, and this is a bit of a stretch goal, in the same model we can catch some sorts of use cases that can introduce performance regressions, if you will, or in some cases less-than-desirable characteristics. So that's what we have at the moment on development practices. We're also continuing to finish up our work on import; we've done a lot of work on import in the last few months.
This has been a key theme for us, and we are handing it off to the Import team to take over from here. We're just doing a couple more things before it is fully transitioned to that team, and they will take it from there. You can see a little bit of this work around fixing some return codes and also some streaming serializers as well.
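The streaming-serializer idea can be sketched as follows; this is a simplified illustration of the approach (one record per line, NDJSON-style), not the actual import code:

```ruby
require "json"
require "stringio"

records = (1..3).map { |i| { "id" => i, "title" => "issue #{i}" } }

# Bulk serialization builds the whole document in memory at once,
# so memory grows with the size of the collection.
bulk = JSON.generate(records)

# Streaming serialization emits one record per line, so memory stays
# proportional to a single record rather than the whole collection.
out = StringIO.new
records.each { |record| out.puts(JSON.generate(record)) }

puts out.string.lines.count  # => 3
```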
Finally, as far as what we're trying to achieve broadly, we also want to make plan limits available for self-managed installations. What plan limits do is enable our self-managed customers to take advantage of the application limits that we've been introducing into the platform. Application limits are a way to limit certain types of user behavior so that we can prevent unintentional or intentional abuse which could destabilize the service.
Users can generate use cases where they might not really intend to impact the service, but it ends up happening, and so we've been going through and adding limits in these places within the codebase. Right now this is only available on GitLab.com, but with this change we can make these plan limits available on the default plan, which is available for all of our customers, and available in Core as well. This will help allow our users to take advantage of those stability improvements and leverage them in their own installations as well. These limits are also tunable.
So what we'll do is ship the values that we use on GitLab.com, which have worked well for our user base there, and make them the defaults. But if for some reason you have a use case where you need to have 50,000 webhooks and you have scaled out your hardware accordingly, then great: you can go ahead and tune these limits to your own needs, raising or lowering them as you see fit.
We also want a tally of people who might be staying on Unicorn rather than moving over to Puma, as we work on defining a date for when Unicorn should be removed completely from the codebase. Along the same lines, we're looking at what types of endpoints are being used for object storage. This is important for us because we have two different types of object storage upload support within GitLab: direct and background. Direct works without any type of shared file system; background requires a shared file system. However, direct has fewer supported backends than the background version does, and we'd like to switch over completely to direct, because we're trying to get rid of the requirement for shared storage. So we would like to understand how many users are actually using backends that aren't supported today, and also, if there are some, which ones are most popular, so we can try to implement them.
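For context, the direct-versus-background distinction shows up as configuration flags in an Omnibus installation. This fragment is illustrative; the exact option names may differ by version, so check the current docs before using them:

```ruby
# /etc/gitlab/gitlab.rb -- illustrative fragment, option names are
# assumptions based on the Omnibus pattern of that era.
gitlab_rails['artifacts_object_store_enabled'] = true
# Direct upload: goes straight to object storage, no shared filesystem.
gitlab_rails['artifacts_object_store_direct_upload'] = true
# Background upload: written locally first, requires a shared filesystem.
# gitlab_rails['artifacts_object_store_background_upload'] = true
```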
So those are the work items we have going on in the Memory team.
There are some really exciting features here, across improving the development practices for GitLab, driving the Puma and import projects to completion, and allowing us to take on new aspects: hopefully moving more proactively on these changes and working to address additional improvements and optimizations to GitLab itself. So thank you very much; we can't wait for 12.10 to arrive for all of you, and stay tuned for our next release after this, which should be 13.0.