From YouTube: GitLab 13.5 Kickoff - Enablement:Memory
Description
Memory kickoff for 13.5.
Planning issue: https://gitlab.com/gitlab-org/memory-team/team-tasks/-/issues/62
Hi, I'm Josh Lambert, group product manager for Enablement, and I'd like to talk about what we're planning to achieve in the 13.5 release of GitLab, which will be available on October 22nd. We have a couple of main items that we're focusing on on the team. The first one is that we are continuing to work on support for dynamic image resizing in the product.
That's significant: we have been making good progress so far. We've been running some experiments on GitLab.com with a small subset of avatars, and it's going pretty well overall: a 96% reduction in image file size, no errors, and pretty fast resizing durations for most of our operations. For example, the 99th percentile original file size was 120 KB, and that's now 4 KB going down to the end user.
This makes a big difference on the front end, by reducing not just the time it takes to download, but also the time it takes to resize on the client side and then do all of the layout work as the image changes and gets loaded in.
So that's a big improvement, and there are two areas that we're working on here. Number one: we're working to minimize the security impact of the resizing process. We've been working on using GraphicsMagick; we're not totally tied to that and might still change it out.
We are also working to investigate more of the caching options as we make good progress on dynamic resizing. We, of course, don't want to resize every single image every single time it's requested, and so we want to be able to cache the resized assets. That avoids re-pulling them from the GitLab servers, which increases bandwidth costs; it also avoids paying the compute cost of resizing every single time, and of course the latency involved in the pull and the resize as well. So, for a lot of reasons.
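Conceptually, the caching described above can be sketched as a lookup keyed by source image and target width, so each combination is resized at most once. This is a minimal illustration under stated assumptions, not GitLab's actual implementation: the `ResizeCache` class, the in-memory store (standing in for object storage or a CDN), and the stubbed resize step are all invented for the example.

```ruby
require "digest"

# Minimal sketch: cache resized assets so each (image, width) pair is
# only resized once. Hypothetical code, not GitLab's implementation.
class ResizeCache
  def initialize(&resizer)
    @resizer = resizer # the expensive operation we want to avoid repeating
    @store = {}        # stand-in for a real cache backend
    @resize_count = 0
  end

  attr_reader :resize_count

  def fetch(image_path, width)
    # Key on the source image and the requested width.
    key = Digest::SHA256.hexdigest("#{image_path}:#{width}")
    @store[key] ||= begin
      @resize_count += 1
      @resizer.call(image_path, width)
    end
  end
end

# The stub "resize" below is where GraphicsMagick (or similar) would run.
cache = ResizeCache.new { |path, width| "resized(#{path}, #{width}px)" }

cache.fetch("avatar.png", 64)
cache.fetch("avatar.png", 64)  # served from cache, no second resize
cache.fetch("avatar.png", 128) # different width, resized again
```

The point of the sketch is the cost model: repeated requests for the same rendition hit the cache, so the compute, bandwidth, and latency costs are paid once per rendition rather than once per request.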
Caching is important, and it is the next major step toward delivering this to a broader set of customers. From there, we're also looking to shift on to some more forward-looking items with a little smaller impact. We made some improvements to a couple of endpoints that had a lot of cached SQL calls; these impact performance, but they also consume a lot of memory, and so we've improved a couple of them, for example on the notes controller.
We're also taking a look here at the merge request controller, if that doesn't make it into 13.4.
But the overall goal here for the cached SQL calls in this release is to resolve one or two more (they're relatively quick), but also to document what we did, to establish that best practice. That way we can enable the rest of GitLab's development teams to do this work themselves, because this has been an ongoing challenge, and there are a lot more endpoints they have to solve; we can't solve this all ourselves on the memory team.
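The kind of fix described for these endpoints can be illustrated with plain memoization: a lookup that used to run on every call (even if the SQL result was cached, each call still allocates a fresh result object) runs once, and later callers reuse the same object. This is a hedged sketch; `NotesPresenter`, `run_query`, and the returned settings hash are invented stand-ins for the real controller code.

```ruby
# Sketch of replacing repeated cached SQL lookups with memoization.
# `run_query` stands in for a database call; in Rails, the query cache
# makes repeats cheap on the database side, but each repeat still
# allocates a fresh result, which is the memory cost described above.
class NotesPresenter
  def initialize(&query)
    @query = query
    @query_count = 0
  end

  attr_reader :query_count

  def project_settings
    # ||= memoizes: the query runs at most once per presenter instance.
    @project_settings ||= run_query
  end

  private

  def run_query
    @query_count += 1
    @query.call
  end
end

presenter = NotesPresenter.new { { default_branch: "main" } }
3.times { presenter.project_settings } # only the first call hits the "database"
```

Documenting a pattern this small is the point: it is mechanical enough that any GitLab team can apply it to their own endpoints once the practice is written down.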
So that is what we're doing in this release for the cached SQL calls. Moving on down to the next item here, which is real user metrics (RUM) on GitLab.com. This is a feature of Snowplow; it's also a feature of many other front-end and overall performance suites, like, for example, Datadog and New Relic. They can report page load timing back to the server, and so we want to take advantage of this on GitLab to be able to understand how the broader community of users actually experiences GitLab.
You know, we do some performance testing from a GCP instance in Ohio to GitLab.com, which is also on GCP, and that's very controlled. But how do people in Europe, or people in Asia, experience it? What's the experience like on a cellular connection with rough or not-great connection strength? These are the things we'll get from RUM, and we'll have a better idea of the overall performance of GitLab as these users actually perceive it, and so this is important for us.
We want to try to track this, and it will also give us information on, for example, how much impact something like geo-replicated instances in different regions would have on improving performance. This should be a pretty lightweight change: again, Snowplow has support for this, so it might just be enabling a quick flag, and that will send the page load timings back up into GitLab along with the other information we get from Snowplow.
Finally, the last item we want to start getting underway: I've had the memory team work more on the Ruby 2.7 upgrade. Ruby 2.7 has memory improvements and also performance improvements, and so we want to go ahead and start to leverage the benefits of 2.7, and also work to stay up to date on 2.8 and, as we approach it next year, 3.0. Because, again, we'll just be getting continued memory and performance benefits in each of these releases, and we'd love to take advantage of them, in particular in 2.7.
There are the garbage collection changes, which should really allow us to reduce our memory consumption when forking: by loading resources before forking, we get a lot more use, and more memory reduction, out of that than we have in the past, because of how Ruby does GC. So, exciting changes here.
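The preload-then-fork idea can be sketched in a few lines: load data in the parent, optionally compact the heap (`GC.compact` is new in Ruby 2.7), then fork workers that read the shared pages copy-on-write. This is an illustrative sketch, not GitLab's production setup, and it assumes a platform where `Process.fork` is available (Linux/macOS).

```ruby
# Preload expensive resources in the parent so forked workers share
# those memory pages copy-on-write instead of each loading a copy.
SHARED = Array.new(100_000) { |i| "record-#{i}" } # preloaded in the parent

# Defragment the heap before forking so fewer pages get dirtied later
# (GC.compact was introduced in Ruby 2.7).
GC.compact if GC.respond_to?(:compact)

pids = 2.times.map do
  Process.fork do
    # Reading SHARED does not copy it; the pages stay shared with the
    # parent until a worker writes to them.
    exit!(SHARED.length == 100_000 ? 0 : 1)
  end
end

statuses = pids.map { |pid| Process.wait2(pid).last }
puts statuses.all?(&:success?) # all workers saw the preloaded data
```

The GC changes matter here because older Rubies tended to write to object headers during collection, dirtying shared pages and defeating copy-on-write; the 2.7-era improvements make the sharing stick for longer.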
Some really large, high-impact items with dynamic image resizing and also the Ruby 2.7 upgrade, so super excited. If you have any thoughts or feedback, please leave comments on these issues.