From YouTube: 2020 11 30 Memory Team Weekly
A
This is the first engineering-focused book club I've seen; a lot of them have been management related, but they're doing Ruby Under a Microscope. It started last week, so if you want to join, I think tomorrow is the next meeting, and you wouldn't be too far behind. I've been on a couple of book clubs and they've been pretty interesting, so I'm glad I shared it. There's a bunch of other topics on there that I'll let you all catch up on. So, for team topics.
A
We had some follow-up items from last week, some issues that need to be created, and there was also a side conversation in the memory team meeting about which epic we should create issues under. I found a bunch that were created under the two different epics, so I think we're mostly in good shape; there's a couple lingering, but I had a couple of questions. So, these two issues: we've got a comment on the second one, but the first one, does this one still belong in 13.7?
D
I created an issue for the first part of this, because it's not just GraphQL, right? There are a couple of other things, like... not Rugged; what is it, Rouge? I think, the syntax highlighter, and a...
D
Things like Grape. But yeah, I agree with Camille: we probably don't want to end up with like 15 different if-then-elses to figure out when to do these things. So I created a placeholder issue for figuring out how we could do that in a more pluggable approach that works for any of these, if that's possible; I don't know. I think I'll link that somewhere further down.
C
Usage of the GraphQL gems could be one of the aspects where we may not load something when it's not needed, but I'm kind of thinking that we should look at the problem more holistically. Like, if we have ActionCable and you don't need the process with ActionCable, how can we just disable this component, or any other component? Something like what I think we discussed last week with Fabian: how we can reduce the application footprint, to give ourselves space to add new features while kind of retaining the current footprint.
E
Yeah, and I had a question with regards to the follow-up items as well, and, as you will be able to tell, I'm not 100% up to date. So I've tried to look through the epics and the outcomes of the week. Do we have, essentially, a list of follow-up items that are ranked by either...
E
You know, for example, how much memory we estimate we are going to save by doing this (and Craig will see this from the memory side), and some kind of stack ranking as to the complexity involved in achieving it, right? How confident we are in doing it. Because that may help us prioritize very clearly for the next milestones: where do we start, right? Maybe the GraphQL bit is the first one to start with, maybe something else, but essentially these...
D
So we just went straight ahead and broke issues out for what we think... yeah. Can we...
C
...the biggest impact would be. So, did we all create our issues? I am at least aware, from my side, that I still need to create one or two more issues. So maybe what we should do, maybe as part of these issues, in the header, in some common form, or as part of a comment, is try to write the impact and the complexity. Yeah.
E
So we've done this with memory... sorry, with database, last week. There is a lightweight framework to employ for this; it's called RICE: reach, impact, confidence, and effort. It's essentially a way of very simply ranking issues based on those dimensions, and that may be a way here also to say: okay, who is going to benefit from it? What's the expected impact? We can measure that really in terms of memory.
E
You know: how complex is it, and how much effort is it going to take us, roughly, in weeks or in months, whatever we prefer? And if everybody agrees, then once we have all of those issues, we can rank them in that way.
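The RICE scoring described above can be sketched as a simple function: score = (reach × impact × confidence) / effort, with higher scores ranked first. The issue names and numbers below are hypothetical placeholders, not actual estimates from the team.

```ruby
# RICE scoring: (reach * impact * confidence) / effort.
# Higher scores suggest higher priority. All values here are
# made-up placeholders, not real estimates from the meeting.
Issue = Struct.new(:title, :reach, :impact, :confidence, :effort) do
  def rice_score
    (reach * impact * confidence) / effort.to_f
  end
end

issues = [
  # reach: people affected, impact: 0.25..3, confidence: 0..1, effort: person-weeks
  Issue.new("Lazy-load GraphQL gems",  1000, 2.0, 0.8, 2),
  Issue.new("Tune malloc/GC settings", 5000, 1.0, 0.5, 1),
  Issue.new("Disable unused services",  500, 3.0, 0.5, 4),
]

# Print the stack ranking, highest score first.
issues.sort_by { |i| -i.rice_score }.each do |i|
  puts format("%-26s %8.1f", i.title, i.rice_score)
end
```

Sorting descending by score gives the kind of stack ranking discussed; the team would fill in real reach, impact, confidence, and effort estimates per issue.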
E
Yeah, so we have two options with this. Option one is we do it async, right? Once all of the follow-up issues are created, I can create an issue now and actually explain sort of how it works. The other option is we can do it synchronously, in the memory team office hours at least, and record it, right, with the people that have time, and then others can pitch in based on the output, which is probably a little bit more time-effective.
C
So I guess the question is: will we be able to prepare all these issues in good enough form for tomorrow? If not, maybe either tomorrow or, I think, the other office hours we have on Friday. Yeah.
E
Yeah, that sounds good, and I can open an issue in the epic sort of laying out the process. I'll do that probably tomorrow morning (I don't have time today) to say: this is how this works, roughly. Then by Friday the issues can be in, and then we can sort of come together in the office hours and just talk about it and compare, and we can record it for those that can't make it because of time zones. Cool, great.
F
Yeah, no problem. Yeah, one more question. So maybe I missed some of the meetings, but do we have anywhere written down how we're going to implement this optimization from the user side? Will it be some GitLab YAML config, or how, from the user perspective, will they install the constrained GitLab (let's say the optimized GitLab) on the machine? Or do we have an issue, maybe? Because we'll...
C
I'm kind of assuming right now that whatever we do, it doesn't degrade functionality and it doesn't increase the complexity of the configuration. So...
C
We do not introduce additional configuration for these things; we aim for things that are transparent, yeah.
D
No, yeah, just one point about this: there are some areas where this will be tricky, and where I do agree with Alexa that we need to think about this a bit more. One good example is actually GraphQL and Grape and all these kinds of workload-specific, chunky gems that we currently load. Because only on GitLab.com do we actually separate our fleet into web and API workers; most customers probably don't do that, especially not the single-node, smaller, you know, two-gigabyte deployments.
D
So then the question is: how do you know, without configuration, what this node does? And if it does serve both kinds of traffic, then that point is kind of moot, right, because then you still will have to load all this stuff into the same process. So I think there are definitely open questions we need to think about.
C
So, okay, kind of like our guidelines have it: convention over configuration. If we see something that is constrained, let's try to be good by default. So maybe, for the Puma single mode to be chosen, we say: okay, your machine has less than three gigabytes of RAM and no more than a couple of CPUs, so we run in single mode, because this is the only one that makes sense in this setting.
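The "good by default" heuristic sketched here could look roughly like the following in Ruby. The 3 GB and two-CPU thresholds are taken loosely from the discussion and are purely illustrative; this is not actual Omnibus behavior, and `puma_mode` is a hypothetical helper.

```ruby
# Sketch of a "convention over configuration" default: pick Puma's
# mode from the host's resources. The thresholds are placeholders
# from the discussion, not real Omnibus GitLab logic.
GIGABYTE = 1024**3

def puma_mode(total_ram_bytes:, cpu_count:)
  if total_ram_bytes < 3 * GIGABYTE || cpu_count <= 2
    { workers: 0, mode: :single }        # one process, no preforked workers
  else
    { workers: cpu_count, mode: :clustered }
  end
end

puts puma_mode(total_ram_bytes: 2 * GIGABYTE, cpu_count: 2).inspect
puts puma_mode(total_ram_bytes: 8 * GIGABYTE, cpu_count: 4).inspect
```

The idea is that a constrained install never has to set anything: the mode is derived from what the machine has, and a larger machine transparently gets the clustered configuration.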
E
I think my two cents, from the product perspective, is that in general anything that is user-transparent and just makes us more efficient (as in, we get a more efficient application without any drawbacks) is probably the ideal scenario where everybody wins. Following that is probably something where we can guess sensible defaults, right, for very constrained architectures.
E
As in, we can maybe say: okay, we know this is very constrained, so we're going to do it a little bit differently, like in Puma. And I think the last option is really to say (and that's a bigger decision): do we want to offer something where we just allow people, by default, to deactivate certain very expensive features because they don't use them? But that obviously degrades the user experience, because by default some things won't be available, and that's usually not something...
E
...I think we would like to do. But that may still be an option, and I don't have enough data to see if that is actually required. That's, I think, a larger question of: do you want to offer something like a GitLab Light? And maybe that's even the answer, in a way, to our fear that there's disruption from the bottom, right, with some other products that come along.
E
They don't have all of the features, but they are very lightweight, right. And if we are concerned that people will not choose GitLab because they perceive it as bloated, that may actually be a way of handling it. But I think that maybe requires a bit more investigation instead of just my feeling. And from what I hear (and please correct me if that's not right), there are actually quite a few options for what to do with the application to make it more memory-efficient without really impacting the user experience.
A
So a question came to mind when we were talking about this. Regardless of the outcome and how we implement this, with silent or default configurations: do we have an issue that's tracking the changes we're going to make? Because we're going to need to document that somewhere, to inform users that if they're in a memory-constrained environment, these configurations will have taken place during the install. And, you know, perhaps they could upgrade their VM, or whatever, to where the memory constraint is no longer there, and they would want to change their settings.
C
Do you mean the issue, or documentation in general, that is part of Omnibus, for example?
A
I think there's going to need to be a big-picture overview of what we've done for this two-gig goal: what are all the changes? I don't know if it's going to be a blog post, but just a description of "here are the changes that we made to get us to this smaller footprint, and here are the things a single-node install needs to consider that are different from a larger install."
C
Okay, I'm hearing two aspects: one is a summary of the two-gig work, and the second is an actual list of changes that need to be made, which may be user-facing, correct?
C
Because I'm kind of thinking that maybe on Friday, when we meet with this RICE framework, we'll just have the summary of the two-gig work.
E
Well, the way I understood this, we don't quite know yet exactly what all the changes are going to be that we're going to make. But I think, as part of those changes, if I understand you correctly, Craig, we should definitely have, if there are certain differences in behavior depending on where GitLab runs...
E
...some kind of documentation saying: yes, if you have less than X RAM and fewer than this many CPUs, we will run Puma in that mode. That is probably something that needs to go into the Omnibus docs, under "running GitLab in a memory-constrained environment." And then, I would argue, with every feature or every change that we make, the documentation needs to be updated to reflect that; that should be part of the work, and that probably accumulates quite a bit of...
D
Okay, yeah. I think it will also help with validating some of the things that we might now think are impactful but still have to verify or validate in production, for instance, because I have not done any testing on .com during the two-gigabyte week, right. So I'm looking into this issue around GC compaction now, and we did this on our local machines and on an Omnibus VM, but that's not what we run in production. So, right, we still need to see.
F
One more question: do we need to involve Telemetry in this? Will we be able to see the difference via telemetry if we implement any improvements for a constrained environment, or not? So I'm not...
F
...yet familiar with what we send in telemetry, but I would probably say I would review that and see if we send enough, so we could see the difference when we improve something. And if we don't send enough, we could improve telemetry really fast, because we expect the... So that was...
C
Maybe one aspect for us to improve here is to contribute a reference architecture that is smaller than the current ones, more constrained, and this is how we kind of measure the impact of our changes. It's still not going to be, let's say, 100 percent accurate, because it's still not going to run everything.
E
...you know, allocation, and how the application behaves as a whole. And not only for the changes that we make, but ideally also for the changes that other people make, so that when somebody deploys a new feature, they get the information that when they do this, our memory footprint is going to increase by X, right. And I think that would actually put us in a good position longer term to let people know: hey, you're doing this, but be aware.
D
The whole thing could benefit from a review, maybe, because some of these changes were put in place a year ago and I'm not sure if we ever went back to look at them and see how effective this even was. Like, do people actually see these errors on CI? Do they act on them? How often does this job fail and signal that there was an increase in memory?
D
I have no idea. It's like, around these things, we build them and they are there and they run, but we're not sure how effective they are; I think that might be one of the problems. So yeah, I totally agree, but we should also recognize that we did do a bunch of things in that area. So we should also, yeah, maybe review what we have done and figure out what did work and what didn't.
A
And Fabian, that was a question asked in last week's meeting: how do we have an ongoing measurement of memory growth and usage? It's something I asked Grant about during our one-on-one last week, and he said the answer right now is no, we don't have something, but he was kind of brainstorming, yeah.
A
He could see setting up a Docker environment that we quickly spin up (he used a two-gig environment as an example), spin it up and run it on a regular basis to see what the memory growth is and where it starts to fail. So that, you know, we don't have a one-and-done, where we set up a two-gig environment and then the next week someone adds a feature that immediately blows it out of the water. So it's being discussed, but I don't think we have a formal issue to track it.
E
Oh, that's really good to know; thanks for the context. And again, sometimes I'm just not aware of some of those things. So, you know, it's morning... I...
E
As I do now, unfortunately. But my meeting has moved from next week, and then I have a little bit more time. Anyway, I think I have an action item for this. So thanks for the discussion, I learned a lot; thank you for sharing these things so transparently.
E
Okay, see you, Fabian, next week... oh, this week, I'm wrong again. You can always ping me, right? Thank you.
C
Oh cool, thank you. I just wanted one more thing, on the discussion that we had. I'm actually finding that it's going to be super challenging to see the impact of small features if they increase memory usage by, let's say, 500 kilobytes or one megabyte; it's very hard to see that even at the scale of the application. But what we can really do is statistical analysis of GitLab.com, because GitLab.com has a pretty, let's say, uniform traffic distribution.
C
I'm particularly talking here about, for example, this tweaking of malloc and GC settings. This is one of the aspects that should, of course, impact constrained environments, but if we pick the right numbers it should also be a big deal on GitLab.com, something that should be noticeable. The instrumentation module, if we remove that, I honestly don't know, but maybe there's going to be some...
C
...some small difference there. Because it's an interesting aspect: we run multi-process, and "multi" means many servers and many processes. So even a one-megabyte difference, if you sum all the memory usage, can actually result in an average drop, when you compare the same period of the day to the previous day, of maybe 100 megabytes. And then it starts to become noticeable, because it's not one megabyte, it's 100 megabytes less on the whole fleet, which makes kind of a difference on the graphs.
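The back-of-the-envelope math here is simple: a saving that is invisible on a single process adds up across the fleet. With hypothetical counts (these are illustrative numbers, not the real fleet size):

```ruby
# Fleet-wide effect of a tiny per-process saving.
# All counts below are made-up illustrations.
saving_per_process_mb = 1     # invisible on any one process graph
processes_per_server  = 10
servers               = 10    # 100 processes total

fleet_saving_mb = saving_per_process_mb * processes_per_server * servers
puts "#{fleet_saving_mb} MB saved fleet-wide"
```

With 100 processes, a one-megabyte saving per process shows up as a 100 MB drop in the summed fleet graphs, which is the scale where the change becomes observable.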
C
So I'm kind of thinking that we should actually be looking at all this data and using what it provides us. Because GitLab.com gives us a very good stochastic distribution of the load, but it also gives us, as was pointed out, a lot of different, kind of granular services: different Sidekiq fleets and different Sidekiq jobs, and we have dedicated servers for ActionCable, web, and API.
C
So they actually give us some granularity. If you sum these numbers, it should give us some indication of whether the change we introduce is actually reducing or increasing memory, if you compare before the deploy, or maybe compare the same period of the day from the last day, or the same period from the last seven days, and do that kind of comparison of the numbers.
C
So what I'm saying is: it's kind of tricky, because we may be looking at savings of just a few hundred kilobytes per single process, which is hard to notice if you look at single processes. But if you look at 100 processes and 100 servers, it maybe becomes visible because of the amount, and GitLab.com can be the thing that provides this information much faster and makes it much easier for us to react.
F
Makes sense. I mean, I agree that small features might not be noticeable in terms of changes, even if they're increasing memory, but I don't think that blocks us from building such a feature. We could just ignore it and not respond if there is nothing noticeable, because if there is nothing noticeable by, let's say, the automatic pipeline, we couldn't easily judge what the change is doing anyway. But if we see like 10 or 20 megabytes of difference on the GPT reference architecture, we could start by posting some warnings there. So, yeah.
C
But I'm kind of thinking that, overall, we should be looking at the trend of all changes over time, and how much that trend benefits. What I'm saying is, for example, this metrics instrumentation module seems so small that it's really hard to understand exactly how much benefit it's going to bring. We can somehow estimate, but in the end I don't really know; in some cases we're going to see a drop, or maybe even an increase.
C
So that's my problem. What I'm saying is that it's really close to impossible to measure, for example, the impact of a given endpoint on memory usage, unless you run that in GPT with a very good testing suite covering exactly this endpoint. Something like what we fought with, with Nikola and the cached queries, yeah. It's like looking for the needle in the haystack.
D
I think this had come up before in a different discussion, but what do you think about this idea: you saw the change where, just as we did for Sidekiq queues, we are now labeling each controller by feature category, right?
D
So every controller now has to be assigned to a feature category, and I was wondering (maybe this is a bit too out there): would it be possible to do a simulated run of GitLab where we color each object that we load into memory based on its provenance, like which feature it belongs to, based on the class name, for instance? Because often (we're not super strict about it, but more often than not) we put them in namespaces, right?
D
So for Runner, everything is in Gitlab::Runner or something. I'm wondering if it would be interesting to have some kind of memory map that way, you know, where we kind of color our domain in terms of: well, GitLab Runner consumes more memory, because all those objects belong to the Runner feature.
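A rough sketch of that namespace "memory map" idea, using Ruby's ObjectSpace to group live objects by the top-level module of their class name. This only looks at a heap snapshot, not true allocation provenance, and `memory_by_namespace` is a hypothetical helper, not existing GitLab code:

```ruby
require 'objspace'

# Group live objects by the first segment of their class name
# (e.g. "Gitlab" for Gitlab::Runner::Something) and sum their
# shallow sizes. Anonymous classes are skipped.
def memory_by_namespace
  totals = Hash.new(0)
  ObjectSpace.each_object(Object) do |obj|
    name = obj.class.name or next        # nil for anonymous classes
    namespace = name.split('::').first
    totals[namespace] += ObjectSpace.memsize_of(obj)
  end
  totals.sort_by { |_, bytes| -bytes }
end

# Show the five heaviest namespaces in this process.
memory_by_namespace.first(5).each do |ns, bytes|
  puts format("%-20s %10d bytes", ns, bytes)
end
```

As noted in the discussion, a real version would hit the "generic, used-by-everyone" problem: framework classes like String or Hash dominate, so true per-feature attribution would need allocation tracing with execution context, not just class names.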
D
I wondered if that would be possible or doable. I think that could be really interesting to see, because I'm sure there are features that are more interesting than others in terms of how they drive usage. And it could be interesting to see if we're maybe over-investing memory as a resource in some features that are actually not that important as drivers of usage, or whatever. Yeah.
C
But maybe this is a way you can hack it: as you mentioned, color-code in what context the given object was allocated, maybe by looking at the Sidekiq execution context of the current thread.
D
About GPT, or we could start with that, yeah. And also, I think what we will very quickly notice is that there will probably be a lot of objects that are just in a generic "used by everyone" bucket, because there are a lot of framework classes and stuff that will...
C
I'm kind of wondering: can we accurately get a number of the objects that got allocated for a given, let's say, Sidekiq worker being executed? Because right now we have a metric for the database calls. So maybe, instead of the color coding, we could get the number of objects that got allocated as part of that execution; but the accurate number, taking into account threading and that multiple Sidekiq workers can be executing at the same time.
C
So if we could somehow get the exact number, and also the amount of memory that was allocated, this should give us some idea of the generated memory pressure on the garbage collector. And maybe, if we were able to do it efficiently, we would include that as part of the metrics, and we would... oh, we have... we have an...
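One rough way to count allocations around a single job execution is Ruby's process-wide allocation counter. As noted above, this is exactly where threading hurts: the counter covers the whole process, so concurrent Sidekiq threads would pollute it, and this sketch is only accurate for a single-threaded run. `allocations_during` is a hypothetical helper, not a GitLab API:

```ruby
# Count objects allocated while a block runs, using the
# process-wide GC counter. Only accurate when nothing else
# allocates concurrently (i.e. not under multi-threaded Sidekiq).
def allocations_during
  before = GC.stat(:total_allocated_objects)
  result = yield
  allocated = GC.stat(:total_allocated_objects) - before
  [result, allocated]
end

_, count = allocations_during { Array.new(1000) { |i| "item-#{i}" } }
puts "allocated ~#{count} objects during the block"
```

A thread-accurate version would need per-thread allocation tracing (for example via allocation tracepoints keyed on the current thread), which is the harder problem raised in the discussion.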
A
What do we need to call the issue, then? And do you want to create it, Matthias, or Camille?
A
Okay, all right. From the action items from last week, I'm going to jump back to where we were. So, we are going to backlog this one. I don't think we talked about this one, but it looks like there are a couple of comments in here saying it's not impactful, so do we need to keep this one in 13.7?
C
Why am I saying that? Because this, if done, would reduce a ton of memory on GitLab.com: it would be a drop to maybe half, or maybe even one third, of what we use today. So it's actually something that, from the perspective of GitLab.com, gives a lot of room.
C
So this is why I'm kind of torn, yeah. Because on smaller instances it's not being used, but on the large instance it can have the same kind of impact as, let's say, using Puma and reducing the number of workers; this is actually a big memory saving. And Craig, if you are interested, I can give you, let's say, a pretty accurate number of what it would mean. So I'm after Matthias.
A
All right, let's see. And then I ran through the action items from last week. Do we have an issue for this one? I couldn't find one between the two epics that we have.
A
All right, and we did... I found this one after a commentary about which epic things go under. It looks like we do need an issue for this.
A
Those are all the notes. We're going to cover the RICE framework, and then Fabian had a follow-up (I don't know if you guys are following along) on image resizing. It looks like it's going to fall to a group in Manage, and Matthias asked a question about the uploads; I'm not sure if that falls under the same group or not, but it's assigned to Fabian and he will follow up on it. And then I have about five minutes left of this meeting. So, from retros: having a regular demo as part of this meeting.
D
Super informal, but yeah, we could do it as part of this meeting as well. I don't think I would want it to take away too much time from planning concerns, because that's important as well.
D
I'd want it to feel like a low-entry-barrier kind of thing, you know.
D
Like, really making it super easy to contribute to, and kind of informal, but we could still record it, you know, and put it up somewhere. Yeah.
A
Yeah, I like the idea of demos as long as they're not forced; like you said, a low barrier to entry. I've been at companies where they've forced weekly demos, and sometimes they were not meaningful. So I'm supportive and open to however it's ultimately implemented. Sorry, Camille, I think I cut you off.
C
As for the timing, I'm kind of thinking that maybe the best moment would be, let's say, one week before the release (so somewhere around the 22nd), because at that point it would help Fabian better understand what to put in the release post, but it would also make us somehow conclude the milestone, and the next week we would basically start working on the new milestone and the new tasks.
C
So if we did a demo, it would be a demo maybe not only for us, but also for you and Fabian and anyone that would be interested; I guess more like recapping the finishing milestone, what improvements we had. And maybe another outcome of the demo is that we'd really know what our retrospective items should be, something that we kind of struggle to prepare ahead of time.
A
All right, and I will copy the other action items down here below. It looks like we got through the agenda. Are there any other topics folks want to cover before we end today?
D
I just wanted to quickly ask, or mention, that I'll probably pick up one or two items before we go through the voting. Is that okay, or...?
D
Like, there's one that we kind of know would be very impactful, which was dropping the GitLab exporter, but that's one of these issues that has a lot of lead time, because we really need to look into it. We don't even know who's using it and to what extent, and which metrics are not yet exported by other processes, and stuff. So there's a bunch of things to look into that would help us, I think, also then decide how complex it actually is to do.
A
Yeah, I think... yep, feel free to continue with the work you see as priority right now. I think where it'll be interesting is, like the topic we just covered, the preforking: it'll help to decide whether we're working on it now, or whether we have a singular focus on the memory-constrained environment.
A
It was interesting with the database team, because we had about six different topics that the team was focusing on, and it helped the team to come together, focus on two, and really make some rough cuts: okay, we are actively deciding not to work on these other four things, and we need to inform stakeholders that we're going to work on them much later on. So it'll be a good exercise, but I think it'll have a different outcome than for the database team.
A
So I think this one will help us to focus on what's most important for the two-gig footprint, and on whether there is anything else that we need to bring in, kind of on the side, while we're trying to finish up this initiative. So yes, continue with what you're working on; don't wait on the ranking exercise.