From YouTube: 2021 03 08 Memory Team Meeting
A: It is Monday, March 8th, and this is the memory group meeting. There are quite a few non-verbalized updates there that everybody can read before we jump into the boards. There are still lots of requests for what our results are from running GitLab in a memory-constrained environment, so let's try and wrap those up this week if at all possible. I know we have the documentation started for how to configure the memory-constrained environment; I think the big unknown is measuring memory on a small instance with two gigs of RAM.
A: I'll be honest here — I read through Matthias' issue. It looks like the settings were all rolled out, and you can read the thread if you want; this one looked like it went all the way to production. Camille, do you know if this one's done?
D: I would have to check — let me — can you give me the link? I will tell you quickly, because — actually, no, just click it. There is optimistic locking. Okay, we merged the Prometheus histogram bucket metrics. I did not verify the histogram metric locally, but it's working — the recent comment logs show it works — so I would assume it's fine to close it now as done.
A: All right. I ran through dev this morning, and there are still a few MRs open on optimizing jemalloc and garbage collection on GitLab.
B: Right, cool. It's assigned to me. I know you looked at how it works — I mean, I looked at how you did it — but to test it, you probably need to have the load balancing configured locally.
D: Maybe we should ask the database team to guide us on how we would be able to test this application, if we want to test it like this. Yeah, that's...
B: It's not an easy thing to do. Or we can just push it, enable the feature flag, and see if it works.
D: So I guess on the last meeting it was said that maybe someone would be assigned to that, but if you could find someone interested from our group, I think we could also tackle it. Okay, yeah — I mean, there are some ways to approach it. I tried initially, but I think if you follow the suggestion of the batch loader, which is designed for this kind of purpose, it should be pretty fine.
D: This one we talked about on Friday, and I think I suggested that it kind of requires fully rewriting this worker — I mean, the service — and it requires a ton of database... sorry, domain knowledge. And since we're gonna have someone from the security group — I think Darby, or one of the security managers — is basically gonna assign it to someone from the security group who could completely rewrite it.
D: Okay, I think the outcome was that Matthias made that the priority, because he measured that it's like 40 percent of the SQL load on the primary coming from Sidekiq. So it seems like a pretty big thing.
A: Okay, I will add both of those to the rapid action agenda and make sure we find an owner for them. Sorry — this one is going to stay with us, so for the other one I'll make sure we find an owner. And then: the segfault and the JSON gem.
D: Yeah, so it turned out that there is another instance of the same problem with compaction — there is another Ruby extension that is failing. So Alexi asked Sarah what to do, but what they basically did in the end was roll out our Nokogiri GC changes, and Alex is going to work on disabling GC compaction, or disabling the Nokogiri GC. It will turn out what we do, but this is actually the summary of what we're gonna do.
D: It appears that GC compact being used here causes us problems, because some of the gems don't handle it properly. So we might just be premature in using compaction on 2.7. 3.0 has some improvements for compaction — they're going to let you shuffle references, making it much easier to catch these types of errors — but for the time being we kind of agreed on a "first, do no harm" kind of thing: we don't want to break things.
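A minimal sketch of the kind of guard being discussed — only opting into heap compaction where it is known to be safer. The `ENABLE_GC_COMPACT` environment variable is a hypothetical name for illustration, not an actual GitLab setting:

```ruby
# Sketch of a boot-time guard: only compact the heap on Ruby 3.0+, where
# compaction is better supported. ENABLE_GC_COMPACT is an assumed env var,
# not a real GitLab setting.
ruby_3 = Gem::Version.new(RUBY_VERSION) >= Gem::Version.new("3.0")

if ENV["ENABLE_GC_COMPACT"] && ruby_3 && GC.respond_to?(:compact)
  # Compact once after boot, when most long-lived objects already exist.
  GC.compact
else
  # On 2.7, native extensions (e.g. Nokogiri) may not implement the
  # compaction callbacks correctly, which can segfault -- skip it.
  puts "GC compaction skipped (RUBY_VERSION=#{RUBY_VERSION})"
end
```

On Ruby 3.0+, `GC.verify_compaction_references` can additionally shuffle object locations to surface the extension bugs mentioned above in testing rather than in production.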
D: So the truth is this: the segfaults went unnoticed for pretty much over two weeks, because we enabled it about two weeks ago and it just didn't pop up — it was completely random that it didn't. So I guess the severity was pretty minimal, because it didn't cause a noticeable impact.
A: A question, maybe for Fabian or Giannis: this came up on the memory team triage issue, and I couldn't remember what label we're using now, but this is most certainly not a feature. What's the tech debt or backend label that we're using? I think there's a technical debt item issue.
F: The label — I think it's already labeled that, right? Actually, the array one at the bottom. Yep, all right, I got rid of it.
A: Then, do you want to run through this real quick? I talked about these a little bit, but is there any more detail you want to add on where the requests go?
F: Sorry, I was a bit late at the beginning. I think the main ask here is really to say: okay, let's finish the documentation for all of the settings that we recommend in a constrained environment. And then there are essentially two follow-ups to this. I think it would be great if we could attribute memory impact per change — for example, this Puma single mode: there's a range of how much it can do. But I think it would also be very impactful to have a simple measurement saying: okay, in GitLab 13.7...
F: ...none of these changes were around, right? Or maybe even within the latest version, GitLab 13.10: it uses X amount of memory, you change the settings, and this is probably going to cause a reduction of around this much. I think that will be really helpful, because it will wrap this up nicely, it will be a good first iteration, and it will also show the impact of the work that we've made.
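The per-change attribution described above boils down to a before/after measurement of resident memory. A minimal sketch, assuming a Linux measurement host (it reads `/proc/self/status`, which does not exist elsewhere); the allocation in the middle is a stand-in for whatever settings change is being measured:

```ruby
# Sketch: read this process's resident set size so a settings change can be
# attributed a before/after memory delta. Linux-only (/proc is assumed).
def rss_kb
  File.foreach("/proc/self/status") do |line|
    return line.split[1].to_i if line.start_with?("VmRSS:")
  end
  nil
end

before = rss_kb
# Stand-in for the change under test (e.g. toggling a Puma setting).
retained = Array.new(500_000) { |i| "str-#{i}" }
after = rss_kb
puts "delta: #{after - before} kB (#{retained.size} objects retained)"
```

In practice this would be run against a whole GitLab instance rather than inside one Ruby process, but the arithmetic — version X uses N MB, change the settings, report the reduction — is the same.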
F: Yeah, I think the ask is really: it would be good if we could finish this in the next week or so, just because I think we should wrap it up. And also, there's the desire to understand what impact we've made from the product side, and quite frankly it would be super helpful for me to have that.
G: One thing I'm currently working on is building a reference architecture with Kubernetes running Puma, essentially — well, that's relevant to this group. Through my testing, I found that it wasn't performing to the same kind of spec as the normal 10k environment would with Puma running on Rails VMs.
G: What I noticed is that increasing the pods eventually just stopped working — it seemed to hit a ceiling, so to speak. But then, the way these pods were set up, they were both two-core boxes with two Puma workers.
G: I know there was an issue that we were quite a bit underneath the worker count compared to the normal 10k. So I increased that, and it made a slight improvement — not as much as I expected — and then I added more after that, and it still did nothing. But when I switched the pods over to four-core boxes with four workers instead, that actually started to improve again. So I thought it was quite interesting to see that Puma just performed better in bigger pods, so to speak, with more cores.
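The sizing being described — two workers on a two-core pod, four on a four-core pod — can be sketched as a small helper usable from a `puma.rb` config. The one-worker-per-core-with-a-floor-of-two rule here just mirrors the pods in the experiment above; it is not an official GitLab recommendation:

```ruby
require "etc"

# Illustrative Puma sizing helper: one worker per core, floor of two,
# mirroring the 2-core/2-worker and 4-core/4-worker pods described above.
def puma_workers(cores = Etc.nprocessors)
  [cores, 2].max
end

# In a puma.rb config this would be used as:
#   workers puma_workers
puts "workers for 2 cores: #{puma_workers(2)}"  # -> 2
puts "workers for 4 cores: #{puma_workers(4)}"  # -> 4
```

The tension noted below — Kubernetes favoring many small pods, Puma favoring fewer bigger ones — is exactly a disagreement about which `cores` value this function should be fed.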
G: Even though there are more workers on that same container, so to speak. So I don't know if there's an issue there yet — I'm still testing and trying things out — but certainly, in the Kubernetes world, ideally you want the pods to be as small as possible, so that if a pod dies it doesn't affect overall performance too much. So at the moment I think we're gonna have to go four-core. I'll probably get pushback from the distribution team about that, because they want pods to be as small as possible.
G: So it's a bit of a weird balancing act, but I'm just giving you a highlight. I think the team might have been tagged on the issue that I'm reviewing this on.
G: If you want to have a look, feel free — there's a lot of numbers in there now, some testing stuff — but once I have more data I'll probably come back and let you know. I feel there might be some kind of weird issue there, or maybe it's just an inherent restriction of running on such small pods and expecting the same performance. I don't know.
A: A question just came to mind. Camille, if we merge the composable codebase — and apologies if it's answered in that blueprint and I haven't read it — what's the first step? If we start taking on that work, is it broken down into an order of operations on what we should do first?
D: I think the first step will really be taking what we call the PoC and creating iterations and an epic — what are the important features that we want to develop in this model — because my perception is that if we move to that model, we would need to kind of...
D: I don't know where to start; there are a lot of aspects to how we want to tackle it. But what I would really want to achieve as the end goal is that all of the GitLab codebase that is loaded is explicit — it has an explicit context.
D: So I would even say going further than what Nikola did: I would maybe introduce the gitlab-core engine, which is the main engine where all the code lands by default. That would define the GitLab core, which everything sits below, to prevent loading everything into a single big namespace. But this is something still to be discussed on how to approach, because I think my perception is...
D: This would be kind of my expectation. I would rather say that we should maybe have engines — a gitlab-core engine, a gitlab-web engine, and whatever else we may need — and this is where the GitLab codebase lives. But this is also a multi-step process of moving things in very small increments.
D: As for why we do it that way: we would need to really spend some time figuring out the order of operations. What Nikola did was a very big, chunky amount of work, and we're not gonna move, I don't know, three thousand files in a single go, because that's gonna create a lot of conflicts. So we need to figure out a way to do it iteratively — I mean, move more and more pieces over time. So maybe the first iteration will be defined as:
D: Let's create an engines folder; let's create these shallow engines — gitlab-core, gitlab-sidekiq, gitlab-web; let's create a kind of skeleton for everything, and then start slowly moving things into these engines, also discovering the dependencies of each component that we move. Because what Nikola did is he kind of looked at and took up GraphQL.
D: It would be super great if we could also move the lib folder into this gitlab-web — trying to understand how we can untangle this messy lib folder that we have today, which has cross-dependencies everywhere.
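The "shallow engine skeleton" described above could look roughly like the following. The engine names (`gitlab-core`, `gitlab-web`) and file paths follow the names used in the conversation and are assumptions, not GitLab's actual layout; this is a structural sketch that needs Rails to load, not a runnable standalone file:

```ruby
# engines/gitlab-core/lib/gitlab/core/engine.rb (hypothetical path)
module Gitlab
  module Core
    class Engine < ::Rails::Engine
      # Keep this engine's code out of the single big global namespace.
      isolate_namespace Gitlab::Core
    end
  end
end

# engines/gitlab-web/lib/gitlab/web/engine.rb (hypothetical path)
module Gitlab
  module Web
    class Engine < ::Rails::Engine
      isolate_namespace Gitlab::Web
    end
  end
end
```

With empty engines like these in place, code can be moved over component by component, surfacing each component's dependencies as it goes.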
D: So I guess we'd probably start with figuring out the iterations — maybe starting with the skeleton, then trying to understand what the lowest-level pieces are that we can move one by one, until we reach the state where we've moved everything we wanted. But also, the GraphQL example is intriguing because of the pop-up notifications — so I guess it would also impact the Action Cable work and the GraphQL work, and we also have the GraphQL architecture blueprint. So I'm thinking there are a lot of steps and a lot of coordination in doing that.
D: It's kind of messy, but it really is a messy topic, how to approach it. I mean, the PoC as it stands shows the end state, but making the PoC production-ready is yet another story. If I were to estimate the effort of the PoC, I'd probably multiply it by three to five for the effort of making it production-ready, which is significantly more iterations.
A: Understood. Yeah, it's quite complicated, and I think in the brief conversations I've had with Jerry on the topic, he understands that it's going to take a while to find the right first iteration to test this on. But thank you for that overview; it's very helpful.
D: So I don't have an example of that today as such, but there are likely to be examples at some point. I know that we are talking about, for example, the CI runners — so maybe CI would follow that pattern, with the idea of using a dedicated engine for that purpose, containing all of that services logic and whatever else is needed.
D: So maybe something along those lines. But I think the skeleton is probably the most important aspect to ship — to provide a way for how these skeletons should be defined and how the data flow between them should look. This is also what we talked about with Gary: it's going to impose a slightly different data-sharing model, because today you share data through the codebase.
D: I mean, today you would go from Sidekiq to controller, maybe calling exactly the same code that would be called in both the web and the Sidekiq process. But this would impose — maybe even going outside: if you want to fetch via GraphQL, you should use the API, because the GraphQL infrastructure is going to be scaled separately from Sidekiq, and Sidekiq doesn't really need GraphQL. So it would also define the data flow model.
D: It would slightly change the data flow, because it would limit exactly what you can run in a given context and how you share data. I mean, you would still have the same principles of sharing data through Redis and through the database, as you do today with the models, but you would not have this way of serving data — "I'm just going to call this very complex GraphQL resolver to generate a JSON payload or whatever else" — in Sidekiq. That kind of thing would be impossible; you would require...
D: We would require that you call the API on the web process to actually perform that kind of operation.
F: Yeah, no, I think I have to read this in more detail, in order to avoid making unqualified comments. But — like we talked about last week with very large changes like this — I think the most important aspect will be how we enable others to actually do this, once they understand it's important. Because otherwise I think it will take forever, and we'll own this gigantic surface area and won't be able to actually do much about it.
F: ...you know, sort of relatively swiftly, even though it will still take a long time. So that's my main concern — because it opens a bucket, let's say a rather sizable bucket, if I understand correctly. Yes.
D: Yes, we're not gonna do all of that on our own. I also talked with Craig about this: if this is becoming so important — we have this database scalability group that is focusing on one particular aspect — then this is one of those many long-term architectural improvements.
D: So maybe this would also require a more multi-team approach, because I think it's unreasonable to expect that we're gonna do everything. The codebase is constantly changing; we're not gonna do everything ourselves — someone is going to add so much new code that we won't be able to handle it.
D: So I'm thinking that if this is our way to tackle the big monolith, it should probably also be more formalized at some point. But I would wait on that until we have a much wider agreement that this is the approach.
A: Yeah. And so the kickoff is Wednesday; I don't expect a lot of decisions to be made at that point. It'll be an overview of Jerry's ideas, and then maybe we get into the composable codebase idea. But Camille, what you described is what I assumed the approach was going to be, and you added a lot more detail than I had at times — so thank you. Yes, I think the skeleton's a great example, and maybe we find the smallest possible implementation.
A: I'm not even sure what to call this — your own engine, your own folder. This would be a great place to migrate it to and get us some scalability, so that again we can farm this out to other teams and they can own it going forward. And maybe from a product management standpoint, Giannis or Fabian: do you know of some new endpoints being created in the near future that we could consider for this new construct that Camille has come up with? Yeah.
F: Oh, this just happened, and it impacts five other areas, and then you scramble to get it done. And I think, you know, if we have docs to at least point to and say, "hey, you should do it this way" — that already helps tremendously. But I would be wrong to say that this is totally under control; I get regularly surprised by things that are happening.