From YouTube: 2020 11 23 Memory Team Weekly
A: Welcome to this version of the memory team weekly meeting. It is November 23rd. We've got a not-yet-verbalized item or two on there; go back to what Fabian is typing. Note: I will be out after Tuesday for the rest of the week. There's a U.S. holiday; Wednesday is Friends and Family Day, then Thursday is Thanksgiving for us, and I'm not going to come back on Friday, so I'm just going to take Friday off. So, yeah, team topic: the 13.6 retro is out. I think it was published.

A: On Thursday of last week we had some feedback and action items from the last retro. The big one for 13.6 and going forward is voting on topics, so that we make sure we actually make our retros useful. Sometimes people put all of their comments about the retro into one comment, and that makes it hard: it will be difficult to vote on topics if you just bundle everything up into one comment.

A: So if you have different topics to talk about, please break them out into different comments so that we can vote on them and have follow-up items from the retro. In a week or two I will summarize what the action items are and we can follow up on them.

A: Let's see, the next topic is the two-gig sync week. I caught up on all the videos as of Friday, and it seems like there is some confidence that we can run on two gigs. Kamil, you've got some comments there; can you verbalize them?
B: Yes. It requires a lot of tuning, and we're still a little short on resources, so you're kind of trading things off.

B: You still require the swap, but from the different tests with different improvements it seems that it should be able to run. There are caveats: because we use swap, it would require a pretty fast SSD-based swap, which kind of rules out the Raspberry Pi 2G. In general it also seems possible to run in 2G without the swap, but that requires a little more investigation into further optimizations.

B: I pointed out maybe the major aspects, maybe in order of... my understanding of how things rank is that the low-hanging fruit come first.

B: I don't know, Matthias, about that question, but if we talk about running it with 2G, it seems doable, with some caveats. If we are talking about 2G on a Raspberry Pi, it's a slightly harder story, because the swap is not as effective there. Done.
A: Okay, sounds like we have some ideas, but do we have an approach or a plan? Do we have representative issues on what we're doing next? I know in the videos there were some low-hanging fruits we could attack to reduce the memory footprint. But do we have an overall idea or approach to get down to two gigs, or whatever the ultimate footprint will be?
B: I'm not sure if we have all the issues created and scoped as the next items. Because those are the next steps, the low-hanging fruit, I don't think we went through all of that data to summarize it properly.

C: I mean, I think the high-level takeaway was that there are three avenues we can pursue. The first one is not running things that we're currently running by default, and I think that needs product input because it comes with some drawbacks: if we don't run Prometheus by default, we will not, by default, collect usage ping metrics for topology, for instance. So there are...
C: Trade-offs. Then the second one is not loading things that we might not need. That's more of a technical lever; Nikola already looked into this, and we can save, I think it was, 15 meg or something like that by not loading any GraphQL stuff, which...

C: Which was... or maybe that is part of all of these: looking at the data in memory. It's not so much the features we run or the code we load; there's also just data in memory where we're not convinced we always need it. But yeah, maybe that's included in these other things.
B: I'm kind of thinking, I agree, with one addition: probably the lowest-hanging fruit is the tuning of what we have today. The second-lowest-hanging fruit, with maybe not a big effort, is running in Puma single mode, because I think we noticed that this gives a lot of room.

B: I think it's something like 250 or 300 megabytes better. Everything else is really maybe disabling the processes that we may not want to run, like the GitLab exporter; but everything after that is really optimizing the application code, not loading GraphQL, not loading some parts of the code base, which is significantly harder and requires more effort.
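[Note: the kind of tuning described above would typically be expressed in an Omnibus /etc/gitlab/gitlab.rb. The following is only a rough sketch; the keys shown (puma['worker_processes'], prometheus_monitoring['enable'], gitlab_exporter['enable']) should be checked against the Omnibus documentation, and the memory figures are the rough estimates quoted in the discussion, not measurements.]

    # /etc/gitlab/gitlab.rb -- illustrative sketch only
    # Run Puma in single mode instead of a clustered (forked) setup;
    # this is where the ~250-300 MB of headroom mentioned above comes from.
    puma['worker_processes'] = 0

    # Optionally skip sidecar processes a small instance may not need,
    # accepting the trade-offs discussed earlier (e.g. losing the metrics
    # Prometheus would otherwise collect).
    prometheus_monitoring['enable'] = false
    gitlab_exporter['enable'] = false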
B: But I think the general conclusion I got from the Friday meeting is that all of this selective loading is connected with maybe a bigger story, maybe a complete architecture overview of how we load GitLab. Can we make it more selective? Can we build guidelines so that developers are able to decide in what context they want parts of the application to be loaded?

B: I think GraphQL popped up as the easiest thing to try out, to see how this architecture could look, and it seems that GraphQL gives a pretty noticeable benefit of 15 to 20 megabytes, right? So this is 20 megabytes less on the Sidekiq process, basically.
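[Note: a very rough illustration of what such a selective-loading switch might look like. The module name and the boot-time hook are invented for the example; Gitlab::Runtime.sidekiq? is the only piece taken from the real code base.]

    # Illustrative sketch: decide per process type whether the GraphQL
    # subsystem should be loaded at all, so Sidekiq workers do not pay
    # the ~15-20 MB cost for code they never execute.
    module SelectiveLoad
      def self.graphql?
        # Web (Puma) processes serve GraphQL queries; Sidekiq generally
        # does not, so it could skip those constants entirely.
        !(defined?(Gitlab::Runtime) && Gitlab::Runtime.sidekiq?)
      end
    end

    # Hypothetical boot-time use (mechanism not the real one):
    #   eager_load(Rails.root.join('app/graphql')) if SelectiveLoad.graphql?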
C: ...be effectively using the multi-threaded nature of Puma, because it will only be a single...

B: Process. I think that's correct. Also, there is this unanswered question of how you run Puma in the cloud-native install: do you run it with multiple workers, or do we want to run Puma single? Right now we run with multiple workers, so I think the answer is, as Matthias...
C: Actually, now that you mentioned this, this is a really interesting point, because I was surprised to see, not because... I just hadn't thought about it, but when I saw how we scale Puma for GitLab.com in Kubernetes.

C: We actually run it very differently from the VMs. With the VMs, we run a smaller number of nodes with a lot more processes per node and, I think, also more threads per process. Whereas in Kubernetes we run smaller machines; so that would be a pod, right, the unit of scale, and we run a lot more pods, but only, I think, with two workers per pod.
B: Yes, but there is another aspect that we didn't really look at yet, because it has the same impact on the memory resources for the cloud-native install. I guess the current model, where you run multiple forks, is basically more efficient than two workers per pod. So this is interesting, something for us to look at as well.

B: What is the boundary for the cloud-native install, like how many workers should we be running? Yeah.
C: Yeah, and I would be interested in understanding what the overhead is that we pay per pod, because I think it is, in theory, thinkable that we could run Puma in single mode and then double the number of pods. It's an option, right? I don't know if that is actually feasible in practice, but if the scheduling and the bare-metal resource overhead for running all these extra pods is small...

C: This can have a big benefit, which is that it's much easier to scale at the pod level, because that is just something you tell the Kubernetes orchestration layer: how you want your service scaled, and it will go ahead and do that, because it keeps running this control loop. Whereas if you scale at the app-server level, you need to actually deploy a configuration change. You need to...

C: You actually need to deploy a change to the Helm charts and then redeploy, which is just not a responsive way of scaling, and so...
B: I mean, probably the way I see it: let's assume that, regardless of the number of workers, each pod requires one gig of memory plus a very small amount on top for each worker. So I'm kind of thinking that if you run Puma single, it's still going to be one gig. If you run Puma cluster with two workers, you have basically twice the capacity, but it's still going to be around one gig.
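[Note: a back-of-the-envelope version of that estimate, with purely illustrative numbers; the ~1 GB base and the per-worker increment are assumptions standing in for measured values.]

    # Rough model: a pod pays a fixed base cost (master process plus pages
    # shared copy-on-write) and a small private increment per forked worker.
    BASE_MB       = 1024  # assumed shared/base footprint per pod
    PER_WORKER_MB = 150   # assumed private (unshared) memory per worker

    def pod_memory_mb(workers)
      BASE_MB + workers * PER_WORKER_MB
    end

    puts pod_memory_mb(0)  # Puma single:             ~1024 MB, 1x capacity
    puts pod_memory_mb(2)  # Puma cluster, 2 workers:  ~1324 MB, ~2x capacity

Under these assumptions the second worker buys roughly double the request capacity for a comparatively small memory increment, which is the trade-off being weighed against running more single-mode pods.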
A: We have a couple of issues in here that cover some of the next steps and the low-hanging fruit, but it sounds like we need to spend some more time on this. Does that...

A: That's a good question; probably throw them in the same epic for now. I don't see... yeah, create another one. I mean, we can always reorganize later or create an implementation epic. What about: does it need to happen synchronously? It kind of seems like that might be more useful; instead of people dividing and conquering, there might be some overlap on issue creation.
C: Oh, okay, I mean, yeah. Maybe we can just create a bunch of bare-bones, you know, skeleton issues together, just so that we're all in agreement on what should be looked at or done, and then maybe we can split them up among us and take that async to actually flesh them out.
D: Yeah, I think that makes sense. May I also ask a couple of questions from my end? I was not part of it, and I probably still have to do a little bit of reading to understand the details, but just so that I have this correct in my head: essentially, what the investigation revealed is that there is some opportunity for tuning how the application works, and that does not require removing any functionality.

D: It just essentially means we handle what we have slightly differently to reduce the overall memory footprint. That may have an impact on the startup time, for example, and you could argue that the tuning will also affect larger installations to some extent, right? It may be more noticeable on small ones, but overall we would be more efficient in how we handle GitLab as a whole. Is that correct so far?
C: Yes, although I'm not sure if tuning alone will be enough to actually go below that two-gigabyte limit.

C: I mean, we have this interesting graph from our topology data in the usage ping, where we can see that with every GitLab iteration we release every month, we have been growing our total RSS, so we're currently on an upward trajectory with total memory use. So the question is also how much runway we have: if we do all these things, are we going to end up doing the same thing, are we going to have another one-week two-gigabyte session, you know?

C: Yeah, let me find this real quick. Someone has been keeping this up to date, it's great. Let me find it... or maybe it was Josh, but let me find it.
D: Okay, and then another follow-up question from me, and I think this is maybe the gist of it: we grow memory because we add more stuff, right? And if you start shipping more and more things with GitLab... I've heard about another database that we're going to add, you know, and you have Prometheus and all of these services.

D: We are going to grow, and so the other, maybe more radical, solution is just to disable certain things and say: if you do not need it, then this will not be available, but that saves you some RAM. I think the question there is whether that is desirable, as in: do we want to do that? Because that is going to mean that certain functionalities are no longer available, right? Yeah.
C: Well, I want to qualify this a little bit, because it doesn't mean they need to be entirely unavailable. I would maybe say we could look at turning them off by default, with the option to turn them back on. In Omnibus you can already toggle all these things on and off; it's just that a bunch of them are turned on by default, and I'm not totally sure all these users even need them to be turned on at all times.

C: And then we are running an extra Puma instance on their machines that they don't use. As long as we have clear language in the UI, I guess, to inform the admins that, look, this is not currently running, if you need it you need to turn it on, but it will cost you a bit of extra memory, then maybe we can just be in a better default position, and maybe they can then get that by adding more swap or whatever, you know?
D: What will that actually do to the user experience? Is it going to be just as good as it is now, or will it be one of those things where, well, it runs, but it's actually either still slow, or certain significant parts of the application are no longer available because you can't make use of them? I'm just wondering what the feeling is, because I would argue that if we reduce it just for the sake of reduction, but then the user experience is not...

D: ...actually, you know, good, then what's the point, right? And I think this is why I may be a little bit more flexible with regard to this limit, but I'm not quite sure. That's really just my view, looking at it from afar.
C: Yeah, I totally agree. I mean, those are perfectly fair points, and that's exactly why I also think we need to look at this through a product lens. I totally agree, and we, as a team, during last week, have not thought about this a lot, right? I mean, no.
A: Like, we'd need a reference architecture for the 2-gig footprint to tell us, you know, it'll work with this many users or this many projects or this combination of... and that's where Grant's expertise would really come into play, because, yeah, what's the use if we can limp along at two gigs but you really can't use it?
B: Adding to your question: you mentioned that we may disable functions that are not used, which affects the user experience. I don't think that we are really at the point where we need to do that. I'm thinking more that, like the GraphQL example from Nikola shows, we can still free a lot of space for adding a lot of new features over the upcoming months, or maybe even multiple milestones, just by being more selective.

B: I guess the aspect I'm seeing is that, with a good architecture of the code base, it appears we could basically free a lot of space and be able to continue adding more functions with exactly the same user experience, but with significantly more room.

B: As for the memory... I mean, to your point that the current memory usage is growing, yes: this should probably cut memory usage back a little, maybe to the 11.x level from two or three years ago, and we would still be growing, but maybe we'd be oscillating around the current value, or significantly below it.
D: Yeah, no, I think that's a fair point. To me personally, what we want to be is mindful of people's resources, you know, be as efficient as we can be; but if you add a new functionality that you think has value and that adds RAM, then the trade-off is: what do we care more about, the resource usage or this feature that ideally has value? And that's obviously also a product question.
C: ...range of customers, some companies very large, some individual users. So I'm actually wondering if that ties back into what we talked about briefly like a year ago, this whole idea of a GitLab Lite. Is there any merit in this? Maybe we need to rekindle that discussion, and I don't know what they think about these things, you know, because...
F: But if we are talking about, for example, this GraphQL thing: the point is that we use GraphQL, but we are not using it, for example, in Sidekiq. So if we are selective about the functionality and don't load anything related to GraphQL there, we can save some space, but this will require a lot of architectural changes. Yes, but that's...

C: Yes, because we were talking about the product angle, but I think with GraphQL we can just do that; I think that has nothing to do with disabling features. Yeah, and...
D: Because I unfortunately still need to fix my calendar, I have a meeting at half past that I need to move around and that's hard to move; but I just linked one item here with regard to the product perspective on this, which I think relates to what you're doing here. It's something that was opened a week ago, and I think a driving force behind trying to reduce the usage is the concern that there is some form of low-end disruption through services like Gitea, right? As in, a lighter service with significantly lower resource usage can do things that GitLab can also do, and so that may mean some users are actually going to choose that over us.

D: Which could be a problem in the medium term, long term. As far as I know, and this is maybe one thing that I'm trying to do this week or next: I don't think we have any kind of data, that I have access to, that actually says this...
D: ...this is, you know, a thing, and what the scope of this thing is, and so I'm trying to dig a little bit. Because my impression, and I don't know if that's true, again I need to look, is that, for example, the perception of being lightweight may have to do with resource usage, but it may also have to do with the fact that you can maybe do a significant chunk of the things you can do in GitLab in Gitea as well, and that works perfectly for some folks and is good enough right now.

D: So this goes back to this sort of GitLab Lite question. If, for a chunk of users, even with reduced functionality, or, you know, with some things Gitea can't do, that's sufficient, then maybe a strategy is also to say: well, here is GitLab in a smaller package, with only the core functionalities. But I'm not quite sure, you know, because people, I think, care about the user experience at the end, the things they actually want to do with it.
B: I don't know, there is one interesting aspect to what you said: everything that we did, we tested against GitLab EE. CE is like another product that should be smaller in terms of resources, but we didn't actually compare CE versus EE memory usage. So CE may actually be a product that answers your suggestion about having something more lightweight, basically because it is a smaller package. You also mentioned the perception of being lightweight.

B: I also agree with you that this is multi-dimensional, but I kind of feel that it's not only about the two-gig limit or the amount of resources you use, but also about, let's say, the complexity and the number of components you need to deploy. GitLab grows not only in the code base; it also grows the number of sibling services it requires to run, which is entirely...

B: I mean, it's kind of... not microservices, but still a sibling-services architecture. If you look even at Omnibus and how many services it runs, it was, I think, about 12 different services, and for some people that may be intimidating: that I have this Gitaly, this and that, the Docker container...
D: I think this is exactly my point, exactly, and what I'm trying to get at is: if our concern is that a segment of the market, let's say, is put off by complexity and by large things they don't like, what are these dimensions? Memory is certainly one of them, but maybe there are others, and we shouldn't forget that. In my mental model right now, which may be completely wrong, the way I thought about it...

D: ...is that GitLab is a little bit like the Ubuntu of Linux distributions in that regard. You install it and you have all of the stuff you could possibly need already pre-packaged, but some folks may actually want a more minimal installation, and then they choose Arch, for example, because they feel:

D: Oh, I want only the services I really need, and whatnot. Memory usage is one dimension, but maybe there are others too, and also, you know, how many people choose one over the other and what the market segment is. I have no answers; I think this will be interesting to figure out a little bit, just to help us prioritize these product questions.

D: Maybe some of these things, the tuning, for example, and the GraphQL work, sound like sensible things to do regardless; but if we talk about turning things off by default, is that something we want to do, you know? Is that maybe something we want to do for CE? Is that maybe the answer? I don't know.
A: So should we follow along with that issue listed below for more info on the market feedback? I know you have to drop off, so that's why.

D: Yeah, I have to drop off, but I'm on that, because I want to understand what actually is happening there, and I will probably need to help with that a little bit, as in, if it's one of those things where people say, oh, we ought to do this... but I think it's quite relevant for the memory team and for the next steps, so I'll try to figure out what we can do. Okay.
A: So, as far as next steps: I see, Matthias, you just created an issue and a suggestion. I was actually going to suggest the opposite; it seemed like, when I was watching the videos, everybody kind of had their own area of exploration.

B: You created this issue, and, I mean, this jump in the memory is due to our Puma worker killer memory settings; we actually have an issue assigned to this milestone to revisit those settings, to maybe increase them even more.

C: Okay, all right, that explains a lot. Yeah, thanks.
B: There is some expectation for us to set the limits to what we're running on GitLab.com, which would increase these memory usage limits on an on-premise installation by another 20 percent, basically. And we need to make a decision on how we approach it: do we stay with the current limit, do we maybe try to tune the default settings, or do we decide to increase the limit again? I mean, this is probably the most important issue for us to tackle in this release.

C: On the bright side, it's a nice validation that our tracking works, because we should be seeing this, right? Definitely.
A: Okay, I put a note in here for everybody to create their skeleton issues, and then we can review and groom async. I figure y'all can touch base tomorrow during office hours, and then I will go back and review the notes later in the day to make sure I'm all caught up.

B: So, Nikola's work, from what I understand, doesn't really do compacting; it just runs garbage collection multiple times, for...
C: Okay, fair enough, thanks. Yeah, for compaction, I think we should just do it, because it's quite fast, especially at the time before you fork; that is not a hot request path, so if we lose a little bit of time there, that's not really a big deal. I don't think it helps very much in the long term, over the lifetime of the application, but it has a temporary effect that looks beneficial, so it seems like it does more good than harm. So we could probably just do that.
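[Note: a minimal sketch of what compacting before fork can look like in a Puma config file. GC.compact has been available since Ruby 2.7; whether and where GitLab actually wires this in is not something this snippet asserts.]

    # config/puma.rb -- sketch
    before_fork do
      # Compact the heap once in the master before workers are forked, so
      # the compacted pages are shared copy-on-write by all workers. This
      # runs off the hot request path, so the pause is cheap, as noted above.
      GC.compact if GC.respond_to?(:compact)
    end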
A: I'm curious, and this is probably an implementation detail that maybe we haven't figured out yet, but on this goal, whenever we get to whatever the shape of a 2-gig footprint looks like: are we going to make the installation and configuration of this smart, or is this going to be a question we ask the user who's installing and setting this up: are you trying to do this on limited resources?

C: We can create sensible defaults, at least, that scale with the available resources.
B: So we didn't really validate all aspects, especially related to the GC tuning, but I'm assuming that the GC settings would likely be predefined and global, and they would affect a 2G install and other-size installs as well. So maybe Puma single is the one where it's very hard to say whether it should be the default, or should be smart or not smart. So this may be a single setting to use in this particular case.
B: My understanding, from looking at the current settings and from understanding the Ruby VM (I learnt a lot), is that the settings configured today for every Ruby process are not tuned to GitLab. Basically, they are good defaults for a generic application, but they are not good defaults for GitLab.
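[Note: for context, this kind of per-application tuning is usually done by inspecting the heap after boot and then adjusting the Ruby GC environment variables. The variable names below are the standard MRI ones, but the example values are placeholders, not recommended GitLab settings.]

    # Inspect the heap of a booted process to see how far the generic
    # defaults are from what the application actually needs.
    stats = GC.stat
    p stats.slice(:heap_live_slots, :heap_allocated_pages, :major_gc_count)

    # Tuning then happens via environment variables read at startup,
    # for example (placeholder values only):
    #   RUBY_GC_HEAP_INIT_SLOTS=600000
    #   RUBY_GC_HEAP_GROWTH_FACTOR=1.1
    #   MALLOC_ARENA_MAX=2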
F: I just wanted to ask about extending the Ruby VM: is there a separate topic? I didn't look; I would like to see what you did there.

B: I created these two parts so far; there will be more. I'm not sure what exactly you're looking at, but if you are interested in, let's say, trying to add some new function to the VM with me and seeing how I'm doing that, we can jump on a call and we can play with that. Thanks, but...
B: It's intimidating how big the code is, especially gc.c and how it behaves, but if you understand a few concepts behind it, behind the data structures, it's not really that hard.

B: It's not really that hard to extend it. I mean, it is hard to really, let's say, optimize the current code, to rewrite it for much better memory usage, because it's already pretty optimal.

B: For example, there was a moment when I looked at how we store the opcodes, but all of that is pretty efficient in the way it's written. So I was also looking at: can I mmap the iseqs from disk? And the answer is that it's pretty challenging to do.

B: It's pretty hard to make major differences there. You can rather do targeted fixes for very small aspects; that is the way of iterating on it. But if you want to get very detailed information about something, I mean dependencies or something like that, it's fairly simple to implement that.

B: I mean, I didn't figure out yet exactly the best way to build it. So far I was using the Debian package for building, so I was reading the initial Debian package, commenting out some aspects that I didn't want, and then selectively compiling; this was my workflow so far, but it was pretty fast. If you want, I can show you exactly.
A: I'm going to fall off here in a minute. I was going to say we could run through real quick and just throw out ideas on who's going to write up the issues, if we want. So if anybody has an idea, like, Matthias, if you know the issues you're going to write up, we can list them out here so that we're not clashing. Or, if that's not a good use of time, we can skip it, just do that async, and jump over to the image scaling topic that you've added.
C: Yeah, so I picked it up again today, because I spent time on it last week. It looks like the main reason we keep breaching our SLO is that we probably have a data quality issue: we have a bunch of PNGs that have corrupt chunks. It's part of the color space table or something that is encoded in the PNG, and the checksumming fails. Both the Go tooling and the other tools I checked...
C: They agree: one image that I tested with was broken in that sense, and the PNG decoder will just fail when we try to read that image in the scaler. That is hard to fix, and I also don't know what the security implications are of stripping out CRCs, for instance. So I created an issue for that, because we could do something during upload.

C: Maybe we should just run pngfix over every PNG that we upload or ingest, because currently we just have a bunch of basically corrupt PNGs, and they render fine, you know, if you bring them up; it's just that they're not... intact, is that the word?
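[Note: for illustration, checking for exactly this kind of corruption can be done with nothing but the Ruby standard library, since a PNG chunk's trailing CRC-32 is computed over its type and data. This standalone sketch is not the check Workhorse or the scaler actually performs, and 'avatar.png' is just a stand-in path.]

    require 'zlib'

    PNG_SIGNATURE = "\x89PNG\r\n\x1a\n".b

    # Walk the chunks of a PNG and return the types whose stored CRC does
    # not match the CRC-32 of the chunk type + data (e.g. a damaged iCCP
    # color-profile chunk).
    def corrupt_png_chunks(path)
      bad = []
      File.open(path, 'rb') do |io|
        return [:not_a_png] unless io.read(8) == PNG_SIGNATURE
        until io.eof?
          length = io.read(4).unpack1('N')
          type   = io.read(4)
          data   = io.read(length)
          crc    = io.read(4).unpack1('N')
          bad << type unless Zlib.crc32(type + data) == crc
          break if type == 'IEND'
        end
      end
      bad
    end

    p corrupt_png_chunks('avatar.png')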
C: So they cause problems, so I need to check with Andrew again what the best way forward is. I think we just need to lower our expectations for now, given that we simply have a bunch of bad data that we need to deal with. Fortunately, he's working on a rework of how we monitor SLOs that allows a bit more fine-grained control, because currently they're all tied to the service level, not the component level, so they basically all define the same thresholds. That would allow us to have an image-scaler-component-specific way of tracking this, so we can change our bar, basically, and say we allow more errors than the main Workhorse component that serves ordinary Workhorse requests, for instance, which I think is totally sensible. And it wasn't even just for image scaling that he had been working on this; there were other components that just have different error patterns and, you know, different expectations. So we might have to just end up silencing that for another week or two. Okay, yeah. And I don't know, for these issues going forward...
C: We should probably... because we have this handover issue still. Is that still something we want to do, and to whom? Because I'm not really sure how to go about this right now, and then, yeah, we have a bunch of issues that we will probably not work on, and this sounds like one of those, probably.

A: Yeah, I'll work with Fabian on that to find out. All right.
A: All right, I've got the follow-up items here, so I'm going to go through 13.7 later on today. It looks like there are some issues that can either be closed or just kicked out entirely, and I'll make sure they're better represented while we're working on 13.7, as we add the two-gig items in there. And then, yeah, for everybody else: adding issues for the two-gig implementation goal. All right, any other topics for today?

C: I just forgot to mention the caching stuff I'm looking at; briefly speaking, I'm just going to create the same spreadsheet that we had before to compare, yeah, how many images are served from client caches, so it should have a nice performance impact. But I'm not done; I just pulled the CDN logs, so I'm going to look at that today still.