From YouTube: Memory Team Weekly | 12 Oct 2020
B
So, what do we need to discuss today? Is it a US holiday? Okay, it's recording to the cloud, okay. So, domain knowledge: Craig mentioned that we should fill this data in for our team YAML profiles, but looking at Camille's profile, I'm not really sure I could put a lot into it. Like, I will probably put Rails, maybe Postgres, but I'm not sure I could put any of this.
A
Yeah, of course we can add new ones. The whole Prometheus / PromQL thing is something that we added only recently; that was after I had trouble finding reviewers for MRs that contained PromQL. I actually suggested adding it, and there were a couple of people, I think, Andrew for instance, who said, yeah, I'm happy to do it, so we just added it. I think it's totally fine. It's up to us.
B
Okay, so let's start with the sprint planning, you know, because it's more straightforward, and then we can review the issues in the milestone. So, image resizing: what I can say is that we rolled it out last week, not the week before, because we had an issue where we needed to update a bit of Workhorse and so on, but it was minor and the data looks good.
C
So my suggestion would be, first, to enable that by default, and second, to rewrite this feature flag to be an ops-type flag, because obviously this feature flag is, by definition, a long-living feature flag that may have performance implications, and I think we may want to have a kill switch with this ops flag. So, kind of have a single feature flag that would completely disable this functionality if we find it somehow causing problems.
C
So this would be my proposal: keeping that for a pretty long time, but acknowledging that this is an ops-type feature flag that, by definition, is long-living, and it kind of serves as a kill switch for this feature in, I don't know, some situation that we may run into.
C
By default now it's, like, you convert that feature flag to be ops type, right, and it behaves, it works exactly the same as the current feature flag, but you kind of move the category from development to ops, where you consider it to no longer be a development flag, but rather a performance-related feature flag that we may want to enable or disable whenever we choose.
C
If there is something causing instability to GitLab, that's the purpose of the ops type. Okay, okay, we'll do that, I will create the issue for this. So we would not remove the flag, but rather acknowledge that this is a feature that may have performance implications, and we kind of keep it for a long period of time, as long as we need it, but on the other hand we enable it for everyone by default.
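[Editor's note: the shape being discussed here is a feature that is on by default, guarded by a long-lived ops flag whose only job is to act as a kill switch. A minimal Python sketch of that pattern follows; the `flag_enabled` helper and the flag name are hypothetical, and GitLab's real feature-flag machinery lives in the Rails codebase and is not shown here.]

```python
# Sketch of a default-on feature guarded by an ops-type kill switch.
# `flag_enabled` and "dynamic_image_resizing" are hypothetical, for illustration only.

FLAG_DEFAULTS = {"dynamic_image_resizing": True}  # enabled by default for everyone

def flag_enabled(name, overrides=None):
    """Return the flag state, letting an operator override the default."""
    overrides = overrides or {}
    return overrides.get(name, FLAG_DEFAULTS.get(name, False))

def serve_avatar(path, requested_width, overrides=None):
    if flag_enabled("dynamic_image_resizing", overrides):
        return f"scale {path} to {requested_width}px"  # normal, default-on path
    return f"serve {path} unscaled"                    # kill switch engaged

print(serve_avatar("avatar.png", 64))                                     # scaled
print(serve_avatar("avatar.png", 64, {"dynamic_image_resizing": False}))  # raw fallback
```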
B
Yeah, so with enabling this by default comes another question. By enabling it by default, we automatically enable everything for self-managed instances, so should we do any specific work for self-managed, like, I don't know, some predictions of performance before enabling it by default? Because we did quite a lot of tests on gitlab.com, right, we did a couple of preparation tests, we got a lot of data, we had that data, but on self-managed, at least I don't know which data we have.
B
So how do we go about this for self-managed?
C
I'm not convinced. I would personally pretty much enable that right now, because we tested it on gitlab.com, and at least from my understanding, what we run on self-managed is going to be faster than on gitlab.com, but still keep this kill switch just in case.
A
The concern was not so much about how fast it is, but more that we might be depending more on the CDN, because I think his concern was that not enough requests are hitting the dot-com application, even compared to some of the bigger self-managed installations, because those are not behind a CDN. Yeah, it's not that they would be slower, but that they would just hit this path more often than we do on gitlab.com. That's the concern, at least.
C
So our metrics should maybe answer that to some extent, but it's also pretty tricky to get a realistic profile of the data, because we don't really know exactly what requests are being executed on-premise.
A
But what if we just performance-profiled an avatar-heavy endpoint? You know, something that shows a long discussion with a lot of comment history or something like that, and then just see how the CPU responds, kind of looking at the worst case, and if the worst case is okay, then we're okay. Kind of approaching it from that angle.
A
Oh wait, I think we're talking about two different things here, right? So you're talking about actually collecting real data from customers, right?
A
I don't know, I think that's maybe one step further in the future, because I mean, we haven't even fully rolled it out on .com. I thought what we were talking about for now was to just performance test it for self-managed, just using our performance test suite, right? Yeah.
A
Right, so we could do that already, and then GCP, there's, like, GCP has dashboards where you can look at what the CPU is doing on that part and so forth. So they have some limited way of looking at what's going on, and it might not be as detailed as whatever we have in production.
C
If it works, it works. Like the usage ping, we don't have anything there that would help us today; you would have to add some metric to the usage ping that would check how many times we call this endpoint, yeah, and it could be part of the work that Matthias continued on the Prometheus metrics.
C
Basically, because there you have information about the requests being fired and you can execute a PromQL query, but that puts us three or four weeks into the future before we get this data.
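[Editor's note: to make the two options concrete, a usage-ping-style counter versus reading request rates out of Prometheus, here is a hedged Python sketch; the metric name, labels, and Prometheus URL are invented for the example and do not correspond to an existing GitLab metric.]

```python
import requests
from prometheus_client import Counter

# Hypothetical counter for how often the image-scaling endpoint is called.
SCALER_REQUESTS = Counter(
    "image_scaler_requests_total",
    "Requests served by the image scaling endpoint",
    ["format"],
)

def handle_avatar_request(image_format):
    SCALER_REQUESTS.labels(format=image_format).inc()
    # ... actual request handling would happen here ...

# Once the metric is exported and scraped, the rate can be read back through
# the Prometheus HTTP API with a PromQL query (the URL is an assumption):
if __name__ == "__main__":
    query = "sum(rate(image_scaler_requests_total[5m])) by (format)"
    resp = requests.get(
        "http://prometheus.example.com/api/v1/query", params={"query": query}
    )
    print(resp.json())
```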
B
Yeah, I don't know. So, the next question, Camille is typing already, yeah: I noticed that there are some very slow requests, but they're not even in the 99th percentile, so I didn't spend a lot of time looking at this. I wonder if we should create an issue and spend time investigating them, or should we just let them be until we hit the 99th percentile with something better?
A
No, it's a good question, Camille, but no, we don't do that. That is something we realized when we worked on, when we added, just before I left, the histogram for the latency buckets, but that includes the time it takes to stream back, because that's the Workhorse perspective, not the scaler's perspective. We measure at the Workhorse process level, right, what Workhorse is feeding back from the scaler process into the client, and so we measure the full round trip.
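[Editor's note: a minimal Python sketch of the measurement described above, where the observed duration covers the whole round trip including streaming the result back to the client. The metric name and buckets are illustrative, and the real instrumentation lives in Workhorse, which is written in Go. Timing the two steps separately would give the latency-versus-transfer split that comes up as a missing metric just below.]

```python
import time
from prometheus_client import Histogram

# The observation includes the time spent writing to the client, so a slow
# client inflates the recorded latency even when scaling itself was fast.
SCALE_DURATION = Histogram(
    "image_scaler_request_duration_seconds",   # illustrative name
    "Round-trip duration of a scaling request, including client streaming",
    buckets=(0.025, 0.05, 0.1, 0.2, 0.4, 0.8, 1.6),
)

def serve_scaled_image(scale, stream_to_client, image_bytes):
    start = time.monotonic()
    scaled = scale(image_bytes)     # time spent in the scaler process
    stream_to_client(scaled)        # time spent feeding bytes back to the client
    SCALE_DURATION.observe(time.monotonic() - start)
```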
C
But it's not indicative of the GitLab side being slow; it can be indicative of the client being slow. So this is, I think, one of the missing metrics that would help us answer how much of the time is latency versus actual transfer.
A
We have it in Prometheus, because I prefer to look at Prometheus, to be honest, not Kibana, also because in Kibana we only have a week's worth of data or something like that, and Thanos goes back months, which is great. So I would prefer to focus on Prometheus from here on out, not Kibana, and yeah, so I'm basically working on the dashboard definition. It took me some time today to set up all of this jsonnet.
A
Is that what it's called? This JSON-based definition language, and then you can write... there's...
A
Because then, from our buckets, you can pull out individual quantiles as well, and you can plot these, and then you can break them down by width and image format and stuff. So that's what I'm currently working on, yeah, and I have a sample up already. It's just a snapshot, it's a test dashboard, and I mean, so far it looks pretty good as well.
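[Editor's note: pulling individual quantiles out of histogram buckets and breaking them down by labels is usually done with `histogram_quantile` over a rate of the `_bucket` series. A small sketch follows, reusing the hypothetical metric name from the earlier example and assuming `width` and `format` labels exist.]

```python
# Hypothetical PromQL for dashboard panels; metric and label names are assumptions.

# p95 of the scaling round trip, broken down by requested width and image format:
p95_by_width_and_format = (
    "histogram_quantile(0.95, "
    "sum(rate(image_scaler_request_duration_seconds_bucket[5m])) "
    "by (le, width, format))"
)

# Mean duration over the same window, from the _sum and _count series:
mean_duration = (
    "sum(rate(image_scaler_request_duration_seconds_sum[5m])) / "
    "sum(rate(image_scaler_request_duration_seconds_count[5m]))"
)

for query in (p95_by_width_and_format, mean_duration):
    print(query)
```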
A
I mean, I see barely anything, actually nothing, over 200 milliseconds, so it's not too bad, yeah, and it also automatically gives you minimum, maximum, averages.
A
A dashboard, so I should have said: I'm adding a Grafana dashboard. I mean, the data is in Prometheus, right? We added it.
A
Like, just before I left, yeah, we...
A
Other things like how many scaler processes and stuff, so I think...
B
Just to close out images, I think, I would say, this one is the trickiest in my opinion, because it's really not that clear how to proceed. I mean, of course we could just throw it out, but it seems like Josh has some concerns and we should at least answer them in some way. So I would say let's create these issues and try to concentrate, and if you look into the epic, let me...
A
Good meeting for this, but if anyone...
A
Like, we can do it sometime this week, maybe. I would like to understand, I don't think I fully understand yet, how all these caching layers work or interact currently when we serve static resources; well, I guess you can't even consider this a static resource, because we process it on the fly, but yeah.
A
I don't think I fully understand it yet, like what really happens, because I assume there are separate levels of caching, right? Like those requests that never even hit the application stack, that we don't even see in Workhorse, because they're serviced fully from the CDN; but then there are also requests that do hit the application, where the client might ask...
A
...with, like, a HEAD request or something, right, where I would ask, has this resource changed, and then the server might reply, based on an ETag or something, that no, it has not changed, you don't have to re-download it, right? But it's still a request, it hits the service, it will just not spawn an image scaling process.
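[Editor's note: a minimal sketch of that conditional-request flow using Python's standard `http.server`, purely to illustrate the ETag / 304 behaviour described above. The handler and resource are made up; GitLab's real stack handles this across the CDN, Workhorse, and Rails.]

```python
import hashlib
from http.server import BaseHTTPRequestHandler, HTTPServer

AVATAR = b"...image bytes..."                     # stand-in for the real file
ETAG = '"%s"' % hashlib.sha1(AVATAR).hexdigest()  # ETag for this version of the resource

class AvatarHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Conditional request: the client already has a copy and sends its ETag.
        if self.headers.get("If-None-Match") == ETAG:
            self.send_response(304)         # Not Modified: no body is sent and
            self.send_header("ETag", ETAG)  # no image scaling would be triggered
            self.end_headers()
            return
        # Full response: this is the path that would spawn a scaling step.
        self.send_response(200)
        self.send_header("ETag", ETAG)
        self.send_header("Content-Type", "image/png")
        self.end_headers()
        self.wfile.write(AVATAR)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), AvatarHandler).serve_forever()
```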
B
Yeah, so Camille mentioned that Nicola's merge request looks great, so hopefully it will close some, and I think the only thing that will be left is to write some sort of instructions, or maybe a guideline for other teams on how we did that, and I expect Nicola will work on this when he is back. So, RAM on GitLab.
A
Because I'm still catching up, but I just saw the planning issue for 13.6, I think, and has there been... because there's this big epic about GitLab on two gigabytes, I think it is, again about cutting down on overall memory requirements. Is this something that, how soon do we think we should...? It would probably be good. That sounds like a big thing, with probably a lot of ways to go about it.
A
It would be good to have some kind of kick-off for that epic, I think, getting together and brainstorming somehow, yeah. I was just wondering if you had already talked about it while I was gone. No?
B
So, what's left in 13.5, since it's going to close: this one is by Nicola, and I'm pretty sure it's related to a merge request that will be merged soon, so I think it will be closed. This one, technically speaking, I think could also be closed because the rollout happened, but since this issue is used to track the feature flags and we still keep them, I think we could leave it open and keep it in 13.6. So, implement the until_executed deduplication strategy.
B
Okay, so this one is not being worked on; to be honest, my opinion is that this one will go to 13.6, knowing that Nicola is away. So, configure image scaling: I'm very close on it. We went full circle, moving from environment variables to a config file, and I hope it will be merged soon, and I hope it won't require any significant effort from my side, so I will let it sit in 13.5, this one.
B
We will definitely slip on this one, and this one, investigate impact, definitely goes into the next milestone, and don't worry about the scaling-to-production one.
B
What else do we have? The team feedback item for later, and Camille's, so we basically sorted it out.
A
I just added another item because I was curious about your opinion as well. So, Action Cable has been deployed, so there's a rollout issue right now; I added it at the bottom, and there was a very surprising memory spike. I wasn't around when it happened, but my understanding was...
A
It's basically related to the number of users that connected, and there was a one-gigabyte spike in memory on that worker, which is a lot, so they turned it off again, and we don't know what caused it, to be honest. So there's some...
A
Basically, I left a couple of suggestions for what I think we could be doing about it, but I'm also not super experienced with debugging or profiling this stuff in prod. So if you have any ideas for how to go about these things, it would be good to chime in, because I think we also need to help Heinrich out here a little bit; he's done most of the work on this implementation, and yeah, so we're kind of looking to wrap up that initial...
A
That phase one, basically, as well, to call that done, and yeah, it sounds like this might otherwise get stuck in the rollout, which would be very disappointing.
A
It sounds like... this was totally unexpected, because, if this is correct, then this would translate to almost a megabyte of extra memory use per connected user, which sounds super excessive. I have no idea what this would be. This was not something we were seeing while working on it, and it wasn't visible in canary either, because in fact no one uses that feature on canary, so yeah.
A
So it's good that we're testing it in prod to see how it behaves, but I think we might have to support him a little bit on this.
A
I don't think we need a separate issue for this for now, let's just keep an eye on it, but yeah, I wanted to bring it up.