From YouTube: 2020 09 21 Memory Team meeting
A
All right, welcome to this edition of the Memory team Monday meeting. For this meeting we're going to run through the 13.5 planning issue. It's mostly about wrapping up the image resizing work; there's this weird link to the security epic, but we were done with that. So let's go back up to the parent epic.
B
It manifests itself in different ways in all of these solutions. I guess it will be done; we got a thumbs-up from Andrew on the latest MR, but there was an open question around: do we still need to reduce user privileges for this process? Because it will still run as the git user; just using Go doesn't help with that. It would still be able, in theory, to read files created by git, which is... I don't know.
A
And then, so, we're talking about a Go solution incorporated within Workhorse. Are all the representative issues in this parent issue, sorry, parent epic, so that we have all the work that needs to be done for 13.5?
B
I mean, I think we're not sure, right? It's not even in production, so I think we should deploy it, see how it performs, and see if it's fast enough. I'm not sure we know at this point whether we need to do anything on top of CDN caches. I guess the only caching that we currently deploy for that is CDN caches.
D
I'm not sure exactly how we're going to solve this for on-prem, but our documentation has some links on configuring a CDN for on-prem installations.
B
Yeah, I guess what I'm trying to say as well is: we need to see whether it's even worth looking at additional caching outside of CDNs, because for small images, as we've seen, it's fairly fast, right? So I'm just wondering what the trade-offs are. It would save us a little bit of CPU time, I suppose, which I guess is cost, but...
B
Yeah, I think the order of things should be: let's first merge this MR, let's deploy it, let's see how it performs, and then we can see whether it has such a significant impact on these nodes that we actually need to consider caching. Otherwise, I would just leave that open for now.
E
The first bucket is what is critical to get to the first rollout; the second one is what follow-ups we could get to after the rollout, maybe because they're blocked by the rollout itself; and the third one is what we will pass on to, let's say, a future team. Because there are definitely too many issues in the same bucket.
B
If we, for instance, trip this 100-process threshold in production, that means image scaling will silently break and we won't even know. So I'm throwing together stuff for this right now, but maybe it would be good to have a sub-epic that tracks the work for making this production-ready, and then caching could maybe even be part of it. It could be one of the last tasks: deciding whether this is something we want to do now, or in the future, or at all.
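The silent-failure concern above can be made concrete: if a capped scaler pool drops work past its process threshold without exporting a counter, nothing alerts. A minimal self-contained sketch of the alternative, where the rejection is surfaced as a metric; `ScalerPool` and the cap of 100 are illustrative names and values, not Workhorse's actual implementation:

```ruby
# Illustrative sketch: a capped worker pool that surfaces its rejections
# as a counter instead of failing silently. `ScalerPool` and the default
# cap of 100 in-flight processes are hypothetical.
class ScalerPool
  attr_reader :rejections

  def initialize(max_in_flight: 100)
    @max_in_flight = max_in_flight
    @in_flight = 0
    @rejections = 0
  end

  # Runs the block as a scaling job. Returns its result, or nil when over
  # the cap -- callers would fall back to serving the original image.
  def scale
    if @in_flight >= @max_in_flight
      @rejections += 1 # exported to monitoring, so tripping the cap is visible
      return nil
    end
    @in_flight += 1
    begin
      yield
    ensure
      @in_flight -= 1
    end
  end
end
```

A monitoring system would scrape `rejections`; a non-zero rate is exactly the "silently broken" condition described above, made observable.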
B
Yes, I think we should. I mean, I'm doing it right now, because this is kind of a special thing now in Workhorse that is not super obvious in how it operates. You could tell by the different people reviewing the MRs: there were always a lot of questions around how does that work, what happens when...
B
I have an MR open already that I'm working on.
A
I mean, yeah, we're kind of in the spot where it would have been a good idea, had this blueprint process already been created. Excuse me.
D
If you have someone, I don't know, a PM, someone in engineering, or someone else who wants to understand at a high level exactly what we are doing, you can give them a blueprint to discuss. It also declares who is doing what: who is the expert, who is the owner, who is the DRI. Basically, it makes the individuals and their responsibilities clear.
A
I'm trying to find the handbook entry for it now. So, yeah, Camille did a fairly good description here; I will find it and link it in, so you're not all waiting for me here.
D
I found it, so I linked it in below: "Architecture workflow". It's actually fully described there, including the blueprints and all aspects. So you can take a look at that, Mathias and Alex, and read it through.
A
I'm not sure on that, Camille, on whether or not we should create a blueprint, because it's kind of an interesting balancing act, right? It's a fairly new process, newer than the work that we kicked off here. It might make sense, but how many of these efforts would we have to go back and retroactively add to the...?
B
Yeah, okay, sounds good. I mean, there's definitely interesting stuff in there, like finding DRIs. And to the point of who to hand it off to: this is something we need to think about, and I'm also not sure how we decide that, because I think the idea is that we will not own this in the longer term, right?
A
Yeah. And, was it Jacob, am I pronouncing his name right? Okay, yeah. He indicated on the issue that the Workhorse maintainers could pick up the Go implementation that we talked about, because it's so small and they're confident that it's not going to cause any issues. So for the image resizing as it's currently envisioned, they were happy; well, they said they would take on the support of it.
A
Okay, so it sounds like we have at least some organizational work to do, as far as creating a sub-epic to kind of coordinate and organize the work remaining for rolling that one out. And there was something else: we do need to mark items as deliverable for this milestone.
A
Since it's the end of your day, I can create that sub-epic and start organizing the issues, and I will ping everyone on the channel so you can get into the issues that make sense, in case I miss anything.
A
All right, back to the planning issue. The next one: establish best practices for handling cached SQL calls. Nikola's been working on this off and on for a little while; I saw that Alexi picked something up, and Shin, you've been looking at it.
A
The goal here is to pick representative examples, so we can roll out some good examples and enable other teams to do this for themselves; we're not trying to find all of them. Shin, you asked whether three is too many. If everybody's already into the ones they're working on and has already started making progress, I don't think so. I think it's fine to finish up what's been started, but again, let's not try to do them all. Let's figure out...
E
We started to discuss it with Nikola just an hour ago, because there are a couple of complexities about this whole thing. The ground is shifting, because it highly depends on the data, and maybe even on the context-related contents of the tables and so on. So on one day you could see some queries leading, and on another day you could see other queries leading. So we're trying to figure out how to find the best criteria.
F
Yeah, I would like to just summarize what we are trying to achieve. We have a lot of endpoints that have potential cached SQL queries. So I think we would like to provide a metric, or a way for everybody to take a look and detect those endpoints that potentially have cached SQL queries. So we were concentrating on the metric that is already generated, but it turns out that it highly depends on the data and the contents behind the endpoint.
F
That metric is counting the number of queries and detects the potential N+1s. Maybe we can use the same one, because it contains support for cached queries, and provide and document a way to write N+1 cached-query checks as well, so that if anyone sees one, they can cover it with tests, and we should provide examples of how we solved it. So the question is: what is the most...
F
...I don't know, impactful way to do it. For example, we saw cases where we have ten or twelve thousand cached queries, but it happens only occasionally: out of 60,000 requests it happens only 100 times, for specific projects, and it's tricky to debug, tricky to recreate and reproduce.
F
So maybe we should concentrate more on the ones that reproduce every time. Then we can create a metric that will detect those kinds of endpoints, and we can shift them to other teams and say: okay, these are obviously happening on every request, not dependent on the data or the content; you just need to take a look at the logs, reproduce it, see which queries are cached and repeating themselves, and try to resolve it. So maybe we need to simplify this a little bit.
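The "reproduces every time" criterion described above can be sketched as a small filter over per-request query logs: flag an endpoint only when duplicated (cache-served) queries show up in every sampled request, rather than in occasional data-driven spikes. The shape of the input and the function name are assumptions for illustration, not an existing GitLab metric:

```ruby
# Illustrative sketch: given sampled requests per endpoint, keep only the
# endpoints where EVERY sampled request repeats at least one query --
# i.e. the deterministic offenders worth handing to other teams.
# The input shape { "endpoint" => [queries_of_request_1, ...] } is hypothetical.
def always_duplicating_endpoints(samples)
  samples.select do |_endpoint, requests|
    requests.all? do |queries|
      # tally counts occurrences of each SQL string within one request;
      # any count > 1 means the query repeated (and would hit the cache)
      queries.tally.any? { |_sql, count| count > 1 }
    end
  end.keys
end
```

Endpoints that duplicate queries only in some requests (the data-dependent spikes) are deliberately excluded, matching the simplification proposed above.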
E
Yeah, what I still don't understand, and maybe I wanted to ask the team: what is more critical for us? Accidental memory spikes, like with this T-Mobile project: they have very heavy queries and tons of cached queries, but those are only triggered accidentally, for example when they create a new project in a future space. Or are we more curious about consistent memory addition, let's say when we have a regular, but not that severe, amount of cached SQL in, let's say, many requests, which happens quite often?
F
I'd vote for that, because I think we should separate them. Those kinds of obvious things can take less time: if it's obvious, it should be easy to reproduce. But it depends; for example, importing from Bitbucket requires you to have a Bitbucket account and a project to import, so it's not always easy to reproduce that endpoint. But if it's easy to reproduce, it's easy to see in the log, and it's obvious what you need to do. For those spikes it's more tricky.
F
And to answer Camille's question: I extended the cached-query counter for all our specs to take cached queries into account, but they are not written that way. We don't have a lot of endpoints covered with this N+1 check, and the specs are also written in a way where we are just adding one more thing and then comparing the results, which will not always surface the N+1s.
F
For one user and one project, I tried to enable it for every cached-query counter that we use in our specs, and I didn't detect any N+1 cached queries, which doesn't mean that they're not there. So, yeah.
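The spec pattern being discussed, "add one more record and compare query counts, with cached queries included in the count", can be sketched without GitLab's real QueryRecorder helper. Everything here (`FakeRecorder`, `render_list`) is a hand-rolled stand-in for illustration, not GitLab's actual API:

```ruby
# Minimal stand-in for an N+1 spec: count queries (cached ones included)
# for N records and for N+1 records; the counts should match when there
# is no per-record query. `FakeRecorder` is illustrative only.
class FakeRecorder
  attr_reader :count

  def initialize
    @count = 0
  end

  def record
    @count += 1
  end
end

# Simulated endpoint: issues one lookup per item -- a deliberate N+1,
# the kind of pattern the counter is meant to catch.
def render_list(items, recorder)
  items.each { recorder.record } # one (possibly cached) query per item
end

def n_plus_one?(items)
  baseline = FakeRecorder.new
  render_list(items, baseline)

  grown = FakeRecorder.new
  render_list(items + [:extra], grown)

  grown.count > baseline.count # adding a record added queries => N+1
end
```

As noted above, a counter that excludes cached queries would report the same total in both runs for cache-served lookups and miss exactly these cases; counting cached queries is what makes the comparison meaningful.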
A
On that one, there's the quick spike to enable Rome and Snowplow. Joshua wanted someone to pick that up, take a look at it, and spend a couple of days of effort; if it takes too long, then we can redirect it to another team with more front-end experience. So if anybody has any questions, take a look at that issue. And then, let's start looking forward to Ruby 2.7 support. There are a lot of memory improvements, garbage collection and compaction improvements, in 2.7.
A
This epic is not very well broken down yet, because I have to imagine there's going to be a lot of research on our end on how to even get it up and testing. There were some comments that it's not that hard (famous last words) to get a 2.7 environment and just start testing against it. Stan has done some work there, but I would imagine we're going to spend a good amount of time talking about goals and efforts around what we want to test and how we want to test on 2.7.
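For reference, one of the 2.7 features mentioned above, manual heap compaction, is exposed as `GC.compact` (assuming a Ruby 2.7+ interpreter):

```ruby
# Ruby 2.7 adds manual heap compaction: GC.compact defragments the Ruby
# object heap (reducing fragmentation-driven memory growth) and returns
# a hash of compaction statistics.
stats = GC.compact
puts "compaction ran; #{stats.size} stat entries reported"
```

Measuring heap metrics via `GC.stat` before and after a compaction run would be one concrete way to structure the testing goals discussed here.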
A
For 13.5, it sounds like there are a couple more issues that will be created based on the discussions we've had today, and they will be added to 13.5. It looks like I kicked out a couple of issues from 13.5 that would have been long-standing carryovers; I just threw them in the backlog, yeah. So let's run through the issues real quick and see if there's anything we need to mark as deliverable.
A
So there were discussions at one point in time that, when we're doing the dynamic image resizing, we weren't hitting the right image sizes. Do we still need to go through and revisit the supported avatar sizes?
F
Yes, we do. I'm currently preparing the MR, because at the moment we are supporting upscaling to 400 pixels and we are supporting 200 pixels, which is not at all the right resize size, and we didn't take the most popular ones into consideration. So there is still a kind of open discussion, but I think we mostly agree to support those most popular sizes, and there is a separate issue that we should, in the end, consider all those sizes.
A
That one's already marked as deliverable. We already talked about caching options. I don't know if anybody caught this one during the last retro: the comments that mainly Matthias brought up about improving the feedback cycle. There was good conversation about it in our retro and in the engineering-wide retro, and they wanted us to follow up and give some more feedback on some improvements.
A
The main one was easier access to Sisense, but I'll go back through that issue and make sure that we've carried forward, or actually documented, some improvements there. So if anybody has any ideas they want to add to that, that's a follow-up item we have to deliver for the next big retro. The deduplication strategy: is that something we're going to deliver in this milestone?
F
This is a small MR; I just extracted it from the static image resizing, because it implements just another deduplication strategy, one that prevents the Sidekiq job from being scheduled and executed twice at the same time. So I guess we can benefit from it. It's not used anywhere yet; Camille suggested in the MR that we can probably deliver it sometime.
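The deduplication idea described above, dropping a job at enqueue time when an identical one is already queued, can be sketched with a plain in-memory queue. This mimics the concept only; `DedupQueue` is not GitLab's or Sidekiq's actual implementation (which keys a lock in Redis):

```ruby
# Illustrative sketch of an "until executing" deduplication strategy:
# a job is skipped at enqueue time if an identical one is already queued,
# and the dedup key is released once the job starts executing.
class DedupQueue
  def initialize
    @queue = []
    @pending = {} # dedup key => true while a matching job is queued
  end

  # Returns true if the job was scheduled, false if deduplicated.
  def push(job_key, args)
    return false if @pending[job_key] # duplicate: skip scheduling
    @pending[job_key] = true
    @queue << [job_key, args]
    true
  end

  def pop
    job_key, args = @queue.shift
    @pending.delete(job_key) # released: a new identical job may now enqueue
    [job_key, args]
  end
end
```

Releasing the key when execution starts (rather than when it finishes) is the trade-off that prevents double scheduling while still allowing a re-enqueue for work that arrives mid-execution.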
A
Right, the binary matching extension.
B
Yeah, this should happen this milestone, right? I mean, hopefully, yeah, it should. Then the next one I keep wondering about, because it's in our backlog: can we do anything about this, or who owns it? Because I don't feel qualified to judge...
B
...the quality. I think that needs front-end or designer eyes or something. With this kind of stuff, I keep thinking I would just ship what we have, show it to people, and ask them: is that good enough? Because if it's good enough, then there's probably no need for an issue.
B
Well, what I'm thinking is: we're kind of committed to shipping the Go scaler anyway, right? So if we ship it, and it will still be behind a feature flag for a while, I would rather... can we just tie it to that, and then...
B
...sizes, and kind of tell us if that's something that's good enough, or whether we need to revisit the... yeah, the final sizes.
F
Okay, those two... for those two I'm not sure; we will probably take a look again. I will think about it.
E
...because I already took another one. Also because, when I took this pipelines controller last Friday, it was trending, but today it's not trending at all on the last five days' view; there are other offenders today. So I started to streamline the criteria a little bit better for myself on what to pick, because it's shifting all the time depending on the data, I guess. So, yeah, one I would even say...
A
Once we zero in on the specific issues, we can mark them as deliverable later on. You can't mark epics as deliverable, so it's fine.
B
It probably has value if we find, once we roll it out to a significant portion of our users, that we then see something like a 20% CPU spike on average.
A
How about we just reword it to reflect the current state instead: the fact that it's going to be hard for us to specifically measure this, but we'll keep track of the overall CPU impact as we slowly roll this out, so that we keep it in mind and we're not forgetting about it. So I will update it to reflect the current state.
B
Okay, no, that sounds good. It's just that, like I said, I'll be gone next week, so this week I'll still be working on the image scaling things that are open. Let's just make sure we at least do a handover then, on Thursday or Friday.
A
Okay, we have gotten through everything on the agenda. Is there anything else we need to cover for today? I have quite a few follow-up items to take care of. Any other topics for this meeting?
C
Oh good, no, that sounds great, yeah. I didn't really know where else to turn to, because all I know is that this team has worked on it. So thanks for doing that; I'll sync up later and I'll be able to take it from there, yeah.