From YouTube: 2020 07 06 Memory Team Meeting
B
We had a bunch of great discussions with Tim and Matias on Friday, and many groups also contributed to the discussion, and I still want us to be on the same page about this issue. As far as I understand, the stakeholders, I mean Tim mostly, pretty much want to go with dynamic resizing, not considering static at all, because we want to build one universal tool that covers future content images as well. That's how I feel too, but Matias raised a great concern: why are we going in this direction?
B
So my idea is just to make a decision, whether we go with the resizer or not, because it's important to move forward and not waste time on discussions. The next step is to try to build a local prototype. I'll probably need to tinker with Workhorse a bit to inject this service as a proxy; in our case it will sit behind Workhorse, as far as I understand our infrastructure.
B
I pretty much worked with static only before, so I didn't have any prior experience with dynamic, but I feel like we want dynamic because we want to cover all of the issues with one solution. So I feel this issue is not about making some greedy, short-term, localized improvement to the current situation, but about building the dynamic service, because we need a resizing service, especially for content. That was my understanding of the motivation.
C
Yeah, I'm curious. My own two cents: I think what's happening here, obviously, is that we should get some more data on what the value is of additional image types beyond avatars. Avatars seem to be about 70% of the requests, I think. The remaining question, I think, is what percentage the rest makes up.
B
Yeah, that's the right question. But do you know whom I should ask about statistics on our image data volumes? I could get the request rate from Kibana, as Camille suggested, but I don't know where to look to find how much data we have. We'd be talking about something like S3 statistics, right, or whatever storage we use for raw data. Do you folks track that?
D
I did not look at it like that, but I think there is a very interesting characteristic of our image requests. With Cloudflare, every request that goes to GitLab needs to be authenticated. It means that Cloudflare likely requests the same avatar multiple times, because it cannot cache that avatar across many clients; it can only cache the avatar, that file, for the single session that is currently logged in. So technically I'm kind of curious.
D
If
we
could
sorry,
it's
not
recover
under
the
authenticates
cloud,
file
needs
to
pass
like
authentication
token,
for
it
up
to
authenticate
its
request,
its
individual
different
requests,
so
I'm
kind
of
wondering
like
how
many
unique
requests,
for
example,
for
avatars.
We
have-
you
know,
maybe
like
in
some
window,
because
it
could
give
us
like
indication
if
currently
I
believe
it
was
like.
D
120
203
was
per
second,
but
is
very
likely
I
identify
today
that
it's
very
likely
that
we
request
the
same
fight
over
and
over
because
of
the
different
user
accessing
that
file,
and
we
need
to
authenticate
that
this
user.
We
need
out
in
triplicate
this
user
to
authorize
this
operation.
So
it
can
give
us
like
some
good
information
of
how
efficient
would
be
cashing.
Is
it
like
0%
or
is
it
like
95%?
The
caching
would
be
if
we
would
consider
dynamic
but
dynamically
some
some
cash.
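The measurement described here, comparing total requests against unique files to bound the possible cache hit rate, can be sketched as a quick log analysis. This is only an illustration, not GitLab code; the log paths and the helper name are invented for the example.

```python
from collections import Counter

def potential_hit_ratio(requested_paths):
    """Upper bound on the cache hit ratio for a stream of requests.

    Every request after the first one for a given path could in
    principle be served from a shared cache, so the best case is
    (total - unique) / total.
    """
    counts = Counter(requested_paths)
    total = sum(counts.values())
    if total == 0:
        return 0.0
    return (total - len(counts)) / total

# Hypothetical request log: one avatar fetched by many sessions.
log = ["/avatar/1"] * 9 + ["/avatar/2", "/upload/x", "/upload/y"]
print(round(potential_hit_ratio(log), 2))  # 8 of 12 requests repeat -> 0.67
```

Applied to real Kibana data, a ratio near 0% would mean caching buys little, while one near 95% would make a caching layer very attractive.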
B
That's interesting, thanks. I also wanted to ask: we should count only 200 responses, right? We shouldn't count 304-type responses, because we only care about newly served images which are not in the browser cache, when we talk about all the statistics in this presentation.
D
Yes,
like,
like
200,
is
actual
data
being
transferred
because
yeah
I,
not
just
three
three
excerpts,
usually
I,
think
is
because
if
you
have,
if
not
much
on
the
HTTP,
where
you
can
perform
attack
matching
but
I,
don't
really
expect
it
like
the
next
attack
matching
being
useful,
so
I
mean
it
would
be
interesting
to
see
like
the
breakdown
of
200
vs..
We
have
v-0
fork,
I
I'm,.
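The 200 versus 304 distinction comes from HTTP conditional requests: the server tags a response with an ETag, and a client revalidating with If-None-Match gets a bodiless 304 when the content is unchanged. A minimal sketch of that exchange (toy handler, not GitLab's actual code):

```python
import hashlib

def respond(body, if_none_match):
    """Toy conditional GET handler: returns (status, payload, etag).

    The ETag is derived from the content; when the client's
    If-None-Match value matches it, the server answers 304 with no
    body, so only 200 responses represent image bytes transferred.
    """
    etag = '"%s"' % hashlib.sha1(body).hexdigest()
    if if_none_match == etag:
        return 304, b"", etag
    return 200, body, etag

avatar = b"...png bytes..."
status1, payload1, etag = respond(avatar, None)  # first fetch: 200 with body
status2, payload2, _ = respond(avatar, etag)     # revalidation: 304, empty body
print(status1, status2)  # 200 304
```

This is why, for bandwidth statistics, only the 200s matter: the 304s carry headers but no image data.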
D
Image resizing is not a cheap operation if you want a good-quality image. You already need to do some kind of bilinear filtering, or trilinear filtering, and that's not cheap. Doing one resize may be fine, but it's going to increase latency, and the resizing is not going to be instantaneous: maybe some resizes are going to be five milliseconds, but some of them could be 100 milliseconds.
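The quality-versus-cost tradeoff mentioned here can be illustrated with a toy downscaler: naive sampling reads one pixel per output pixel, while even the simplest proper filter (a box average, cheaper still than bilinear or Lanczos) must touch every source pixel. This is a pure-Python sketch on grayscale lists, not a real resizer.

```python
def downscale_nearest(pixels, factor):
    """Cheap downscale: keep every factor-th pixel (fast, low quality)."""
    return [row[::factor] for row in pixels[::factor]]

def downscale_box(pixels, factor):
    """Box-filter downscale: average each factor-by-factor block.

    Averaging reads every source pixel, which is why quality filters
    (box, bilinear, Lanczos, ...) cost more than naive sampling.
    """
    h, w = len(pixels), len(pixels[0])
    out = []
    for y in range(0, h - h % factor, factor):
        row = []
        for x in range(0, w - w % factor, factor):
            block = [pixels[y + dy][x + dx]
                     for dy in range(factor) for dx in range(factor)]
            row.append(sum(block) // len(block))
        out.append(row)
    return out

img = [[0, 0, 100, 100],
       [0, 0, 100, 100],
       [50, 50, 200, 200],
       [50, 50, 200, 200]]
print(downscale_nearest(img, 2))  # [[0, 100], [50, 200]]
print(downscale_box(img, 2))      # [[0, 100], [50, 200]]
```

On this uniform test image both methods agree, but on real photos the box filter avoids the aliasing that sampling produces, at the price of reading factor-squared more pixels per output pixel.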
B
One more point from me, and then it would probably require some comment from Josh. I want to understand what the parent initiative for this task is. Is the initiative one where going for static is easier and good enough, or does the initiative require dynamic capabilities like Tim described, where you upload some huge file and get it back compressed? So I would like to understand what our underlying motive behind this task is.
B
Look, then maybe going with caching only for avatars is definitely the best solution here, because it's fast: we could easily implement it in the models, because we have CarrierWave and it's already shipped, and it's super easy to do. Maybe we'll get some huge results really fast, and then we could go for dynamic over a couple of months, because it would take some time. So yeah, if we could refine this now, it would be really quick.
C
Making GitLab faster from a user's perspective is the goal, which gets to Camille's point about reducing the page load time. I 100% agree with that. The question has become whether we do a sort of short-term solution, which might have higher ROI but might not be as extensible as a general solution, or whether we fully invest in a much larger effort, at least likely larger, for a general-purpose dynamic solution.
C
We
have
the
evidence
to
justify
spending
a
significant
chunk
of
time
on
diamond
resizing
without
looking
in
more
detail
at
like
what
it
was
actually
being
requested
and
how
often
I
think
feeling
avatars
are
relatively
common.
They
make
up
a
think
in
the
previous
moments.
Seventy
percent
of
the
overall
requests,
but
you
know
random
image
uploads
on
random
issues.
That's
that's!
Probably
a
lot
more
hit
rate
of
a
cache
than
anything
else.
A
There,
like
a
Lexi's
comment
earlier
about
breaking
down
the
issue,
because
creating
a
dynamic
image
resizing
universal
solution
is
not
an
MVC
right.
It's
not
it's,
not
an
iterative
approach,
so
maybe
we
break
it
down
to
attacking
the
biggest
problem
it
sounds
like
avatars
is
the
biggest
problem
right
now.
How
do
we
make
that
faster?
How
do
we
produce
a
number
of
calls
so?
A
But
for
now,
let's
take
the
conversation
to
the
issue,
so
we
can
move
on
to
other
items
in
this
fantastic
conversation,
but
I
think
breaking
this
down
to
solving
an
incremental
problem,
rather
than
trying
to
solve
the
entire
problem
of
image
resizing
and
making
our
kill
out
faster.
Let's
approach
the
images,
it's
probably
not
the
best
approach,
so
we'll
take
it
to
the
issue.
Nicolai
your
app
for
improved
performance
of
show
gaze
on
the
Bob
control.
F
Regarding the blob controller: we have a fix that memoizes the nodes we are iterating over, so instead of parsing every time we now reuse them. But since the logic behind the blob controller, this reference filter, is used in a lot of places, not just in the blob controller, as Henrik pointed out (it's used in file preview, issue preview, the comments and everything else), this fix can improve a lot of things, but if something is wrong it affects all of them too.
F
But again, if something were broken: we are, for example, remembering the comments and issues in the database so we can load them a lot faster, so if something went wrong we might need to not just revert the changes but also clean the database, which is harder. So I introduced a feature flag behind this, and we needed to find solutions.
F
So
in
case
that
feature
flag
is
not
enabled
that
everything
works
as
before,
and
even
if
some
of
those
filters
are
used
outside
of
the
pipeline
that
it's
fear
works
like
that.
Nothing
is
broken
because
we
don't
need.
We
don't
have
any
benefit
of
remembering
or
cashing
those
nodes
and
updating
them
if
they're.
F
Regarding the GitHub importer transactions: a lot of people have reviewed this issue, and what is left is for Camille to take a final review, and then we can decide what we are going to do with it. It's a lot of refactoring, and it's a slightly risky refactor, because it touches almost everything regarding the transactions. After the review we will discuss it and see.
F
I
promise
on
our
last
Monday
meeting
that
I
will
split
this
issue
and
that
we
can
close
this
one
and
leave
just
the
investigation
part,
because
we
now
have
we
fixed
the
data.
So
we
have
like
we
can
detect
the
number
of
cache
SQL
queries,
but
we
need
a
separate
issue
for
investigation,
but
I
didn't
still
create
those
tasks
today.
G
Yeah, so, as it was called out, we found what looked to be a memory bug in Sidekiq when importing projects. It's kind of a surprising one, kind of a weird one. I just want to call it out and maybe have a quick discussion about it, because we do have the team's test set up for this, and that correctly started failing as well, and that was raised and investigated by the import team, I understand, but I think it was maybe missed.
G
That
was
a
memory
issue,
because
it's
running
against
staging
and
it's
maybe
not
so
easy
to
get
stages.
Metrics,
where
a
sudden
on
performance
test
environments
we
could
I
was
like
you
know
what
was
happening
and
looking
at
there
as
a
stuff
was
clear.
So
maybe
as
somewhat
of
a
memory
killer
and
that's
example,
it
was
so
I'm
I,
don't
know
it's
weird
we're
doing
everything
right.
We
are
testing
out
a
you
know
in
an
automated
fashion,
the
pie
plates
that
start
failing.
G
It looks like it's from importing this specific project, which is the gitlab-foss project. It's not the largest project in the world; it's a medium size, I'd say. On 13.0 I saw only about 1.2 gigabytes stable, which is what a Sidekiq process took up, and then on 13.1 that climbs up to 2.9 at max. Then it drops a bit; I think that drop is natural as part of the import process.
G
That's fair enough. It was hard to say what the cause is, because the import itself also stresses memory in Sidekiq. So no, we don't know what the cause actually is. It's probably safe to assume it is a project import issue more than a Sidekiq memory issue, since we're not seeing any other Sidekiq memory weirdness. So that's a fair shout.
G
Off the top of my head I think it's between 700 megabytes and a gigabyte, but it depends on the context, whether you're talking about at rest or under load. Obviously it'll fluctuate a little bit, but I think up to a gigabyte is a fairly realistic way to describe it, because I thought that's the limit we put in the memory killer as well. Sorry, thank you.
G
They say it's 650 megabytes, sorry, for the memory killer. I think I said it was one gigabyte at one point, so I'm sorry: 650 megabytes. Did we get that from customer data, or something? I'm sure there was a genuine reduction of memory comparing before and after; maybe it was from some of our customers.
G
I'm trying to find a way to start promoting and codifying a way for teams to do performance testing higher up the dev chain. It would be easy for browser performance testing, which excites me, because you don't need a heavy server to do that, though that's obviously specific to client performance; server performance testing is different and more difficult. But I guess that shouldn't stop us. So I've just written up a proposal, and everyone's welcome to read and comment on it.
G
They'll
treat
goal
being
that
we
have
more
deltas,
adding
in
a
performance
test
the
subscription
or
mobile
test,
depending
on
the
component
they're
testing
I'm.
Only
talking
about
unit
and
smoke
for
assessing
I
should
qualify,
but
I
told
me
asking
for
the
big:
like
will?
Quality
I'll
keep
doing
the
big
kind
of
integration
tests,
but
if
they've
teams
can
do
like
smoking
unit
test
to
give
them
at
least
a
more
immediate
feedback
of
hey,
this
thing's
actually
dramatically
slower
it
shouldn't
get
released.
Essentially
she
can
get
committed
until
this
is
addressed.
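The "can't get merged until it's addressed" gate described above can be sketched as a simple budget check against a baseline timing. The threshold, numbers, and function name are all invented for illustration; a real smoke-level performance test would feed in measured durations.

```python
def check_regression(measured_ms, baseline_ms, allowed_ratio=1.2):
    """Return (ok, message); fail when a timing exceeds its baseline
    by more than the allowed ratio. This is the kind of gate a
    smoke-level performance test could apply in a merge pipeline."""
    limit = baseline_ms * allowed_ratio
    if measured_ms > limit:
        return False, "regression: %.0fms exceeds %.0fms budget" % (measured_ms, limit)
    return True, "ok: %.0fms within %.0fms budget" % (measured_ms, limit)

print(check_regression(90, 100)[0])   # True: within the 120ms budget
print(check_regression(150, 100)[0])  # False: 50% slower than baseline
```

Wiring such a check into CI as an assertion is what turns "this thing's dramatically slower" from a post-release surprise into immediate feedback on the merge request.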