From YouTube: 2020 07 13 Memory Team Weekly
A: All right, good Monday memory team meeting. Let's jump right into it. We have four days left in the timeline, so we're near the end. After we're done walking through these, we'll go through what's unassigned and talk about anything that needs to be carried over to the next milestone. So, jumping right in with telemetry: Tim, you did a nice summary; is there anything you wanted to add? And Chios, I know you're back from a week off and catching up, but is there anything you want to add to what Tim wrote out for the telemetry topic?
B: Not much. I actually just forgot about the dashboard; I'm looking at it now. It's definitely surprising that there are responses where the node size is bigger than one. That can only be a bug, because we don't support that yet. That's curious; I'll have to look at it more, then. As for the open MR, which says it's in maintainer review: I just spotted a small issue, not a big deal, but I'm fixing that right now. It's actually already assigned to the maintainer, but it's a small change.
B: So we don't support multi-node, not yet, so none of this is expected to work if customers run more than one node. I would expect that the vast majority is in this one-node bucket. I'm even surprised that there are still a bunch that are not; there's one that, if I'm reading this right, says it's a seven-node system. That's very strange. I mean, it could be because this data is really late, right? A lot of this might be a couple of weeks old.
B
Stuff,
like
you,
bind
one
of
the
gate
lab
services
to
like
a
like
a
broadcast
address,
something
like
a
listen
on
all
the
network
interfaces.
We
might
end
up
mapping
this
to
a
separate
node,
so
this
is
kind
of
a
bug
or
like
a
shortcoming
we
were
aware
of,
but
it
wasn't
targeted
for
the
NBC.
So
we
did
it
as
part
of
13.2,
so
yeah.
B
Still
actually,
but
I
suspect
it
might
be
these
things
where
stuff
is
actually
running
on
the
same
node,
but
they
we
don't
recognize
them
as
much
because
maybe
they
like
use
a
different
way
of
identifying
local
host.
You
know,
because
you
could
use
an
IP
address
and
you
know
it
could
be
an
ipv6
versus
ipv4
address.
So
that's
all
that
being
addressed
in
this
much
request
down
there.
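As a rough illustration of the localhost-identification issue just described (a hedged sketch, not GitLab's actual code; the function names are hypothetical): two services on the same machine may report it as `localhost`, `127.0.0.1`, or `::1`, so counting distinct host strings over-counts nodes unless the identifiers are normalized first.

```python
import ipaddress

LOOPBACK_NAMES = {"localhost", "localhost.localdomain"}

def normalize_host(host: str) -> str:
    """Collapse the different spellings of the local machine into one token."""
    if host in LOOPBACK_NAMES:
        return "loopback"
    try:
        ip = ipaddress.ip_address(host)
    except ValueError:
        return host  # an ordinary hostname: keep it as-is
    if ip.is_loopback:  # covers both 127.0.0.1 and ::1
        return "loopback"
    if isinstance(ip, ipaddress.IPv6Address) and ip.ipv4_mapped:
        return str(ip.ipv4_mapped)  # ::ffff:10.0.0.5 -> 10.0.0.5
    return str(ip)

def node_count(reported_hosts) -> int:
    """Count distinct nodes after normalizing how each service names its host."""
    return len({normalize_host(h) for h in reported_hosts})
```

Without the normalization, a single-node install whose services report `localhost`, `127.0.0.1`, and `::1` would show up as a three-node system, which matches the kind of inflated node counts discussed here.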
B: You know what's funny: for the import action, that was the main thing we identified as being the major CPU time sink for imports as well. So maybe, as a nice side effect, if we fix that, we might see better import performance too. That would be interesting to look at.
D
So
it
just
shows
you
the
raw
file,
but
with
some
extra
parsing
you
know
shows
you're
in
the
the
theme
in
the
window
that
you
have
set
for
your
new,
impersonal
kind
of
settings
and
HBase
and
ultimately
just
the
raw
file
there's
the
rendered
view
which
is
from
markdown.
So
when
you
click
on
the
martin
file,
you'll
have
the
actual
render
view,
and
if
your
files
have
the
same,
so
we
actually
spent
those
so
she's
a
the
testing
it
because,
while
they
are
shared,
they
are
obviously
still
extra
bits
with
the
rendered
view.
D
So,
there's
another
issue
for
the
rendered
view,
which
is
currently
last
ejects,
was
much
higher
per
mark,
a
kind
of
a
large
Martin
file
which
gone
by
the
issues
with
twenty
seconds,
but
I
wonder
today:
where
is
it?
It
is
a
16
second
16
a
second.
So
the
improvement
here
should
improve
that
issue,
but
not
they.
She
does
nice
to
the
dog.
D: Other than just markdown: I think I once saw a 3D file getting rendered as an actual 3D view in the browser, which is pretty crazy when you think about it; a cool feature, mind you. So I've written an issue for us to eventually explore other files that could fall into that kind of bracket. But again, I don't expect that to be part of this team's work unless the team is asked again to help in the same area. But yeah, that MR should improve the issue I linked to in the doc.
E: Yes, I started investigating those endpoints where we have a big number of cache calls, and I identified several endpoints for requests and several for Sidekiq workers, and I started investigating note creation. So I promoted this issue to an epic, the first one, and...
E: I started working from the first sub-issue, which is related to creating notes, and this again touches those Banzai filters that we use in GitLab. So I will push an MR, because I already fixed some of them, and we will see how it goes. But I identified a lot of places inside the Banzai filters doing repeated queries.
G: I'm kind of curious to understand whether it's right; whether this big number of cache calls is an ugly pattern or a normal pattern. So I'm just looking for that answer: if we have these 23,000 calls, is it really bad that we have 23,000, or is that expected?
F: That's for image resizing; it's still in research, I would say. We have the next sync meeting on Wednesday, and my idea is to pull together data on throughput for our sizes, for example with some probes, since there are already tests and benchmarks comparing it to other tools. So it would be nice to have, I don't know, some sort of comparison point from which we could decide.
F: A lot of services would be affected in the architecture, but if we could somehow discreetly inject it in Workhorse, maybe it wouldn't be that complex in terms of infrastructure. As for the approach itself, from what we discussed so far and from what the team wrote, I'm fairly sure that dynamic resizing is what stakeholders would like us to do sooner or later. So I have a feeling that what we want, longer term, is a sort of universal solution.
B: I know, I just meant to say: from past experience, it seems like the typical trajectory is that it's usually initially easier to do pre-generated images, where you just generate them as soon as a user uploads an asset. But then, as soon as volume adds up, it gets more and more expensive, and the more sizes you add, the more expensive it is to maintain this pool of images. But I don't really know yet; I mean, we have some data.
F: Yeah, as Tim mentioned, it's also quite tricky to statically resize content images, because with a lot of content images, some are much bigger than others. So it wouldn't be much of a gain, and we would still need to store a lot of data, because they would have a bigger size. So I feel like, yeah, we're more or less moving towards dynamic resizing for this epic over static. So, to give some idea.
F: It's not only image data, right. I have an initial breakdown in this issue: if you open the issue on CDN and hit rate, for example, and scroll a bit, we had a table. One of them is a bit above; scroll up, yeah. So this is broken down, but we didn't distinguish further; it's already difficult to determine, and we can't distinguish whether it's a file upload or how many images are spread across these files and folders.
C: So my concern with the dynamic resizing is that it kind of feels like the front-end team just wants the easiest solution for them, which is great if we can provide it; but we also have eight different front-end avatar sizes, and there's also a desire for other random resizing, and I'm worried that our cache hit rate will be really low, or rather that we'd have to make the cache very large to try and handle these things.
C: You know, I wonder how frequently these images are actually used, other than avatars. It seems like avatars are by far and away the most commonly requested; and then, you know, how often do you get a custom uploaded image? I think the common ones might be README files and things like that on project home pages; those are probably pretty high-value and frequently hit. But a random issue upload? Is it that big of a deal if you pull down the whole thing and resize it in your browser?
B: I think there might also be a middle ground. There could be a solution that is kind of a mixture of these, where it's not generated on the fly, perhaps, but done lazily, so it's always lagging a bit. Let's say you want to pull down an avatar that is supposed to be small, so it needs to be fast, but on the first hit we would find we don't have that size yet. So we still serve the original, as we do now, but we know there was a request for it, so we asynchronously let a Sidekiq worker generate a smaller version of it, or maybe even all the sizes we know might get requested. It would be lazy, so that the next time the same request comes in for that small dimension, we do have it, right? So there's kind of a middle ground, because you don't do it on the fly (you don't need to wait for that image to be generated in-band as part of the request/response cycle), but you also don't need to run these massive batch jobs where you'd have to translate, like, terabytes of potential image data, which might take a week or so. That could just be another option, yeah.
F: Or we could set a pretty aggressive timeout on dynamic resizing. We would have this service behind a proxy with a super-aggressive timeout: if it didn't respond in time, we'd serve the original image, then let it finish the job and cache the result, so the next time we'd serve from the cache, for example. Because I'm sure it would be quick for avatar-sized requests; what, a couple of milliseconds, maybe?
B: It's a really good question, because I think a lot of these questions hinge on this whole: what should we do? Is the thing we want to do for .com necessary for self-managed, and kind of vice versa, or should it work for both cases? Because for .com I would just say: let the CDN handle the caching, and then, if it's not in the CDN cache, we go to object storage. But for self-managed...
D: Generally, I think from what we see, it's usually a proxy. Browsers start to get funny when you're trying to load assets directly from different URLs than the one you're on; they'll generally block it, or show other kinds of weird behavior like that. I think I've seen that with a different object storage provider; it was confusing, because there wasn't any direct config to do that. But generally it's a proxy.
G: It's even more complex, because which is cheaper: to cache, or to send them on to the big storage, or to access the files directly? I honestly don't know about the pricing; we'll probably have to run an experiment and check the impact on the egress cost. Yes.
G: Probably what you want to do is transfer the least amount of data from Google Cloud, in any form, to CloudFlare, or to the clients. Because if you do caching internally, and proxy internally to the object storage, that traffic is free; you're only paying for the requests, really, but the traffic is free. So resizing on the CDN side has the impact that you still pay for the egress traffic from the cloud, which is, well, egress.
G: Yeah, so I'm not sure which is more expensive. Basically, we also looked at the expected hit ratio, and over a seven-day period we're seeing that GitLab.com could have a hit ratio, with dynamic resizing, of something like 85 to 90 percent on the images, without changing any sizes. So we were looking at about a 90 percent hit ratio on the current sizes, that is, supporting all the sizes that are requested of us by the application.
G: We would be resizing about 11.45 million images from the seven-day period, and it kind of flattens over time: for a 60-minute window we had around 50 percent or less, for 24 hours we had about 25 or 20, and over the longer period we're getting to something like 10 percent or so on that ratio. And I'm talking about GitLab here, because if you look at the CDN in addition, it had about a 50 percent hit ratio.
G: So you could probably look at it as: we would be saving, let's say, maybe nine million times 48 kilobytes, because that's the average size, and the resized avatar will probably be about one kilobyte in size, I would expect. So we'd be saving about 47 kilobytes times 9 million files; that would be the traffic savings, I guess.
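As a back-of-the-envelope check of that estimate (using only the figures quoted here: roughly 9 million files per seven-day window, a 48 KB average original, and an assumed ~1 KB resized avatar):

```python
# Rough traffic-savings arithmetic from the figures quoted above.
files = 9_000_000          # resizable images per seven-day window
avg_kb = 48                # average original size, in kilobytes
resized_kb = 1             # assumed size of a resized avatar

saved_kb = (avg_kb - resized_kb) * files
saved_gb = saved_kb / 1024 / 1024
print(f"~{saved_gb:.0f} GB of egress saved per week")  # ~403 GB
```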
G: Also, another factor is that each request needs to be authenticated and authorized; basically, this is probably why you have such a poor hit ratio on the CDN for the avatars. And this is an interesting aspect: maybe there is some optimization somewhere. If the avatars are publicly available, we could be more aggressive with the caching, because the majority of the avatars are publicly available, so we don't need to authorize them, and we could cache requests for them freely.
G: Probably, from what I understand of how it works, CloudFlare creates a unique... I mean, this is why we have some poor hit rates on CloudFlare: it ties the cache entry to the session cookie or something like that. So basically you may receive the file from the cache when the same session cookie is being used, but it's not shared across all users, because it needs to be authorized. That's interesting.
B: Yeah, I was actually wondering whether it's even worth having a separate epic, because, correct me if I'm wrong, my understanding was that anything related to the CDN will never apply to self-managed, right? So we can't really rely on anything the CDN does for the final solution. Unless... yeah.
C: But yeah, I think regardless: we pay for CloudFlare, so let's try and get usage out of it so we can optimize our .com experience. We're paying for a service, and we put it in for a reason; let's try and get the highest ROI we can out of it. But it's a good point that we don't want to depend upon the CDN for the overall epic here, or issue here, which is the image resizing, yeah.
A: And sorry, going back to the image resizing big issue: there's a follow-up meeting on Wednesday to discuss that further, so we'll leave it there. Now, for unassigned issues for 13.2, just going down from the top: Nikolas, this looks like something you're working on. Is this going to make it for 13.2, or should we kick it out?
B: This one, yeah; it was kind of one of these colorful basket issues of multiple things, so I fixed... Can you scroll down a little bit? Right, I actually fixed a lot of these four things. I'm almost inclined to close this; it ended up a bit of a grab bag of things that we found were, or might be becoming, issues. Most of them relate to multi-node, but some of them actually turned out to be present earlier.