From YouTube: 2020 07 20 Memory Team Weekly
B
Camille, yes.
A
And then the feature flags, typeops and license: are those going to finish in the next two days?
B
I'm going to finish probably one of them. It's going to be license. Okay, but I don't think that I'm going to finish them all.
A
Right, let's move it over then. There's a follow-up documentation item for adjusting memory requirements.
C
Oh yeah, I have to admit I forgot about this, yeah. Let me review that. I think what I said last year is still correct, so I think we're kind of splitting hairs over the correct terminology to use, about whether 500 or more thousand should be "up to", which I'm easy with, by the way. So yeah, leave it with me, I'll get that progressed.
C
Yeah, that's fine as well. Before, with the tables, we just used the numbers in the table itself, but actually in the docs we do say "up to", so I'm happy for that to be the situation there. So I'll ping Axel and hopefully get it progressed this week for the remaster.
A
To finish that yet, okay. Looks like we're in a good spot for 13.2 now. I don't know if anybody was following along, but with the import/export handover, I couldn't transfer it for some reason. It just kept failing silently and doing nothing. So I submitted an issue to track down what was going on there, and in the meantime George did an export and import, so they have the pipelines running on their side. What do we want to do with our project?
E
It would be nice to keep the history. Can we just archive it, so that it's not in plain sight, but if there's anything we ever want to go back and reference, we still have the ability to look at it? That would be good. Okay, thank you, Craig, for doing that. Yeah.
A
I don't know if anybody read the description, but I would follow the instructions, hit transfer, and it would do nothing. The screen would just sit there. I wouldn't get an error or anything, so I'm not sure what happened there. We'll see if we get any traction on the bug, but that was a weird one. So I'll archive that later on.
E
Thanks, by the way, super useful, the stuff you did, I really love it. This is great. I mean, for us that's all we were looking for, to just test what we have, you know, so this...
E
It can wait for the data team to build all the stuff on top of it, so I think for now that's just good enough. And yeah, we just started to look at the submissions. There's some odd stuff in there, but I think mostly it makes sense; mostly it's actually as expected. I guess it's also nice to see that the first three buckets or so, the biggest ones in terms of hardware specs, seem to fall pretty clearly into our reference architecture.
E
So it looks so far like people are following it, which is nice. There are a couple of weird submissions with an insane number of CPUs, and while that's physically possible, the more likely answer is probably that we're over-counting somewhere. So I sent a couple of MRs to make sure that can't happen anymore, at least, since we can't really know whether it's happening: to know that, we would basically have to get a label dump of everything they report into Prometheus, which is not an option.
E
So that's all the MRs. Well, there's one I added that is not related to a bug necessarily, but one thing that's still a bit lacking is our coverage of GitHub services, and what I think would just be good to know is how lacking it is, because currently, unless something is explicitly on the service allowlist that we have, we just skip it entirely, for privacy reasons, because we don't want to accidentally include data...
E
...that is not even GitLab related. So I created an MR, though, to at least track these as failures, so that we know: here's a job that we encountered which we were not able to map to one of the allowed services that we do track. I think that might be a sweet spot, because then at least we know it's happening and we would know what the job names are, which is very technical config; there's no PII in there or anything, but we would still not track the actual data of these either.
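(A minimal sketch of the tracking idea described here, not the actual MR: only job names on an explicit allowlist are reported, and anything that cannot be mapped is counted by name as a failure instead of being silently skipped. All identifiers below are invented for illustration.)

package main

import "fmt"

// allowedServices is a stand-in for the explicit service allowlist.
var allowedServices = map[string]bool{
	"postgres": true,
	"redis":    true,
	"sidekiq":  true,
}

// classifyJobs returns per-service counts plus the job names that could not
// be mapped to any allowed service. Only the names of the unmapped jobs are
// recorded; their data is still excluded.
func classifyJobs(jobNames []string) (counts map[string]int, unmapped []string) {
	counts = make(map[string]int)
	for _, name := range jobNames {
		if allowedServices[name] {
			counts[name]++
		} else {
			unmapped = append(unmapped, name)
		}
	}
	return counts, unmapped
}

func main() {
	counts, unmapped := classifyJobs([]string{"postgres", "redis", "mystery-exporter"})
	fmt.Println("tracked:", counts)
	fmt.Println("unmapped job names (reported as failures):", unmapped)
}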
E
So I think that should be a good background to go about this. Yep. I know Ginyu worked on some cleanup stuff, like the old web server breakdown was broken; we can just remove this now. The new data is still empty, like these charts are still empty, because I got the timelines all mixed up: we will only be shipping this with 13.2, the fixed version. So that's why there's no data. Yeah, we'll have to be patient.
E
So I'm not totally on top of this issue, but there was some problem with automatically ingesting this. We talked to someone from the data team and they said the telemetry team are looking into a problem with the data pipeline. So until that is resolved, the data team was manually updating the data for this test dashboard once a week, so that was slow, at least for a while. I'm not sure; I think I'm subscribed to this issue.
E
I haven't got any updates, so it might still be an issue. But on top of this, the problem is also just upgrade delay, right? For all these new features and fixes that we will ship with 13.2, we need customers to upgrade to 13.2, right? So yeah, we might get a mixture of good and bad data for a while, but over the coming months it should get better.
D
Unfortunately, I think the analyst and data engineer who would typically work on this is out of office until mid-August. So, you know, the ask seems to be: can we self-serve?
D
So I wanted to ask the question, Craig. You seem pretty handy. I don't want to sign you up without your knowledge, but anyway.
E
Keep in mind that we don't have that data yet, right, because this will also be part of 13.2, so we will have to wait. I don't know how long it will take for the first data points to come in for request volume, but once we have this, yeah, I mean it would be great if, oh, Ginyu can... yeah exactly, Ginyu opened access requests.
E
So he can do the queries now as well, exactly. Even if it's just another table, or kind of a nice, smaller graph, that might just be good enough for now to verify it; it will maybe not be the final thing.
E
So I forgot to mention this: the usage ping itself only runs once a week as well. Exactly, and I think it's on a Sunday or some weekend day, yeah.
E
I don't know about the day, but it uses a jitter approach, so that we don't have this thundering herd of requests. I don't know how it's spread out exactly, but it will try to add some randomness to the submissions.
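(A minimal sketch of the jitter idea just described, not the actual usage ping scheduler: each instance sleeps for a random offset inside the reporting window so submissions do not all arrive at once. The 24-hour window and the function names are assumptions.)

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// submitUsagePing stands in for the real weekly report; the name is hypothetical.
func submitUsagePing() {
	fmt.Println("usage ping submitted at", time.Now().Format(time.RFC3339))
}

func main() {
	// Instead of every instance reporting at the same moment (a thundering
	// herd), sleep for a random offset inside the window first.
	jitter := time.Duration(rand.Int63n(int64(24 * time.Hour)))
	fmt.Println("sleeping for", jitter, "before submitting")
	time.Sleep(jitter)
	submitUsagePing()
}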
D
So we don't DDoS ourselves on Sunday. Every day, that makes sense, but yeah, cool. So I think if you also want to reply there, then we can maybe set a due date, so we don't forget about it, like next week, and you can check in and see what the data looks like, if it's even there. That would be great. I'll set a date, I'll set one now, just so you don't forget about it. But that sounds good.
D
It might require some dbt work, I don't know, to get the new fields over into the proper tables. But you all seem to be able to do queries just fine on that kind of raw usage ping data, which I wasn't sure was possible. So maybe we don't need dbt work. I don't know.
E
Do you actually think it would... I don't know if this is against company policy, but I was wondering if we could have a team account, even for Sisense, because my understanding was that the main reason why it's kind of tricky to get access to this to create dashboards is purely for cost reasons, because it's like per-seat licenses or something.
E
So that's why we have to go through an access request. I'm wondering, do you think it would be possible at all to just have a team account, so that we could all do it?
F
Sizing, yeah. We had a sync meeting, consulted with Matthias and Camille, and I think the idea we came to is that it's quite complex and overwhelming to try to instantly insert the image proxy into this whole story. So we came up with the idea to try a quite small, quite simple PoC, using the tools that are already available in Go, and just write a simple cmd that we use as a separate subprogram inside Workhorse to do the resizing.
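(A minimal sketch of the kind of standalone resizing command described here, using the golang.org/x/image/draw package; the flag name, scaling filter, and PNG output are assumptions, not the actual Workhorse implementation.)

// resize.go: read an image on stdin, write a scaled PNG on stdout.
package main

import (
	"flag"
	"image"
	_ "image/jpeg" // register JPEG decoding
	"image/png"
	"log"
	"os"

	"golang.org/x/image/draw"
)

func main() {
	width := flag.Int("width", 64, "target width in pixels (height keeps aspect ratio)")
	flag.Parse()

	src, _, err := image.Decode(os.Stdin)
	if err != nil {
		log.Fatalf("decode: %v", err)
	}

	// Preserve the aspect ratio when computing the target height.
	b := src.Bounds()
	height := b.Dy() * *width / b.Dx()
	dst := image.NewRGBA(image.Rect(0, 0, *width, height))

	// ApproxBiLinear is a cheap scaler available in the x/image module.
	draw.ApproxBiLinear.Scale(dst, dst.Bounds(), src, b, draw.Over, nil)

	if err := png.Encode(os.Stdout, dst); err != nil {
		log.Fatalf("encode: %v", err)
	}
}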
F
So yeah, I think our plan is to have sync meetings with Camille, try to get our hands dirty with Go as soon as possible, and maybe postpone the image proxy story a bit, I mean the particular tool itself. So yeah, that was my summary.
E
No, sounds good. Thank you.
D
One thing I have to add here, I just commented on the issue: apparently an image proxy is already being used on github.com?
D
I
wasn't
aware
of
this,
but
apparently
using
camo
as
an
image
proxy
for
security
reasons.
I
I
linked
in
the
issue
yar.
Just
let
me
know
yesterday
I
was
asking
about
that.
We
need,
like
a
network
flow
diagram,
so
you
understand
like
how
all
this
stuff
works,
with
like
cloud
flare
and
all
the
interactions
and
to
the
point
it
was
like.
Well,
you
know
we
have
something
else
in
here,
too
yeah
I'll.
Oh
it's
I
linked
it
in
the
dynamic
here.
B
So maybe where it kind of falls down for us is specifying, in a way, anything generic, like proxying data from an external service. It could be the image proxy, it could be any other service that would provide image resizing, but with this external service, that is the tricky part with this particular one: you would not be able to resize local files. You would only be able to resize the files that you could access via object storage or something like that.
A
All right, let's look into that and move on for now. So, Nicola, you're mostly working on this?
G
Yeah, actually I didn't work a lot on that, because I was wrapping up those massive transactions and some other issues. I updated the epic with the potential candidates for this investigation, so I plan to create other sub-items on this; there is only one existing issue.
G
Yeah, I posted this because I saw that Josh closed the issue, I guess on Friday. This optimization is currently behind the feature flag, so yeah, Josh, I think we should probably remove it from the release post, because we didn't enable this feature flag yet, because we wanted to test it first.
C
Yeah, it's a slightly different process for performance issues, and we're still learning as we go along, but the idea is that we need to confirm that the performance is improved, or if it's behind a feature flag, then we need to test it with the flag on, and then we tend to close the issue when the flag is removed and the fix is actually in the code proper. But when it comes to these things, we'll learn quickly.
C
That,
and
there
is
you
know,
some
performance
issues
are
pretty
complicated
and
they
have
you
need
they
need
to
be
tackled
in
various
ways.
So
we
made
it
so
that
you
can
close
issues.
If
you've
done,
you
made
some
progress
and
then
we
can
make
a
new
issue
for
the
next
tier.
That
kind
of
thing.
So
that's
what
needs
to
happen
here,
both
source
of
rendered
view.
C
I'll attempt to create two issues and try to make it as clear as possible that, even though it's the same page, there are two different parts of it, at least to start with, that are both performing badly, and badly in different ways.
C
So we do need two issues to keep tracking both, but I'm more than flexible about how we do that. If you want to raise an epic and then issues for each one, or anything else, that's fine; it obviously may not be with the memory team, it might be a different team. Again, that's fine, but we on the quality and performance side need issues to keep track of and keep on top of those pages, so that we know they're still not performing up to the targets that we ultimately want.
C
So that's great! Thank you so much for the work. So we'll create a new issue for that, and for source I'll either reopen or create an issue that makes it clear what it's about, and have it assigned to the right team for whatever the next work would be.
C
For the same page, yeah.
C
Yeah, in GitHub they apply quite a few different limits in places. It still loads, though; it does. It does, but hey, we know that we're not king of performance, not yet, but we're making some good progress.
C
I've gone by our competitor comparison, which they actually run weekly, to my knowledge. We're actually second, though, on that test. Still bad, but we're second; our performance score is 20 according to Google's Lighthouse, but that's still seconds, and GitHub is actually worse. But that's a different file, that's the Linux MAINTAINERS file, which is equally a pretty nasty file. Yeah, I guess the changelog.
C
I mean, that is one of the other stories about performance as well that people don't talk about, or it's not the first discussion: you know, the competitor Sourcehut, they're lightning fast, but it's because they don't do much. They're a very, very basic git repo where they'll actually show you the file, but that's it, there's nothing else happening, and the pages are very basic.
C
So that's the other part as well. Yeah, they'll show you the file fast, but all the other stuff that's happening just isn't happening, so that's part of it as well. Like, that page even says here it's on Sourcehut.
C
It's only two requests, whereas ours says 23, and GitHub 231, so you know, every other competitor is doing more. So it's always a bit of a game sometimes.
C
But yeah, we're seeing some good improvements on these pages. The rendered view is the right one to tackle first; source we definitely want to see improved as well, because we already have a limit for that page, and files over a megabyte in size don't get shown. We might want to look at other limits, such as commit history or other stuff like that. But for now we'll get the issues reopened or raised, and we'll just continue.
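(An illustration of the rendering limit mentioned above, with the one-megabyte threshold taken from the conversation and everything else invented: blobs over the limit are simply not rendered.)

package main

import "fmt"

const maxRenderedBlobSize = 1 << 20 // 1 MiB: files over this are not shown

// shouldRender reports whether a blob of the given size gets rendered in the
// source view; larger blobs would get a "too large" notice instead.
func shouldRender(sizeBytes int64) bool {
	return sizeBytes <= maxRenderedBlobSize
}

func main() {
	fmt.Println(shouldRender(512 * 1024)) // true
	fmt.Println(shouldRender(3 << 20))    // false: over a megabyte, not shown
}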
A
All right, we are at the end of the agenda. I'll go through and summarize the follow-up items, but is there anything else we need to cover for today?