From YouTube: 2020 06 08 Memory Team Weekly
Description
No description was provided for this meeting.
A
Alright, happy Monday everybody. So, let's check the timeline: we are nine days till the end of the milestone, and I have created (sorry I didn't copy folks earlier) the planning issue; it's been out there for a while. Thanks Matias for adding the telemetry one, and what was the other one? Yeah, the Action Cable one; it's on there, so it's themed now. We can start working on breaking down issues and lining them up. There's some stuff that's been carried over that needs to be cleaned up as well. So take a look.
B
For the blob controller, there is no low-hanging fruit in my opinion to make it as simple as it was in the pipelines controller, but I opened the MR with a couple of improvements, and I'm looking into Banzai, how it works, to see if there is some way to reduce the time. But I need to spend a bit more time on this, so to say.
E
So the current state is that the small MVC that we had defined is on its way. I think the main problem we're still struggling with a bit is actually testing this. The complication is that we're not actually building this for GitLab.com, where it would be easy to test, and that's because the query only works for single-node deployments. So there's some extent of local testing you can do, but basically, once the data leaves your container, we're...
F
Yeah, so we're there: the PMs have identified North Star metrics for all the groups, and some supporting metrics, so for the most part I have this documented in the handbook, but I do plan to ask them which pages correspond to these. I can make guesses for each group here, but I do want to ask them, because I think the MVC could be... the smallest thing could be to just have a tab on a page dedicated to each group and then have a sitespeed page-load test for each of them; that might be the smallest thing. It wouldn't require any sort of engineering work. I did poll the product team and asked folks what they use to understand the performance of their workflows today, and effectively the answer was nothing: they use GitLab, and that's how they understand if something is slow or not. So I think just having a page of 'here is the workflow, here is how it loads' would be a good first iteration. Ideally I'd love to get this in this milestone, but that can be a further iteration once we have data being collected, and then obviously user journeys can be added, like connecting the pages, as another iteration from there. So I think that's my idea for iteration here at this point in time. Does that make sense? Thanks.
A
So Jinyu, who is hopefully asleep by now, was working on the Rails boot time, and he got some feedback from Stan just a few hours ago, actually, about speeding up the Grape load time without much effort; I linked to a comment there. He's continuing work on that. And then, yeah, atomic processing and composite status: Kamil? So yeah, there was an issue that was raised, I think last week, and I created a separate issue that I believe was a duplicate; let me make sure by looking. Oh.
D
It's been enabled for the last few days, I hope, and we'll see if it's working for the next few days, because I was actually waiting for these fixes alone, but I noticed I hit the problem on the very first day, in the evening, and we need that; we were supposed to enable that on Friday, so Nicolas said that he is going to enable it after the meeting.
C
But there is a lot of refactoring in the first one. This may get duplicated with the transaction here; I'm just trying to make all those executions more consistent, because we have different implementations in different places. But this is a huge MR, and probably it will be broken down into smaller ones.
D
Yes, regarding the random comment about our usage, specifically the cached queries and things like that: as soon as we can, I hope we could take a look at the endpoints that have a high number of cached queries and see if there is some low-hanging fruit that we could pick, yeah.
B
Well, that's for the import/export performance pipeline. I didn't check the issue, but I think I will. And about handling it: to be honest, I think that we should hand it over, maybe after fixing this issue, or at least identifying it. Because, what's your opinion? I don't think that we are working on it, so maybe it could be handed over to the import/export team. Yeah.
E
I generally kind of agree. I mean, I think it is a good tool to have, and we built it, but I don't want to fall into this trap where the memory team starts these initiatives to gain insight into something and then continues to own all these pipelines on behalf of other teams. I think we should be an enabler: we should build these tools and help get insight into problems, and then hand these over to the teams that own that part of the product. And if there's a problem with imports where some stale record blocks a double import, which is interesting, right, because it actually crashes the whole import, even though we added this catch-all thing that should turn it into just a failed relation, but it appears to be just crashing the whole import. If that is a problem that might be occurring in production, because that is what it is testing, I mean, that's a product issue. So I'm wondering: should we make them more aware of this? I think the import team should maybe be looking at this as well, right, because it is effectively a test of that product that runs every day. Okay.
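(For context, a minimal sketch of the catch-all behavior described above, assuming illustrative names rather than GitLab's actual import code: each relation restores under its own rescue, so a bad record should surface as a failed relation instead of aborting the rest.)

```ruby
# Hypothetical sketch, not GitLab's import code: rescue per relation
# and record the failure, so one stale record cannot crash the import.
ImportFailure = Struct.new(:relation_name, :error)

def restore_relations(relations, failures)
  relations.each do |name, restore|
    restore.call
  rescue StandardError => e
    # Record and continue; without this rescue one bad record
    # would abort every remaining relation.
    failures << ImportFailure.new(name, e.message)
  end
end

failures = []
restore_relations(
  {
    'issues'         => -> { :ok },
    'merge_requests' => -> { raise 'stale record' }, # simulated bad record
    'labels'         => -> { :ok }
  },
  failures
)
failures.each { |f| puts "failed relation: #{f.relation_name} (#{f.error})" }
```

If the real import still crashes outright, the per-relation rescue is presumably not wrapping the failing code path, which matches the behavior described above.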
D
And so we would really like to start with an issue, or some investigation of whether this is an S1 or S2 type of thing, yeah. We can definitely do that, yeah. Because I'm kind of worried that we see an error which is kind of fatal, and we do not take action, and we wait on deciding whether to take action. I think in such cases, if we see something so abnormal, we should take action.
E
I agree with this, but the question is more: what happens when this occurs again? Should we continue to be the team responsible for monitoring all this stuff for the next, yeah, whatever period, or do we want to actually hand this over? I think it would be better, once this tool is built and we have all these insights that we can pull out of it, that the team that builds this part of the product looks at these things. That's not to say they shouldn't be...
D
If we had to improve the forking model, this would be a prerequisite for that. So at least for now, correct, I would likely put that into the backlog, to consider it as part of a bucket of, I don't know, memory usage improvements, because all of that could maybe be part of this 'GitLab in two gigabytes' Raspberry Pi effort or something like that. So maybe we could have some epic that would collect these memory improvements, because these two-gigabyte ones would probably be one of these idle or structural improvements, or however we would name it, or maybe like that. Another aspect, I think, is running GitLab in more constrained environments, and this could be one of the, yeah...
F
Yeah, I mean, I guess I can make a separate epic for Raspberry Pi, but the overlap is really high, right? And so I feel like anything that's specific about a Raspberry Pi can go into a Raspberry Pi epic, but none of these things, I don't think, are Raspberry Pi specific; it's always about running on a more memory-constrained device, right? So I think I'd rather have this be about memory-constrained devices, and then we can get a separate epic.
A
No, I totally agree that it's a better title and that it relates better to the issues that are already listed within the epic. I just didn't know, from the product side, like Sid and Jerry, if they're still really excited about getting GitLab running on a Raspberry Pi, or if it's more of a secondary goal. And I mean, the Raspberry Pi is a bit of a moving target with the way the hardware changes year over year anyway. So I like the idea of changing the name of that epic anyway.
D
So maybe, looking at the outcome: probably the outcome of this epic is that we want to reduce the requirements for running GitLab. Today it says four gigs, or with swap, so maybe the intent is to have an issue or an epic that says 'reduce the requirements to two gigs, maybe, for running GitLab', because I think that this is what it was really getting at: what does it take for us to change our requirements to be lower?
F
I agree. I think there's a quick question around, you know... personally, I've run GitLab on a lot of four-gig machines without any swap whatsoever. In GCP that's the default, and it runs fine, at least for limited use cases, you know, with me being on it and testing CI and doing things like that. So I think also our minimum recommended or minimum requirements are a bit high. You know, I think we're really thinking of taking a look at that, but...
D
So there is also a very interesting aspect: maybe the epic's intent would be that we reduce requirements, but maybe we align requirements to the actual machine sizes, because, as Grant rightfully mentioned recently, on Google Cloud, and I believe also on AWS, you don't have four-gig machines; you have three-and-a-half-gig machines. So maybe it's also about aligning ourselves to these, to the lowest common denominator, which is: it's going to run fine on the three and a half gigs. You can run GitLab on Google Cloud without saying that you require four gigs, but there is no machine that gives you four gigs on Google Cloud; it's three and a half. So maybe we could certify it that way, have that be the most common denominator, and, as a first step, tune our requirements to finally say that three and a half gigs is good enough for running GitLab.
F
Yeah, I agree with you. I think we should have exit criteria for the epic, and that makes sense to me; that's a good goal. At some point, yeah, I think we could have one step of supporting 3.75 gigs, or whatever the GCP size is (I'm looking at Amazon right now), and then we could have another for the Raspberry Pi one, that's, you know, two gigs for the Raspberry Pi, if we can get there.
E
Just wondering, because we're talking about it already: would it be useful, as part of the topology data that we collect, to collect some kind of machine class from the customer as well? Because, I don't know, do we actually know right now how many customers are running a Raspberry Pi, say? I don't know how we would identify this, but I thought that might be interesting to track, like, when they first self-manage.
F
We do collect, I think, the package types right now, and I think we also do attempt to pull back something like machine type. I think it's more about the cloud provider than it is about the machine type right now. That's a good point; I'll look and see what's available. I think there's some kind of attempt in there that tries to do this, but that'd be great if we can, yeah.
E
So for a single node, that's always easy to do, because we can go beyond Prometheus and just get it from the current machine. But the problem really is when you need to do it for more than one node, because if you run ten different nodes that run all kinds of things, then we need some kind of external data store to query for what all these machines are, basically. But yeah, if it is in Prometheus in some way, shape or form, then we can also get it back.
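(As a rough sketch of the two paths just described, assuming the standard node_exporter metric `node_memory_MemTotal_bytes` and the Prometheus HTTP API; the surrounding shape is illustrative, not GitLab's actual topology code.)

```ruby
require 'json'
require 'net/http'

# Single-node case: read total memory straight from the current machine.
def local_memory_bytes
  File.read('/proc/meminfo')[/MemTotal:\s+(\d+)/, 1].to_i * 1024
end

# Multi-node case: ask Prometheus what every scraped node reports.
# node_memory_MemTotal_bytes is the standard node_exporter metric.
def cluster_memory_bytes(prometheus_url)
  uri = URI("#{prometheus_url}/api/v1/query")
  uri.query = URI.encode_www_form(query: 'node_memory_MemTotal_bytes')
  body = JSON.parse(Net::HTTP.get(uri))
  (body.dig('data', 'result') || []).map do |series|
    [series.dig('metric', 'instance'), series.dig('value', 1).to_i]
  end.to_h
end

puts local_memory_bytes
# cluster_memory_bytes('http://localhost:9090')
# => e.g. {"node1:9100"=>4123238400, "node2:9100"=>16888342528}
```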
E
Yeah, I don't know; I don't even know if that's still a good idea. I guess this was the second attempt at thinking about what we do with all this useful data that we collect but that's so buried away in places that it's kind of useless, because no one looks at it; there might be a better way to go about it. I would put it on the backlog.
E
The answer is yes, but, yeah, that was the thing as well where I noticed this GraphQL schema file that's 52 megabytes, right? I mean, I don't know.
A
I've moved it off to 13.2, and we've got quite a grouping of memory improvement issues now going on in 13.2.
E
Yeah, this one I'm really unsure about, and I think, Kamil, you know this, because we sort of touched on it in the GCK MR about the node exporter. So basically, yeah, when we started looking at exporting this topology information that we would query from Prometheus, the first thing we noticed was that if you run a local Omnibus, there are only really two labels attached to all of these targets, which are the instance and the job. So that's what we started using, because it's what we...
E
And yeah, so I don't know if we want to, because that's not quite what we do on GitLab.com; there we have a much more refined label setup, you know. We subdivided further these different kinds of Rails deployments that we have, right? We have a particular cluster just for API, just for Git, just for web workloads, probably soon for ActionCable.
E
So I don't know; I opened this because I don't have an answer to it, or even who to talk to about it, but maybe we should be looking into adding more specific labels by default, labels that clearly identify the role of the particular node that we're scraping, yeah. That's how this issue came up; I found it working on telemetry.
E
It's not blocking us from doing anything right now, but it makes things a bit imprecise, right? Because right now, yeah, we can only look at the job, which will be gitlab-rails for everything, sort of, yeah, Unicorn and Puma alike, I guess. So it's not useful for saying, oh, that is an API node, or...
D
My opinion is that we should not base our logic on labels; you can join metrics together to perform these classifications. Okay, interesting, right, because labels are kind of flaky, you know? We already have this difference between Omnibus and our production, and probably Kubernetes is something completely different as well. And labels also have this deficiency: what happens if, on a single node, Sidekiq runs alongside everything else? Can we classify these components separately, or can we classify...?
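(A sketch of that metric-join idea in PromQL, held in Ruby strings for consistency with the sketches above. The service metrics `sidekiq_concurrency` and `puma_workers` are placeholders for whatever each component actually exposes, and the join assumes the `instance` label lines up across exporters; in practice ports may need stripping with `label_replace`.)

```ruby
# Classify a node by which service metrics its instance co-exposes,
# rather than by trusting labels. Each query keeps a node's memory
# only if the instance also exposes the (placeholder) service metric.
SIDEKIQ_NODE_MEMORY =
  'node_memory_MemTotal_bytes * on (instance) group_left() ' \
  'clamp_max(count by (instance) (sidekiq_concurrency), 1)'

PUMA_NODE_MEMORY =
  'node_memory_MemTotal_bytes * on (instance) group_left() ' \
  'clamp_max(count by (instance) (puma_workers), 1)'

# These strings can be sent to the same /api/v1/query endpoint used in
# the earlier sketch, one query per role being classified.
```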
E
I mean, maybe we should do this in a more problem-oriented way, because of one thing I mentioned earlier: we're struggling with testing this whole thing, because I can only test between two extreme environments. One is GitLab.com, which, actually, by now I can't test against, because what we have right now only works for single-node deployments. And so what I use for testing as a single-node deployment is just a local Omnibus container, with the vanilla kind of config, you know, not changed, and I don't know what users change. Maybe they attach all kinds of different labels, or remove these, so, yeah, we have no control over that. That's a good point, yes. So maybe we can tackle this more from a 'let's actually verify this against representative setups' angle; maybe there's something QA engineering can help with, because we have reference architectures for testing, right, that we spin up and down; I think there are ephemeral environments, right, in GCP that we spin up and then tear back down. But maybe we can get some, yeah, GCP time to test our usage ping data on one of these, to even see how it looks; just access to the Prometheus instance would suffice, I guess, just to see what even happens when we have five different nodes. I was unable to even approximate that for now.
E
You know what the problem is: this is this whole vicious cycle where I'm like, we can't actually test this right now, because we don't yet have the support for querying an external Prometheus. So I think that's maybe the thing that everything hinges on, the thing that we need to do next, because, for the...
A
Okay, everything that was not assigned has since been assigned or moved, and our team has a lot of work in there. So this week, take a look at the planning issue and add some comments, and, I know, there are some issues that are going to be created as the result of some of the conversations we've had today. Please add them to 13.2, along with anything you think would be related to the work you're doing now, and we can iterate asynchronously.

Looking at what's planned now, I think there are some things that will be kicked out. We have about three minutes left. There's one thing that I wasn't going to verbalize, but I'm going to anyway. I don't know if anybody caught it, but there is a new bot out there that will track all the idle merge requests. Anything that's older than four weeks, it's going to link in there. I'm going to ask for some updates to the bot to include the author, to make it easier, so you won't have to click through all of them.