From YouTube: 2019 07 11 Memory Team Meeting
A
And we're recording. Alright, so for today: I've asked everybody in one-on-ones, and I will ask again, to review the 12.1 milestones board linked here. Viewed here, as of yesterday everything was pretty much on track. Last night the "Puma on by default in the GDK" issue was closed. That was one of the big things we wanted to ship and start getting feedback on, so that's great. Everything else looks like it's on track.
A
Although I see a couple of new issues added to the bottom there by Camille. Any changes need to be in by end of day on the 17th so they're taken for the final release. So let's take a look and make sure it's all going to land for 12.1. If it's not, let's move it over, or talk about whether or not we need to move it over and shift things around. Any questions on 12.1?
A
Okay, and that's end of day on the 17th. And then on 12.2: Eric and I are talking about Puma being the highest-priority thing that we should be working on right now, and we threw a bunch of issues into 12.2. Take a look and make sure you understand them, or raise any questions before we dive into 12.2. There are actually some issues for Puma that are under the infrastructure project.
A
They do not show up on our board, so we probably need to move the epic so it shows up on our board, or we just need to figure out how to track both of them. Right now any issues related to rolling out Puma on dev.gitlab.org are not showing up on our board because they are in the infrastructure project, so I need to figure out a solution there. Any questions on 12.2?
A
All right, moving on to the next thing. GitLab values throughput over planning, but some level of planning helps other teams. So, weights: I have created an issue for the memory team to talk about whether or not we want to use weights, what type of weights we want to use, and how we want to implement them. I would appreciate feedback from the team on what they want to adopt, how they want to adopt it, whether we should use it going forward, or if we should use it at all. I think having the weights discussion is worthwhile.
A
All right. As Eric mentioned in one of the issues, Puma is our highest priority right now. We have created an epic for rolling it out on dev.gitlab.org. We're still trying to determine how blocked we are on this: do we need permissions in the environments? Do we need help from infrastructure? Are we blocked on them? Jarvan got back to us on the issue where Camille pinged him; he said he could help, we just need to ask questions.
A
I
haven't
read
through
the
entire
comment
that
he
offered
to
help
out
so
figure
out
how
we
can
get
unblocked
there.
There
is
an
issue
in
that
epoch
to
determine
all
the
blocking
and
dependencies
that
stakeholders
we
need
to
work
with,
to
get
it
out
on
dev
so
that
we
can
start
getting
some
real-world
world
testing
and
figure
out
what
complications
Puma
may
add
to
our
production
environment.
A
Questions on that one? Most of the information is in the link that I've sent there: what we're doing, why we're doing it, how we're going to do it, and what the timeline is. Also, we're in the middle of an async retro. I know Aleksey, thank you, you've already added some thoughts there, and I've added some. So, team, please continue to add yours. Aleksey also reminded me: we should be using the workflow board so it accurately reflects where the work is.
A
So, anything else to add to that?
C
I think you have to read it, but there is a bunch of different information about how it was tested and about what the meaning of these results is. It also includes a bunch of graphs, and it considers performance implications that are not really Puma-specific but generic, so try to analyze the graphs that are there.
C
Grant even started commenting on that issue. He is proposing that we figure out a performance quality tool, to be able to give a comprehensive benchmark for our customers, or everyone in the team, or everyone who wants to use it, to validate how their GitLab environment is performing. By having reproducible tests, like test scenarios people would fetch and run against their own GitLab, they would be able to use this tool to validate the performance in different scenarios, because there are so many different factors.
C
Some
of
these
factors
are
mentioned
in
the
point
number
three
that
is
really
hard
to
basically
to
consistently
validate
the
performance
of
environment
at
first,
this
numbers
gonna
be
meaningless,
but
more
people
start
using
that
we're
gonna
have
more
data
point
with
different
configuration,
and
then
it's
basically
like
machine
learning
and
classification
on
what
data
you
see.
What
kind
of
racket
efficiencies
you
see
system
sees
that
this
is
caused
by
the
by
the
environment
that
you're
running
in
and
in
turn.
Our
like.
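A minimal sketch of the kind of reproducible test run being described, in Python. The target URL, endpoint list, and output format are all hypothetical, not the actual tool under discussion; the point is a fixed, shareable scenario that produces numbers comparable across environments.

```python
import json
import statistics
import time
import urllib.request

# Hypothetical instance under test and a fixed, predefined scenario.
TARGET = "https://gitlab.example.com"                 # assumed URL
ENDPOINTS = ["/api/v4/version", "/api/v4/projects"]   # assumed scenario
SAMPLES = 20

def time_endpoint(path: str) -> list[float]:
    """Request one endpoint repeatedly and collect wall-clock latencies."""
    latencies = []
    for _ in range(SAMPLES):
        start = time.perf_counter()
        urllib.request.urlopen(TARGET + path).read()
        latencies.append(time.perf_counter() - start)
    return latencies

# Emit a shareable result set: one latency summary per endpoint.
report = {
    path: {
        "p50_ms": round(statistics.median(t) * 1000, 1),
        "max_ms": round(max(t) * 1000, 1),
    }
    for path, t in ((p, time_endpoint(p)) for p in ENDPOINTS)
}
print(json.dumps(report, indent=2))
```

Because the scenario is fixed, two runs against similarly specced environments should produce comparable reports, which is what makes the "more data points with different configurations" classification idea workable.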
C
My perception is that we should build a tool for others to use daily, I mean developers, customers, Support: something comprehensive that would be able to test and get back to us, right: "Hey, this is the benchmark; this seems off, we think that it's not gonna scale." And kind of follow something that we use ourselves for testing our environments, but we are not building it just for testing our environments; we are building it for testing GitLab on a much wider scale.
C
It's basically based on my expectation that everything we do for quality and performance, I should be able to replicate locally and compare with the results that are posted somewhere. I should be able to understand what these results mean to me and in what cases my system is not going to perform well.
C
We should probably have twice as many metrics, ones that try to get at the behavior of the system from different angles. It's kind of performance testing plus understanding what the data that we see means, and that's quite complex. But I think if we put that tool out more openly and more widely, it's going to be way easier for everyone to use, and that is my primary goal, why I created this issue.
A
B
C
It's very close, but the data is not ideally comparable across them. Technically, what I would expect is that with a predefined data set, if you run the performance tests on two systems with similar specs, you should see the same results. Today we are not quite at that point.
C
I
think
that
the
amount
of
data
that
we
gather
is
just
not
enough
to
fully
understand
what
these
numbers
does
mean
because,
for
example,
the
number
that
you
see
may
may
seem
may
basically
differ
like
the
meaning
of
this
number
might
differ
because
of
the
factors
on
what
on
how
you
run
this
environment.
I
am
really
liking
that
that
I
believe
so
on.
Some
customers
like
having
so
interesting
configurations
like
that
it's
gonna
be
really
quite
hard
for
us
to
replicate.
C
In
some
cases,
I
saw
some
customers,
for
example,
having
and
it's
quite
common,
actually
having
database
ready,
so
NFS
servers
externally.
So
it
means
that,
like
for
each
database
call
or
like
it's
ready,
scone,
you
inject
one
or
two
milliseconds
of
the
latency
in
other
than
I'll,
be
able
to
model
all
of
that
Carleen
and
it's
quite
hard
to
figure
out
exactly
what
is
the
impact
of
this
injected
latency
on
the
behavior
of
the
system?
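A back-of-the-envelope illustration of why that injected latency matters. The per-call penalty, calls-per-request count, and baseline are assumed numbers for the sketch, not measurements:

```python
# All three inputs are illustrative assumptions, not measured values.
injected_ms_per_call = 1.5      # assumed 1-2 ms hop to an external Redis
redis_calls_per_request = 80    # assumed: a chatty endpoint
baseline_request_ms = 200       # assumed latency with a local Redis

added = injected_ms_per_call * redis_calls_per_request
print(f"added latency: {added:.0f} ms "
      f"({added / baseline_request_ms:.0%} of the baseline request time)")
# -> added latency: 120 ms (60% of the baseline request time)
```

Because the penalty multiplies by the number of round trips, an endpoint that is chatty with Redis degrades far more than one that makes a handful of calls, which is exactly the kind of environment-dependent effect the tool would need to surface.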
C
If
we
don't
have
something
that
is
quite
comparable
on
these
different
environments
and
it
produces
like
reproducible
and
predictable
numbers
on
these
environments
because,
like
for
example,
we
are
get
output
come
usually
run
in
the
idea
scenarios.
It
sucks
that
phone
is
sometimes
not
performing
it's
enough
for
me
because
of
the
application
bug,
but
if
not,
it's
usually
not
performing
because
their
infrastructure,
but
our
customers
very
often
have
the
problem
with
the
infrastructure,
and
we
have
very
hard
time
pointing
them
to
the
infrastructure
of
them
is
really
a
problem.
It's
not
application.
C
The
problem
at
whether
infrastructure
and
our
performance
testing
tour
should
also
be
able
to
point
out
that
this
is
not.
The
problem
of
the
application
is
the
problem
of
your
infrastructure
and
by
like
measuring
different
types
of
the
endpoints
and
the
correlation
between
them,
and
we
understand
exactly
how
these
endpoints
interact
with
the
system,
whether
they
are
maybe
I
of
heavy
because
of
their
IDs
I
am
heavy
because
of
the
database
or
anything
else.
It
would
really
say:
hey,
you're
ready
sees
basically
in
performance.
C
You can basically very accurately measure why the system is not performant, or in what cases it's going to break, because it's actually testing these different areas that are otherwise really hard to test with, like, a ping that you would run from the command line. And secondly, I don't think that we want to run this from the command line ourselves. It should rather be customers being able to do that on their own and get back to us with the results, or observe the results themselves.
C
It
means
that
we
are
like
benchmarking,
our
system
against
the
same
quality
criteria
as
our
customers,
so
we
better
understand
the
deficiency
of
the
tool,
also
where
it's
underperforming
in
terms
of
the
data
that
spirit
or
soul
brother,
like
as
being
the
center
of
the
quality
testing
tool,
I
would
say
that
our
customers
should
be
certain
of
the
quality
performance
testing
tool,
if
asked
just
being
another
customer
meetup.com
is
just
another
customer
I
think
in
this
case.
This
is
this
is
the
whole
idea
behind
a
decision.
D
This is awesome. I just had a question about the data for determining this benchmark. Different customers would have different kinds of architecture: they would have different workers, different memory, a different number of threads they are using, a different Redis setup, and all that. Would it be automated, where we would be able to pull that data in a completely automated fashion? Or would we, maybe depending on the customer, have them actually provide that data?
C
Absolutely, this is what we should do. Getting a benchmark means getting their numbers, getting metrics, and the metrics also include all the data about how the customer's system is configured. So you would get information about the number of Unicorn workers, how the Sidekiq nodes are configured, what the I/O we see on the system is. Basically, all of that we already have; we just need to figure out what data we need, and I started on this.
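A sketch, again in Python, of the environment metadata such a benchmark report could attach so the numbers can be read against the configuration that produced them. The field names are hypothetical; a real report would also pull GitLab-specific settings such as Unicorn and Sidekiq worker counts from the instance's own configuration.

```python
import os
import platform

def host_metadata() -> dict:
    """Collect basic host facts to attach to a benchmark report (Linux assumed)."""
    meta = {
        "os": platform.platform(),
        "cpu_count": os.cpu_count(),
        "load_avg_1m": os.getloadavg()[0],  # Unix only
    }
    # Total memory, read from /proc on Linux.
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal:"):
                meta["mem_total_kb"] = int(line.split()[1])
                break
    return meta

print(host_metadata())
```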
C
If you look at the screenshot in the issue, there is a set of six different metrics that I just put there for my purposes. If you go to a given configuration and open one of these screenshots, this is just a set of a few metrics, but technically we could get twice or three times more.
C
These are metrics that describe exactly the system behavior and the system configuration: the amount of memory, the number of CPUs, the utilization of the CPUs outside of GitLab. All of this is available to us, and it's something we should also include, because a noisy environment where GitLab runs is also a problem. For example, often our customers run on VMs; we should know exactly, when we run the testing, what the steal time on the CPU is.
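For reference, a minimal sketch of how steal time could be sampled on Linux. The aggregate "cpu" line in /proc/stat exposes a cumulative steal counter (the eighth field); reading it twice gives the share of the interval during which the hypervisor withheld the CPU. The five-second window is an arbitrary choice for the example.

```python
import time

def cpu_times() -> list[int]:
    """Read the aggregate per-state CPU tick counters from /proc/stat."""
    with open("/proc/stat") as f:
        return [int(x) for x in f.readline().split()[1:]]

before = cpu_times()
time.sleep(5)                     # sampling window (arbitrary)
after = cpu_times()

deltas = [b - a for a, b in zip(before, after)]
steal_pct = 100 * deltas[7] / sum(deltas)   # field 8 is "steal"
print(f"CPU steal over the window: {steal_pct:.1f}%")
```

A consistently nonzero steal percentage during a benchmark run means the VM was contending for CPU, so the resulting numbers say as much about the host as about GitLab itself.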