From YouTube: 2019 04 25 memory team weekly
C: Yeah, recording started. Okay, so let me restart. So, this board: I didn't intend to duplicate efforts or make multiple sources of truth. So if people feel that this is a good way, we can stick to this, and I can merge the other board into here. Basically, it has more columns than the other board, and my goal is to get us to prioritize our issues, then add labels for their status in each release.
C: So we can track the progress: which one is in progress, which one is ready for review. So, open to feedback. Along that line... let me stop sharing. So along that line, basically I propose that we use P1, P2, P3, P4 to prioritize, and also use the first week of each month to triage asynchronously. I mean, just take a week.
C: Everybody go there and give your opinions on the priority, and if we choose to use severity here, also choose the severity. And then see if any issue needs to be broken down or amended, or whether we need to add a new story or more issues to the board. Then we converge on that before the next milestone starts, so we can execute for next month. That's my thinking. Yeah, open to opinions.
A: Yeah, I think that's a great idea. We've been trying to champion async triaging for a while. Other teams have a bigger backlog and they have a triage package; I don't think we need one for memory, because it's such a highly visible thing in the area of performance, which is just, well, memory for now. Everyone should familiarize themselves with the SLOs on priority and severity, and we need to iterate on that for this team so we use it correctly.
C: How do you enforce it? I think for now, if you look at the board, I only listed P1 and P2, and so I would prefer that we only use P1 and P2 for priorities for now and ignore the severities, because these are mostly engineering issues, not customer issues. So probably priority is just there to help us plan our work: which one we want to work on first, which one later. I don't know.
A: If that makes sense. We will be looking into time-to-resolve for S1 and S2, and if it's about that, I would recommend putting severity there, so we measure ourselves. And I think this team is on track to deliver fast, so that will help bring the metric down. That would be the reason I'd suggest trying to use both severity and priority. Okay, let's use both then, yeah.
C: Okay, so the proposals for the upcoming release, 12.0. I think, yeah, to me they all make total sense. The first one, the metrics: Eric, you already commented on that issue. So let's plan that for 12.0, monitor team. Let's make sure that the monitor team actually plans that for the 12.0 release. Can you help to ensure that? Yeah.
C: Okay, so can you follow up on that, to make sure that this is planned? Because the team who's responsible for it, it's not ours, right? It's not ours; we depend on some other teams.
F: I can do that, yeah.
C: Thank you. And then Puma, the Puma timeout one, yeah.
F: It's quite ambiguous for me, exactly, because we have the ability to enable it with Puma on Omnibus and also on the GitLab Development Kit, but I think that, at least for 12.0, we should be looking at switching everyone to using Puma. So I would say that maybe our goal, at the end of the next phase, or somewhere before the end of the release, would be that with 12.0...
F: ...we basically start using Puma, turning it on to its full extent, and we start figuring out these issues, and hopefully in one or two releases we're gonna be confident enough to basically enable it in production and start seeing how it behaves there. But at least I think we should make the switch: that is, make everyone use Puma and report errors when they see them.
C: Let's do it, yeah. And then the next one, the cache. So you suggested that we add the caching one. Anyone agree or disagree? Anyone disagree?
F: Ah, about this, guys: this is something that we started researching, because it seems that it will give a big benefit for this very often requested endpoint, definitely on NFS. There is quite an amount of uncertainty in which direction this issue is gonna go, but we kind of made it assigned, with the Gitaly team, to 12.0, to figure out exactly in what way and how we want to support it.
F: It's very likely that we're gonna say that this is not the way to move, but this is something that we want to continue investigating, because at least in this case, this endpoint generates 75% of all the requests, and two months from now it's gonna be generating 95% of the requests, or like 98%, and right now it's very expensive. So it seems like this is a very big win: improving this endpoint is gonna improve the performance of the server, memory consumption and everything.
C: Yes, so how about this: I break it down into two issues. One is to do the research first. So for the first iteration we do a spike and the research, and determine the technology path to solve the problem; in the next iteration we actually implement the solution. So for the first milestone, for 12.0, our goal is to determine what's the right solution for this problem, something like: we need to do a lot of research, try different approaches, and prove it out.
C: Sounds good. So, as I said, I feel the best is to make another issue and associate it with this one. So the experiment, or the investigation, is associated with this one, and once that's done, we will have another issue to solve the problem. So this one is kind of the master issue for the things we need to do.
E: Hey Chuny, can I go back up to the metrics one? I was just thinking about this; I should have thought about this earlier, sorry. But I'm pretty sure that the way we usually handle this is that, even though it technically falls to the monitor team, because the memory team needs it, the memory team should go in and implement it. So I can go and ask the monitor team, but it's similar to what another team did for Insights: they just went and built the thing, because they needed it.
C: I know we planned for that last issue there if we have capacity left; I'm not sure, we'll see if there is any. Shall we remove it? Sure. Oh yeah, we can mention there that it's not committed. Okay, okay! So that's about the proposal for 12.0. Yeah, Eric, about the board, I'm open to ideas. If that board is not used for a specific purpose, probably we can just continue to use that board; I just did it this way because I already have the columns ready there.
G: The latencies, with, I mean, a few tests where, when there was too much load, there was very high latency. That's what we observed, and we found that it's just too much load for the environment as it is. So what we are planning is: we will just reduce the load and get the correct numbers.
G: That's about the tests that we are running. Apart from that, we have Prometheus set up with a few basic things, like CPU, memory, and also some NFS-related metrics, and we can add more to it. If you want, you can actually take a look at the dashboard link that's present there. With regard to Sitespeed, the dashboard, there were a few hiccups and we could not get it set up yet. Once Sitespeed is set up...
A: Okay, my next question before we move on: I think Stan is also using a test bed to certify whether 11.5 is faster than 11.9 for a certain customer, so we should coordinate with Stan as well. And I don't know, I think, Romney, we're in the middle of downgrading back to 11.5, correct? What's the status there?
G: So right now we have 11.9 deployed on a machine and we are just running tests against 11.9. I mean, we still have not done the comparison yet, in the sense of downgrading to 11.5 and then comparing the metrics. That's because, again, we had problems: we actually tried doing the downgrade before, and we faced a different problem, in the sense that we could not set up the actual data that we need to run our tests on 11.5.
G: So that approach did not work, so yeah, that's the state where we are now. What we can probably try is: from 11.9, we can downgrade to 11.5 and try retaining the data. But from what I heard from Stan, it could have issues, like a few data migrations; all of that could be problematic. But that is something we could try and see if it works. If we could retain the data, then we can actually try running with this, but still, it's not like a hundred percent.
A: Okay, yeah. I think maybe you could do some A/B traffic control with that, with Puma and 11.5; I'm not sure what your plan is gonna be. Okay, I think those are two different tracks, and unless you want to try Puma with 11.5, you'd probably need to coordinate with Stan.
F: How that goes: so, as for Puma, I have the comment at the bottom. We basically could set up two additional workers and have Unicorn and Puma working concurrently, configured via a load balancer cookie or a separate URL, and basically be able to stress both Unicorn and Puma. Because I think there are two angles here: one angle, we want to continue testing Unicorn, so that nothing breaks when we start making changes; on the other hand, we also want to start testing Puma as early as possible. So having both of them...