From YouTube: 2019 07 18 Memory and Quality Team Meeting
B
So, let me go into a little bit of detail. There was a summary posted on the test plan issue, which I think is number three, which is closed. I can link it here. We did discuss this, and let me know if I should be bringing stuff from the weekly self-managed group over to you, to Memory, to make sure that we're closing the loop.
B
The next steps that we have: we found three slow endpoints when we conducted the full load test against the 10k architecture, so we're trying to investigate the root causes of that. Camille, you were involved in that discussion, figuring out what the actual cause is, and from that we'll get three or so issues for resolving whatever the root causes are. We also need to lift the embargo on the architecture documentation, so there are conversations going on about that as well, which I saw you participating in and figuring out.
B
You know, what our expected levels are: it's sort of a deeper conversation than just undoing the work in progress on the documentation and making sure that all the stakeholders agree that we're certified done for now. After that, those are things that we expect to get done in the second quarter, starting in two weeks, in August. We'll move our proof of concept over from the thing that Grant built into a full-on, automated performance monitoring solution, so we can see graphs of the CPU and everything in Grafana. We need to add more endpoints.
B
We need to move the environment from DigitalOcean to GCP, and all of that will happen before the numbers that were mentioned here (the 5k, the 30k, whatever the levels end up being); all of that will happen before we would implement anything like that. We have an issue to investigate and to maybe start to plan out what we think we need as far as configurations, but that won't be done until later on.
A
That kind of, excuse me, flows into number two: how do we want to go about finishing achieving 10k? How does it overlap with the current work that the Memory team is working on? And maybe another question, not added here: is there anything that we're doing, or not doing, that would block achieving the 10k?
D
Obviously I'm picking this up after the fact. The 10k architecture in particular: I was working on the assumption, maybe wrongly, that the Support team had built this reference architecture with knowledge in mind of what the 10k architecture should look like and what memory usage would look like. If we find from our testing that the memory usage is even higher than what would be expected, then that's where the memory team's stuff comes in?
D
Maybe we need to comment on that, but my understanding of the 10k is that it's not to adjust memory usage; in fact, it's just a marker of what you currently need for our 10k environment, and then the memory team focuses on reducing memory usage overall. So that would obviously impact future tiers. But in terms of this current thing, it was just getting the environment out of the gate to test the endpoints, and there wasn't anything crazy there; there are three endpoints there that we need to investigate.
C
The first question, and the second question: do you think that the data set, what we are testing, is just not enough to make the assumption that this is a good reference environment? Because these endpoints are based on a very specific customer using a very specific workflow, and there's, for example, pretty much nothing about creation, CI, or other parts of the system. So do you consider these endpoints enough to be the 10k architecture?
B
I don't think that anyone thinks that this is like the end of the 10k. This was a reaction to a specific customer having issues, and it was sort of crafted with that customer in mind. I think as we mature this and expand this, especially as we add additional reference architectures, that will change over time, for sure.
B
I don't think that this is like: if you're a 10k customer, this is it, period, you're done. And then, as far as the slowness: from what I understood from the working group meeting, once we investigate what the root causes are, those would go to whoever the correct stage group is for those specific endpoints. Is that your understanding as well, Grant?
D
Let's see, so, yeah, well, that's it, yeah. To what you said there: I agree with this. It's still early days yet for the environment as well as for the test frameworks. As we've been talking about on the issues, this is the base, and it will probably always be a living thing; it will never be finished.
D
What we have currently, right now, is four endpoints, which is what we've been using for testing so far, so it gives us at least that small baseline to compare against. But certainly we want to expand that even further, into more areas, to be more representative. That is difficult to do, though, so it's just going to be an ongoing effort.
A
Yeah, exactly. What we're trying to do right now is give a recommended architecture, based on what we know and based on what the software is today, for 10k users. And, expanding on that: we're not going to let any performance issues stop us from shipping that architecture right now. We will iterate and respond based on our findings later, to further refine that 10k.
A
Then, further expanding on 1b there, where you said, pointing to my screen with my hand, there will be some more work in Q3: it could include follow-up work for the 10k reference architecture that could have a cascading effect, like blocking the other reference architectures until we fix those bottlenecks in 10k.
B
I think that that's a possibility, but I am not sure that we would... yeah, I guess that's a possibility. I think it depends on whether we decide to go with, you know, "here's the 3k and here's the 25k in addition to the 10k," or we decide to say "here are various user personas, and we're going to make three architectures that match those and shift the 10k into one of them." So I guess the answer is: that could happen.
C
So I'm actually looking at a way that would allow us to automate a lot of that, because we have a ton of different types of testing right now, in so many different configurations, that Grant is not going to be able to do all of this by himself, and I think that we should not expect him to do all of that by himself.
C
I think that really my recommendation is that we should focus our work more on documenting everything: the data set, starting with some first iteration of the data set; starting with more endpoints; documenting; and really asking more people for help. What I would like to achieve is this: for every case that we have, like a performance issue, like the testing you mentioned, I want every developer to be able to use these data sets, follow this documentation, and be ready to go.
C
If we have the data set, and we can really create this architecture by a script or anything like that, then testing different combinations of, I don't know, CPUs, memory patterns, disk configurations just becomes a matter of executing automation. It's a little more upfront investment.
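The automation being described could be sketched roughly like this. The CPU/memory/disk values and the `execute` hook (whatever actually provisions an environment and fires the load test) are illustrative placeholders, not the team's actual test matrix:

```python
from itertools import product

# Illustrative configuration axes; these values are hypothetical,
# not the team's real matrix.
CPUS = [8, 16, 32]
MEMORY_GB = [30, 60, 120]
DISKS = ["ssd", "network-ssd"]

def build_matrix():
    """Expand every CPU/memory/disk combination into a named test config."""
    return [
        {"cpus": c, "memory_gb": m, "disk": d, "name": f"{c}c-{m}g-{d}"}
        for c, m, d in product(CPUS, MEMORY_GB, DISKS)
    ]

def run_all(execute):
    """Run the load test once per configuration.

    `execute` is a stand-in for whatever provisions the environment
    and runs the test, returning its result.
    """
    return {cfg["name"]: execute(cfg) for cfg in build_matrix()}
```

With a matrix like this, adding a new disk type or memory size is one list entry, and the rest is, as the speaker says, just executing automation.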
C
It actually pays off, since the testing is then pretty much fixed, right? So that is kind of my perception about the direction that we should be focusing on. And I think, Tonya, I agree with the points you exposed: maybe for us the best step right now is to try to cross this 10k, however meaningful it is.
C
I think that we may be kind of overshooting a lot of the hardware requirements today, but with the data that we have today, maybe this is the best first approximation. Meaning, what we could really then measure from there, more and more from a long-term perspective, is how these requirements change, because we would expect these requirements to get lower over time. I would say that this would be the goal that we will try to pursue with everything that we do.
C
Like fixing the endpoints and data sets and updating our guides: this is the first step that would allow me to reach out to everyone, everyone in the company, and say: hey, we have this awesome performance quality tool, please help us add additional endpoints and help us build these kinds of tests. Because once we have that, and the instructions are fairly easy, we could really reach out to other people (developers, support, customers) and ask them to perform tests.
C
No, no, no, no. I would consider adding more endpoints as an addition, but ease of use is, I think, the first step that we should focus on. Because if we first get to a sort of normal where we use that, then adding additional endpoints is a very easy thing for us to ask for; but spreading the knowledge about what we work on is probably the most important aspect right now.
D
I'll jump in here and just say quickly that, you know, I couldn't agree more. I am in the Enablement team, and my job primarily is to enable developers, with a focus on performance; that's what I did in my last job, and that's what we're doing again here. We will need some time to develop the tool to make it easy to use, particularly when it comes to the data set and a best approximation.
D
I think that will take time, because from experience that has always been the most difficult piece of performance testing, or any testing in general: having a reliable data set that's always up to date, that doesn't break any laws, and that can be imported easily. So that piece will be one of the hardest, one of the biggest, that we have to jump into.
C
Grant, I think that the approach for that is pretty much the same as we arrived at with the 10k: we just start with something that we have today, we just get it out of the door, and we then iterate on that. Maybe we set up this issue as fulfilling the goal: please use these data sets; we know that this is imperfect, but this is what you have to do to import them. We start with what we're doing now, building the data set incrementally while we add more endpoints for testing.
D
Great, I'm all for that approach. I would like developers to be able to just check out this framework with as minimal steps as possible. Yeah, I've spent today running load tests against Kubernetes environments that I just set up from scratch with some of our GitLab scripts, and I've been just going through different versions of GitLab and Puma on them, and that's been quite good, to kind of learn a bit more about where we need to go.
D
I think it'd be the same for the developers to move forward: they can set up environments themselves as they require, with their specifics, run the load tests against them, and get the results in a consumable fashion. And, as you say, once we start doing that, that enables developers to actually contribute back, and everything else; that is ultimately the goal, so we can be more aligned, and that I am all for. But, as an example, we only got one piece of that working last week.
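A minimal sketch of what "results in a consumable fashion" might look like: collapsing raw per-endpoint response times into a small JSON summary a dashboard could ingest. The field names and the p90 calculation are assumptions for illustration, not the tool's actual output format:

```python
import json
import statistics

def summarize(endpoint, timings_ms):
    """Collapse raw response times for one endpoint into one record."""
    ordered = sorted(timings_ms)
    # Simple nearest-rank p90; placeholder for whatever the tool really uses.
    p90_index = max(0, int(len(ordered) * 0.9) - 1)
    return {
        "endpoint": endpoint,
        "samples": len(ordered),
        "mean_ms": round(statistics.mean(ordered), 1),
        "p90_ms": ordered[p90_index],
        "max_ms": ordered[-1],
    }

def report(results):
    """results: {endpoint: [timings_ms]} -> JSON string for consumption."""
    return json.dumps(
        [summarize(e, t) for e, t in sorted(results.items())], indent=2
    )
```

The point is that each run, whatever environment it came from, ends up in the same machine-readable shape, which is what lets other developers compare and contribute results.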
D
So, you know, we're making progress, we're trying to get there, but it will take some time. And, as you say, it's going to be, and always will be, an iterative process, because even once we get the big blocks done, there will be new features, there will be new things coming in; it will always be ongoing. But, yeah, it can be a good marker.
D
We're running out of time; I guess time's going to be short. I'm happy to join more meetings if we decide we need that. But what my question simply was, you know: my approach was just trying to implement and get the framework as mature as I can in the timeline for Puma, against whatever environments we can get our hands on, essentially, and get as many results as we can out of that. So that's why I've been focusing on getting there before Puma is more done, and trying to get the tool to a more mature state.
D
Was there anything else that you're expecting from us for the memory work, or your work in particular, outside of that, Grant? So, again, my question is: we're currently working on getting the framework as mature as possible, getting the right results over, and making that much more visible to people outside. Or is there anything specific that you're looking for from the Puma work or the memory work, outside of the test plan, which we've kind of talked about?
C
I think what I'm expecting is reflected by this B1 and B2, which seem like the most important items we need to work on, right? Okay, because, as for gathering metrics: I have some magic that is able to do that to some extent. It's not very fully polished, but this is something that we could work on; there's something there. Yeah.
D
That falls on me then, Grant. My understanding is that I'm going to be kind of on point for it, but I'll be needing data from the team, or at least being pointed at the right people to ask, to say: hey, there's the epic, and when I raise issues for each point, I need real results. So, stuff like: what is the first results threshold we'd expect for Puma?
D
Are we just expecting no increased memory usage and no newly-created performance regressions, or are we actually expecting a specific performance increase of X percent, etc.? That's the kind of stuff that I'm looking for, for the test plan. And then setting up the automation framework that we have to run against pre- and post-Puma, and then trying to get those metrics out in such a way that each day we get results and can say: yep, memory hasn't changed, it used this much, etc.
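The daily check being described could look something like the sketch below: compare each day's measured memory against the pre-Puma baseline plus a tolerance. The 5% figure is purely a placeholder; the real threshold is exactly the number the test plan still has to define:

```python
def memory_check(baseline_mb, measured_mb, tolerance_pct=5.0):
    """Pass if measured memory stays within tolerance of the pre-Puma baseline.

    tolerance_pct is a hypothetical placeholder, not an agreed threshold.
    """
    limit = baseline_mb * (1 + tolerance_pct / 100.0)
    return {
        "baseline_mb": baseline_mb,
        "measured_mb": measured_mb,
        "limit_mb": round(limit, 1),
        "passed": measured_mb <= limit,
    }
```

Run daily against pre- and post-Puma environments, a record like this is what would let the team say "yep, memory hasn't changed, it used this much" at a glance.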
B
I guess the question here, the first question on the plan, is: what can we commit to for 12? We have a couple of things that we've already committed to for quarter two, which is within 12, so those couple of things necessarily need to be in there. We also have the stuff that you're working on this week, ahead of the upgrade on Monday for a particular customer, with validating 11.7 and 11.8, as far as working from the top of the Memory team goals, their P1s.
C
I would really want these to be in some first iteration, and we'll see what can be done. Because, for example, when I started working on the performance quality testing tool, I found it increasingly difficult to use. So, actually: if we don't provide some meaningful documentation and tools that are easy to use, no one can use this except us, who are very deeply involved in it.
D
We're really running out of time, but I just want to add that that particular issue is a bit fuzzy on when it's completed; we need a defined goal, and I don't see one. We do have... I agree that the readme on the performance quality tool, you know, needs an update; that's completely right, and that's fair enough. But in terms of actual tests there: we are importing the gitlabhq project from GitHub, which is the entirety of GitLab, and then running some extra performance setup; there's another command there.
C
I think, okay, just one sentence: I think it's not about completing the data set, it's about making these data sets easy to use, which is completely different. We're not going to complete the data set, like, ever, because there are going to be new things coming in; but we can definitely make it easy to use. And if I were to restate the goal of this issue, I would say: make our performance testing tool easy to use, whatever it takes.
C
With the data set that we have today: not extending the data, just making it easy. Okay, and, Grant, you mentioned the GitHub import: for me to import a GitHub repository, I need to have a special OAuth token, and it takes around one to two hours to import the GitHub repository. So I think that we could shrink that time to, like, two or three minutes. It's a...