From YouTube: 2019 10 28 Memory Team Meeting
A: Then the view time is sometimes higher; sometimes this shows through in the view time, I guess. Maybe we don't really need to care, because this part may be common to every API, that's just my guess. And then the last part, the DB time: it varies from 100 milliseconds to sometimes around 1 second, it's bigger sometimes. Maybe there are parts in some cases that we should be concerned about, but I'm still not sure whether it's a duplication. Also, there is another issue Craig mentioned, and I'm still not sure about that.
A: What's the efficient way to proceed and find where the optimization opportunity is? Temporarily I don't know, so I just hope I can get more advice from the discussion on Wednesday. There is a scheduled session, so hopefully Kamil and Igor and everybody can give me some suggestions, and we can get some information there. For that reason, I temporarily don't have many issues to work on before the Wednesday session, so I temporarily chose another issue from the backlog.
A: The Wednesday part is mainly about the CPU-intensive one, but I'm not sure. Maybe there are some other related ones, and on the point Kamil mentioned, we potentially have other areas where we can do some improvement, but that depends on the discussion and the findings once those sessions are resolved, I guess. Okay.
C: Yeah, it looks like that memory-intensive one is closed, and the CPU-intensive issue may be a duplicate of one that other teams are working on. I want to call out that the issues in the milestone board are ordered in priority from top down. So if you're looking for work, just grab the next one in the 12.5 column that's not assigned. It sounds like you already did that anyway, as you were looking for work, so.
D: So far, the data that we got from staging is fairly non-conclusive. I don't understand the data from staging, so I'm kind of cross-checking that with dev, because my latest test shows that it behaves pretty much the same as it was behaving before. So it's kind of the expected outcome.
D: That's it, and I'm doing that in the context of this issue that I plan to finish this milestone; I want to finish this one, it's been open. There are some open discussions on that one, but I think I have some idea of how to move forward, or to move some of this discussion into separate MRs, to have it merged in the first iteration.
B: Okay, thank you. From my side, I wanted to add that I returned to my work on the import, and the latest merge request, the fourth now from the series of refactorings proposed by Kamil, is ready to be reviewed; Igor is currently reviewing it. While it's under review, I will take one of the tasks from the milestone, as mentioned by Craig. Is there something you want to add? Anyone?
D: Queuing duration is the outcome of the performance of the endpoint. If we improve the performance of the endpoint, there's going to be less queuing, because it's going to be faster to execute. If you have capacity that allows you to process, say, four requests at a given time, and you process those four requests in half of the time, you can effectively process twice as many in the same unit of time. So you're going to have much less queuing of these requests waiting on the workers to be processed.
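The arithmetic behind this point can be sketched as follows. This is an illustrative toy model, not a real queuing-theory calculation; the worker count, service times, and the one-second-backlog assumption are all made up for the example.

```python
def throughput_per_second(workers: int, service_time_s: float) -> float:
    """Requests a fixed worker pool can complete per second."""
    return workers / service_time_s

def queue_wait_s(arrival_rate: float, workers: int, service_time_s: float) -> float:
    """Rough wait estimate: one second's worth of arrivals divided by throughput.
    A toy model (not M/M/c) just to show the direction of the effect."""
    backlog = arrival_rate  # requests that arrived in the last second
    return backlog / throughput_per_second(workers, service_time_s)

workers = 4
slow = throughput_per_second(workers, 0.5)   # 500 ms per request -> 8 req/s
fast = throughput_per_second(workers, 0.25)  # 250 ms per request -> 16 req/s
print(slow, fast)  # halving the service time doubles throughput
```

With the same arrival rate, the faster endpoint cuts the estimated queue wait in half as well, which is the "much less queuing" effect described above.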
D: It still doesn't make a difference, because what you want to achieve is for this request to finish as quickly as possible, regardless of the capacity that you are given. So if it takes like one second, that's just too long; it should take like 100 milliseconds. This is your goal in pretty much every case for a request that is being executed by Unicorn or Puma.
A: The first time I did this test I wasn't using the performance-tester script; I just ran a simple loop doing local curl requests. The local curl loop basically just runs one API request at a time, each followed by another, one by one. It shouldn't put big pressure on the application server, in my mind, but I can try a little bit more.
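The sequential measurement loop described here can be sketched like this. The endpoint URL mentioned in the comment is hypothetical, and the HTTP call is stubbed with a short sleep so the sketch runs offline; in the real test each iteration would issue one curl/HTTP request and wait for it before starting the next.

```python
import time

def call_api():
    """Stand-in for the local curl request. In a real run, replace this with
    an HTTP call, e.g. urllib.request.urlopen("http://localhost:3000/api/...").
    Here we just sleep 10 ms to simulate the request, so the sketch is offline."""
    time.sleep(0.01)

latencies = []
for _ in range(10):                 # one request at a time, one after another
    start = time.perf_counter()
    call_api()
    latencies.append(time.perf_counter() - start)

avg_ms = 1000 * sum(latencies) / len(latencies)
print(f"avg latency: {avg_ms:.1f} ms over {len(latencies)} sequential calls")
```

Because requests never overlap, this measures single-request latency rather than stressing the server's concurrency, which matches the point made above.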
D: What you want to optimize is the single request: it should not take 500 milliseconds under ideal conditions, but more like 50 milliseconds at most. This is what you really want to optimize; your smallest unit needs to be super fast, because then, if you stress-test it, it's going to have much higher capacity to process multiple requests in a given unit of time. But technically, your smallest item is a single request.
D: It shows it's very expensive to get this and generate the data, and if you look at the discussion from Igor and some other folks, they are mentioning that there are a few hundred SQL queries and a number of database calls; the way it is doing it today seems to be very inefficient. Cool.
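The "few hundred SQL queries" pattern described here is the classic N+1 problem: one query to fetch a list, then one more query per item. A minimal sketch of the difference, with a hypothetical data model and an in-memory query counter standing in for the database:

```python
# Hypothetical data: projects and a per-project statistic.
PROJECTS = {1: "alpha", 2: "beta", 3: "gamma"}
STATS = {1: 10, 2: 20, 3: 30}

query_count = 0  # stands in for SQL queries issued against the database

def fetch_stat(project_id):
    global query_count
    query_count += 1             # one round trip per project: N extra queries
    return STATS[project_id]

def fetch_stats_batch(project_ids):
    global query_count
    query_count += 1             # one round trip for all projects
    return {pid: STATS[pid] for pid in project_ids}

# N+1 style: one query per project.
query_count = 0
n_plus_one = {pid: fetch_stat(pid) for pid in PROJECTS}
n_plus_one_queries = query_count

# Batched style: a single query returns the same data.
query_count = 0
batched = fetch_stats_batch(list(PROJECTS))
batched_queries = query_count

print(n_plus_one_queries, batched_queries)
```

With hundreds of projects, the same refactor turns hundreds of round trips into one, which is the inefficiency being called out in the discussion.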