From YouTube: 2020 04 06 Memory Team Weekly
A: So, I put that as a follow-up for them. We can review release post items next week, since that'll effectively be our last meeting before the release. So that's the end of that; on to the next one.
A: Our total so far for the month is 5. We had an awesome month: last month it was 59 or 61, depending on which graph you're looking at. It being only the beginning of April, I went through our OKRs for the memory team, and you can see the list here.
A: We've done a good job of transitioning the project to the Group Import team. Sorry, to answer your question, Camille: per milestone? Sorry, no; MRs are counted per month. That's a question I asked about, whether we can align them per milestone, and the answer was that it's difficult, so our count is per month.
We started looking at this with the WebSockets issue that we kept planning for future milestones but never got to. Real-time editing is something that's a super high priority at this point in time; I can't remember which team is working on it, but they have begun to actually work on it. So Matthias will join the working group as a member and pull us in if anything comes up that we need to pay attention to.
C: I'll make a comment in the doc, but let me voice it over a little bit. A couple of things in my mind to keep in mind on this: first, what is the impact on our smallest configuration size, like the minimum RAM and CPU requirements, if we're adding a new process or new worker type? And then, what do we expect the additional incremental cost on GitLab.com to be when we run this at scale and start to have additional connections?
C: For an average user right now, in physical numbers: a user typically has a periodic polling interval on an issue, and now they'll maintain a WebSocket connection instead. So you'd likely expect that our worker counts will have to go up on GitLab.com. Keeping those two things in mind would be good as we think about the impact it will have on GitLab.com margins, as well as our ability to actually support platforms like a Raspberry Pi.
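The worry C raises can be put in back-of-envelope terms: polling scales with requests per second, while WebSockets scale with concurrently open connections, which maps to worker count. Here is a minimal sketch in Python; every number in it (user count, polling interval, connections per worker) is a hypothetical placeholder, not a GitLab measurement.

```python
# Back-of-envelope model of polling vs. persistent WebSocket connections.
# Polling is amortized into requests/second; WebSockets pin one open
# connection per active user, so capacity is counted in workers.
# All inputs below are invented for illustration.

def polling_rps(active_users: int, poll_interval_s: float) -> float:
    """Requests per second generated by users polling an issue page."""
    return active_users / poll_interval_s

def websocket_workers(active_users: int, conns_per_worker: int) -> int:
    """Workers needed to hold one persistent connection per active user."""
    return -(-active_users // conns_per_worker)  # ceiling division

users = 100_000                          # hypothetical concurrent viewers
print(polling_rps(users, 30.0))          # polling every 30 s -> ~3333 req/s
print(websocket_workers(users, 4_000))   # 4k conns/worker -> 25 workers
```

The asymmetry is the point: polling load can be absorbed by existing fleet headroom, while persistent connections demand dedicated capacity that exists whether or not any messages flow.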
B: Right now (I'm just trying to catch up with what has been said and done so far) it sounds like there is a certain minimal cost we would pay, which is at least an extra thread pool serving these connections, but much more likely separate processes on separate nodes that we would break out just to serve this kind of real-time traffic for real-time editing. So yeah, it would be a good thing to see if there are options, like for the lot of customers who say…
It
would
be
just
thing
to
see
if
there's
like
options
like
a
lot
of
customers
who
say.
C: I asked if we could feature-flag it when it's deployed on, say, GitLab.com, and the answer was initially no. That's, you know: when I'm gonna roll this thing out, are we gonna just turn it on and instantly crank up all these nodes, without it being gradually deployed? That's, I think, concerning. And then the other aspect, to your point, is whether it's an optional feature or not; I think the idea was to make it not optional.
C: Obviously there are costs to making things optional: you need to maintain configuration and support matrices, and just think through what happens when this thing is not on versus what happens when it is on. So I get that we don't make everything optional, but to your point, if the cost becomes pretty high, maybe it should be. Yeah, so I'd be happy to connect, for sure; sounds great, and we can share, I think, also.
C: On the cost side, we have Davis Townsend, a finance person focused on GitLab.com costs, and so he can also potentially help provide some analysis of what the impact is, if we can tell him what the general needs are: like, we expect to have X amount more worker nodes, and we want separate pools at this instance size. He can help plan that out, I think, and arrive at a cost, if we just tell him what the impact will be.
A: These are all good concerns and thoughts, but we should get them into an issue somewhere. Maybe, Matthias, you can figure out where once you're in the working group and see where the work's going on, so that we make sure we keep track of them, others can comment, and it's linked to ongoing work.
C: One quick thing before I forget…
E: Yeah, before that: we are using WebSockets today already. ("Oh really?") Yes, we are using WebSockets for communication via Workhorse to the application terminal, you know, the terminal that runs on the environments, and the one on the runners. Even that whole communication goes through WebSockets via Workhorse, which is where I believe this is gonna go too. So we already have WebSockets. I don't know how customer load balancers are configured, but we are using WebSockets in a very limited scope today already.
A: Okay, we have some follow-up items to track there; I'll make sure they happen this week. I'm going to move this along now, because we have a lot to talk about today. So, next milestone, when we do a retro (and sorry, I'm going to pick on you here, because you were the first one to comment): can we break out the comments so that they're each in their own thread, so that folks can then follow up on each of the additional items?
C: Yeah, just a quick heads-up that, as many of you I think know, we've paused hiring for some roles due to the global economic situation and the unknown impact it's currently going to have on our business. So, to be good stewards of our revenue and our burn rate, we've paused some roles, and we've paused hiring for our memory and database PM.
C: And so I will be the PM for the foreseeable future, until we reopen hiring for less critical, sort of non-essential roles, if you will; I'm not sure of the exact correct terminology for GitLab. There's a handful of roles that are continuing to hire, but this is not one of them, and so I will be the PM for the foreseeable future.
A: And then, if you all missed it in the company FYI channel, there's a very long description from Sid about what we're doing. All right, then, on to the other items here. I ran through the memory team board, and everybody has it pretty well updated for the week. So I figured we'd talk about the work item that Josh put in here for 13.0 and beyond. Back to you.
C: Sure. So, I'm just thinking through how to be more proactive. One more recent data point is that we just completed an NPS survey. If you're not familiar with NPS, it's that survey technique where you get asked to rate a product on a one-through-ten scale; it's very common, and it does have its flaws, but it's a method. So we completed that for GitLab, and you can see the survey results in the presentation summary at that link.
C: Just thinking through this, it may be interesting to figure out what the next big thing is to go tackle to try and improve performance, and what that looks like. I'm not sure I have a good taxonomy in my head of whether there are a couple of big things, or whether it's just a whole bunch of little things to go do. I did ask some of the infrastructure team what they thought.
E: I think there's not only one angle to that problem. What I think, really, from my perspective, is that we go through every request, every page, but we never look at anything about the front-end performance: the HTML, the CPU performance, and the memory usage on the front end. At least from my perspective, it seems we need to start hitting those more and more, just testing them, maybe manually, maybe automatically.
B: I really like this point about the front end; that was something Nate actually mentioned in the workshop as well. I don't know why I didn't mention it, because I definitely remember thinking about it. He said a lot of times this is where people under-invest in performance: it's very easy to jump to "oh, it's all the database", and so often…
E: So much, yes. Even a very simple find in an array is very often implemented on the front end as an O(n)-type search, because there is basically no good alternative for doing it otherwise. So people fall back to very inefficient methods that are fine for up to 5 or 10 elements, but basically break down when we're processing more data there. So front-end JavaScript is, I think, a very unexplored area for us, really, and we also have, I guess, very little knowledge there.
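The pattern E describes (a linear find used where an index belongs) is easy to show outside JavaScript too. Here is a minimal sketch in Python with invented data; the point is the shape of the fix, not any actual GitLab code.

```python
# The anti-pattern: a linear find inside a loop is O(n) per lookup,
# O(n*m) overall. Fine for 5-10 elements, painful once data grows.
# The fix: build an index once (O(n)), then do O(1) lookups.
# All data below is invented for illustration.

users = [{"id": i, "name": f"user{i}"} for i in range(10_000)]
wanted = list(range(0, 10_000, 7))

# Inefficient: re-scan the whole list for every id we look up.
slow = [next(u for u in users if u["id"] == uid) for uid in wanted]

# Efficient: index by id once, then constant-time lookups.
by_id = {u["id"]: u for u in users}
fast = [by_id[uid] for uid in wanted]

assert slow == fast  # same results, very different cost
```

The same transformation applies directly in JavaScript with `Map` in place of the dict; the cost difference only shows up once lists stop being tiny, which is exactly the failure mode described above.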
E: It's so painful that you wait 2 or 4 or 5 seconds, on my pretty beefy PC, to load a page of like 500 kilobytes of data. So I think the simplest way would be to go to the gitlab-org/gitlab repo, go through different pages, and start optimizing them one by one; kind of try to find and optimize, and we would probably find a lot of interesting areas to improve. Yeah.
F: I'm the quality person in charge of performance testing, so I've got some thoughts. It's an interesting conversation, and the guys have already touched on some really good points.
There are two aspects, generally: server performance and client performance. Client performance would be, for example, site speed; we are not doing enough of that, and I want to increase it in the future. I think there are some sitespeed tests that run regularly against GitLab.com, and they regularly pop up with the classics.
F: There have been some improvements to those pages very recently, which actually improved things dramatically, so we are catching these a little bit, but we always want to increase our coverage. It would really be amazing if we could get more detail out of the survey (I guess we can) about what areas people are seeing. But there are other areas that we know are bad: for example, search. That's just bad; search is slow, it's not good. The search team is still working on it while we try to get Elasticsearch performing better there.
F: I think there are quite a few common paths that people are probably crossing on GitLab.com that are just slower on .com compared to, maybe, our self-installed instances, because it's just not quite the same at .com scale. I'll have more thoughts and feelings on this as we go forward, if you want to talk more, Josh, about wherever else we need to look. But there are issues open; I just tried to get a list there, and we're continuously adding to it.
F: To add more coverage, I've linked them now into the doc, so there are now some good known issues in there. But the bigger known problem points are things like projects and groups: their performance just degrades exponentially as the number of groups and projects increases, and we've got other issues in there for those as well.
F: It actually goes up quite high. There's a weekly performance group meeting that occurs where these issues are looked at and then prioritized accordingly, to try and bring it down. So it's hard to gauge from the survey: certainly, even a few releases ago, GitLab as a product was slower than it is today, and certainly six months ago it was a lot slower. So it's hard to differentiate.
F: Is it just, you know, an opinion that's been formed over time, compared to what it's actually like running today? Or is it actually still slow today because of the actual performance points that are in those issues? But yeah, there's a weekly meeting (I forget what the group is called) where they actually prioritize and hand work out to the teams, who again prioritize what they can to tackle performance issues.
C: I think there are a couple of other quick points; we're running out of time, yep. One is that, from talking to a few folks, I think the performance of GitLab.com is just very uneven: it's fine sometimes, and then it can get really slow for a couple of hours because of outages, issues, challenges, what have you. And that happens often enough that it probably does impact your overall perception of .com.
C: I think the other question I have is: how long does it take to actually triage and find these, and is there a better method of doing it? Like, if we did distributed tracing (and we actually have support for it, but we don't have it running on GitLab.com today), would that help with that? Would it point us more quickly and easily to potential common pain points?
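What distributed tracing would buy here can be sketched in miniature: every unit of work records a span tagged with a shared trace ID, so a slow request decomposes into its parts instead of being one opaque number. The toy tracer below is hand-rolled Python for illustration only; a real deployment would use the OpenTracing/Jaeger-style tooling alluded to above, and the sleep durations are invented stand-ins for real work.

```python
# Toy illustration of tracing spans: each span records (trace_id, name,
# duration), so the slowest component of one request is directly visible.
# Hand-rolled for illustration; not a real tracing client.

import time
import uuid
from contextlib import contextmanager

SPANS = []  # in a real system these go to a collector, not a list

@contextmanager
def span(trace_id, name):
    start = time.perf_counter()
    try:
        yield
    finally:
        SPANS.append((trace_id, name, time.perf_counter() - start))

def handle_request():
    trace_id = uuid.uuid4().hex
    with span(trace_id, "controller"):
        with span(trace_id, "database"):
            time.sleep(0.05)    # simulated slow query
        with span(trace_id, "render"):
            time.sleep(0.005)   # simulated template render
    return trace_id

tid = handle_request()
# The slowest child span for this trace points straight at the pain point.
worst = max((s for s in SPANS if s[0] == tid and s[1] != "controller"),
            key=lambda s: s[2])
print(worst[1])  # prints "database"
```

The triage win is exactly the one asked about: instead of manually correlating logs to find why one endpoint was slow, the trace already names the dominant span.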
F: Partially, yes, specifically for client-side browser testing. There are some issues that are just bad for the client; merge requests were one of those that was caught that way. But for getting more automated data out of GitLab.com, that, you know, has to be a little bit smart. If we see a one-off slow load, it's a one-off.
F: We don't really care about that, potentially, because it's hard to then differentiate: is that particular page or endpoint performing badly, or is it just that there was a lot of other stuff going on in the environment, in a very specific way, when it loaded slowly there, and then I hit it again and everything is fine? So if we can get data there and say: here, last week, this page was slow, it took a couple of seconds, for example, to load each time; that would obviously be very valuable for us moving forward.
F: There is a channel that I just put in the doc, called #mech_symp_alerts, which is a useful Elastic setup. I didn't set it up; I just know of it. I think it reports on some top-level stuff from GitLab.com via the Elastic monitoring, and in that channel you will recurrently see "here's the worst-performing endpoint", and it's always projects, and it's always groups, and sometimes you'll see something else; blob searches, actually, you know, sometimes come up there, as a Rails controller.