From YouTube: 2020 04 27 Memory Team weekly
A: We had some of the tracing and user stories that Josh had defined for 13.0. Those are unassigned right now, so we can talk about what's going on with 13.0, then look for folks that are coming off what they're working on, and maybe assign DRIs for some of those areas of unassigned work. So, to get into it: we can talk about Puma. Kamil, do you want to run through those real quick?
B: So I didn't start working on the documentation part yet. I'm kind of thinking that this is the most important aspect to finish right now, because then we could start tracking the upstream Puma version instead of our fork. Yep, and I'm going to go back to the documentation; it won't take long. Right now I'm actually waiting for the follow-up on this one.

A: Okay.
B: So probably the rollout could be tricky, because I think at this point we really need this infrastructure of the runners to be implemented. Well, I don't know; there's a big issue about it. I mean, we're going to discuss that tomorrow, I believe, because I think at this point we are being blocked by the lack of this item in production.
A: What was the item again, Kamil?
B: Certainly the creation of the get-logged chart on our manager, on the Alex one cluster.
B: We have two tracks, which are the project and the group. The group is being handled by another team, so they merged the group ndjson support on the import/export side. So it seems like the development part should be pretty much done at this moment. What we really have left with ndjson is validation and documentation, basically, because, at least from my perspective, it seems that 99% of the changes are done, as long as everything works as expected.
B: But if it doesn't work as expected, we may need to make additional changes. So right now we need to conclude that validation part and figure out the most minimal changes to the documentation so we can call it closed. Because, I mean, there are some issues where there is very little progress, and many of them are related to the validation and the documentation. So it's fine, because we still have something like three weeks, but time passes by quickly.
B: For a bit, I would only try to verify that it works, because I only tried to run the ndjson task. But maybe there is also another good idea: as soon as we enable both feature flags (because we have two feature flags, the ndjson ones), try to perform this export and import using the web UI, because we are only using the API. And we have our import/export metrics, if I'm correct, right?
B: Because at least for the export (I'm not sure about the import) it's just a Sidekiq task. What else do we have? We have memory consumption; we don't really have that in Sidekiq.
E: Yeah, if you want to read up on the details, go to that issue I linked; I'm not going to go into the details here because it's too much. I keep updating it with the latest state of affairs. But maybe as a super high-level overview: we basically spent last week looking at three main things.
E: Well, I don't know if there's a way to take the server portion of Action Cable; if we could split this out from the rest of Action Cable, then maybe, you know, we could only pay that memory cost and keep it small. Then what we did is we looked at what could be a technical split: if we run normal web nodes and Action Cable-specific nodes, what memory savings can we get there? And what we focused on there was basically memory sharing: how can we make sharing of memory between these processes more efficient?
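The sharing mechanism in question is copy-on-write between forked processes: a preloaded application's pages stay shared until a process writes to them. A minimal Python sketch of the idea (an illustration only, not GitLab's or Puma's actual code):

```python
import os

# Allocate a few MB of data in the parent before forking. After os.fork(),
# parent and child initially share these pages copy-on-write, so a second
# process does not pay the full cost of the preloaded data again.
preloaded = list(range(1_000_000))

r, w = os.pipe()
pid = os.fork()
if pid == 0:
    # Child: it sees the parent's pre-fork data without re-allocating it.
    # Caveat: CPython's reference counting writes to object headers even on
    # reads, which dirties shared pages; Ruby has a similar problem that GC
    # compaction and "nakayoshi fork" techniques try to mitigate.
    os.close(r)
    os.write(w, str(len(preloaded)).encode())
    os._exit(0)

os.close(w)
child_view = int(os.read(r, 32).decode())
os.waitpid(pid, 0)
print(child_view)  # 1000000: the child read the parent's pre-fork data
```

This is why preloading the application before forking workers matters: the more memory that stays read-only after fork, the cheaper each additional process is.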
E: So that's currently what we're looking into. And then the third one was: is there a possibility to get memory savings by splitting it more along the vertical axis, which is the feature axis? Because we had thought, well, you know, there's only a small number of features where we actually would have real-time functionality.
E: So maybe this is something we can separate out and then only run on the Action Cable service. But that turned out to be super tricky. There's an open issue for this; I'm not totally sure it's going anywhere, to be honest, because what we had identified we could do is all really difficult, long-term work: basically restructuring the application in terms of feature modules or whatever. So I don't think that's going to be super fruitful. But these were the three main areas we looked at.
E: The question at this point would be: I don't think we really have good exit criteria for our involvement in this, because we can spend a long time trying to squeeze out more megabytes of savings there. I mean, I'm not sure what's good enough; we have never said what is acceptable, you know, to run WebSockets. So it's really hard to say what I should be spending more time on, because we could keep working on this for months.
E: You actually can get a lot of overlap, because they all load the same Rails application, they all load the same gems, and a lot of this is static; it doesn't really change, so it can be shared. But there were a couple of blind spots. For instance, in production Prometheus we don't even report the unique set size and proportional set size, which are the metrics that actually give you some insight into how big these shared memory spaces are in production.
A: So I think from my standpoint, and correct me if I'm way off on this one, but as a participant in the working group, especially from the memory standpoint, it's just making sure that they understand the memory implications of what they're implementing, and that we've given them guidance on how they can measure it. If they have that in place, you can continue to participate and kind of follow along, but switch your focus to something else in our backlog.
A: I was going to bring up that we have some unassigned work in our backlog. These are some of the things that Josh talked about at kickoff, right? So, the performance testing of common GitLab user journeys (he's got some more content in there, so I'd read up on that issue), and then the distributed tracing for GitLab.com.
E: Right, yes, it sounds like she's on top of that. And I don't know about this issue specifically, but some of the kind of sibling issues that also fell out of the same epic were being discussed in the same thread. So it sounds like Scalability will look after them. Yep.
A: The two big ones that Josh wanted to get some traction on were the performance testing and the distributed tracing. So if anyone feels like they're coming up free and is looking for another juicy story to work on, feel free to assign yourself, and you can start talking more about the details asynchronously within the issue itself. We have about three minutes until I need to jump off for my next meeting. Any other topics we need to cover today, or shall we follow up asynchronously?