From YouTube: 2021-06-28 Memory Team Weekly
A: Welcome to the memory team meeting. It is June the 28th, and this is our weekly team meeting.
A: So we have a couple of non-verbalized updates about PDO, and CZ is unfortunately not able to attend because he's on vacation in a very nice spot. Oh yeah, so there was a question, though; I think I raised it in the retro, because I'm never sure how committed we were to this deliverable label thing. It seems like, as a team, maybe we don't apply it everywhere, because last time the summary was: oh, we shipped these 30 things.
A: Oh sorry, like, we shipped these five things, but, yeah, these 25 more things we also did. We only attach it to 90 percent, so I don't know what this feeds into, or if we should be doing it and when, but I feel like: if we commit to doing it, we should do it everywhere, or not do it at all. Otherwise it's just kind of silly.
B: Yeah, I think the thing that we do in Geo, which is slightly different and which we could also do, is: if we say we're going to, we have it in our planning issue, and everything that is explicitly called out gets a deliverable label, because that's what we already know ahead of time we want to do.
B: We don't have to, though. I think let's do it everywhere and see how that goes; that's probably the simplest thing we can do, and it creates the least amount of overhead. I hope that at some point we'll do this on epics, at which point it becomes a lot more sensible, at least for me. I think that's my suggestion.
B: Yeah, it's like, I think it feeds into this page here. One second.
B: Yeah, which is probably why the ask is being made, because we're not rendering anything in here. I have no data for how many people look at this page, or any of it. So the simplest thing today, I think, would be to just apply it to everything and move on, in my opinion.
B: Yeah, I verbalized this: I'm going to be in the U.S. time zone, so, actually, this is wrong; I won't be able to attend our office hours for that time, because I'll be on U.S. time, so I'll be asleep. I will still be in our weekly meeting, because that should be fine.
B: And then, lastly, I have it on my list to review our memory direction page this week. Maybe I'll get to it, maybe next week, because we've had a ton of really good discussions on where we want to go and some longer-term things, and I want that to be reflected on there. So I'll ping all of you for review, just a heads up.
A: Okay, so unless there's anything else, should we go to the board? Or, yeah, when do we go through the validation board again? We haven't really done it much lately, right?
B: Sure, cool! Well, if I don't talk to you again this week, have a good rest of the week. I'll be off on Thursday and Friday, yeah, trying to shepherd a one-and-a-half-year-old toddler through multiple airports. So.
A: Yeah, okay, I mean, I can start, I guess. I think we went through this; I think nothing has changed in the cost column, right? Just looking at this, correct me if I'm wrong, because I definitely talked about my stuff last week.
A: Oh, it's an easy fix; this is really cool. That was the leftover from the deprecation work for the big release that we forgot. So CZ actually picked this up, and that was merged last week. So that's cool; it'll now go out with 14.1. I don't think it's a big deal that we missed it, it's just cosmetic, but it's nice to fix.
A: Yeah, okay. I don't know; my stuff is from... if you want to talk about anything, let me know, but my stuff is from more than a week ago.
C: Yeah, I think nothing super specific. We closed, like, the whole unicorn epic by closing the very, very last documentation update, so the whole epic is now closed and, yeah, we're finally done. And I think that I closed everything related to the build queue worker, but I'm not really sure if it was this previous week. So yeah, that's it. Okay.
A: Yeah, maybe have another look at this one. Cool, nothing in verification, yeah.
A: I have two things in review this time, actually. Yeah, Nikola gave some good feedback. So this is basically, if you remember, we had this problem; it's actually not specific to load balancing, it's just in general. We have, like, a client middleware in Sidekiq which increments a Prometheus counter, and the way Sidekiq works is that if you schedule a job for future execution (so anything like cron jobs, but now also, because of load balancing, these delays that we inject into workers so that they, yeah, try again reading from a replica, for instance), these now get counted twice, because they first go through the client middleware.
A: It's a bit strange, but in Sidekiq you can register the client middleware both for the client and also for the server, because the server might also be enqueuing jobs. So if you want the client middleware to always be enabled, you need to add it twice. So this means that if a job gets executed in the future, it goes through that middleware twice.
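A minimal sketch of that double registration, assuming a hypothetical `JobCounterMiddleware` class (sketched a bit further below); `Sidekiq.configure_client`, `Sidekiq.configure_server` and the `client_middleware` chain are standard Sidekiq configuration hooks:

```ruby
require 'sidekiq'

# Register the *client* middleware for processes that only enqueue jobs...
Sidekiq.configure_client do |config|
  config.client_middleware do |chain|
    chain.add JobCounterMiddleware # hypothetical middleware, sketched below
  end
end

# ...and again for the server, because a running Sidekiq process also enqueues
# jobs (retries, jobs that schedule other jobs), and those pushes go through
# the client middleware chain too.
Sidekiq.configure_server do |config|
  config.client_middleware do |chain|
    chain.add JobCounterMiddleware
  end
end
```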
A: First because you schedule it, and then again when it actually runs, so we double-count all these jobs. So what I'm trying to do in this MR, and there was some back and forth about how to approach this, is basically: I still count it twice, but I add an extra dimension to the metric, like a new label, so that in dashboards, if we want to, we can then untangle that. We can say which ones are actually executions and which ones were just enqueued because they were waiting to run at some future point. So yeah, I'm looking at some testing and stuff there.
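A rough sketch of that extra label, assuming the `prometheus-client` gem and a hypothetical metric name; in a Sidekiq client middleware, a job pushed for future execution carries an `'at'` timestamp in its payload, which is enough to tell the two cases apart:

```ruby
require 'prometheus/client'

# Hypothetical client middleware: counts every push, but records whether the
# job was scheduled for later ('delayed') or pushed for immediate execution
# ('immediate'), so dashboards can untangle enqueues from executions.
class JobCounterMiddleware
  COUNTER = Prometheus::Client.registry.counter(
    :sidekiq_enqueued_jobs_total, # hypothetical metric name
    docstring: 'Jobs seen by the Sidekiq client middleware',
    labels: [:worker, :scheduling]
  )

  def call(worker_class, job, queue, redis_pool)
    scheduling = job['at'] ? 'delayed' : 'immediate'
    COUNTER.increment(labels: { worker: worker_class.to_s, scheduling: scheduling })

    yield
  end
end
```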
A: And then I started to work on the Ruby 3 bump, or, like, unblocking it. I guess there were a dozen or so gems that had to be bumped to add Ruby 3 support and get rid of deprecation warnings and all that stuff.
A: This one was a lot of work, because it was one of those issues where, yeah, it's a one-liner in code, but there were a dozen different things related to it that were really complex to test. That's our object storage wrapper from Google for GCP GCS, so I had to set up GCS storage and then walk through all these scenarios, like build trace chunks, cleanup jobs, build artifacts, all these things that end up going to object storage through different paths. And we actually found a regression in this gem, so Stan immediately sent a fix to the upstream gem, and that already got merged.
A: So this is, you know, it's actually 1.15 now, because they bumped the gem. I don't know who maintains that gem, the community; they release new versions. So I'm just waiting on the maintainer for a new one.
D: Okay, I guess I can go next. This investigate-archive-JS-requests issue should be closed; like, I think that I closed the issue, I don't know why it's there. I'm pretty sure that I closed the issue, but maybe the label still says something like workflow::in dev; maybe it's not automatically moved. Oh yeah.
D: Okay, I will close it later, because we figured out what's going on. I'm not sure what the decision in the end was, I'll check, but, yeah, we figured out what's happening and, based on that, I think that we introduced some better metrics as well, but this is related to the second issue. So I don't think that we should spend any more time on this, because this is expected behavior. Maybe I will ping Camille or someone else who introduced this change to ask, because what's happening there is...
D: This one was very strange because, from the beginning, when load balancing was enabled, we saw these spikes, and this particular worker always used replicas. So since we enabled load balancing for Sidekiq, even though this worker wasn't configured for it (it doesn't have data consistency set at all), it should be hitting the primary by default.
D: What's happening is that some code is doing sticking and unsticking for some other case. For this worker here, the behavior was always to hit the primary, because load balancing wasn't enabled, but clearing the artifacts from the main thread was using this sticking and unsticking, and now that we've enabled load balancing for Sidekiq as well, it's still utilizing that. So I think that we don't have a problem, because the data consistency still holds; I'll just check that.
A: Basically, then, this should probably be that, when it's hitting a replica, it is actually supposed to; it just uses a different mechanism to ensure that.
D: Yeah, about the first one, I didn't really have time; like, this is a really small thing, but recently I was reviewing a zillion MRs.
D: Yeah, and the board is not reflecting it, but I opened two new issues. One is to remove the default environment variable that enables load balancing for Sidekiq; I think this is important to remove, so we can finish things off. And the second one is also pretty simple: remove defaulting data consistency to always and just convert all workers, so that when we introduce a new worker, an exception will be raised if the data consistency is not configured; this will force other developers to...
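A rough sketch of the kind of guard being described, under the assumption of a hypothetical `WorkerAttributes` mixin; the real GitLab helpers may differ, but the idea is that a worker must declare its data consistency (`:always`, `:sticky` or `:delayed`) explicitly, and a new worker that forgets to do so raises instead of silently defaulting to the primary:

```ruby
# Hypothetical class-level mixin for Sidekiq worker classes.
module WorkerAttributes
  VALID_DATA_CONSISTENCIES = %i[always sticky delayed].freeze

  # Called in the worker class body, e.g. `data_consistency :delayed`.
  def data_consistency(level)
    unless VALID_DATA_CONSISTENCIES.include?(level)
      raise ArgumentError, "unknown data consistency: #{level.inspect}"
    end

    @data_consistency = level
  end

  # No silent default: every worker has to make an explicit choice, which is
  # what forces developers to think about whether reads can go to a replica.
  def declared_data_consistency
    @data_consistency || raise(ArgumentError, "#{name} must declare data_consistency")
  end
end

class SomeNewWorker
  extend WorkerAttributes

  data_consistency :delayed # reads may be served by a replica after a short delay
end
```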
C: Yeah, but I mean, we don't know how often this happens, so we decided we need a metric to understand how our improvement will unfold, because we don't know how often it would cause the code to choose the primary instead of a replica when not all replicas are caught up. So we started with the metric, but it's tricky, because the middleware order screws us up: when we make the load balancing decision, we don't have metrics yet, and it's a bit of an awkward chain of notification currently.
C: No, no, we want to understand how often we are not picking a replica because of this code.
C: We want this particular logic to be quantified somehow, to understand how often we traverse over the replicas and are not able to pick one. Currently, if only one of the replicas is delayed, then we are not able to, and we want to understand how severe the situation is, so we need metrics; otherwise our improvement would not be very easy to verify or validate. So yeah, we want just a simple metric.
C: How often do we fall back to the primary when we call this caught-up code? So, yeah, it's a bit of a mess with the middleware, because we need to rearrange them: we need a transaction to be in place to increase the counters, or to postpone this. So I'm currently trying to make it happen, but it's tricky, and it's only a first stage where we introduce a replica.
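A very rough sketch of the metric being discussed, assuming the `prometheus-client` gem and hypothetical names for the counter and the host-selection helper; the point is simply to count, per decision, whether a caught-up replica could be used or the read fell back to the primary:

```ruby
require 'prometheus/client'

# Hypothetical counter, incremented once per load-balancing read decision.
LB_DECISIONS = Prometheus::Client.registry.counter(
  :db_load_balancing_decisions_total, # hypothetical metric name
  docstring: 'Read-host decisions made by database load balancing',
  labels: [:outcome]
)

# Hypothetical selection helper: prefer a replica that has caught up with the
# last write location, otherwise fall back to the primary and record that.
def select_read_host(replicas, primary, last_write_location)
  caught_up = replicas.find { |replica| replica.caught_up?(last_write_location) }

  if caught_up
    LB_DECISIONS.increment(labels: { outcome: 'replica' })
    caught_up
  else
    LB_DECISIONS.increment(labels: { outcome: 'fallback_primary' })
    primary
  end
end
```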
C: As for other issues, I see that Matthias posted some updates on memory-intensive endpoints. I actually was only able to look at a single one, on Thursday, and I think CZ asked for the Source Code group PM, so we rearranged one of the issues already, and we have, yeah...
A: Well, the follow-up from last week: I think we said we don't need to work on them, yeah, but if we just leave them open in the backlog, my argument was, nothing is ever going to happen. So what I wanted to make sure is that we check whether it is still a problem and, if so, we assign it to the code owner, and then we can still help them, but, you know, it shouldn't sit in our backlog.
A: All right, yeah. I had another question, which was: so we said that our exit criteria for the load balancing epic were pretty well defined. There's the cleanup, right, that you mentioned, Nikolai, and I think there was one other issue. There was one issue here; there was a follow-up as well. Where did it go?
A: That's a fair point; yeah, that's a fair point. But I mean, even this one definitely came out of the Sidekiq load balancing work, Nikola, remember? That's the one where we thought it could be helpful to break this metric down more, but it also makes it more complicated. I'm just wondering, like...
A: Are we confident that this is something we want to have now? Or is this something we still want to do, given that we did not scope it as part of the exit criteria? I could pick this up now, or I could work on the Ruby 3 stuff, so I was just wondering if there's any preference from your perspective as well.
D: So I don't think that the memory group will be, or should be, included in all of these things, but the issues that we already defined can stay. Maybe we can extract them and move them into a separate epic, but I think that we can wrap it up with 14.1, and the other issues that are listed in the exit criteria will maybe spill into 14.2.
D: We just moved it to 14.1, and we already started working on the caught-up check and, like, preparing that metric. I think that we can still continue working on it: prepare the metric, roll it out under the feature flag, remove the feature flag, measure the impact, and this will probably spill into 14.2 as well. But I would not say, okay, this is not related to the Sidekiq epic, let's move it to a separate epic and then do all the stuff in that epic; maybe we can just finish it there.
A: Yeah, I guess the reason I bring it up is that I want to make sure we work on stuff that we know will be useful, and I'm not saying this is not useful to have, but so far I have not missed it. So I'm just wondering if there's really pressing demand to have this, because this is probably going to take a week or so to do.
C: Probably, as I understand the concern (and I also wasn't missing it), one draw towards doing it now is that we still remember all the details, and I mean it would be much tougher for the team to do that later on, because they would need to talk to us, and even for us it would be tougher. That's probably my argument towards picking it up right now, but I also understand that it's not pressing, so we need to make a compromise.
D: I agree with Alex. I don't think it's, like, super demanding and pressing, because we used this metric before and we can conclude a lot of things even at the moment, but before we start using this metric on some dashboards, it would be better if we have, like, the final look. So I would say that it will be easier, because we have the full context, for it to be done by us than to be given to someone else.
D: Why I would do it sooner rather than later is: if we conclude our work on load balancing and move somewhere else, it will be great if we have the final look of these metrics. If we start using them in some dashboard, it will be more difficult to change them later as well, and, as Alex already said, it will be difficult to explain to someone else what we'd like.
A: So I'll probably pick it up soon. Okay, I think that was it. Anything else?