From YouTube: 2020 04 20 Memory Team Weekly
B
All right, so back to the beginning of the milestone. The next milestone ends in 27 days; MRs for the month: 22. There's a new feature in 12.10, health status, and we should start using it where it makes sense. It probably doesn't make sense at the individual issue level, but at an epic level it provides a pretty good overview of where the focus is; it's pretty cool anyway. Take a look when you have a chance.
B
Chun pointed out to me that the number of MRs we're looking at doesn't match between the member view and the team label. So please be sure to apply the group memory label to your MRs so that we have a consistent count when looking at them. I'll go through them today and try to catch up on all of those and make sure they're labeled properly, but it's something to keep in mind going forward. For May, there are still some slots that need to be filled for support, so anybody who hasn't yet, please put in your time for support.
B
Please take a look. Also, the 12.10 retro issue is open, and Camille's already put something in there, some nice feedback on our NDJSON efforts, so thank you. Jumping over to the board, there are quite a few issues still at 12.10; we'll run through those real quick and move them accordingly. So I think with this one, this is kind of the issue that tracks all the remaining work, right? This isn't a specific implementation issue that makes plan limits available.
C
And I'm actually rerunning my performance tests right now to update the patch, because, like the plan says, I'll re-run the performance tests, figure out the best values, get that confirmed, and then push an updated version tonight. This was what we were discussing on the course of action: I'm working right now on retesting the patch so we can push that to our local testing first.
E
That's waiting for code review; I sent it out today. It's just a really small bug fix. So this relates to this whole series of MRs that return correlation IDs and the number of failed relations from imports in the REST API. The functionality shipped with 12.10, but while testing I found this problem. But it's not...
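For a concrete picture of the feature described above, here's a minimal sketch of reading those fields back from the REST API. The response fields (import_status, correlation_id, failed_relations) match the project import status endpoint around that release, but the instance URL and environment variable names here are placeholders:

```ruby
# Minimal sketch: fetch a project's import status and print the correlation
# ID and failed relations it returns. The instance URL, project ID, and
# token come from hypothetical environment variables.
require "net/http"
require "json"
require "uri"

gitlab_url = ENV.fetch("GITLAB_URL", "https://gitlab.example.com")
project_id = ENV.fetch("PROJECT_ID")     # numeric ID or URL-encoded path
token      = ENV.fetch("PRIVATE_TOKEN")

uri = URI("#{gitlab_url}/api/v4/projects/#{project_id}/import")
req = Net::HTTP::Get.new(uri)
req["PRIVATE-TOKEN"] = token

res = Net::HTTP.start(uri.hostname, uri.port, use_ssl: uri.scheme == "https") do |http|
  http.request(req)
end

status = JSON.parse(res.body)
puts "import_status:  #{status['import_status']}"
puts "correlation_id: #{status['correlation_id']}"

# Each failed relation describes one piece of the archive that didn't import.
Array(status["failed_relations"]).each do |rel|
  puts "failed: #{rel['relation_name']} (#{rel['exception_class']})"
end
```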
E
It's not a big deal, so there will be a fix. Well, actually, I had a question about this: it's tagged as 12.10, but the bug fix will go into the next patch release. So what milestone is that then? Is that still 12.10, or is it 13.0, since we're already kind of moving to 13.0? I wasn't sure how to take that.
A
Yeah, sure. So I'll just talk through a little bit of 13.0 and kind of some ideas here. The first one, I think, as far as priorities go: we should finish up, of course, any work that we have in progress, just to make sure we see it through. So let's just get that stuff done. It sounds like most of it's already in review, which is awesome, so that's great.
A
Anything that comes in on Puma that we might learn or find out about as part of the rollout and changing it to be opt-out, of course, we should pick up work on that as it comes in. And then just making sure we finish out the important CI minute work. Is anything remaining there, or are we kind of done on both of those? Yeah.
D
We need to test it on staging, and we have a separate issue dedicated to that, which is slated for 13.0. It would require some effort from our side, and I plan to start it today, and we also need to ask our site reliability engineers to actually enable it. That wouldn't require much effort from our side, but it would require some coordination with, like, the SREs.
A
So we have that fallback option if we need to. I agree: like a little warning box that pops up that might say, "Hey, this appears to be in an old format." I'm not sure what kind of messaging we can give, but if we can give a little text message, and it's low effort, that might be helpful. Another question would be...
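A minimal sketch of that warning-box idea, assuming a legacy export can be recognized by its single top-level project.json (versus the newer NDJSON tree); this is illustrative only, not GitLab's actual implementation, and every name in it is hypothetical:

```ruby
# Hypothetical helper: after extracting an uploaded export archive, decide
# whether it looks like the legacy single-JSON format and, if so, produce a
# warning message a UI box could show.
class LegacyFormatCheck
  LEGACY_FILE = "project.json"  # pre-NDJSON exports shipped one big JSON document
  NDJSON_DIR  = "tree"          # newer exports split relations into NDJSON files

  def initialize(extracted_path)
    @path = extracted_path
  end

  def legacy?
    File.exist?(File.join(@path, LEGACY_FILE)) &&
      !Dir.exist?(File.join(@path, NDJSON_DIR))
  end

  # The wording here is illustrative only.
  def warning
    return unless legacy?

    "This export appears to be in an old format and may not import " \
      "correctly. Please re-export the project from a current GitLab version."
  end
end

puts LegacyFormatCheck.new("/tmp/extracted_export").warning
```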
A
We can figure out the messaging. It's typically something like a deprecation notification, so you'd get, you know, a deprecation notice; it's just that you won't be backwards compatible anymore. And then, honestly, I think backwards compatibility seems like something that may not have actually worked in the first place, right? We never actually tested it. So it's...
A
Yeah, I agree, so I think we can just get it turned on. What I'll do is I'll look whether there's an issue for this right now, and if not, I'll make one, and I'll ping Harris, and I apologize if I'm mispronouncing his name, just to get his okay, and then we'll just get it turned on and go forward.
A
I suppose I could, right. I'm not sure; I'm fine either way, but we're kind of adding things onto their plate a little bit here, so I want to check with them to make sure that they have capacity, or that they can, you know, have an understanding of what we're proposing. So I will do that, and for now, if there is work on testing on staging, you know, and we have time, we can pick it up, and I'll...
F
I'm not sure, maybe two days, because we just need to import/export, like, two different project sizes and compare the results. But besides that, I have those three merge requests that are moving our measurement module to the service layer, so we can, like, have automatic results of those import/exports shown in Kibana, or something like that. So this is a separate effort that is currently ongoing. It's almost ready for maintainer review, but I still have some specs that are failing.
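As a rough illustration of what that measurement effort could look like, here is a sketch that times a service call and emits structured JSON of the kind a Kibana dashboard can index; the Measurable module, field names, and logging target are hypothetical, not the code from those merge requests:

```ruby
# Sketch: wrap a service call, time it, and log a structured JSON line.
# GitLab's production logs are structured JSON, which is what makes numbers
# like these queryable in Kibana; stdout stands in for the real logger here.
require "json"
require "time"
require "benchmark"

module Measurable
  def with_measurement(operation:, project:)
    result = nil
    elapsed = Benchmark.realtime { result = yield }

    $stdout.puts({
      message: "measurement",
      operation: operation,        # e.g. "project_import"
      project: project,
      duration_s: elapsed.round(2),
      time: Time.now.utc.iso8601
    }.to_json)

    result
  end
end

class ImportService
  include Measurable

  def execute(project)
    with_measurement(operation: "project_import", project: project) do
      sleep 0.1 # placeholder for the real import work
    end
  end
end

ImportService.new.execute("group/big-project")
```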
B
There's an MR that's been open for a while; I haven't gotten a lot of feedback, but it seems like Scalability is doing similar work, and we've asked on the MRs if it makes sense for Scalability, and I even asked them again this morning, whether it makes sense for them to just take this effort over altogether. Yeah.
E
So I always need to wait for the people who can make this call better than I can to weigh in, and it has been, like, a very slow-moving issue for a while, and we broke a bunch more out of these. So I finally got feedback on this MR that has been open for a while, and there were some suggestions; one was to break it up, but none of the main questions were answered, particularly, like, around how do...
E
...we want to set these min and max values. So I don't think so; I mean, I would be unblocked in the sense that, yeah, I can make that MR smaller, but that doesn't solve the problem. So it feels like it would be way quicker if, like, the people who are actually close to that topic would work on this, to be honest.
E
I mean, it touches on both, right? Because it applies to, yeah, it applies to any GitLab instance. It doesn't matter really how you run it, because we have to look at the runtime configuration to tie it all together, and some of it might come from, say, an Omnibus configuration file, and some of it might be user managed, and others are, like, parameters that just come in through the environment, such as, you know, are you running Sidekiq or running Puma, and, like, how many threads does it scale to, and so forth, and tie...
E
...then, from this, arrive at, like, a number that we consider acceptable as a minimum pool size, and arrive at a number which we consider acceptable to be a maximum pool size; then the idea was to kind of clamp it in between those. That's just a safety measure, yeah. So I sent an MR with, like, a suggestion for how to do that, but it's been, like, yeah, a very slow mover.
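To make the clamping idea concrete, a minimal sketch under stated assumptions: derive a requested pool size from the runtime configuration (Puma threads or Sidekiq concurrency) and clamp it between an acceptable minimum and maximum. The numbers and helper names are placeholders, not the values from the actual MR:

```ruby
# Hypothetical bounds; connections are created lazily, so a small
# deployment never actually opens MAX_POOL_SIZE connections.
MIN_POOL_SIZE = 10   # floor we consider safe
MAX_POOL_SIZE = 100  # hard cap as an upper bound

# Derive a requested pool size from the runtime configuration: whichever of
# Puma or Sidekiq is running, plus some headroom for auxiliary threads.
def requested_pool_size(puma_max_threads: nil, sidekiq_concurrency: nil, headroom: 5)
  (puma_max_threads || sidekiq_concurrency || 1) + headroom
end

# Clamp the request between the acceptable bounds, purely as a safety measure.
def db_pool_size(requested)
  requested.clamp(MIN_POOL_SIZE, MAX_POOL_SIZE)
end

db_pool_size(requested_pool_size(puma_max_threads: 4))      #=> 10 (raised to the floor)
db_pool_size(requested_pool_size(sidekiq_concurrency: 25))  #=> 30
db_pool_size(requested_pool_size(puma_max_threads: 200))    #=> 100 (capped)
```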
E
And there's, like, other work going on in parallel in a team that, yeah, operates kind of in the same direction, so it feels a bit detached from all the other stuff we do at this point. So to me it seems like a good point where we cut this over to the group that actually has, you know, more experience running the infrastructure that this would directly affect, yeah.
A
My concern there is that Scalability is primarily focusing on GitLab.com, not so much self-managed, and so the question is to what degree there's any, like, sort of awareness of self-managed. Obviously Rachel and Morin and the whole team are pretty experienced GitLab people, so I have, you know, no concerns that they can actually keep self-managed in mind. So that's the only thought I had there; otherwise I'm happy.
E
I'm curious what Camille thinks, but I think, so, these are mostly safety measures, so that we don't underrun or overrun our connection pool, and these connections are created lazily. So I would argue that whatever works for GitLab.com will work for any self-managed customer, because...
E
...it's not like, if we say, oh, the maximum is, like, you know, a thousand connections or whatever, it's not like we pre-allocate these; it's just an upper bound where we do set, like, a hard cap. So if a customer were to spin up a tiny deployment of GitLab, they would never see, like, you know, that limit be exhausted, and it remains configurable, you know, in gitlab.rb.
E
No, I think, so from, like, a risk perspective, I think what's at risk here is GitLab.com, not really self-managed. But, I mean, like, I think, so maybe we can, maybe this should be the other way around, you know: with our input for self-managed, we should look into... I mean, the reason we created this story and the sub-stories was because a GitLabber commented... that this was not a self-managed thing? This is about self-managed, okay.
A
...consumption is not the most impactful thing we can do, and right now performance would be a little higher impact, and so I was trying to figure out what we could try to help out here, from both, like, a direct contribution level and, like, arming the rest of the team to have a better idea of what they need to go prioritize, I think.
A
Overall, the biggest challenge here is that, frankly, PMs haven't been prioritizing performance issues as they should, and so now we're in this position where GitLab.com is really slow, and there's now an OKR to try and fix the situation that we're in. So I feel like we've sort of gotten into a very bad spot, with a very blunt tool to try to reorient toward performance, and so that's where we're at. So, a couple things there; a couple ideas, and I'd love people's opinions on them.
A
One is tracing; I linked it in the issue, but I think it's already on the 13.0 board. We've had tracing on the table for GitLab for a long time, like over a year, and we haven't gotten it across the line. The observability team from infra is on board with setting it up, so, like, investing time to get tracing set up; on our side, it would be just trying to make sure we get any kind of engineering across the line here. So there is a little bit of instrumentation, depending on what solution we take.
A
We might find out what's not performing well, and we could then just further drive tracing forward. And then the other one would be some of the core performance testing on user experience. We don't really do a good job at, or don't understand, like, how long it takes to accomplish an objective, like getting to your to-dos quickly, or you comment on an issue, for example. And so I think my approach on...
A
...this is that, if we have this information, it would be much easier for PMs to prioritize and understand the impact of these performance problems if it was more contextualized, as far as, like: "Hey, this core workflow, what you're saying is, like, your North Star metric, is actually quite slow; maybe you should consider prioritizing fixing it." That might be more consumable than a bunch of REST controllers or, you know, JSON size improvements and things like that.
A
...as far as, like, the problem to go attack. So I'd love to get some hands on any of these things, and all these things, if they are helpful. The quality team might end up picking up some of the user experience streams; I need to get with Mek to understand, like, to what degree they want to try and take that one up, so TBD there. And the final one is the proposed performance...
A
...effort. The concern is that it does illustrate how slow GitLab web is in some of these cases, and so from there, there's an OKR being proposed to try and solve this, and so I think the memory team can help there, in particular on the core foundational components that might need improvements; that would also be great. And so if folks here think about those three items, that would be very helpful. Camille, I think I'll find a chance to connect with you on these things and get your thoughts.
A
Yeah, thanks, that's really the basis. The second one is trying to help PMs prioritize better and have more knowledge of, like, the user experience; and the first one, tracing: if people think it'd be helpful for understanding what's happening in production more quickly, then we can do that.