From YouTube: 2020 05 18 Memory Team Weekly
F
It's not about whether it happened at some point. We freeze on picking new changes, and that happens around the 18th, so we give ourselves a few days to stop, right, to ensure that everything is stable. OK, I clicked MWPS on this merge request; let's see if it's going to be merged. I'm going to ask in the delivery channel.
D
I think I muted Craig. Are you talking?
A
Maybe it is too early. All right. One thing we talked about last week was bringing back the memory team workflow board and making sure we use it throughout the milestone, so I put it back together. What are folks' thoughts? Do we want to use this, and how do we want to review it and make sure it's up to date? So, first question: do we want to bring this back?
A
Just for us, I mean — if you were doing a Kanban workflow, then you'd want to set column limits and stuff, but we're not doing that. This is just another view, am I right? If nobody finds any value in it, then I'll keep it around out of curiosity for myself, but we won't use it as part of our weekly review.
A
Okay, all right. And then we talked, during the planning discussion, about — instead of going through issue by issue — going through what's in progress by person or by pair. Since this is the first time we're going to do it this way, how do we want to do it? In the past I've just gone through people alphabetically, so we're always going in the same order, and then I'll kind of bundle individual efforts and pairs together.
B
As I put in the next section, I have three stories I'm working on that I would like to highlight. There is a small jQuery security scan issue we're just closing out with Jinyu; I believe it will be closed soon — just wanted to give you a heads-up. It's not related to our memory work directly. And there's the user journeys story: I put my summary in the notes, and I would like you, team, to check it if you have some time, because I think it's really close to what we discussed about our benchmarking.
B
Whether we want to measure some numbers on the real .com website, or go with synthetic testing — I'm still on the fence about this, and I think our next steps would also depend on the result. So yeah, I would need your input on this if you have some time — Josh and Craig, please, also.
D
I'll do it for Jinyu and myself, because we're both working on the telemetry topic. Yeah, I'm kind of knee-deep into trying to understand how the Prometheus setup works, and it's not just about how to access this data; it's more about under what circumstances it is available, and in what form or shape — because Josh found an interesting problem that I wasn't aware of.
D
I was mostly looking at what data is in our production Prometheus, but not all of that might be on a random on-prem Omnibus deployment; the labels might change; there's a bunch of things we still need to clarify, and also under what circumstances we can rely on Prometheus being there and where to reach it. So there are still a lot of questions to figure out, but I want to get my hands dirty as well.
D
So what I've done is write a simple API wrapper around the Prometheus API client, which is just a gem that you can use to — if you know how to locate a Prometheus — query it, and it gives you back JSON, pretty much what the web front-end does. So that's what I'm working on right now; it's a work-in-progress MR. Yeah, one of my biggest gray areas here, where I'm still fuzzy, is how this data is ingested.
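The wrapper described above returns the same JSON the Prometheus web UI consumes. A minimal sketch of handling that response in Ruby — the response shape is the standard Prometheus `/api/v1/query` instant-vector format, but the `instance` labels and values below are invented for illustration:

```ruby
require 'json'

# Turn a Prometheus "vector" query result into { instance => value } pairs.
def vector_to_hash(response)
  return {} unless response['status'] == 'success'

  response.dig('data', 'result').to_h do |sample|
    _timestamp, value = sample['value']
    [sample.dig('metric', 'instance'), Float(value)]
  end
end

# Example payload in the documented /api/v1/query response shape.
raw = <<~JSON
  {
    "status": "success",
    "data": {
      "resultType": "vector",
      "result": [
        {"metric": {"instance": "puma-1:8080"}, "value": [1589760000, "512.0"]},
        {"metric": {"instance": "puma-2:8080"}, "value": [1589760000, "498.5"]}
      ]
    }
  }
JSON

puts vector_to_hash(JSON.parse(raw))
```

The values arrive as strings in Prometheus responses, so the sketch converts them to floats before handing them on.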
D
So my suggestion was to get — I don't know which team it would be; what's the team name? It's not Telemetry... sorry, Monitoring, isn't it? Whether it's Monitoring or the Data team — because there's the Data team as well, right — I think it would be good to get some help from them to understand how this data is processed, because otherwise it's really hard to say what we should even track, if that makes sense.
D
It's about the data model: we need to know what to track and in what form, you know. Is it a counter, and what counter would it be? If we don't understand how it's aggregated, we don't know if it's going to be able to answer the questions that we are asking, right — which is, like, how many users run a 10k architecture, for instance. So that might not always be possible — like if I look at the versions app's metrics endpoint.
D
That's nothing but global counters; it tells you pretty much, oh, there were four billion commits across all our customers, or something like that, but that's not useful for us. If we counted the number of Puma nodes that run, in aggregate, across all on-premise installations, we would not be able to do anything meaningful with that number.
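To make that concrete, here is a tiny illustration (all numbers invented) of why a single global counter can't answer the reference-architecture question: two very different customer fleets collapse to the same aggregate.

```ruby
# Two invented fleets of Puma node counts, keyed by customer.
fleet_a = { 'customer-1' => 50, 'customer-2' => 2 }   # one huge install, one tiny
fleet_b = { 'customer-3' => 26, 'customer-4' => 26 }  # two mid-sized installs

# A global counter only sees the sum, so both fleets look identical,
# even though their per-customer topologies are nothing alike.
global_a = fleet_a.values.sum
global_b = fleet_b.values.sum

puts global_a == global_b  # both are 52
```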
C
Okay, yes — so I can help there. Basically, just real quick, how it works is: the usage ping gets sent up every week from the GitLab node, and then it just overwrites what's already there. There is a unique identity, though, so you might report back that, you know, this deployment looks like a 10k on-prem architecture, or it's similar to it — so you might, you know, send a "10k" true flag or something like that, and then the backend side...
D
Yep, right — those are the two approaches. All right, I wasn't sure which one we wanted to take, because the drawback of this is that we lose a lot of optionality: basically, if you collapse data upfront, before your data warehouse, you lose a lot of signal, right. I mean, yes.
D
…that run like a 1k architecture, 5k, 10k, and so forth. And the idea was that the way to find this out was to count the number of nodes they have for a certain component — like, oh, they run five Pumas or whatever, one cache node, and so forth. And then the question was: do we aggregate this in the app itself, as part of the usage ping, where we would go to the local Prometheus and count — you know, give me the count of the metrics…
D
…that we emit from the Puma nodes, for instance. We would know how many Puma nodes they have, and then we sort them into these reference-architecture buckets before we even submit the usage data. Or the other approach would be to send the raw data, which would be: here's a customer who runs 50 Puma nodes; we just send that count, and then somewhere in the data pipeline, on our end, we would create this clustering. So that was my question: which way do we prefer?
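The first option — collapsing on the client before the usage ping goes out — might look something like the sketch below. The thresholds and labels are invented; the real reference architectures differ on more dimensions than a single node count.

```ruby
# Hypothetical client-side bucketing of a raw Puma node count into a
# reference-architecture label before submitting usage data.
# [max node count, label] pairs -- thresholds are made up for illustration.
REFERENCE_BUCKETS = [
  [2,  '1k'],
  [3,  '3k'],
  [5,  '5k'],
  [10, '10k']
].freeze

def reference_bucket(puma_nodes)
  _max, label = REFERENCE_BUCKETS.find { |max, _| puma_nodes <= max }
  # Anything above the largest known bucket gets a catch-all label.
  label || 'larger'
end

puts reference_bucket(50)  # => "larger"
```

The drawback under discussion is visible right in the sketch: the thresholds are baked into the client, so refining the buckets later only affects new pings. Shipping the raw count instead keeps that flexibility on the warehouse side.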
F
Raw data, for me, would mean that we send from each node individually; aggregate means that we, for example, sum the number of nodes and the number of CPUs; and then there is a third level, where we categorize this data from the aggregates. Because the aggregate also gives you something: if you know the number of CPUs versus the number of threads versus the number of nodes, you can understand how big a topology they are running.
D
But that will only work if — and that might be true, I just don't know at this point — that will only work if that data remains an atomic unit in the pipeline, right. Because if we throw these into big counters that are mixed up with the other customers' data, then we will only get those global figures, and that's what I'm currently seeing on the versions metrics endpoint, right.
D
…decide which reference architecture bucket they would fall into. You can't just look at Puma nodes only, because it's a combination of things. There was even one case where it's the same topology: the 3k and 5k, I think, are the exact same topology — the only differentiator is the machine requirements we set, right. So yeah, it's tricky.
F
A recommendation — so there is one aspect: the buckets are our representation right now, and these buckets can change over time when we refine them. How is that going to survive over time if we refine the buckets, if we, for example, add new markers? Because from one angle it's hard-coding buckets on the client side; from the other angle it's consuming raw data on the other side, where you have the flexibility to adapt your metrics. Yes.
D
That's why I would be in favor of not pre-aggregating data, but that's also harder to do all by ourselves — at least we would need more help. Because if we start writing ETL jobs — I'm happy to do that, but I know nothing about this pipeline; it would take me probably a week to get into that as well, and it would just make this epic even bigger. Or we need some help from Data to figure that out.
D
Yeah, I think that's pretty much it; that's kind of where we are right now. Oh — like I told you, maybe, because he was actually owning the topology story, but I think he was still struggling a bit with just basic Prometheus stuff, so he was still looking into that. So I suggested maybe we can switch, if he takes on the memory aspect, because there's a bunch we can do without Prometheus — that's just what the Ruby sampler does as well.
E
To add the database calls to our logs and metrics: it should actually add the number of read and write calls, and also log SQL calls that are, like, cached ones, and it should count queries as well. So I guess this is something that will be helpful in detecting, I don't know, slow queries, and people would actually benefit from it for sure. So I hope to get some feedback on whether I'm on the right track.
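A rough sketch of the kind of read/write split being described — real instrumentation would hook the database driver rather than parse SQL text, and the statements and verb list below are invented for illustration:

```ruby
# Classify SQL statements as reads or writes by their leading verb.
WRITE_VERBS = /\A\s*(INSERT|UPDATE|DELETE|ALTER|CREATE|DROP)\b/i.freeze

# Tally a list of executed statements into read/write counts.
def tally_calls(statements)
  statements.each_with_object(Hash.new(0)) do |sql, counts|
    counts[WRITE_VERBS.match?(sql) ? :write : :read] += 1
  end
end

puts tally_calls(['SELECT 1', 'INSERT INTO posts VALUES (1)', 'SELECT 2'])
```

Counters like these could then be attached to the structured log line for each request, alongside the existing duration fields.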
E
In order to have those metrics available — and I hope that when this is done — I also think it will work for exports. I was told that it will work for exports without injecting anything, so I can try and see if it's there, so we can connect it with our CI job for report exports and track the data in it, and then afterwards we can probably hand it over to the Import group. And, like, the next step: when you have time, take a look at this issue.
A
So I put this together — you've seen similar formats in the past, or, you know, tools like JIRA, which has pretty good built-in roadmapping. Hopefully none of this is a surprise; Josh, feel free to correct me if I got the priorities in the wrong order. So telemetry is at the top of the priority order right now, then triaging non-performing endpoints — we've added a few, and the ones that Nikolay…
A
…is working on have been added in there, and there's a couple listed below that I can add as well. There are some individual issues that have been called out that would have either a big memory or performance impact, which are listed here: live tracing, Rails boot time, and front-end performance improvements — that's something that was brought up during our OKRs. We don't actually have anything identified there.
A
I think there was a random example called out by Camille — I can't remember which page it was — but ultimately, if that's not something we feel we can contribute to, or we can't find issues for, we'll end up either rewriting or just removing that KR altogether. But we could spend some time — we can spend some time taking a look at that to see if there's any major memory or performance impact we can have on the front end.
G
I can comment a little bit on that, if you want. Sure — our visibility into front-end performance isn't as good as I'd like it to be. We've had a few instances in the last few weeks, along with the survey that Josh brought up: we've had at least some comments on Hacker News, and a competitor actually putting up a comparison website between themselves and GitLab. I've been deep-diving into those recently, and I found a few kinds of gotchas there.
G
Well, that's when things take too long: partly because the server is taking longer, but then there's also extra time spent in the browser. At the moment we've not got too much visibility into that, but we are prioritizing it; it's going to be coming up hopefully sooner rather than later. Off the top of my head, certainly the areas that we'd be keen to look at would be merge requests — those are good targets. Large files in the browser are also generally bad, if you guys are really struggling to find places.
G
Last week, with the comparison site, we realized that the blame functionality was pretty bad — and that's on the server side, but also on the rendering side as well. We're going to see; a fix is just about to be merged, and we'll just wait for the server-side piece to go through first, and then we'll see how that affects the rendering side. But, just, yeah — anything that's heavy, like job traces as well: that's a very heavy page. I think audit events is pretty bad as well; I don't think that's paginated, so I don't think it scales.
F
Do we even have, like, a library or whatever that would track, I don't know, memory usage or performance of the loaded pages? Because it's very easy to see memory usage, or the like, for the backend requests, but we have no visibility into the front end. And maybe this KR — front-end performance improvements — is the opportunity for integrating something that would give us visibility into the front end.
G
Yeah — sitespeed; as long as sitespeed exposes that data for us, yes, that obviously would be very useful to have. And the benefit of sitespeed is that rendering performance is, bizarrely, easier to measure compared to server performance, because everything is all local. So I truly hope that it can expose — I am almost confident it can give us — the data that says this page took up X memory, and I'd like to hope we can break it down, because the runner is running Chrome.
G
So I should be able to extrapolate those numbers about JavaScript: how much JavaScript there is, what size it is. I have seen a little bit of this — I think, obviously, performance is a big focus at the moment; I think there's a big epic tracking performance, and I'm sure I've seen a few comments here and there about trying to reduce the JavaScript size. But again, this is why we're prioritizing our pipeline, because we don't have, like, a unified, canonical pipeline…
G
…that says: here's a list of pages that we know are bad — or just .com, even — which would be the target, and then hopefully a concise output that says, you know, this is the speed index, this is the time to first byte, and then we'd also have memory and other relevant bits. So yeah, certainly that's a good idea; we'll make sure that's highlighted.
G
Sitespeed comes with a thing called Coach, which will try to give recommendations on how to improve the performance of the page. Obviously it's only an attempt — it may not actually be correct — but it will at least flag anything that's an obvious red flag, like: your JavaScript is massive; it's never going to perform well unless you dramatically improve it. That kind of thing will show up in its output; other things might not.
G
Automated — there is some automated testing being run; it's hitting the biggest projects, which I am monitoring somewhat, but it's not very useful: sitespeed is not very useful in its native form. The task is to turn that data into a form that is useful — like, say, a table, like we get for the current server tests. That's what we're hoping to do, and then that would be automated. So yeah, okay.
G
So we have the issue to build out the pipeline, which we'll be using to discuss this in more detail, and these tests are going to be more of a kind of big-bang, "gotcha at the end" kind of deal. They won't necessarily be covering tests that should be run higher up in the test chain — those will be up to the individual teams to do. But yeah, I don't know if that really answers your question, but…
G
Yeah, the difficulty, I think, is that we do have the capability in the actual product to run sitespeed as part of our, you know, Browser Performance Testing MR widget kind of thing. The problem with performance testing — and this is the same for other forms as well — is that the review apps don't have the data; they don't have the large files or other bits and bobs that you would really want to have to be able to do those tests.
C
Initially we're looking at sitespeed to collect more in this context, but Max plans to add to the existing end-to-end QA suite. If this is underway for you with sitespeed, it maybe makes sense to jump on a quick chat about what you're doing here and what you're planning to actually measure, because this might also be another avenue to get this data. Yeah, the…
C
…is to just try to surface the performance problems at the same level as other business-driving features — like the number of times that users completed a given action, right — which is, frankly, becoming really important for a lot of business decisions around what features are being utilized and where we should invest. Yeah.
B
Just my two concerns — sorry. My first concern is that we didn't add it to this high-level epic table by Craig; like, we need to add this user journey somehow — or is it present and I don't see it? It's under telemetry? Yeah, I think it's a bit different, I mean — right, so the individual…
B
And the second is: the current solution proposed by Max in Quality is to measure against our CI pipelines, and I don't think that's a good way to measure actual team performance and the North Star, because we need to run against .com, I think — against real datasets and real workloads — because otherwise we could just run crazy synthetic stuff. No, I agree with…
G
Yes, there's a lot there, so just give me a second to parse it. For the North Star metrics, you're looking to collect data from customers — that's my understanding; I've not had the chance to dive into that yet. Yeah, that's also quite difficult with browser performance testing, because you then need to implement a subsequent send from the browser that says: this is actually how long I took. Sitespeed, obviously, isn't that. So for the North Star, it looks like we'd need a different solution that actually would be implemented into our actual product…
G
…that says: hey, the JavaScript actually took X to render; this is my memory usage, et cetera. So that's difficult — sorry, the server side is easier; bizarrely, it's the reverse of what we were discussing earlier. It's harder for the browser side; for the server side, you've got the data there, and it's easy to extrapolate and send. For running against .com, I need to see what Max has said in detail about performance testing.
G
As a general rule, we don't do performance testing against .com, for the very exact reason that .com is a live service. We've interrupted service once before, and we are very keen to make sure that doesn't happen again. That being said, browser performance testing is not as bad in that regard: it's not testing things at scale; it's just going to be loading a single page, so that could be part of the test pipeline that we design.
G
But when we're testing things at scale with test data, the data we are using is real — it's GitLab; it's GitLab's own source code — though there is always going to be a balance between synthetic and real. It's a kind of real testing, but we need to make it repeatable, we need to make it automatable, and we need to keep it consistent so we always have comparative benchmarks. So it's a balancing act in that regard, but yeah — beyond that, I don't know.