From YouTube: 2020 05 11 Memory Team Weekly
A
All right, Josh, you have the top of the agenda list and you need to jump off for the enablement group conversation, so I'll let you take it.
B
Yeah, sure. So first I just wanted to apologize for being a little less available than normal. Last week we had a pretty major effort underway to try to reduce the amount of money we spend on GitLab.com.
B
So
we
we
need
to
get
that
into
the
plan
to
present
to
the
board
and
just
organize
the
company
around,
and
so
we're
trying
to
figure
out
how
much
money
we
could
save
on
gitlab.com
for
some
context.
Right
now
we
spend
18
million
dollars,
we're
projecting
to
spend
18
million
dollars
on
free
users
this
year
and
that's
a
lot
of
money
just
been
done
for
users,
so
we're
trying
to
bring
that
down.
B
Just given, you know, the overall operating environment that we face right now, and probably reduced revenue against expectations this year. So, some context there. That effort will continue, but I think it will continue with less intensity, as we have most of the plan, and now it's just about refining it.
B
So that's why I was a little less available last week than normal, and that dovetails into the last item here, which we can talk about more. But if we have opportunities to improve the efficiency of GitLab.com, that would be great, as we are again primarily looking to save money on our free users. The paid users, I think we get around an 80% margin from, so wherever we're paid we're okay, but the free users in particular are our problem.
B
So, that said, we can dive into 13.1. I'll go through more of the comments this week; I sort of missed them, or didn't have a lot of time for them, last week. But I listed some others here that I think are good candidates to think about for 13.1.
B
Camille, I think you had added the usage ping one. I think that one's great. A lot of our enablement metrics, in particular the ones we want to track as part of our north star metrics project, are actually already captured today in Prometheus, and so if we can determine a way to collect some of these metrics from Prometheus, that would be awesome.
B
I realize that the first iteration may not be getting it from Prometheus, but I think it's likely a good journey to go down if we can figure it out, so I think that one's fantastic. And the more we can learn about the performance of our installations that are not GitLab.com, the better. So that's great.
A
So I don't recall there being a story for Prometheus in this epic. Do we need to add one in there, or review the existing items that we have?
B
Well, I think in our discussion, maybe last week, we thought that Prometheus might be a potential data source. So we can talk about that, since it already has much of this information. It has a somewhat broader scope. It's probably easier to just ask, like, the Ruby interpreter and figure out what the memory is, but I'm not sure you'll get everything, like all the processes, the way you would with something like Prometheus. So yeah, that was great.
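(For reference, a minimal sketch of the Prometheus route discussed above. It assumes the bundled Prometheus is reachable at localhost:9090 and uses an illustrative metric name; the actual endpoint and metric names would need to be confirmed against a real installation.)

```python
# Sketch: pull a memory metric from Prometheus instead of asking each Ruby
# process directly. Endpoint and metric name are assumptions, not confirmed.
import json
import urllib.parse
import urllib.request

PROMETHEUS = "http://localhost:9090"               # bundled Prometheus (assumed)
QUERY = "sum(ruby_process_resident_memory_bytes)"  # illustrative metric name

url = f"{PROMETHEUS}/api/v1/query?" + urllib.parse.urlencode({"query": QUERY})
with urllib.request.urlopen(url, timeout=5) as resp:
    payload = json.load(resp)

# The instant-query API returns a vector of {metric, value: [timestamp, value]}.
for sample in payload["data"]["result"]:
    print(sample["metric"], float(sample["value"][1]))
```

The upside over calling the interpreter directly, as noted above, is that one query covers every exported process rather than only the one you asked.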
B
So
so
out
of
the
box,
we
bundle
grafana
and
all
of
our
prometheus
exporters
are
turned
on
in
both
in
both,
I
believe,
the
charts,
but
also
for
sure
omnibus.
So
so
by
default,
it's
on.
I
don't
think
we
collect
the
usage
ping
on
on
those,
so
it
could
be
a
place
to
start,
but
by
default
it's
turned
on
and
talking
to
a
number
of
customers.
B
And talking to a number of customers, you know, that's pretty common. We could offer a way to sort of point ourselves somewhere else if you needed to, for example. But there is a subset of people who also utilize Datadog's scraping of Prometheus metrics. I don't think it's high, but I have seen some tickets based on problems with that, I mean the Datadog service crashing on the number of metrics we export.
C
It's definitely going away. Like, there is still some data in Influx, but it's not enabled anywhere; so far it's Prometheus and Kibana.
C
I know that with Influx the biggest issue was the volume of data that we had to process. There were some crazy aggregations being done with Influx, and it didn't cope very well; we had to drop a ton of data. So now it's Prometheus, with the federation and a kind of hierarchy for aggregating the data, and Kibana, which basically has all the data from the logs.
C
Yes, it's unlikely that they run Kibana, but it's also unlikely that they run InfluxDB.
C
So I think it's just a matter of scale. On small instances, Prometheus is going to be very meaningful. At our scale, it's good to have a quick summary of the whole infrastructure and that kind of thing, but for getting granularity of data, on projects, on which endpoints are being used, we basically use Kibana, pretty much.
D
It is an interesting point, though, because we do have a logging feature in GitLab as well, right? I can go to the system logs, and, I don't know where they're pulled from right now, but that's a feature we have, right? Maybe we can use that as well to do some limited extraction.
B
We had it, but obviously if we can incorporate self-managed, that would be a significant improvement to our understanding of how our instances behave out there in the wild, which would be awesome, so plus one to that. I'm not sure we need to have Prometheus in the first iteration, but I think keeping it in mind would be really helpful. Next one is the live tracing.
B
I agree that, especially with infrastructure's de-prioritizing of this, it's probably not that important, and the fact that they're going with Jaeger makes us even more dependent upon infrastructure.
B
So I think it can be de-prioritized. The thing I would ask is that we just be responsive if they do find something that we can help out with. For background, we've had tracing support; Andrew Newdigate added it in the product a year ago, and it's just sort of stalled. It's available in the GDK; no one's ever tested it in production.
B
We can tell customers to use it, and support can use it to figure out what's going on. One quick point there: from talking to support, their most challenging tickets to deal with are the ones where GitLab is slow.
B
I'm sorry, I was talking about the one two bullets below; I got confused. That tracing we can take on an as-needed basis. This is the CI tracing, I think. Sorry, we should move these down. Yes, I think this one's good. Sorry, context on this one is that it is blocking further migration into Kubernetes for
B
GitLab.com, because we don't have live tracing on GitLab.com and we don't want to bring NFS into the Kubernetes deployment. The current live tracing model requires NFS, and so that prevents us from moving some of these nodes into Kubernetes, which the deployment team would like to do.
C
Like, so it's functional, it's just not performant, because it uses a significant amount of memory to transfer data back and forth. So it uses memory, IO, and network; basically, it uses memory and IO. There is an open MR from me that tries to solve that.
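(To illustrate the memory/IO trade-off just described, here is a toy model of chunked live logging: appended output is buffered in memory and flushed in fixed-size chunks to a remote store instead of being appended to a file on NFS. Names and sizes are made up; this is not GitLab's actual implementation.)

```python
# Toy model: buffer live job output in memory, flush fixed-size chunks to a
# remote store. The buffer is the memory cost; each flush is the IO cost.
CHUNK_SIZE = 128 * 1024  # bytes per persisted chunk (arbitrary)

class LiveTrace:
    def __init__(self, store):
        self.store = store          # e.g. an object-storage client (assumed API)
        self.buffer = bytearray()   # held in memory until a chunk fills up
        self.chunk_index = 0

    def append(self, data: bytes) -> None:
        self.buffer.extend(data)
        while len(self.buffer) >= CHUNK_SIZE:
            self._flush(bytes(self.buffer[:CHUNK_SIZE]))
            del self.buffer[:CHUNK_SIZE]

    def _flush(self, chunk: bytes) -> None:
        # One network round trip per chunk, replacing an NFS append.
        self.store.put(f"trace/chunk.{self.chunk_index}", chunk)
        self.chunk_index += 1
```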
C
There is still a challenge of really rolling it out, because it's quite a complex topic because of the concurrency. But maybe we could discuss that on Wednesday in more detail, maybe have a more dedicated session to understand the weight of that compared to everything else.
B
Okay, cool, sounds good. I'm not sure this is the most important thing, right. Probably, globally, as far as what's going on with .com moving to Kubernetes, it might not be the most important, but it's good to unblock this.
B
I
think,
is
you
know
ways
to
try
to
improve
that,
but
that
be
some
cost
benefits
there
and
also
some
obviously
fixing
these
things
for
our
customers
who
are
on
kubernetes
today
with
google.com
would
be
helpful,
but
it's
probably
not
the
highest
priority
thing.
The
next
one
down
below
is
actually
an
okr
that
we
have
going
on.
I
hope
I'm
not
sure
if
everyone's
aware
link,
the
okay
are
there,
but
basically
to
improve
the
speed
index
of,
in
particular
that
the
tree
page,
which
is
like
the
repository
list.
B
So you can see the comparison there. This is an OKR, I think, so this would be great to work on if there are actually large back-end items that we can do to fix this. I did see the discussion on the front-end side; if we want to venture into the front end for memory reduction or further performance improvements, great. I was thinking we could stay back-end, but this is one that is a key result for us.
B
This
look
for
the
company
and
engineering
this
this
this
quarter,
so
I
think
that
one
should
probably
be
number
two
or
or
similar
by
higher
priority
than
this
one.
B
Other than that, the tracing one we discussed I think can be relatively lower priority, because infra de-prioritized it. And then the journeys one, to spend a little time talking here: I think, I mean, we've discussed this, but the reason why I find this particularly important is because I'm trying to think about solving performance for the company, and I think right now we generally have most issues known. Like, even in the speed index one:
B
They've already had a lot of issues open that would have fixed this; we just haven't prioritized them correctly, and frankly that falls on product for not prioritizing them properly. So the question in my mind is: how do we fix this long term without having our CEO sort of propose OKRs for performance, you know, for particular pages? And the answer, I think, is to really try to make the performance impact more consumable and also more related to the business impact.
B
We have like a thousand-plus performance issues identified, and you know it's not always easy to figure out whether a, you know, Rails controller being 200 milliseconds faster will actually really move the needle on anything. But if we had an idea of how long a job to be done took, like "I want to comment on a merge request" or "I want to review a merge request" and all the workflow that goes on behind it.
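(A rough illustration of the job-to-be-done idea, with entirely hypothetical endpoints and timings: sum the latency of every request in a user journey, and judge an isolated controller win against that total.)

```python
# Hypothetical sketch: roll endpoint latencies up into a "job to be done".
# Endpoint names and timings are invented for illustration only.
p50_ms = {
    "GET /merge_requests/:iid": 1200,
    "GET /merge_requests/:iid/diffs": 2500,
    "POST /merge_requests/:iid/notes": 400,
}

jobs_to_be_done = {
    "comment on a merge request": [
        "GET /merge_requests/:iid",
        "GET /merge_requests/:iid/diffs",
        "POST /merge_requests/:iid/notes",
    ],
}

for job, steps in jobs_to_be_done.items():
    total_ms = sum(p50_ms[step] for step in steps)
    print(f"{job}: ~{total_ms / 1000:.1f}s across {len(steps)} requests")
    # A 200 ms saving on one controller is now easy to weigh against ~4.1s.
```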
B
That could really, I think, crystallize the impact for product managers across the company. And we also have this corresponding north star metric, where you have certain activities which are your north star and their activity as such, and so if that activity is slow, you should probably look into fixing it. So that's where I was coming from here, which is to try to be more proactive, which is to help more of the company prioritize performance and fix their performance problems. Because, you know, for a lot of the things that we're talking about here,
B
it's already been known; it's just that people previously hadn't done it until we came around, and that's not the best pattern. It's probably better to have these areas fix their problems when they should. But sorry, that's the problem I'm trying to solve here. I totally understand the concern, and I agree.
A
Yeah, and Alexi had a question on the same topic, so perhaps we can just take that async, since we've got to get through this and you've got to run, Josh. So do you want to run through the last two?
B
Yeah, last two. I think the boot times one is also a Kubernetes issue. I believe that how long it takes to boot our various services can be a long time, and this also prevents on-demand scaling in Kubernetes, like up and down, because it can take four or five minutes for a new node to come up, or a new pod to come up, and so that can really slow down how aggressively you can respond for other services.
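(One hedged way to put numbers on this: compute, from kubectl's JSON output, how long each pod took to go from creation to Ready. Cluster access is assumed, and the namespace is a placeholder.)

```python
# Sketch: measure pod startup time (creation -> Ready) from kubectl JSON.
import json
import subprocess
from datetime import datetime

FMT = "%Y-%m-%dT%H:%M:%SZ"
out = subprocess.check_output(
    ["kubectl", "get", "pods", "-n", "gitlab", "-o", "json"]
)

for pod in json.loads(out)["items"]:
    created = datetime.strptime(pod["metadata"]["creationTimestamp"], FMT)
    ready = [
        cond for cond in pod["status"].get("conditions", [])
        if cond["type"] == "Ready" and cond["status"] == "True"
    ]
    if ready:
        became_ready = datetime.strptime(ready[0]["lastTransitionTime"], FMT)
        print(pod["metadata"]["name"], became_ready - created)
```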
B
Let me briefly find it if I can. Here it is, so I'll share my screen real fast. Thanks. Just some context on this: one of the workloads we have fully on Kubernetes right now is our Docker registry, and you can see, throughout the course of the day (this is over seven days), we're actually pretty dynamically scaling up and down based on demand. And so this can give us, again, a pretty good insight into efficiency, as we can spin up
B
both pods but also nodes, if the cluster needs to grow on demand, as needed. So this is kind of more along the lines of efficiency, but it can be harder to do this if it takes five minutes to add one new pod to consume new requests, based on what we're seeing as far as demand.
B
Yeah, and I think we can cover this in other ways too. The other impact of boot times, you know, the other reason they matter, is that Kubernetes likes to do rolling updates, and when they do a rolling update right now it takes like an hour-plus, because it drains one pod at a time. So it stops it, which takes a minute or two, then updates it and then starts it, and that takes five minutes, and so every pod takes like seven minutes.
B
Ten minutes, really, to actually drain and restart, and so working through the whole fleet takes a long time. It actually takes so long that Helm gives up and quits halfway through, so you have to tell Helm to wait longer for our service. But yeah, anyways, that's great. Again, I don't want to dismiss anything; I'm just, I think, trying to add color and context here, and we can do more in the issues, and then, yeah.
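(Back-of-the-envelope math on the per-pod numbers mentioned above, with a hypothetical fleet size. It shows why a serial drain-and-restart quickly exceeds Helm's default wait timeout, which the real --timeout flag can raise.)

```python
# Illustrative rollout arithmetic from the per-pod numbers discussed above.
pods = 30                      # hypothetical fleet size
drain_min, start_min = 2, 5    # per pod: stop/drain, then boot

total_min = pods * (drain_min + start_min)
print(f"serial rolling update: ~{total_min} min (~{total_min / 60:.1f} h)")
# Helm only waits a few minutes by default before marking the release failed,
# so a fleet this slow needs a much larger --timeout (or faster boots).
```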
B
Yeah, I'll hang out for three more minutes and then hop on.
A
Okay, yep. So yeah, we have a group conversation right after this. And then, excuse me, some other topics: domain expertise. This is in the weekly week-in-review; please read it and consider adding your domain expertise to the team YAML, and there's a new file out there for domain expertise as well.
A
For the planning session on Wednesday, there are kind of two tracks of topics. One is just reviewing how we do things, which is probably what we'll start the meeting with, and if we need to split off another session on what we're doing, we can set up another session, or just identify the issues that we need to talk about asynchronously, to get more specific on work items for now and for 13.1.
A
So please read the doc; there's a doc full of topics, and then there's kind of a roadmap doc as well that I threw together with some suggestions, so take a look at those. It is release post item week. So, of the things that I'm aware of, I don't know, do the CI minutes qualify as a feature, Josh, for the release post, do you think?
B
Yes, I would think so. Hopefully Kenny can write that one up, but yeah, for sure, if we're going to be doing CI minutes type things, then we should announce that. Okay, on GitLab.com I'm not sure it's going into the release post unless there's a self-managed piece, potentially, but you can see I'm not sure what Kenny's plan is there, how to roll this out.
C
So, Josh, it doesn't affect on-prem installs, you know; it affects GitLab.com. And I believe Ken is preparing announcements; from what we discussed, he was aiming on doing that this week.
A
These are all the work items. Are there any urgent work items that anybody wants to cover? We have five minutes.