From YouTube: 2022-12-05 Application Performance Weekly
A: Hello everyone, this is December 5th, 2022, and this is the Application Performance team meeting. Let me share my screen. Do we have anything that needs to be addressed immediately, or should we start with the board? It seems like no one outside of our team is on the meeting, so let's start with the board, beginning with the closed issues.
B: So last week we enabled a hard limit for both shards: the catchall shard, and I also enabled it for the urgent CPU-bound shard. We followed the metrics, and it seems that we reduced the number of OOM kills by doing that. So I don't plan to do any fine-tuning in the near future. I think we achieved what we wanted, which was to reduce the OOM kills.
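To illustrate the idea discussed above, here is a minimal sketch (hypothetical, not GitLab's actual implementation, which is Ruby) of a per-shard hard limit: a watchdog samples a worker's RSS and flags it for restart once it crosses the shard's limit, instead of letting the kernel OOM-kill it. The shard names and limit values are assumptions for illustration.

```python
# Hypothetical per-shard hard limits; the values here are illustrative only.
SHARD_LIMITS_BYTES = {
    "catchall": 2 * 1024**3,          # 2 GiB
    "urgent-cpu-bound": 3 * 1024**3,  # 3 GiB
}

def should_restart(shard: str, rss_bytes: int) -> bool:
    """Return True when the worker's RSS exceeds the shard's hard limit."""
    limit = SHARD_LIMITS_BYTES.get(shard)
    return limit is not None and rss_bytes > limit
```

Restarting a worker proactively at a known threshold is what makes the OOM-kill count drop: the process exits cleanly before the kernel has a reason to kill it.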
B: Yeah, this is just a prerequisite where I introduce a new event reporter, so that we can introduce a specific reporter for Sidekiq. For Sidekiq we also want to log, and have additional metrics for, the currently running jobs. So when we exceed the limit and decide to restart the Sidekiq process, we want to log the currently running jobs and increase the metrics, so I needed separate reporters.
B: So I can introduce additional ones for Sidekiq, and maybe in the future we will need the same thing for Puma as well. It's just an iterative process to extract additional components.
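The reporter split described above could look roughly like this (a sketch with hypothetical names; the real code is Ruby inside GitLab): a generic event reporter extended by a Sidekiq-specific one that, on a limit breach, logs the jobs in flight and bumps a restart counter.

```python
class EventReporter:
    """Generic event reporter: collects (event, payload) pairs."""

    def __init__(self):
        self.events = []

    def report(self, event: str, payload: dict) -> None:
        self.events.append((event, payload))


class SidekiqReporter(EventReporter):
    """Sidekiq-specific reporter: also tracks restarts caused by memory limits."""

    def __init__(self):
        super().__init__()
        self.restart_count = 0  # stand-in for a Prometheus counter

    def limit_exceeded(self, running_jobs: list) -> None:
        # Log which jobs were in flight when we decided to restart,
        # then increment the restart metric.
        self.report("sidekiq.restart", {"running_jobs": running_jobs})
        self.restart_count += 1
```

A Puma-specific subclass, as mentioned, would slot in the same way later.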
C: Yeah, that was in response to a massive broken-master incident that we had. Just to summarize quickly: what happened was that we sent a merge request to the build-images repository where we bumped the Ruby version in several places. We have a build step that applies patches to these Rubies, and some of those patches are necessary for the application to function correctly; it's actually how we do the memory instrumentation. And there was no build verification in CI that this had happened.
C: It passed the build, so we merged it and deployed it. Then basically all of CI broke, because it would then fail for every MR and on master.
C: So we did two things in response to that. First of all, the way it failed in CI wasn't particularly useful. There was a check in our instrumentation spec helper that makes sure this functionality is actually patched into the Ruby, but for some reason, when running CI, there was a step where we proceeded to run the tests anyway. So obviously that would fail the tests, but it didn't really give you the reason why it was failing.
C: It was basically just complaining that this isn't working. So the first fix was to make that clearer: we now fail fast if that should happen again, and just raise an exception with a clear error message, basically saying "this should never happen". But that's still patching a symptom.
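The fail-fast behavior described here could be sketched like this (hypothetical; the real check is a Ruby spec helper). Instead of letting tests fail later with an opaque error, probe the patched functionality up front and raise with a clear message.

```python
class MissingRubyPatchError(RuntimeError):
    """Raised when the Ruby build lacks a mandatory patch."""


def assert_memory_patch_present(ruby_has_patch: bool) -> None:
    """Fail fast with a clear message instead of letting later tests fail opaquely."""
    if not ruby_has_patch:
        raise MissingRubyPatchError(
            "Ruby was built without the mandatory memory-instrumentation patch; "
            "this should never happen. Rebuild the image with the patch applied."
        )
```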
C: So what I suggested as well was to actually go into the gitlab-build-images repository and patch the build pipeline there to include a verification step, so that we make sure all of the patches we consider mandatory for the application to run are verified to be applied. We have a small script in there now, and we keep this kind of allow list of patch names, and it verifies that for all Ruby versions we patched them in. I did the same change for CNG, just in case. So that's done.
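A rough sketch of that verification step (hypothetical; the actual script lives in the gitlab-build-images repo and the patch name below is made up): given an allow list of mandatory patch names and the set of patches actually applied per Ruby version, fail the build when any mandatory patch is missing.

```python
# Hypothetical allow list of mandatory patch names.
MANDATORY_PATCHES = ["thread-memory-allocations"]

def verify_patches(applied: dict[str, set[str]]) -> list[str]:
    """Return human-readable errors; an empty list means all patches are applied."""
    errors = []
    for ruby_version, patches in applied.items():
        for name in MANDATORY_PATCHES:
            if name not in patches:
                errors.append(f"Ruby {ruby_version}: missing patch {name!r}")
    return errors
```

In CI the script would exit non-zero when the list is non-empty, which is exactly the verification that was missing when the Ruby bump slipped through.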
A: Thanks for fixing it, that's great. The next one is from me. This was a P2/S2 issue sitting on our board for quite a long time. We picked it up this milestone because we were asked to, and we found that it was actually a Redis 5 issue. Sean helped me discover that, so I'm still considering a potential follow-up.
A: Maybe we need to run, I don't know, the same Redis version in the local environment as in CI, because it was very confusing that you could not reproduce it. I haven't prepared the follow-up yet, but this one is fixed. That's it for this one; let me remove this label... yep, and workflow verification. This next one is also from me.
A: So I was also asked by the Engineering Productivity team to clean up our feature flags and to prepare for merging the main MR, which renames all the feature flags from "memory" to "application performance" and adjusts the feature ownership for the image scaler. But I also need a follow-up from Nikola about these three feature flags. As far as I understand, it was mentioned that we could remove them. Do you have some comments?
B: So I can remove the currently implemented feature flags, because they're no longer used. We used them to analyze an import/export issue and then forgot about them. They are probably disabled in production, but they just need to be removed completely from the system. And we can discuss what the future of Measurable is.
A: Yeah, and the Ruby 3 exploratory testing is also sitting in the same position, right? We are still waiting, yeah.
C: I think so, yeah. Also, especially the cloud profiling thing: I don't know how long that will be sitting blocked. I don't know what we do with these issues where it's completely unclear when anyone will work on them, because they will keep showing up. I don't think it's necessary to talk about them every week, basically.
A: Okay, so yeah. And under in-depth, the first one is from you, Matthias.
C: Yes, so this is done up to MR 5, which is in maintainer review, if you go back to the top.
C: So MR 5 is in maintainer review; I'm just waiting for that to get merged. This is basically a prerequisite to pulling heap dumps, so that we can compress the data before we write it to disk, because heap dumps can get very large. But I labeled this as blocked, because going beyond that and actually writing heap dumps, we're running into this problem where prometheus-client-mmap writes corrupt string data into memory, so that when you walk the object space, it will crash MRI.
C: This turned out not to be a bug in MRI, but a bug in our library, and that library is really not maintained; it's kind of in maintenance mode. Stan is maintaining it on a best-effort basis, so he helps out whenever something comes up, but we're looking into how to fix it. I'm not super hopeful yet, to be honest; I'm still trying to wrap my head around why this is even happening.
C: So we kind of know why it's happening. We know what the root cause is, and we have executable test cases to reproduce this; that's not the problem. But we still weren't able to fully make the problem go away, and honestly the code in that library is pretty messy.
C: It does all kinds of things it probably shouldn't be doing, yeah.
C: I can't say at this point. I'm just not comfortable with shipping this feature while we have test cases that reliably reproduce these crashes.
C: I should say that this has not happened in production yet, because it only crashes in certain scenarios, like when you look at certain data in memory, and apparently so far it has only happened when we try to pull heap dumps, which isn't something we typically do in production. But it still sounds concerning, and it points to a general problem in the memory management implementation of this library. So I think we should probably fix it, regardless of what we're doing here.
A: So just to give some context: we found that GME was ballooning on memory when we tested it, but the problem is that we are not able to reproduce the data sample on which it balloons memory. We discussed that with Matthias on Friday during office hours, and we decided that we will compare GME and the Ruby exporter's memory usage side by side, because we don't really want to, you know, make it optimal; we just want to be sure that it doesn't behave worse than the previous implementation.
A: I rewrote the current implementation a little bit, and in fact I did some analysis. I constructed three different data sets, and, unfortunately or fortunately for us, GME is reliably better in every possible case I tested so far. It's much more stable on memory, because it pre-allocates a stable amount of memory and keeps it like that, unlike the Ruby one. I didn't find a case yet where GME is worse on memory, or where it spikes on memory, so I'm still figuring out what the next step here could be.
A: So maybe, if you have some ideas, Matthias, feel free to drop a line in the issue. I also tried to rewrite the way we parse the file, but I found it problematic, because we need to keep a map of all the samples we parse, which is also a kind of bottleneck. It would actually need a much bigger rewrite to avoid accumulating samples in memory, so I decided not to do it and to go with the comparison first. So yeah, this is still in progress.
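The "doesn't act worse" comparison described above can be sketched as follows (a hypothetical harness; an actual measurement would sample RSS from /proc/&lt;pid&gt;/status or cgroup stats while replaying the same metrics files through both exporters): summarize each exporter's RSS samples and accept the candidate if its peak stays within some slack of the baseline.

```python
from statistics import pvariance

def summarize(rss_samples: list[int]) -> dict:
    """Peak and variance of an RSS time series (bytes)."""
    return {"peak": max(rss_samples), "variance": pvariance(rss_samples)}

def not_worse(candidate: list[int], baseline: list[int], slack: float = 1.1) -> bool:
    """Accept the candidate if its peak RSS stays within `slack` of the baseline's."""
    return summarize(candidate)["peak"] <= slack * summarize(baseline)["peak"]
```

This encodes the criterion from the discussion: the goal is not to prove the new exporter optimal, only that it is at least as good as the one it replaces.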
C: Yeah, I also think at this point the reason we're looking at this is twofold, right? One, because it is clearly a problem, but also we need to keep in mind that the main reason we're even working on this is to get the system production-ready, and I think the criterion should be "doesn't make things worse", right? Yes.
C: As long as it's at least as good, I think we should just complete this epic and ship it, right? As long as we don't make matters worse; it doesn't have to be dramatically better. So I think that should be our bar.
C: A little disappointing, but if we make it...
C: Maybe, if you can't reproduce this locally, which sounds like it's the case, right? We've seen it happen on staging pretty frequently, but the Ruby exporters are still running in prod, right? So maybe we can look at Thanos again and see, because frankly I don't even think we have alerts in place for that system. So maybe we can see if that happens to the Ruby exporter as well, and if...
C: ...if it degenerates in the same way, and it hasn't caused problems in production, then at least we can mention that during the rollout and say, you know, "hey, look, this is expected to happen". You know what I mean? We can kind of separate these two problems from...
C: ...making GME production-ready, so that we can finish this rollout and this epic and move on to something else. Because, like you said earlier, to some extent this isn't fixable: it's in the design of a multi-process metrics exporter, right? We have all of these files and we need to merge them in memory. It's an aggregation problem, right?
C: For it not to compete for memory with the Ruby workers, which it currently does because it runs in the same cgroup, it would have to run in its own container, which I agree it should. Everyone has been saying that ever since we worked on this system, but the easiest path was just to make it a drop-in replacement, right? Just to replace the binary, basically.
A: I think that's reasonable. So to sum up: I will check Thanos for Ruby exporter memory spikes and try to compare them with what we've seen with GME. If they are similar, and I still can't find a reliable example with which I could reproduce the issue in a local environment, we just say that GME is not worse than the Ruby exporter, and in fact better, from what I've seen in my examples. Then we just move on, and maybe split off a separate issue to improve GME.
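The Thanos check summarized above could start from a range query like this sketch. The job label and metric name are assumptions for illustration, not the actual production labels; the endpoint shape is the standard Prometheus-compatible `/api/v1/query_range` API that Thanos exposes.

```python
from urllib.parse import urlencode

# Hypothetical PromQL for spotting exporter RSS spikes; the job label is assumed.
PROMQL = 'max_over_time(process_resident_memory_bytes{job="ruby-exporter"}[1h])'

def thanos_range_query(base_url: str, start: int, end: int, step: str = "5m") -> str:
    """Build a Thanos/Prometheus range-query URL for the RSS spike check."""
    params = urlencode({"query": PROMQL, "start": start, "end": end, "step": step})
    return f"{base_url}/api/v1/query_range?{params}"
```

Plotting the result next to the GME measurements would show whether the Ruby exporter degenerates the same way in prod, which is the comparison the summary calls for.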
A: If needed, yes. So that's it from the in-depth section. Anything else? Anything I need to refresh? No? Okay. So then let's move on to the document, and the next one is from you.
E: Yeah, and this is a follow-up issue; it's a discussion that was raised in the monthly product performance indicator meeting. I think Josh had a question about what could be a new indicator for our team.
E: It's just that, maybe, I'm not sure whether we can reach a conclusion in this meeting. Maybe we should just table it.
C: I have one thought. I feel like this is putting the cart before the horse, because we can't reasonably find a proper performance indicator if there doesn't even seem to be agreement on what our team should be doing. I had a discussion with Roger; it's basically his next point, so it's a nice segue.
C: All of these projects that we're working on, and that we know we will work on, like the rate-limiting framework, are just more examples of this. It has nothing to do with application performance, right? So I think we're at a point now where I feel like this vision to pivot to application performance has not worked, and it doesn't look like it will work in the next year either, at least not in the first quarter or two.
C: So I don't think it makes sense to talk about PIs for the app perf team if we can't even execute on that vision, right? And the investment case is still open as well. So there are so many question marks right now as to what we even should do, or what we are as a team, that we need to work through with Roger and Josh and Fabian and everyone on this team. I think we should do that first, and then decide whether this team stays the Application Performance team, or whether we should pivot again to better reflect the work that we are doing.
C: I opened an issue, I don't know if I copied everyone on it, clarifying the team's task. Let me link it here, excuse me.
C: This also came from various angles. It also came from the MR I opened that proposed a platform team to take on some of these concerns, which partially we are working on, partially others are working on, and partially no one is owning. One of the suggestions was that maybe we should be that team.
C: That's a discussion we need to have, because it sounds like it's not clear whether that could work, yeah.
E: First, I agree with what Matthias mentioned about the indicator; I think that was a very good argument. And secondly, for this blockers issue: before we have a synchronous discussion, maybe we can throw our ideas onto that issue asynchronously first. Yes.
C: I think so, yeah.
E: Yes, this is just a reminder: do we have any release post item for 15.8? Sorry, I think it should be 15.7, yeah.
C: Nikola, do we need to mention, or did we want to mention, this whole memory Watchdog replacement? Is it something we should announce, or is it more of a quiet, silent rollout?
D: Yeah, just a small Ruby 3 update: the audit is 100% reviewed, so the review is done, but there's still quite a bit of action required. There are 44 gems which have action required. It's pretty up to date; I checked it about a week ago, so the number might have shifted a bit, but it should be around 40 gems still left to do. I got a ping from someone asking if we had a due date for updating the gems, which is one of the action-required items.
C: Yes, I think we do. I think we should, because there's an engineering OKR for this entire epic, so the due date should be end of January. Okay, I thought...
C: We did set one, but we actually kicked it back, we pushed it back, so this should probably be it. It's actually already past due. No, we pushed it back to the end of January, because with the earlier date it wouldn't happen. Okay.
C: Right, yeah. I think the due date of end of January is for the parent epic. That is this whole thing of having green builds and making sure the app works as far as our test suite is concerned; "basically no known issues" is maybe another way to phrase it. We said that would be end of January. I don't know if there's another way we can communicate this more quickly.
E: So actually Roy also raised this question to me. I think there's a due-date action on my side. How about I pin the gems, and maybe make some noise on the product channel, so that all the teams plan to finish their gem updates in the planning for the next milestone, 15.8. I think 15.8 ends in the middle of January.
C: No, that sounds good. And I think the actual audit is done, because it sounds like it is 100% reviewed, which is already way more than I ever hoped for. Yeah, I think we should close that issue as done, because it was only about understanding what's broken. The exit criterion for this specific issue was never "everything is working"; it was just about understanding what's broken and what isn't. So I think we should close it as done. Nice, yeah, great job. Thank you.
E: Joy and Matthias, in your experience, for each of the gems, what is the expected work? Is it just, maybe, upgrading to a newer version?
C: Some of them are unmaintained; there are no new versions. We just found another case in object storage: there's the Alibaba Cloud infrastructure one, Aliyun, I think it's called. We have a gem there that doesn't see any releases anymore, and it's not Ruby 3 compatible. So we need to decide; would we patch it ourselves?
D: Yeah, next point: there's still a bunch that are remaining, I think about 10 to 15 gems, maybe. I think we can work on them; some of them don't necessarily have an owner as well. So yeah.
D
Of
focus
hours
that
might
be
helpful
to
pair
on
this,
because
a
lot
of
them
are
here
all
in
gitlab
like
all
over
the
place
or
our
super
old
they're,
Pro,
basically
they're
not
assigned
to
anybody,
because
nobody
wanted
to
pick
them
up.
Yeah.
B: Yeah, this is the issue related to the load balancer. As Matthias mentioned, it would be good if someone else picked it up; I can pair with whoever chooses to work on this. We have a proposed solution, it just needs to be implemented. Maybe it would be useful for the team to just try it locally and maybe prepare a fix. I can also do the review, and we can probably ping Dylan for the maintainer review, or actually...
A: Thank you. That's it from the team topics. Anything else you want to share?
C: I mean, we don't own that many gems. Yeah, I think the ones that we do own, we have reviewed, as far as I'm aware. Okay.