From YouTube: Profiling Deep Dive: Gitaly
A
Right, hi everyone. We're here to take a look at the continuous profiling that we rolled out to all of our Go services a while back. We haven't used it a lot yet, so this is just to see what kind of information is in there, what we can use it for, and to see if we have some ideas or spot some weird things ourselves. I took a brief look before this call and dropped some screenshots in the document to get the ball rolling.
A
But if you can open the profiler yourself, have a look around, and point out stuff, that'd be awesome. I'll maybe start by sharing my screen and showing it.
A
What do you mean, what continuous profiling is? It runs inside the service and takes samples of what the service is doing at a given moment: it samples CPU time through the call stack, and it sees where in the call stack we allocate memory, that kind of thing. Does that answer your question?
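As a rough illustration of what "it runs in the service" means, here is the general shape of starting an in-process continuous-profiling agent in a Go service. This is a minimal sketch assuming the Google Cloud Profiler agent (cloud.google.com/go/profiler); the service name and version are placeholders, and the way Gitaly actually wires this up may differ.

```go
// Minimal sketch of a Go service starting an in-process continuous profiler.
// Assumes the Google Cloud Profiler agent; values are illustrative only.
package main

import (
	"log"
	"net/http"

	"cloud.google.com/go/profiler"
)

func main() {
	// Start the agent; from here on it periodically samples CPU time and
	// heap allocations in-process and uploads the profiles on its own schedule.
	if err := profiler.Start(profiler.Config{
		Service:        "gitaly",  // illustrative service name
		ServiceVersion: "13.4.0",  // illustrative version string
	}); err != nil {
		log.Fatalf("failed to start profiler: %v", err)
	}

	// The rest of the service runs as usual.
	log.Fatal(http.ListenAndServe(":8080", http.NotFoundHandler()))
}
```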
B
Yes. Where it's better than the tools we've got, what's different from Prometheus, is that Prometheus is great at metrics that you increment or change a lot, or at least every time an action is taken. Logging is very granular, but you wouldn't get this granularity, and I think that's the main thing.
A
Like in Prometheus you would see timings for a certain RPC call, while here we can dig into what, within that RPC call, is actually allocating a lot of memory.
C
Yeah, yeah, but how does it work? Like, what do the x and y axes actually represent?
A
Yeah, okay, I think that's worth explaining. I'm going to give it my best shot, but please interrupt me, and if you can do better, as I suspect you will, jump in. So what we're seeing here are method calls, and the lower we get down the graph, the deeper the call stack.
A
So these are the things at the top: when Gitaly starts, these are the functions that get called first, and they sit at the top. If we dig in here, we have the serve streams call, which is somewhere in the gRPC library, we can see it here, and that goes all the way down into the RPCs that are Gitaly itself, the ones that we're writing and that aren't part of the library.
C
Yeah, pretty much, that's good. The one thing that I think is worth understanding is the way that it works.
C
Basically, what happens is that the profiler will strobe the process, if you want, say 99 times a second. It'll take a snapshot of the stack trace each time, so it ends up with 99 stack traces, and then there's the way it's ordered. This isn't a sequential order from left to right in time; it's a relative percentage. So 100 percent of the time was in that top frame, and then everything below is a fraction of that time.
C
The next thing down, and I can't quite see your screen there, Bob, but that next call on the left in the second row, that was in 29.3 percent of the stack traces, and the one next to it was in 20 percent. So it breaks it down like that. A good way to think about it is: if you had 99 stack traces and you wanted to visualize what all 99 stack traces would look like together.
C
That's the representation. But one of the things that confused me originally when I was looking at stack traces was that I kind of assumed there was a left-to-right time sequence in them, and there's not; it's purely based on percentages. I don't know if anyone else has been confused by that, but I certainly was, so it might be worth raising.
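A small sketch of the point being made here: a CPU profile is just a bag of sampled stack traces, and the flame graph draws each frame's width as its share of those samples, with no left-to-right ordering in time. This uses the standard library profiler plus the github.com/google/pprof/profile parser; busyWork is a made-up stand-in for real service work.

```go
// Collect a short CPU profile, then print each leaf function's share of the
// sampled CPU time, which is exactly what a flame graph's widths represent.
package main

import (
	"bytes"
	"fmt"
	"log"
	"runtime/pprof"
	"time"

	"github.com/google/pprof/profile"
)

func main() {
	// The runtime samples the running stacks while the profile is active.
	var buf bytes.Buffer
	if err := pprof.StartCPUProfile(&buf); err != nil {
		log.Fatal(err)
	}
	busyWork(2 * time.Second)
	pprof.StopCPUProfile()

	prof, err := profile.Parse(&buf)
	if err != nil {
		log.Fatal(err)
	}

	// Each sample is one stack trace; for CPU profiles Value[1] is the CPU
	// time attributed to that stack. Width = share of the total, not order.
	var total int64
	byLeaf := map[string]int64{}
	for _, s := range prof.Sample {
		v := s.Value[1]
		total += v
		if len(s.Location) > 0 && len(s.Location[0].Line) > 0 {
			byLeaf[s.Location[0].Line[0].Function.Name] += v
		}
	}
	for name, v := range byLeaf {
		fmt.Printf("%6.1f%%  %s\n", 100*float64(v)/float64(total), name)
	}
}

// busyWork gives the profiler something to sample.
func busyWork(d time.Duration) {
	deadline := time.Now().Add(d)
	n := 0
	for time.Now().Before(deadline) {
		n += len(fmt.Sprintf("%d", n))
	}
}
```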
A
I hadn't actually thought about that. I'd just thought about the width of each of these blocks, which is, in this case, CPU time, but not when it happened.
A
Yeah, good point. So, some of the questions here ZJ has already answered for me in chat before. One of the things that I noticed is that we can actually see how much CPU time we're spending on Prometheus. If I remember right, that's here somewhere, wasn't it?
B
I was already browsing the code, like: why is this so predominantly visible?
A
Yeah, so here we're actually looking at, well, it's too bad we don't know what that function is, but it's a gRPC middleware, and that calls out into all of our RPCs, I think. At the time I was looking last time, there was a very big PostUploadPack; a lot of time was being spent on PostUploadPack. ZJ pointed out to me that that's normal, it's CI doing stuff, but you're right.
C
I mean, I've seen that on the client side as well, because if you have GitLab CI, oh sorry, GitLab, a repository with thousands of tags, 20,000 or more, we call it a lot and it spends a huge amount of time processing.
A
Sorry, you go ahead, Bob. Yeah, that's what I mentioned in the doc as well: is it just expensive, or is it just called really, really often?
D
I found a project a while ago with 56,000 tags that showed up a lot in things like this. So if we could isolate that out somehow, maybe by Gitaly node, that would be interesting.
C
Yeah, the one thing to keep in mind is that at the moment we're looking over a seven-day time span, and this is collected over 50 Gitaly nodes, or more than 50 Gitaly nodes now, so there's a massive amount of aggregation in this data. I don't know the frequency at which any single Gitaly process gets profiled, but there are seven days, which is a lot, and then there are also 50-plus processes. So there's a lot in there, and it's very aggregated.
A
Another thing that I noticed, and that I would like you to explain to me, Andrew: at some point, where's the document, I was looking at the FindCommit RPC here, and this stack is split right down the middle, and then it's exactly the same on both sides.
C
Maybe what it was, Bob, is that if you're looking over seven days and there was a small change in the code, say 3.5 days into the view, a small change in the stack trace, maybe one extra line or something like that, then it might bifurcate along that. But now that there are no changes in that range, it's unified again. Does that make sense?
A
Yes, yeah, that was FindCommit, but it doesn't really look like that anymore. Okay, thanks for that, this I didn't know.
B
I wonder if this is a case where the question is a screw and your tool is a hammer, so I might be all wrong and the answer, ZJ, is no, and that would be fine. But this is something the tooling we've got isn't great at. With logging you can't correlate GCs with increases in timings or with different stack traces, and I don't even know how much time we consume per minute, or per any time span, on GC.
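On the question of how much time goes to GC: the Go runtime does expose this directly, so a sketch like the following could report GC pause time per minute and be lined up against RPC timings. This is generic runtime instrumentation written for illustration, not something the discussion says Gitaly already does.

```go
// Periodically report GC activity from the runtime's own statistics.
package main

import (
	"fmt"
	"runtime"
	"time"
)

func main() {
	var prev runtime.MemStats
	runtime.ReadMemStats(&prev)

	for range time.Tick(time.Minute) {
		var cur runtime.MemStats
		runtime.ReadMemStats(&cur)

		// PauseTotalNs is cumulative stop-the-world pause time since process
		// start, so the delta is the pause time spent in the last minute.
		pause := time.Duration(cur.PauseTotalNs - prev.PauseTotalNs)
		cycles := cur.NumGC - prev.NumGC
		fmt.Printf("gc: %d cycles, %v total pause in the last minute\n", cycles, pause)
		prev = cur
	}
}
```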
A
Or how it impacts us. I think one thing we need to pay attention to is that this is not wall time, so it doesn't show how long an RPC takes. It shows, in this case, how much CPU time a thread spends on the CPU actively doing stuff.
A
So I don't think we will be able to correlate that to RPC latency.
B
Goroutines as well, okay, yeah.
E
On the GC side, I think we're fine. Do you want to share your screen?
G
What are we writing into? This is actually a git cat-file --batch process. So we are writing all the object IDs that we want to have statistics about into that process.
C
It'd be interesting to know, like, maybe if we didn't flush until we'd written a whole batch out to the standard input. You know what I mean; it'd be interesting to look at it and see if there's an easy performance win there, but obviously I'm just guessing at what that is.
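As a sketch of that buffering idea: write the whole batch of object IDs into the cat-file process through one buffered writer and flush once, instead of a small write per ID. This is a standalone illustration (shown with --batch-check and hard-coded revisions), not Gitaly's actual catfile code.

```go
// Feed a batch of object IDs to git cat-file with a single flush.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// git cat-file --batch-check reads object IDs on stdin and prints
	// "<oid> <type> <size>" per line on stdout.
	cmd := exec.Command("git", "cat-file", "--batch-check")
	stdin, err := cmd.StdinPipe()
	if err != nil {
		log.Fatal(err)
	}
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}

	oids := []string{"HEAD", "HEAD~1"} // illustrative revisions
	w := bufio.NewWriter(stdin)
	for _, oid := range oids {
		fmt.Fprintln(w, oid)
	}
	w.Flush()     // one flush for the whole batch, not one per object ID
	stdin.Close() // signal EOF so cat-file exits

	sc := bufio.NewScanner(stdout)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	cmd.Wait()
}
```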
G
Yeah, I mean we could maybe even cache the results we have from this process, but I'm not too sure about that. I know that we actually do that for other parts; other RPCs we have can cache their results, and I think it should be doable here too.
I
Then, no.
C
Yeah, that's 100% correct. But what's interesting is that effectively what they're doing is taking pprof data: they're running pprof over and over and over, collecting all the stack traces, and then presenting it as a UI. I think Craig Miskell spoke to them and said that we really want this for Ruby, and Google's response was, oh well, it wouldn't be very much work for us, because if you can give us a pprof of Ruby, which of course we can do using other tools, then we can do that.
C
By the same token, if we used perf record we could get a similar output from the git processes, and we could put it in there. The risk is, obviously, that if you have 800 or 900 git processes on a machine it becomes tricky. So maybe what you do, instead of doing a per-process perf...
C
I don't know if you know much about perf, but you can actually run it at a kind of system level, looking at what's on the CPU of the whole system. And maybe, if you have 800 git processes running, it makes more sense to run that perf at the system level and ask: what is the CPU doing on this computer?
C
There's definitely an expense to it. I can't remember what it was; it's definitely not free, but we deemed it to be low enough. I don't know, Sean looks like he might know.
C
My guess is that, since the GC is so small, and this is heap allocation, right, yeah, so if GC is as small as it is, then we're probably not doing anything too bad. If GC were like 10% of the time, then we'd probably want to figure out what we're doing. I mean, it is quite interesting how much time we spend on the post-upload-packs, but I guess that's kind of expected as well.
A
Yeah, my surprise was more about the disproportionate split between SSH and HTTP, but then, yeah, CI.
B
But this suggests we spend more time on uploading packs to clients here compared to the HTTP one, which is five boxes to the right. Yeah.
B
So you were clicking the box, no, yeah, the profile screen, and I think you could select something like the last seven days or the last week or something. I don't know where you clicked here, but anyway, I saw it, okay, thanks.
B
Sorry, okay. And our data retention policy is 30 days, right? So if someone optimizes FindAllTags tomorrow, then they'd have the next 29 days to see their, no, actually, that's the question: how do you visualize improvements?
A
There it shows the...
A
Compare versions; there's a compare that you can use, right? So now I'm comparing two versions, 13.4.0 RC something and RC2, and then, like...
A
So, that's interesting. Because if we were to change that in the client, then we would see this bar get shorter, but we wouldn't be able to... we should compare it not by versions but by times, and then we'd see the difference.
G
Coming back to the tags again and whether they're handled efficiently: the problem could also be that we are just using git cat-file on the objects for this, which is definitely very inefficient, because to only get the tags you don't have to look at the objects at all, but only at the references that we have inside the repository. So we are already doing more work than is actually required.
B
We call git once for each ref, based on the namespace of the ref, and then we need to build the commit the tag is referencing. Oh.
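A hedged sketch of the alternative being described: list tags by walking refs in a single git for-each-ref call, letting git peel annotated tags to the commits they point at, instead of one git invocation per ref plus a cat-file per object. The format string and output here are illustrative, not the actual FindAllTags implementation.

```go
// List tags and their target commits with one git for-each-ref invocation.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// %(objectname) is the ref's OID, %(*objectname) is the peeled commit for
	// annotated tags, %(refname:short) is the tag name; %00 is a NUL separator.
	cmd := exec.Command("git", "for-each-ref",
		"--format=%(refname:short)%00%(objectname)%00%(*objectname)",
		"refs/tags")
	out, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}

	sc := bufio.NewScanner(out)
	for sc.Scan() {
		fields := strings.SplitN(sc.Text(), "\x00", 3)
		if len(fields) != 3 {
			continue
		}
		name, oid, peeled := fields[0], fields[1], fields[2]
		if peeled == "" {
			peeled = oid // lightweight tag: points straight at the commit
		}
		fmt.Printf("tag %s -> commit %s\n", name, peeled)
	}
	if err := cmd.Wait(); err != nil {
		log.Fatal(err)
	}
}
```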
A
Okay, then we'll leave it at that. Thank you very much, everyone, for joining. I'm going to post this recording and a link to the doc in the issue, and I'll drop a message in your channels with them as well, if you want to re-watch or share. Thank you very much, everyone, talk to you later.