From YouTube: Profiling Deep Dive: Registry
A: There we go, and I'll start sharing my screen, and then we can see what we can dig up from the registry profiles.
A: So, can we all see that? This is the GitLab registry and the profiles we get from there. I kind of summarize this a little bit on every call like this, and somebody's always around to improve my wording. So please, Eagle, if you wouldn't mind improving after I'm done, that would be awesome.
A: So what we're looking at here: the profiler is running and it takes a sample. The length of the sample is about 10 seconds, and then it looks at what's happening, in this case on the CPU, during those 10 seconds. And now we're looking at an aggregate of all of those samples collected over the past seven days, and then we're looking at averages. So, for the average 10-second sample...
A: ...the registry process was spending 1.3 seconds on the CPU, so that's not a lot in total. And then from that we can see what it was actually doing. Most of the time this is serving requests, I think, and one of the things that popped up is that we're building a URL from a request, and we seem to do that a lot.
B: Yeah, I was kind of expecting something like this, because the current routing logic is a little bit on the creative side. It's almost like a wrapper around a custom structure that is passed everywhere, and it kind of treats everything automatically with a lot of regular expressions, which is also showing up there. Yeah.
A: I meant to ask about that, because these are regular expressions that are being compiled. Here we're also compiling a bunch of regexes, and I was wondering what that was and why the stack is so deep.
B: Yeah, so basically this is used all over the code, because every request to the registry contains the repository name, and with that repository name we do a lot of stuff: finding the right place to get objects from object storage, manipulating the caching, and even the routing is handled with that as well, so, yeah.
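A sketch of why regexp compilation can show up so prominently in a CPU profile: if a pattern for repository names (the pattern below is simplified and illustrative, not the distribution spec's exact grammar) gets recompiled on every request, `regexp.Compile` lands in the hot path with a deep stack; hoisting it to a package-level `MustCompile` removes that cost entirely:

```go
package main

import (
	"fmt"
	"regexp"
)

// repoNameRe loosely approximates registry repository-name validation:
// lowercase path components separated by slashes. Compiling once at
// package init keeps regexp compilation out of the request path.
var repoNameRe = regexp.MustCompile(`^[a-z0-9]+(?:[._-][a-z0-9]+)*(?:/[a-z0-9]+(?:[._-][a-z0-9]+)*)*$`)

// validateSlow recompiles the pattern on every call: this is the kind of
// per-request work that shows up as regexp.Compile frames in a flame graph.
func validateSlow(name string) bool {
	re := regexp.MustCompile(`^[a-z0-9]+(?:[._-][a-z0-9]+)*(?:/[a-z0-9]+(?:[._-][a-z0-9]+)*)*$`)
	return re.MatchString(name)
}

// validateFast reuses the precompiled expression.
func validateFast(name string) bool {
	return repoNameRe.MatchString(name)
}

func main() {
	fmt.Println(validateFast("gitlab-org/container-registry")) // true
	fmt.Println(validateFast("Bad//Name"))                     // false
	fmt.Println(validateSlow("gitlab-org/container-registry")) // true
}
```

The two validators behave identically; only the placement of the compile differs, which is why this class of fix tends to be cheap when the routing layer allows it.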
B: So it should pop up quite often, and there's room for improvement. I think we looked at this in the past, but it won't go away without a more serious rewrite of the routing logic. We also have quite outdated dependencies for the mux router and the gorilla handlers as well, so bumping those (I believe Ali already pushed an MR for that) might help in the midterm. But we really need to rewrite the routing logic.
A: Well, all the small steps should show up here, and you can compare versions of the app as well. So if you deploy a new version and you want to compare the old one to the new one, you could use this for that. In theory, anyway, because we don't have a lot of different deployed versions yet.
A: That is good, yeah. It's also worth noting that here we're seeing the process isn't actually spending a lot of time on the CPU. I think the registry is one of our best performing services, in terms of alerts and whatnot. So we can look at these things here and see where there might be room for improvement, but things are already not bad at all.
C: Yeah, I guess one related comment is that these flame graphs are always to be considered in the context of what the service is actually doing. So, depending on how many instances of this we're running, maybe 10 or 100 instances, it might be a lot, and we could actually save quite a bit on capacity there.
C: But maybe before that, one point that I always find... well, it's kind of hard to read these flame graphs. Have you had any experience with flame graphs before?
B: Yeah, we looked at them in the past, when we did some optimizations for the garbage collector. There it's much easier to identify things, because there are far fewer moving pieces, so it's much flatter.
A: Yeah, when you were doing optimizations to the garbage collector, was that to the garbage collection itself, or was that to object allocation?
B: No, it was to the algorithm. It was basically mostly around the number of network requests that are required to do the analysis and the deletion; so, basically, cutting the time. In some cases we also cut the memory usage, but the main driver was cutting the execution time.
A: Not this garbage collection bit, because, yeah, it's showing up as a big thing here, but that's because the overall sample, the time spent, is quite short. So let's have a look at the memory usage. Here we can see all the memory allocated over the 250 profiles, on average; so we're actually looking at the average of 30,000 profiles, I think. Perhaps you know how this averaging works, and what the "over 250 profiles" means. But here we're looking at all of them.
B: Only self-managed instances can run the current offline garbage collector, so it won't show up here. What shows up here the most is the walk function, and the biggest reason for that is that, for example, to get the list of tags of a repository, we have to walk through a series of folders, list them, and then continue down that route until we have everything enumerated. So that's quite expensive.
A: That's going to be something that we... because I just came from reviewing a merge request by David. I think that's going to be deleting a bunch of those by walking through them, going through the old repositories that are in there, so that might be something we need to keep an eye on as well. What it's going to do is take all the repositories that haven't been cleaned up in...
A: So, yeah, we're going to start cleaning those old, unused repositories up, and that would mean requesting the tags of these repositories, and that could be a lot of tags. So we might have to take into account that it might grow the heap when it's loading those tags into memory to serve them back to GitLab Rails.
B: Yeah, that's true. That's one of the reasons why we wanted to do it on an incremental basis: start small and see how it goes, because that will put a lot of load on the registry.
A: Yeah, as far as I know, the rollout is planned to start with a list of projects. So we just pick a few projects, and then perhaps we can do a percentage-based rollout. It's nice to see that here.
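A percentage-based rollout of the kind mentioned above is usually implemented by hashing each actor to a stable bucket. The helper below is a hypothetical sketch, not GitLab's actual feature-flag code: it hashes the project identifier into [0,100) and compares against the rollout percentage, so a project that is enabled at 10% stays enabled as the gate widens to 50%:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// percentageEnabled is a hypothetical percentage-of-actors check: the
// project ID hashes to a stable bucket in [0,100), and the flag is on
// for that project whenever its bucket falls below the percentage.
func percentageEnabled(projectID string, percent uint32) bool {
	h := fnv.New32a()
	h.Write([]byte(projectID))
	return h.Sum32()%100 < percent
}

func main() {
	for _, p := range []string{"group/app", "group/other"} {
		fmt.Println(p, percentageEnabled(p, 50))
	}
}
```

The stability matters for the heap concern discussed above: a project's tag set is loaded repeatedly once it is in the rollout, rather than projects flickering in and out at random.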
A: The memory that's allocated but not released during the runtime of the sample: is there anything surprising that shows up here? This might be because uploads take a while; the sample finishes before the memory is returned and the upload is finished.
B: We kind of keep that open until the client finishes uploading all of the parts. And then we also see that the URL building is going up again because, again, this is also used to construct the paths where we need to upload the data to in the GCS bucket. So it's used everywhere, basically.
B: Yep, yeah. The only problem is that, because that is used everywhere, changing it will affect everything as well. So we need to be careful about that, too.
B: So we need to be able to clean that up. But to clean that up... mainly layers from images, and also some of their configuration files, are stored deduplicated in the cloud storage buckets, which means that, in order to determine if we can garbage collect them or not, we first need to determine if there is any repository that's still relying on that specific blob. And to determine that, we have to scan all repositories, and with the existing structure...
B: That means traversing the whole file structure, folder by folder and file by file, finding all of the references and then calculating the blobs that are not referenced anywhere. But to do that, we need to be offline, which we can't do for GitLab.com, of course; otherwise we would be in read-only mode for two or three weeks.
B: And basically an incoming request invalidates whatever calculations we have done so far, yeah. So that can happen, and the only way to get those results fast enough to not be invalidated by an incoming request is by having a SQL database.
B: So that's what we're going to do: migrate the metadata (the repositories that exist, which manifests are in those repositories, which tags as well, and which blobs are referenced by each one of those artifacts), put that information in a SQL database, and then we can do all of the querying using SQL instead of HTTP requests against GCS.
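The approach described above (record which blobs each manifest references, then sweep the rest) can be sketched in memory. The SQL in the comment is an illustrative schema, not the registry's actual one; the Go function shows the same anti-join logic:

```go
package main

import "fmt"

// With the metadata in SQL, "which blobs are unreferenced?" becomes a
// single anti-join instead of a bucket-wide walk. Illustrative schema:
//
//   SELECT b.digest FROM blobs b
//   LEFT JOIN manifest_blobs mb ON mb.blob_digest = b.digest
//   WHERE mb.blob_digest IS NULL;
//
// The in-memory version below implements the same mark-and-sweep idea.
type Manifest struct {
	Digest string
	Blobs  []string // digests of referenced layers and config blobs
}

func unreferencedBlobs(allBlobs []string, manifests []Manifest) []string {
	referenced := map[string]bool{}
	for _, m := range manifests {
		for _, b := range m.Blobs {
			referenced[b] = true // mark phase
		}
	}
	var dangling []string
	for _, b := range allBlobs {
		if !referenced[b] { // sweep phase
			dangling = append(dangling, b)
		}
	}
	return dangling
}

func main() {
	manifests := []Manifest{{Digest: "sha256:m1", Blobs: []string{"sha256:a", "sha256:b"}}}
	all := []string{"sha256:a", "sha256:b", "sha256:c"}
	fmt.Println(unreferencedBlobs(all, manifests)) // [sha256:c]
}
```

Because the database answers this in one transaction, the result can stay consistent with incoming requests, which is what makes online garbage collection feasible.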
A: That could also speed up the cleanup on the Rails side, because right now Rails needs to make a request to the registry, and then the registry needs to check object storage before it can answer, which is why...
B: Yeah, it will actually have to be... probably it will have to be in the opposite direction. So first, ideally, we would run the expiration policies, so that we can delete as many tags as we are able to, because we will then have to migrate the existing repositories from the current registry to the new one by importing the metadata. Which means that the fewer tags that exist, the faster it goes; the less work we need to do. But, yeah, in the end...
B: For example, once we have the database, the time that we are spending with the walk in the other graphs will vanish, because we will no longer have to search the bucket.

B: Yeah, so that will go away, because we will not need that function anymore.
B: ...for that already. Because, yeah, I'm not sure if we have one already, but if we don't have one, I'm going to go open it.
A: Thank you very much for your time. I'm going to post this recording; if there's anything more to add, or any questions, feel free to reach out. You know where to find us. Thanks, thanks. Thank you very much. Bye-bye.