From YouTube: Package ThinkBIG April 2021
Description
We discuss displaying usage data in the application, how to help onboard new team members, how to measure performance and scalability, and other topics.
A: Hello, welcome to this month's Package Think BIG, where we talk about ideas that are a little further out than this milestone and give everyone a chance to collaborate on things like design discussions and product strategy, as well as how we organize the team and how we work.
A: We have a full agenda today. I think the first item says me, but Ian, maybe you would be willing to go through this one. It's discussing the design for the dependency proxy and how we could potentially move this MVC forward.
B: I am on the wrong laptop and my Zoom does not have permission, so Tim, would you mind pulling up the design? I would be happy to talk about it in that super awkward way.
B: In terms of the design, we're introducing a usage tab, then using some of the components that we already had in tabs to show a trend. So in this example there's an increase in downloads by 12% week over week, and we show a pretty basic chart. This is a chart we've seen in a lot of other parts of GitLab, so in terms of the design of the data visualization, it's not that unusual.
B: It should be pretty similar, but we're starting with, in theory, how many downloads this week, then the week previous, and the week previous to that, showing some historical data. The MVC would be week over week, because I think we discussed that that was the easiest thing for us to track.
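The week-over-week trend described here reduces to a simple percent change between two counts; a minimal sketch (the function name and sample numbers are illustrative, not GitLab code):

```python
def week_over_week_change(this_week: int, last_week: int) -> float:
    """Percent change in downloads versus the previous week."""
    if last_week == 0:
        # No baseline: report 0 if nothing happened, infinite growth otherwise.
        return float("inf") if this_week else 0.0
    return (this_week - last_week) / last_week * 100

# e.g. 560 downloads this week vs 500 last week is the "+12% week over week"
# style of figure shown on the usage tab.
print(week_over_week_change(560, 500))  # 12.0
```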
B: It would be good if we could expand this out to how it's going on a daily, weekly, and monthly basis, give users some flexibility based on their needs, and let them select some time ranges. After we get through the MVC is really where it gets interesting; this is where the conversation, I think, can get really exciting.
B: If everyone is using GitLab the way we're hoping they will, we're kind of representing that, and then external tools, if they have them. I'd like to be able to represent if an individual downloads it to, say, a local host, as well as if it's being pulled by Secure and Defend or from other sources.
B: So that is the overview of what is actually downloading all of these things, and then the latter part, which is the Gandalf beard if I've ever seen one, is the usage history: every time it downloads, we tell them what pulled it. So you can see in the example it was pulled by a specific pipeline on a job, whatever, and here's the time; the next one, Nico uploaded it, and so on.
B: Just representing where that package is going, because that's kind of a black box for our users right now. For a developer that information isn't quite as useful, but when you move up the DevOps chain to Secure, Quality, things like that, if there's a package that they've discovered has a vulnerability, the next important thing is: well, what's using this package, so we can get rid of that vulnerability?
D: May I make a comment? So, looking at the history gave me an idea: if we have all the stats saved, we would be able to determine how critical a package is for a certain organization, meaning that if a package is pulled by a lot of different actors, especially a lot of different pipelines, then if I break this package, I'm going to be stopping everything, right? Instead, a package that is used by a single project, maybe it's okay if it breaks; it's not going to be a showstopper for the whole company.
B: Yeah, in the design I represented that by the pipeline itself, so you could tie it back to the project or however it's being used, especially with organizations I've heard of where one project may hold the package but the pipelines are in another location. So I felt that was the priority, but I think it would be good.
B: That's a really good call-out. We have been, from the design perspective at least, pretty intentional that each version of a package is a unique entity that gets its own UI and so on, but we could certainly explore the idea of: here is package name X, and here's version one, it got pulled a thousand times; version two got pulled two thousand times; version three has never been used, so there's a problem there. I could definitely see that being useful, especially for organizations that revolve around updating their packages more frequently.
F: Yeah, I guess my question was: do we have a plan to add filtering to this, so I could filter the top chart based on the bottom pie chart, to sort of say: I want to see internal only, or I want to see external?
B: Yes, I have not thought about that, Dan, I will admit, but it would be really interesting to start elevating those kinds of charts and usages. We could do it by project. It might be odd, but being able to be a group-level DevOps admin who doesn't want to dive into the project level and say "this package is being used frequently" or "this package that shouldn't be used is getting used", and surface those at a higher level, for sure.
F: And then, annoyingly, the other one would be: how do we pivot on this data to look at it from the other perspective? Like, instead of having a pie chart, look at this in the context of internal pipelines, and what are they broken down by across the package repositories?
F: Or could I look at it from a user's perspective: what are my team pulling down the most? What are they looking at?
B: I think there's a really unique opportunity there. I would be hesitant to start getting too into the pipeline usage on our page. However, I think there is a great opportunity for cross-stage collaboration, where we could go to Vitica and Nadia on the pipeline side and start surfacing how often certain packages are used.
A: And the other thing, well, you know, I think one thing you were looking for is: what can we do to move this MVC forward? Is this possible? Is this design feasible? What's a good first step, maybe, that we could take to move in this direction?
B: Yeah, that's definitely a good question, and I think one of the things from the usability standpoint is just being able to get that chart of how many times it was downloaded this week versus last week. That's definitely the first thing we need to show them in terms of what matters from the UX side of things.
B: That's going to get huge, and so my question is: can we feasibly store that amount of information? Is it reasonable to have it, and is there a way for us to build out the design and the back end to start accommodating it, instead of needing all of it at once? That's technical information that I don't have, so it's hard for me to steer the design that way, and that's true for both the package registry, which is where we're focused with this design, as well as the container registry.
G: Yeah, it might be interesting to collaborate a little bit with the product intelligence team that deals with usage ping and Snowplow and all that good stuff, to see if we can utilize any of those tools we're already using to gather some of this data.
G: Because, I mean, I've experienced where you can use roll-up tables and sort of just aggregate data in the standard database to track weekly and monthly counts. But we might already have systems in place that we might be able to use, joining efforts with those teams.
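The roll-up table idea mentioned here can be sketched with one counter row per (package, week) instead of one row per download event, so storage grows with packages times weeks rather than with traffic. A minimal in-memory illustration (table and column names are made up, not GitLab's schema):

```python
import sqlite3
from datetime import date

# One counter row per (package, ISO year, ISO week), upserted on each event.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE package_downloads_weekly (
        package_id INTEGER,
        year INTEGER,
        week INTEGER,
        downloads INTEGER NOT NULL DEFAULT 0,
        PRIMARY KEY (package_id, year, week)
    )
""")

def record_download(package_id: int, on: date) -> None:
    year, week, _ = on.isocalendar()
    conn.execute("""
        INSERT INTO package_downloads_weekly (package_id, year, week, downloads)
        VALUES (?, ?, ?, 1)
        ON CONFLICT (package_id, year, week)
        DO UPDATE SET downloads = downloads + 1
    """, (package_id, year, week))

for _ in range(3):
    record_download(42, date(2021, 4, 12))

count = conn.execute(
    "SELECT downloads FROM package_downloads_weekly WHERE package_id = 42"
).fetchone()[0]
print(count)  # 3
```

Three download events collapse into a single row whose counter reads 3, which is what keeps the weekly and monthly charts cheap to query.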
D: I think in the past we discussed a log-based approach. I remember having a conversation around it with David, where we would attach a log event to every package manager and to every package, which we could use to build up the kind of information that we are looking for now. It was a little while ago, so I'm not sure if it's the right direction.
H: I recall Gigi working on an event-based system for tracking package pulls per user, or something like that, and the moment he enabled it on production, the table filled up in a super quick way. The growth was not viable, so he switched to Redis directly.
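The problem and the fix described here can be illustrated with aggregated counters: instead of inserting one database row per pull event, increment one counter per key (in Redis this would be `INCR`). Below, a plain Python `Counter` stands in for Redis purely to show why counters grow far more slowly than per-event rows; the key format is hypothetical:

```python
from collections import Counter
from datetime import date

# Stand-in for Redis: one counter per (package, day) key, bumped on each pull,
# rather than one table row per pull event.
counters = Counter()

def track_pull(package: str, on: date) -> None:
    key = f"package:{package}:pulls:{on.isoformat()}"
    counters[key] += 1  # Redis equivalent: INCR key

# A thousand pull events...
for _ in range(1000):
    track_pull("npm/left-pad", date(2021, 4, 1))

# ...collapse into a single key, not a thousand rows.
print(len(counters), counters["package:npm/left-pad:pulls:2021-04-01"])  # 1 1000
```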
B: It would certainly start us somewhere and get us on the page in terms of usage data. However, from the usability standpoint, a full-month lead time is usually a little long for users to make decisions on, so it'd be good to present the data because we have it, but definitely one of the priorities would be to refine it down to a more actionable cadence.
B: That's a really good question. I would imagine, from the usability standpoint at least, that being able to get at least the last 30 days, or the last 90 days, or whatever the time window of data is, presented as "this is just showing you the latest stuff in the history", would be really useful. It would still be good to go back further, but especially for the bottom section, where we're actually saying it was this pipeline.
B: That sounds like a great first step, and I would love to collaborate during that investigation on the design, so that we can make the best use of what data we have and get creative in how we present it, to make it seem like we have more robust data than is actually there, and then actually start filling it in. That would be a great next step.
B: I think we haven't made any changes since the last time we reviewed it. The question I have at this point, and maybe we've investigated it and I've lost it, is: given the design and what is currently implemented for the dependency proxy, what can we build? That's the question, and Nico, please let me know if that's inaccurate, but that's the question I would really like to answer.
D: Well, we did create a list of issues that are needed to kick-start that design. I think we also noted which issue blocks which, so if we wanted to start that effort, we could follow up that chain, taking whatever is unblocked and unblocks things downstream.
G: Yeah, if I recall, it's mostly that we need to start with setting up GraphQL to bring some of those attributes forward to the front end.
A: I like it, okay, that makes sense. I added a note here, and we talked about this yesterday in the quad planning, but a couple of things on this: we've been hearing requests that people would like to use the dependency proxy to pull images from ECR and GCR as well, and maybe have some way of making it more generic. That's not on the front end or the design front, but I wrote it down, so now I have to mention it, you know.
A: Okay, the next item is me. So, other feature ideas I've been hearing a lot about are around the dependency firewall. We've had a couple of customers come and say they're using Nexus, and they really like the firewall feature that allows them to basically say: never download a package with this author, or this license type, or anything like that.
B: Frequently, is this something where we should open it up to the Secure and Defend team and kind of push towards them to do it, and then we implement it in a roundabout way? Or do you think this is something that we should do?
A: They're not going to implement this, because this is our category; I tried. It falls on us to do it. Honestly, I mean, yeah, it seems like we're going to wait on this until we are a little further along. I was hoping that there's a minimal MVC here, but I think we need to be a bit further along before we get to this.
E: Yeah, I saw that we have plans to expand the scope of the dependency proxy and support other external registries, and I think this is a good time to think about applying quota and expiration policies to the dependency proxy's cached artifacts, because it's easier if we do it now rather than later; later may be a bit too late, and it's better if we don't go down that route.
G: It's been discussed that we could set up a simple setting where, you know, the dependency proxy cache will clear every week, every 30 days, whatever, automatically, and users can change that time range. Would that maybe be a good starting point in terms of something we can provide?
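The configurable expiration setting described here boils down to purging any cached entry whose last pull is older than the group's TTL. A minimal sketch (the dict-based "cache", key names, and the 30-day default are all illustrative, not GitLab's implementation):

```python
from datetime import datetime, timedelta
from typing import Dict, List, Optional

DEFAULT_TTL_DAYS = 30  # the instance-wide default; self-managed could adjust it

def expired_keys(cache: Dict[str, datetime],
                 ttl_days: int = DEFAULT_TTL_DAYS,
                 now: Optional[datetime] = None) -> List[str]:
    """Return the cached blobs whose last pull is older than the TTL setting."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=ttl_days)
    return [key for key, last_pulled in cache.items() if last_pulled < cutoff]

cache = {
    "alpine:3.13": datetime(2021, 4, 20),  # pulled recently -> kept
    "node:14": datetime(2021, 1, 1),       # stale -> purged
}
print(expired_keys(cache, ttl_days=30, now=datetime(2021, 4, 25)))  # ['node:14']
```

Setting `ttl_days` very high (or skipping the purge entirely) would correspond to the "never expire" option mentioned later for self-managed instances.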
G: Does the dependency proxy storage currently contribute to any limits that we have, like group limits?
F: No. Yeah, my question is: do we have sort of a published opinion, or however we want to characterize it, that in general, with package data, we'll have TTLs, we'll have limits? I know this is a conversation, and I see we're recording, so I'm not mentioning specific things, because I don't know where this conversation will end up, but we do need to consider this in general.
A: No, the only limits we have right now on packages are on the maximum file size, the individual file size that can be uploaded. We don't have any limits on either registry in terms of how much you can store, or anything like that.
A: This might be a little bit different because it's a cache. We do have, for instance, build artifacts that get built: I think they automatically get expired after 30 days or something like that; if it's not 30, it's n days, I don't remember the exact number. So we have some policy in terms of GitLab clearing out cached objects, but not for our stage in particular yet.
F: And then, I guess, I'm sort of thinking of this in general as data retention, right? So if we're thinking about it for .com, that's one thing, and then capabilities for self-managed customers to be able to configure this, disable or enable it as they see fit, would be something else to add to it.
A: Yeah, I think that would be a theme for the second half of 2021 for our team too, but for the dependency proxy, I think we could take a stronger opinion, because we already have the artifact expiration in place, so we could mimic that. And I like Joan's idea about having a setting: if we set it to 30 days, then self-managed can adjust it to never, if they want to use up the storage. But yeah, I like that idea.
I: Yeah, and Tim and I were actually on a customer call a while ago now, and it wasn't for this exactly, but they wanted something where an admin of the GitLab instance could set a limit for expiring data, such that individual repositories or individual groups could set a more restrictive limit, but couldn't set a more generous one.
I: And I think that's a pretty interesting idea, and I think it applies to, you know, any multi-tenant instance of GitLab, whether it's .com or our large self-managed installs.
F: Yeah, I'm kind of worried about that from a security perspective. You know, I guess it would just require some tooling to allow an administrator to manually clear a cache if they determine there's some issue with it, like some security failure, or, you know, these types of, I can't even brain right now, but these types of attacks on the supply chain, supply chain attacks. You know, if it's permanently cached, we'll need a way to invalidate it manually as well, right?
I: Do you think that needs to be on an object-by-object basis, or should there be like a big red button that just clears the cache and resets it completely?
F: I think a big red button, as it were, would probably be helpful in the event of something having a problem, because, you know, I'm sure we've all seen scenarios where we have something cached that's actually not correct and we need to get rid of it, or we get something cached from a wrong source, maybe there's some DNS attack, and it's like, okay.
A: So this one I wrote, I think, after I interviewed one of the candidates for the engineering manager positions. I forget what question they asked me, but it made me think about onboarding people, and it's been a while; all of us have been on the team for quite a while. I was wondering, thinking back one and a half to two years ago:
A: If anyone had any ideas for helping people onboard quickly: was there anything that really helped you, or is there anything we could do to help our future cohorts onboard as seamlessly as possible?
F: Yeah, I think, just to get the conversation started: you know, historically we had onboarding issues for people, where we determined information that was relevant to the person and where they were joining the team, because we have these sort of functional areas in Package. And we sort of tried to figure out simple tasks that someone could pick up that, initially at least, weren't scheduled, just to allow that person plenty of time to get up to speed. We do still have a template for the team that we used.
F: So that's still a thing, so I think probably a good first step would be to review that template, and, you know, the people in the team could take a look and determine if it works for all of the functional areas that we have in the team, update it, and I could contribute to that as well, and then go from there.
F: So that's not something we want to think of as a target by any means. And then the other impact that we're seeing in the data, doing analysis across various teams, is just that the whole capacity of the team is reduced in general, because as a team we all work together, right? We all add value to each other and deliver more because of the presence of the team, and so when someone's out because they're coaching, or mentoring, or onboarding-buddying,
F: That's now a word. You know, that means less capacity in the team, so we sort of account for fewer deliverables, I think, as we at least initially get new team members. And then the other pattern that we have is everyone has a one-on-one with each other. So these sorts of things are basic; that's sort of more like the mechanical onboarding: set up all the meetings, add people to the meetings, add them to the async retros, all that stuff.
I: Yeah, so my onboarding template had a bunch of Rails-specific stuff; it just wasn't useful to me as a Go programmer, and it wasn't optional. So it was a bit...
F: Definitely. I'll... Steve, do you want to speak to that?
G: You're coming back? Yeah, it looks like we're agreeing that that's partially the fault of members of the team in creating these onboarding issues and thinking: oh, this could be useful, we should learn about these things, because this is what I'm doing.
F: Yeah, I think, fortunately, we're a little ahead of that curve now. I mean, I know when you joined the team, Haley, we were pretty early in the sort of formalizing of hiring Go engineers at GitLab, and so it was pretty early days, where we were like, "what are we doing?" So I hope we'll address that, and having you and Jerome on the team will definitely be helpful there. I mean, I think the one other thing I might add here is:
F: I think I definitely want the whole team to look at the onboarding template; I'll go find it, but I definitely need Sophia, Ian, and Tim, as well as myself, to look at it and make sure it's addressing the quality, product design, and product management practices in the team, just to make sure that people are aware of those elements. So.
D: Yeah, so I wanted to say, even though we have the onboarding buddy system, I think encouraging and stressing it helps a lot. For me, I got up to speed much faster by having a few syncs with Nick than by reading the documentation and other things, which I agree you still need to do, but the sync time is really valuable in the beginning.
G: I was adding that I totally agree. I've experienced, through my own onboarding and through helping onboard others, the difference between kind of only meeting once with your onboarding buddy, ever, versus having a regular catch-up and being able to be more comfortable pinging each other; having more comfort and interaction is much better and more helpful.
B: Oh, I do. My question was about onboarding on UX. Getting to dive into the research is obviously pretty big and important, and we as a group have worked really hard so that engineers also have all of that user data and user research context. In terms of onboarding as an engineer, would including all of that research information, or some TL;DR version of it, be useful? Or would it just be overwhelming data until you kind of got comfortable?
F: I think most of us probably remember, you know, this whole series of "we need to do these sorts of things today, or this week, or whatever", and then there's a bunch of "hey, here's some longer-term stuff to look at". So maybe we can think about it in those terms, but I think it's really valuable to make sure there's a point of reference for people, and then, excuse me, setting that expectation up front and sort of saying: hey, here's all this data we've generated.
A: Cool, thanks for all that. If there's nothing else, we can move on to the next item. I shared an issue for discussion. So, something that's happening from the engineering and product perspective is that, for stages with high levels of adoption and usage, they're trying to dedicate more time to scalability and performance. I'm sure you've heard me say this, and Dan say this, and everyone else, but it's up to our team to define which metrics and success criteria we have.
G: And we have, on the package dashboard, a "days since last incident". Oh.
E: Yeah, I was going to say that if you're looking for a single metric, maybe smash all of the API endpoints together and track the average response time, and also the average number of database queries: basically take every single route together, smashed into a single average. And then you can easily track progress by seeing the line decrease.
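The "one smashed-together metric" idea above amounts to a request-weighted average of per-route response times, so busy routes dominate the line. A minimal sketch (route names and numbers are made up for illustration):

```python
# Per-route stats: route -> (request_count, mean_response_ms).
routes = {
    "GET /packages/npm": (9000, 120.0),
    "PUT /packages/maven": (1000, 480.0),
}

# Weight each route's mean by its traffic, then divide by total requests.
total_requests = sum(count for count, _ in routes.values())
weighted_avg_ms = sum(count * mean for count, mean in routes.values()) / total_requests
print(weighted_avg_ms)  # 156.0
```

As H points out next, the weakness of this single number is that protocols with chatty clients (e.g. Maven pinging for every dependency) dominate it, which is an argument for tracking the metric per package type instead.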
I: Hopefully. Should we separate out routes that serve objects? There might be some delay there that's not necessarily that same kind of metric, right?
H: If you have a Maven project and GitLab is one of the registries, the GitLab registry will get pinged for all the dependencies of the project, whereas if you use, I don't know, npm, GitLab will only get pinged for the given dependency. So the usage is not the same; I'm not sure it's a good idea to aggregate all the metrics.
F: Yeah, I think, up next, Apdex and mean time to resolution were the ones that I was thinking of; definitely talking more sort of operational measures.
I: The way clients interact with the container registry is to make a lot of HEAD requests that, during the happy path, are going to 404. So I think that's a part of the container registry where that's noise we need to eliminate, and I'm wondering if this is going to be another thing where it has to be per package type, or per service type; whether it's going to have to be bespoke to that.
F: It might. That's one of the incidents we saw: we never saw it pop up on the radar enough for ops, for infra, to look at it, because it's just a small number of errors in the scheme of things, but for us, in the context of Maven, for example, it was a significant spike in issues. So, you know, that's one of the values, I think, of having metrics specific to the various components that we're working on, which doesn't exist today.
J: I was just going to ask: how are we actually being alerted about any of this? Do we have baselines, and how are we being alerted about this?
E: Yeah, we have service level indicators for each API; that's true for the package and container registries, and those have a threshold. Actually, two thresholds: one is the satisfiable threshold, so if everything stays under that amount of time to deliver a request, then everything is okay; and then there is a tolerance threshold, which is above the desired one, and that's when we start getting alerts, because we are starting to degrade on the service level indicators. And those are all in Grafana.
F: Well, I think, when we're talking about monitoring and usage and all this, we also have to think about capacity, and so that's going to be really helpful to understand.
F: It's not necessarily something we have in the context of what systems we're using, but, for example, for something that's fairly isolated, like the container registry, we probably need to think about our own capacity planning and how close we are getting to those sorts of capacity limits.
F: No, that's not a good way to have engagement, I think. I don't want to propose an action, because then I'll have to do it. Sorry... yeah, no problem, I can do that. Do you want me to just create it in the GitLab project, or did you want it in our package project?
A: Steve, I took one of your comments from Slack and added it to the Think BIG agenda. So, earlier this week, or last week, I was checking out this open source project called Pulp; they have an API for generically managing packages, and I was kind of swooning over the format of the docs, which looked really awesome. I shared it in Slack, and then, Steve, you added this comment. Do you want to verbalize this?
G: Sure, yeah. So I was just kind of commenting that there are products like Pulp, which is essentially, it looks like, and I only briefly glanced at this, an API-based package registry, where it's not necessarily interacting with all of the package manager clients, but it's allowing you to generically use various APIs to manage your packages.
A: It seems like, would we ever consider doing something like that? Instead of integrating with each package manager's API, would we ever say: oh, we're just going to abstract that from the user? Because on one hand that seems nice; on the other hand, it seems like, as a user, don't you want to use the typical npm endpoints? If you're using npm, you don't want to use a different API; you don't want to learn something new.
F: Yeah, I think, in the past we generally wanted to build things ourselves, to increase integration and make sure we have opportunities to add metrics and understand what's going on; it's kind of been a GitLab thing for many, many years. So I think if we want to use a third-party tool, that's fine, but we need to have a very clear justification of what that saves us, and given that we've already created a lot of this stuff, we'd be sort of saying, okay, well:
F: If we move forward with this other solution that effectively replaces everything, we want to be really clear that that's not taking one of our key differentiators, our key strengths, and just off-boarding it, as it were, into another company's effort or another team's effort. I know this is open source, so we'd contribute back to the project, which is cool, but it's something to think about; in the past our patterns have been sort of: try something, fork it, use it, integrate it.
A: I really like the idea of abstraction. Maybe the abstraction layer makes sense when you start thinking about integration with CI, where maybe you don't want to put in the full "npm install" with all the parameters; if you could avoid that and just have something that says "package publish", that might be a nice way to integrate a more generic package manager feature. So that's kind of the direction I want to go, and then, personally, I thought reviewing the documentation was helpful.
G: Maybe it's something we can ask users about, to see if they're interested in some of these things, because it's hard to... I mean, the question you asked that got silence earlier, of "would this be useful?": none of us have used this, or this type of system, personally before, so we don't really know. So it'd probably be helpful to get input from users, yeah.
A: I added an item yesterday during our quad planning, but I haven't had time to follow up on it; we were talking about what data is useful in understanding product usage and customer growth. We could punt on that one until next time, since we're already in overtime. Anyone have anything else to add before we break?