From YouTube: Package Think Big: June 2021
Description
We discuss a new Grafana dashboard for the container registry, a POC of pipelines for packages, and onboarding large enterprise customers.
B
This is only filled in pre-production, no staging yet, but basically this summarizes everything that's going on with the garbage collection. It's the overview section, where we can have a glance at the size of the queues, whether there is a lot of stuff queued for review or not, also glancing over the storage space that was recovered by deleting stuff from the storage backends, the time that it takes to do the analysis of whether something should be deleted or not, and also the median delete latency, both on storage and at the database.
B
The garbage collector will postpone the review of that task to a future date, so that it doesn't keep retrying it over and over again; so it's like an exponential backoff. We also see the time between reviews, because we have two workers, the blob worker and the manifest worker, and they run at a configurable cadence. So that's what we are seeing here: they kick in and they back off, waiting for the next run, for the next time to come. We also have the run rate, so basically how many operations, how many runs, are we having per second. So these are operations per second. Then we have the breakdown, to see how many of those were successful, failed, or those where there was nothing to do. This is because the workers, the online GC workers, will always kick in at that predefined cadence, but when they kick in, maybe there is nothing to be done, there are no tasks queued, so those show up here.
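The worker behaviour described above (fixed cadence, exponential backoff for failed reviews, and counting no-op runs when the queue is empty) can be sketched roughly like this. All names and the backoff values are illustrative assumptions, not the registry's actual code:

```python
# Hypothetical sketch of an online GC worker run: a "noop" when the queue
# is empty, otherwise a review that either succeeds or is postponed to a
# future date with exponential backoff instead of being retried at once.

BASE_BACKOFF_MINUTES = 5        # assumed delay before a failed task is reviewed again
MAX_BACKOFF_MINUTES = 24 * 60   # assumed cap on the postponement (one day)


def next_review_delay(failures: int) -> int:
    """Exponential backoff: 5, 10, 20, ... minutes, capped at one day."""
    return min(BASE_BACKOFF_MINUTES * 2 ** failures, MAX_BACKOFF_MINUTES)


def run_once(queue, counters, review):
    """One scheduled run of a worker (blob or manifest).

    Counts a 'noop' when the queue is empty; otherwise reviews the next
    task and either counts a success or re-queues it with backoff.
    """
    if not queue:
        counters["noop"] += 1
        return
    task = queue.pop(0)
    if review(task):
        counters["success"] += 1
    else:
        counters["failure"] += 1
        task["failures"] += 1
        task["review_after_minutes"] = next_review_delay(task["failures"])
        queue.append(task)  # postponed to a future date, not retried immediately
```

The success/failure/noop counters map directly onto the breakdown panels mentioned above.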
B
We can see that most of the time we kick them in, there was nothing to do, and then we have the delete latencies; this is all for the p90. And similarly, we can see that for the database it's hovering around 30 to 35 milliseconds, and for storage it's around the 300 milliseconds, and this is for blobs. We also have the latency for the delete of a manifest on the database. Yeah, that's pretty much it. In case anyone is curious, there is a description for each one of the metrics.
C
It's super good, I've seen it before, but with the blob queue size and the manifest queue sizes, I'm wondering if we should add two extra metrics. I know we have the total queue size, but I think having the size of the queue that's actually ready to be reviewed, versus things that are in a review delay, would be an interesting thing to have at a glance. I know we can sort of tell that from the no-ops and the wait time, but I think having just those numbers would be a good addition and a good health metric to have.
B
Yeah, I agree. Also, for that we would have to count the rows on the queues, and right now every single worker would have to do that, because we can't limit the number of actors in the cluster, so that would be a performance penalty. But it's definitely something that we should look at once that is no longer a problem.
D
Just out of curiosity, is it possible to have some sort of alerting? If I were checking this dashboard, how would I know that there is, say, an alarming number of tasks queued, or some other metric? How do we see this?
B
Yeah, we have thresholds for each one of these metrics, and within those thresholds we define what's tolerable or not, and if some of these metrics go above the tolerated threshold, they can trigger an alarm. For pre-prod we don't have that, because it's not a critical environment; for production, we will have that. And on top of it, we also have, let me check, the garbage collection service level indicators, which are basically an aggregate to alert us if something goes wrong. So let me... oh, it's over here.
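A threshold alert of the kind described could look roughly like this as a Prometheus alerting rule. The alert name, metric name, threshold, and labels below are illustrative placeholders, not the registry's actual configuration:

```yaml
groups:
  - name: registry-online-gc
    rules:
      - alert: GCReviewQueueTooLarge          # hypothetical alert name
        # registry_gc_queue_size is a placeholder metric name
        expr: registry_gc_queue_size > 10000
        for: 15m
        labels:
          severity: warning
        annotations:
          summary: Online GC review queue is above the tolerated threshold
```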
E
Thanks. So I have the next one, showing off the CI pipeline for packages. The idea is the following: I have three private projects, and each one of them has a pipeline for the repository. So when there is a change, there is an npm package that is built, and this package is pushed to the package registry, which is public. Sorry, do you see my screen? Do you need me to increase the font size?
E
So, as I said, three private projects, each one building an npm package, and this npm package will be pushed to the public package registry, which is simply an empty public project. And we can check here in the package registry that we have our three packages, and on this screen I added the CI status for the pipeline for the package.

I wanted to show different aspects of package validation. One of them is license compliance: is the license file the one we expect? The other one is linting the package.json file, which contains metadata about the package, to make sure that some fields are present there. The third one is npm audit.
E
This npm package is trying to read some information, the value of an environment variable on the machine where it is installed, and send it to a third-party server. So that's exactly what happened with the package dependencies security incident we heard about some weeks ago.

So we can say that we expect the MIT license. It seems that's not what we have; it is not. So I'm going to fetch the MIT license. Okay, that's one job. Next one.
E
Because we are working through this together, the next one should be npm audit. So we can check the report here: this package has a dependency on lodash, and the version that is used has a high-severity security vulnerability. So we will just update the dependency. Here it is.
E
So we can see the results of this analysis, and in all of these findings we can see this rule: there is a connection to a third-party server or address, and that is super fishy, and it's triggered by this thing. So, postinstall is a script that is executed when the package has finished its installation, so we simply curl and send the value of the PATH environment variable to a third-party server.
E
So we will just remove that, because that's not what we want, and I'm going to commit those changes. Before that, we need to bump the version number, because the npm package registry doesn't allow duplicate uploads, meaning that I can't upload again for version 12; we need to upload a new version. I'm going to commit to the main branch, that's just a shortcut. Yeah, here it is, the new version we created, and now we have a green pipeline.
E
A small thing to note is that we have the source code of the package and the package registry in the same location, or same group, but they could very well be in different locations, meaning that we could host the code for the package elsewhere, build it elsewhere, and just have something pushing the package to the package registry. It will not change anything for the package pipeline, because it will just get triggered when the package is uploaded, and we will just run the jobs as usual. But I wanted to highlight that there are many different architectures that are compatible with those package pipelines. Yeah, that's about it for the demo, and I think one thing I didn't show in my previous video is that you can open the package.
E
Well, it's not straightforward here for npm because, as I said, we don't allow duplicates to be uploaded, but for Maven packages, for example, you could have many, many package files, and each package file is a separate upload, and thus we have a dedicated CI pipeline for each package file.

Another thing to see is that if you look at all the things that were brought for free, like this small icon: this is a Vue component. I just pulled the Vue component and plugged it into the screen, and it worked without any modification; I just passed it the pipeline and it worked. The same for this screen: I didn't change anything on the screen, it's as it is on dot com, and you can see your pipelines working.
E
There are a few things that are not working, because a pipeline usually refers to a branch, but a package pipeline doesn't have this concept of a branch. So there are some things that don't work, but still, out of the box it's working well. And the same for the pipeline details: we have all the jobs, and I didn't do anything here; it's the same code as the one running on dot com. Same here.
A
This is awesome. Where's the pipeline defined?
E
I used a single file for all the packages, but it will also be used for the repository pipelines, so we will need a setting here for this. This is the configuration file for the repository pipelines, and we will need something similar for the package pipeline. Currently, it is in the packages project, in a specific folder. So here it is.
E
Basically, yeah, it's on the upload endpoint on the API: when it receives and successfully creates a package, right after that it will create a pipeline. Then, once the pipeline is created, it's picked up by a background worker that will prepare it for the runner; it will basically read this file, create all the jobs, and make them available for runners, and then a runner will pick them up like any other job.
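The flow just described can be sketched roughly like this. All names here are hypothetical and purely illustrative, not GitLab's actual internals:

```python
# Illustrative sketch of the described flow: a package upload creates a
# pipeline, a background worker turns the pipeline configuration into
# jobs, and runners pick those jobs up like any other job.

job_queue = []   # jobs made available for runners
pipelines = []   # created pipelines awaiting preparation


def upload_package(name, version, config_jobs):
    """Upload endpoint: on successful package creation, create a pipeline."""
    package = {"name": name, "version": version}
    pipelines.append({"package": package, "jobs": list(config_jobs)})
    return package


def background_worker():
    """Reads each pipeline's configuration and creates runnable jobs."""
    while pipelines:
        pipeline = pipelines.pop(0)
        for job_name in pipeline["jobs"]:
            job_queue.append({"job": job_name, "package": pipeline["package"]})


def runner_pick_up():
    """A runner picks up the next available job, like any other job."""
    return job_queue.pop(0) if job_queue else None
```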
A
And one more rapid question: you mentioned that the project will not allow downloading a package that's in a failed pipeline state. Is that a setting?
E
No, currently it's hardcoded, but we definitely need a setting for that. Basically, in the endpoint for the download package action for npm, I just hardcoded that if the package doesn't have a green pipeline, it will not be available. But yeah, we need a setting for it.

We do have some events, but I think I read that those are not reliable, so you could miss a pull or a push action.
A
Thanks, David, that was awesome. I think I have the next item, if there are no other questions. Steve, I saw you changed your background.

Okay, so I don't have a lot of details, and I'm sure details would be more helpful for this conversation, but I was on a call with a large enterprise customer who is not particularly interested in moving to the package registry. We kind of were just talking about it, and they were like, well, what kind of scale do you handle? I mentioned that we support, you know, hundreds of millions of events, and they mentioned that they have billions of events, and it just got me wondering how we can help some of these larger customers as they start to think about moving to the package registry. How can we make them feel comfortable that we can, one, handle their scale, and two, how can we make ourselves feel comfortable that it won't be a problem? So yeah, that's kind of the question I'm asking: not necessarily can we handle billions of downloads per day or something like that, but more so, how can we make customers feel like, okay, you're moving terabytes of data, or a petabyte of data, to the registry, and can we handle that, yes or no? And then, Jerome, I saw you added some helpful comments there, if you want to verbalize those.
B
What percentage would that represent of our current request rate? So if we are handling 1000 requests per day and the customer is expecting to do 100 of them, that's a 10% increase; probably that's nothing to worry about. But if they want to go much higher, say 20% or 30% of what we currently handle for every customer that we have, then in that case it would likely be a problem.
A
I mentioned we would know in advance, so it would be something we could prep for. And I saw that you shared that we currently handle 340 million reads and 9 million writes per week for the container registry. I could generate that same data using the usage ping that we have for the package registry and just say: this is what we currently support, and basically anything over 10% of our total, anything more than 10 percent, I should bring back to the team and say, okay, let's check whether there are any concerns about handling this.
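As a rough sketch, that check could be expressed like this, using the container-registry read volume mentioned above; the 10% threshold is the placeholder value from the discussion, not an agreed policy:

```python
# Weekly request volumes mentioned for the container registry.
CURRENT_READS_PER_WEEK = 340_000_000
CURRENT_WRITES_PER_WEEK = 9_000_000


def load_share(projected_per_week, current_per_week):
    """Projected customer traffic as a fraction of current total traffic."""
    return projected_per_week / current_per_week


def needs_review(projected_per_week, current_per_week, threshold=0.10):
    """Flag the request for an infra/development discussion when the
    projected traffic exceeds the (placeholder) 10% threshold."""
    return load_share(projected_per_week, current_per_week) > threshold
```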
B
Yeah, that 10%, I just came up with it; 10% is probably already pretty significant. But yeah, we should determine what the percentage is that should make it a discussion between infra and development before telling the customer, yeah, that won't be a problem, we can handle whatever you want to do on our platform. But yeah, that would be the idea.
A
On this specific customer, like I said, they're pretty happy with their current solution; it might be something for the future, as they were thinking about moving to GitLab. So if I have another call with them, I could dig in. But we have a couple of other large enterprise customers; I have a call this week, tomorrow, with one, and they're talking about moving terabytes to the package registry, so this will be a good opportunity to test this workflow.

Okay, I'm sure we had another item, but we are about out of time, so we might have to hold off on that one. Okay, any other comments before we break?

No? In that case, thanks everyone for your help with the demos and everything. It was great, and I will talk to you later. Bye.