From YouTube: Package ThinkBIG: May 2021
A: I have the first one, actually. In our recent quad planning meeting we were discussing what data would be most useful in understanding product usage and customer growth, and I opened this issue to discuss it. I was hoping that if anyone had ideas they wanted to talk about here, we could bring them up. I could also bring up the issue and go through some of the things I've documented there to kick off the brainstorming. But Nico, you have the next comment.
B: Yeah, so this is a raw comment. I was mulling on this idea when we discussed it during the quad planning meeting. Right now we have usage data and error data in two different worlds. It would be cool if we could pull them together in a way that lets us understand where the user flow stopped, right? So we could, for example, identify—
B: —I don't know, I'm inventing this—a Maven package that is missing an important file, and then we could process that. This way we could prevent those cases with better documentation and improve the experience for users in general. I think that putting together usage data and error data would be a cool thing to do.
D: I'm not sure if this really applies to this idea here, because we want to understand product usage, but one of the risks we identified back when we were doing that exercise was that sometimes we don't know how users switch to other solutions, and sometimes we lose a bit of the feedback there.
D: I think in the usage funnel we have retention and referral as stages, so I'm just wondering if there is a way to also understand how users are dropping off our product.
E: One other thing that I think is useful with these charts, at least as I see it, is trying to make them as actionable as possible. For example, I looked at the error budgets and I was like, okay, how do I work this out?
E: Do I know how to isolate what the concern is around error budgets? How do I dig into that? So that would be my thought process: adding that data is cool, but how do I dig in to identify something that I can actually do with it? I'm sure that's already part of the plan, but it's just top of mind with error budgets. As I mentioned, yeah, it's a good—
A: —high-level call-out. You know, it should be actionable.
D: Just one more thing to add: in the handbook we have a link to the package metrics, which is basically a Google spreadsheet—I'm going to link it here—and it seems that we have some tracking added but not added to the dashboard. So would that also be part of this effort, or is it just about adding the metrics?
A: One item that I like is—where'd it go—the number of projects that have pipelines but are not pushing to the registry. That could be helpful too, because those could be customers that we could reach out to and notify: hey, check out the package or container registry for your use case.
A: Oops, that looks like it has two duplicate topics—the next one too. Okay, I'm going to delete that one. All right, so the next topic I was hoping we could discuss, speaking of error budgets, is that maybe we could walk through those as a team. Actually, I was asking Juan, if he's here, whether he would be willing to walk us through the Grafana dashboard that he created for the container registry.
C: So those are the service level indicators, and at GitLab we mainly use two, which are Apdex and error rates. The Apdex gives you an idea of how fast your requests are being served, and the error rate gives you an idea of how many errors your API—
C: —your service is returning; basically the percentage of errors it returns when serving all requests. So those are the SLIs, and each SLI has SLOs, the service level objectives, which tell us where we should be for the Apdex and the error rate. Basically, that defines the objectives our service should meet in order for us to not spend more budget than what we have. And right now—
C: I think all services have the same SLA, which is 99.95—that's around up to 20 minutes per month of downtime. So we can see here that we are already in the negative; we went above that because we spent 40 minutes in total in the period I have here. Let's look at the last 30 days—yeah, we are also in the negative, minus 18 minutes, so we have spent almost 40 minutes of downtime on the aggregated package API routes.
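As a rough sketch of the arithmetic behind those numbers (assuming a 30-day month, which may not be the exact period the dashboard uses):

```python
def downtime_budget_minutes(slo_pct: float, period_days: int = 30) -> float:
    """Minutes of allowed downtime per period under an availability SLO."""
    total_minutes = period_days * 24 * 60
    return total_minutes * (1 - slo_pct / 100)

# A 99.95% objective over 30 days allows roughly 21.6 minutes of downtime,
# consistent with the "up to 20 minutes per month" figure mentioned above;
# spending ~40 minutes therefore puts the budget well into the negative.
budget = downtime_budget_minutes(99.95)
```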
C: These basically give you an overview, but they don't give you that much detail. I'm not sure where the specific metrics for the package registries are, but if it was for the container registry specifically, I would go to the registry overview dashboard, and then on the registry overview dashboard—
C: —you have the aggregated Apdex and error rate, which is not loading now, probably too much data. Let me just narrow down the scope.
C: Seven days, maybe. Here you can see the day where your SLI dropped below the tolerated threshold, and with that you could look at each one of the API routes and the metrics for those routes, and identify which ones add too much latency or return too many errors. That's how you identify what causes the drop in your SLI, which in turn causes a drop in the SLA and therefore causes us to spend the error budget.
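The two SLIs described above can be sketched in a few lines of Python. The 0.5 s "satisfied" and 2 s "tolerated" thresholds here are illustrative assumptions, not GitLab's actual settings:

```python
def apdex(latencies, satisfied=0.5, tolerated=2.0):
    """Apdex score = (satisfied + tolerating / 2) / total requests."""
    sat = sum(1 for t in latencies if t <= satisfied)
    tol = sum(1 for t in latencies if satisfied < t <= tolerated)
    return (sat + tol / 2) / len(latencies)

def error_rate(statuses):
    """Fraction of requests answered with a server error (5xx)."""
    return sum(1 for s in statuses if s >= 500) / len(statuses)
```

A low Apdex or a high error rate is what drags the SLI below its objective and eats into the error budget.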
C: So yeah, I'm not completely sure how to dig into that for the package registries, but it must be somewhere. So that's something to follow—
E: —up on, yeah. Just a quick introduction: I linked an issue in the agenda. I asked about that because what I was seeing when I looked at this dashboard the last time—which was the other day, in terms of evaluating it—was that the error rates were empty. Sorry, I've got these pulled up in another window, so I'm just grabbing that now.
E: But if you have a look at the general SLAs—the one I just linked to in the agenda—the general SLA shows the container registry at 100. So then, determining, okay, this is specifically inside our Rails environment, how do we dig into that based on the error budget—
E: —dashboard was the question I was asking, and that's kind of where I went. Then the issue I shared shows a really extensive explanation from the team there, which is awesome, so just something to think about.
E: It ends up being that we're actually seeing a bunch of those errors coming out of Puma—or Puma, depending on how you pronounce it. So that's kind of where we're at, because when I said "actionable" before, when we were talking about our own metrics, this is what I was getting at: how do I, or how does someone on the team, come in and look at these dashboards and go, okay, what's happening?
E: How do I help improve our error budget, or improve our actual Apdex or error rates?
E: So yeah, take a look at the issue I shared; there's a really awesome walkthrough there on how to dig into it a bit more. I want to streamline that a bit, of course. But yeah, is that kind of helpful?
A: Do you want to answer that? I think you have a good answer for that.
C: So here, if you look at it, these are all the services we have. There is web, which includes the front end; there is Gitaly, which handles the Git endpoints; and then you have the GitLab API, which I'm sure is where the package endpoints and the dependency proxy are; the CI runners; the container registry; and GitLab Pages. So I'm pretty sure everything is grouped inside this one,
C: which has much more spent budget than what we have. On the package stage we have 18 minutes, I think, but the total budget spent for the GitLab API is over one hour, so when you group all of the stages it gives this.
C: Oops, if it loads—you go from the GitLab API here to this, which is just for the package stage, and then from this you go to the individual API routes to identify the problem.
C: I think the GitLab API is measured as a single service because, at least for users, when they interact with the GitLab API it's the GitLab API regardless of the endpoint they use. It's a single thing for them, so maybe that's why they decided to go with that approach, but I'm not sure. Okay, yeah, I think I—
E: —think my recommendation here would be to look at this 99.95 generalized SLO, service level objective.
E: I think we could start with that and evaluate whether it's actually satisfactory depending on our customer needs. As I think we all know, we have some enterprise customers operating in the cloud, and from that perspective this relates to gitlab.com, so we should maybe think about what an SLO would be for those customers and whether they're satisfied with it. I think there is a process documented somewhere on how we should determine those, but that might—
B: A quick question: do we spend error budget even when the request just takes too long, right?
C: Yeah, so there are two main things that will influence that: the response latency and the response status. If it is an error, it will count toward the error-rate SLI; if it is the response latency, it will count toward the Apdex. So if a request takes longer than what we define as satisfactory, it will be discounted from the budget.
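A minimal sketch of that per-request accounting; the one-second "satisfied" threshold is a made-up example:

```python
def budget_impact(status: int, latency_s: float, satisfied_s: float = 1.0) -> str:
    """Decide which SLI a single request counts against, per the rule above:
    error responses hit the error-rate SLI; slow successes hit the Apdex SLI."""
    if status >= 500:
        return "error-rate"
    if latency_s > satisfied_s:
        return "apdex"
    return "ok"
```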
B: So since we access the API of the container registry through the Rails API, and we get a lot of timeouts from those, especially on some container repositories, those results are going to be—I'm saying—polluted, because there isn't anything we can do on the Rails side about it right now. I think this is to be considered, because there may not be many action items there.
E: Nico, sorry, I didn't capture all of that in the notes, I apologize. You were saying that because we see timeouts in other parts of the main application that aren't directly in our code area, that would impact—
B: —so in our area, whenever Rails talks with the container registry, we have slow communication there, and we know why, and we get slow responses and timeouts there. But we don't have an action point on those; I mean, we are already doing what we need to do, and it's going to take longer.
C: Yeah, we had a single SLI for the whole API, so all operations were on a single SLI, and about a month ago we started working on splitting those per endpoint, so that each endpoint has a specific SLI. Before, we had one tolerable threshold—is this request served under 10 seconds—but we have some endpoints that should be served below one second, for example, while some others are okay if they take five or ten seconds, because users are uploading a large amount of data.
C: So what we're doing now is splitting that unique SLI into several SLIs that are fine-tuned for each one of the endpoints, and we started with the manifest routes. If you look here, we have a set of SLIs: you have the service level indicator that is basically the aggregation I was talking about—this contains everything—and now we have a new one here, the server route manifest reads SLI, which is a specific SLI we defined for the manifest route.
C: When doing reads—the GET and HEAD methods—and we have a separate one for writes as well. So we not only separate by route, we also separate by operation: reading a manifest should finish more quickly than uploading a manifest, so the SLI should reflect that. We shouldn't have the same SLI for two operations that are expected to perform differently.
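The split described here could look roughly like this; the routes and threshold values are hypothetical examples, not the actual runbook settings:

```python
# Hypothetical "satisfied" latency thresholds, split by route and operation,
# so that slow-by-nature writes don't share an SLI with fast reads.
THRESHOLDS_S = {
    ("manifest", "read"): 1.0,    # GET/HEAD: expected to be fast
    ("manifest", "write"): 10.0,  # uploads may legitimately take longer
}

def within_threshold(route: str, op: str, latency_s: float,
                     default_s: float = 10.0) -> bool:
    """True if the request met the per-route, per-operation latency target."""
    return latency_s <= THRESHOLDS_S.get((route, op), default_s)
```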
C: Yeah, we will have one in the GitLab API—I'm not sure—but we can do this without any code changes, because this is all in the runbooks project and that only looks at Prometheus. On that end we can aggregate metrics as we want without any code changes, so it should be relatively straightforward to add specific SLIs for the package endpoints if we don't already have them, but I don't see them anywhere.
D: Yeah, I was thinking of probably integrating what we are measuring in production into our performance tests, so we get feedback that is as realistic as possible when we are making code changes, before they hit production. But it's still really in the early stages in terms of performance test development. I hope we can get this more developed, and by the end of the quarter we'd have something decent.
F: Yeah, so there have been a few points of discussion in various issues that have come up around tokens and permissions, and how there's just not necessarily enough granularity—or the type of granularity that certain users want—around the package registry. So there have been a few ideas floated around.
F: I opened an issue to discuss the idea of creating a new token specific to the package registry that would allow for much more customizable permissions—the idea being that right now, with all of the existing tokens, the permissions are sort of tied to the project.
F: So I've opened that issue to discuss the idea of a unique package token, and then I wasn't aware until more recently that there is an epic for updating the existing tokens—I believe to follow a slightly different pattern and make use of JWTs, sort of in a similar way to what the container registry does, where we grant permissions on different resources.
G: I would love to jump in here. I know from the UX perspective, especially on the leadership side of things, when managing package registries, that permissions unique to the registry, compared to other types of access, are different, so I can see there's a lot of value. We're going to be talking to some users here shortly, especially at that higher level, about how they want to manage their registries and access to the registry inside their organizations.
G: From a technical standpoint, I would love to know what you would need to learn from users to differentiate the advantage or disadvantage one way or the other. What kind of questions would you want to ask the CTO of some large company—whether they need something that would require us to create specific tokens, or whether we just need to adjust the tokens we have?
A: The issue is that a lot of folks don't like using personal access tokens, because they think that's an anti-pattern when you're considering a large organization—they don't want to use their personal access tokens. And we don't allow deploy token access to the GitLab API, or we'd like to avoid doing that; it'd be great to avoid doing that.
A: So that's one of the frustrations. And then right now, if you are using a personal access token, you have to grant access to the API in order to grant access to publish and read packages, and that's really troublesome for people as well. So maybe the idea of item two that Steve recommended as part of that epic is to create a new scope for the personal access token—read package registry, write package registry—or we can create a new token that just does that.
F: I think one question for users would be: when working with the package registry, do you want your token to have access to anything else? Is it common that you want a token with access to multiple parts of the application, or is the package registry sort of unique and you want it fully isolated?
F: Things like that would be useful to know.
G: I will admit that in a lot of the conversations I had with users about the dependency proxy and the virtual registry idea, what came up is that they could manage the permissions of that and then distribute them as they needed. So that could be another place we handle it, but I'm not sure if tokens are the way to do that or if there's a new way to handle it—not my area of expertise.
F: When you say they talk about the virtual registry and that it's interesting for being able to manage the tokens, do you mean that they would have a token to access a specific virtual registry, and that's all it's allowed to do, kind of?
G: I know that also popped up a lot for the virtual registries in the conversation about distributed organizations. We were talking to some customers that were researching artificial intelligence—which is really cool—where somebody on the other side of the world, on a team working on something, needs access to packages in one format, versus an organization in Europe that is totally separate.
G: I'm not sure if it's specifically the virtual registry aspect, but from what I've heard from users as they talk about it, they want a place for packages to be shared across the organization, and putting them in their own project somewhere that people can access is the most straightforward solution we have right now. The virtual registry would give us an endpoint to pull all those individual projects together and create the access point, but I feel like that one place to go is really what those large organizations are looking for.
A: That's right. Tim, I would say you have a question? Oh, we were talking about what questions we would want to ask. One thing I've heard from admins is that they have a need to regularly rotate tokens on behalf of users, so if we do create a new package token, I think we should understand whether there are requirements around that that we need to consider.
E: Yeah, I think this is an area of exploration, taking a look at what's required—definitely considerations when we start talking about security concerns and, you know, that class of errors I always forget the name for: the supply chain type things, where we need to invalidate a bunch of stuff. That doesn't answer the question, but in my head it's all grouped into: how do we make sure that the package registry is secure, and how do we support our customers in making sure their own processes are secure?
E: So I definitely think we should at least have a line item, and maybe chat with customers and see how they'd like that to work, because there's a lot of overhead that comes with it as well. For a given customer managing their own space, you want to be able to turn it off and on—or maybe I don't want it, maybe it's fine.
G: Yes, speaking of research and talking to people, we are exploring with our higher-level users what their needs are, specifically when it comes to the package registry. We've done a good job at the project level with small groups, and the group level is pretty smooth, but we want to know what those DevOps engineers, platform-level engineers, and system admins at the highest level need and expect from the package registry.
G: Right now we're at a little bit of an isolated level. If you can think of opportunities we should explore with users—areas of the product you think would be useful—I would love to get your thoughts as technical experts on the subject.
B: Okay, I think I'll comment on what I wrote: should we consider making the registries an out-of-the-box feature, meaning that if we detect there is something that can be published in the repo, we straight up publish it in our package registry and say: hey, your pipeline ran, here is your—
G: —package. Yeah, I've been working with Maria on the Configure team, which I think is the organization in charge of Auto DevOps, and they're making some changes which are really exciting. One of the first things I want to explore is: should the Auto DevOps turn-it-on switch just include the package registry and publishing to it as a default, and what would that look like? So that is a great area to look at.
G: I don't have any results yet, but I do have an intermediary result, I guess. I created a pipeline out of a package and it was accepted by the GitLab instance, so it's visible on the pipelines page of the project. The idea is that when you push a package, a pipeline is started, and if that pipeline fails, the package will not be available in the package registry. And so then, on the pipeline—
B: Yeah, yes, the reason I was saying that is because it will provide insight to the user into what is happening. If we could have mandatory jobs, and they see, oh, the extraction worker failed because I sent a password-protected zip file—I don't know, whatever—we would get a lot of features for free, basically, because they're already there.
E: You can see exactly what's happening every step of the way, and then if it fails extraction for whatever reason—even if that was some type of handoff to a background process and a response, which I'm not saying is what I'd do, by the way—I can see the value in the visibility of that: going, oh, my package executed; where did it get to; a license check failed; let me go check that instead.
E: And you and I talked about this before, David—I think it's a really cool idea. I really like the integration idea: being able to plug in existing pipeline jobs from GitLab on package push, or on that type of event, and then we could run scanning tools and integrate it with whatever we wanted to.
I: Yeah, actually, when we discussed this, your face lit up. I think it's a really simple idea that users can get; it's not something hard to explain. It's the same thing as having CI pipelines for source code—well, now we have CI pipelines for packages—and this opens up a lot of possibilities.
E: Not only that—I'll stop going on about it because I know there are other questions—but it's also somewhat extensible. You could manage the pipeline in a way that we can't right now, and that presents users, and everyone, an opportunity to extend those pipelines to do things we haven't even thought about, which I think is really awesome. That's why I was so excited about the idea.
I: Yeah, there is also this idea of cascading pipelines that could be useful. You push code to master, then you have a CI pipeline building a package and pushing that package back to the package registry, and this one kicks off another pipeline that checks the package; then, if everything is all right, the package becomes available. So you are cascading everything and everything is automated, kind of.
I: I'm not sure I understand the question. The second pipeline is one that runs on the package.
B: It just came to mind—this doesn't matter for .com, but what if we have self-managed customers that use packages but don't have runners running, for example, on that pipeline? I don't think this is a possible configuration, but maybe it's an edge case that needs to be taken into account.
I: So the runner will need to be changed, because it will need to pull a package from the pipeline and not a git commit. That's something I'm working through.
I: Yeah, my idea is to explore quickly what is possible or not, and depending on the results we will need to discuss with the runner team, but also with the team behind pipelines, because basically a pipeline needs a reference to something. When you push code, it's the commit SHA in git, but when you push a package, we don't have anything. So I had to create something like a push event—that could even help us build a historical view of pushes—and from this object I created a—
I: —dummy SHA, a checksum, and basically I made the pipeline believe that this is a git commit, but it's actually a package push. I'm not sure if that's the best way to handle it, and yeah, basically we need changes on the pipelines and changes on the runner, so we will need to discuss with both teams.
G: You could raise your concerns, we can have that big discussion as a team, and then figure out how to break it down. What are your thoughts?
A: Okay, I think we are at time. Any other questions or comments before we break? No? I'm going to stop this recording. Thank you to everyone watching at home.