A
Sure, okay, we're recording now. So yeah, let's take the time to go through the use cases, list them out, and hopefully break them into issues after the call, around what specifically we're trying to tackle with this tooling, before we go ahead and start prototyping some tools. To start, we've had some issues already; one of them, issue 84, was about searching for a module's dependents.
CITGM could be used for this, and in the past I've seen people create their own custom lookups to solve this problem. But I know from the release side we've had some challenges with CITGM, namely with interpreting results, etc. So does it make sense for the team to try and prototype a more lightweight CITGM to solve this problem, or not?
B
How would you use CITGM for this? I understand the CITGM part where you say: okay, I'm going to run the tests from these modules using a current version of Node. How does that extend to saying, okay, now I'm going to use a new version of my package, particularly since those packages might have a specific version specified in their package.json files?
C
CITGM runs one version of Node against usually a single version of a hard-coded list of packages. What would be really ideal for me is something that's basically like Greenkeeper, but in the reverse direction: something that can dynamically figure out who all of my dependents on npm are, then sort that list by some metric, popularity or download count or something like that, and then automatically do what Greenkeeper does and figure out some way to run that project's CI automatically.
C
With CITGM, the reason it doesn't just run on every Node PR is that it's resource intensive. That's also why Greenkeeper doesn't run its own CI and why, instead, you have to authorize your repo: they push a branch to your repo, which uses your CI and whatever throttle you have. Travis gives you only five free builds simultaneously at a time, and GitHub Actions, if it doesn't have a throttle now, I'm sure will end up with one.
C
So Greenkeeper basically leverages that by reusing your project's existing CI. What I'm kind of envisioning is something where, in the ideal case, a project signs up to have itself automatically tested when its dependencies are about to update, as opposed to after they've already published. I don't know if that makes sense.
C
Think in terms of scalability: I have two hundred plus modules, and I can't possibly enumerate the important dependents of even a significant chunk of those. But it'd be really great to know, before I publish a release, whether it's about to break people. What I currently do is publish, and then I search GitHub for issue titles that match Greenkeeper's convention, which tell me if any Greenkeeper-enabled dependents are broken because of my update.
D
Parts of that tool would be valuable for me on the project I'm involved in in my daily job, which means there's some overlap, which means it would be easier for me to get time and budget to work on it. As for the tool itself: I described an MVP, a very, very basic thing. To make this work, there are several parts; you need two big chunks. One is to resolve the dependents.
D
The
other
one
is
to
actually
run
the
tests
resolving
the
dependents
in
a
way.
This
is
the
easy
part
because
it
could
just
take
the
sim
JSON
format.
That's
already
there,
as
in
you,
have
a
hard-coded
list
and
then
have
some
tooling,
which
automatically
updates
that
list
with
an
approval,
because
it's
not
like
you're
going
to
want
to
test
every
single
library
that
everybody
forked
that
I
mean
you
may
want
to
do
that.
Knowing
knowing
somebody
who
runs
node
0.10
and
doesn't
do
breaking
changes,
they
may
want
to.
C
That's kind of what I was thinking with sorting by metrics. This is the sort of thing npm probably already has an API for, for dependents, and if not for dev dependents, then that's part of something we've already been asking them for. So you'd apply some filters and say: I want this number of my dependents, sorted by this metric, and then we use the npm API to just spit out the list.
D
It's not necessarily even an npm API for that, because I would find that a bit hard to get into the enterprise context, with npm Enterprise and all the other alternative registries. However, GitHub itself can query across a lot of things: you can say filename package.json and query for certain strings, and you can probably use BigQuery or something like that to retrieve the actual dependents from GitHub.
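The GitHub route D describes could start with code search. A minimal sketch of building such a query; actually calling the search endpoint requires an authenticated request, so only the query construction is shown:

```javascript
// Sketch of finding dependents via GitHub code search: build a query for
// package.json files that mention a given package name.
function dependentSearchQuery(pkg) {
  // Quoting the name narrows matches to the literal string in package.json.
  return `"${pkg}" filename:package.json`;
}

// The corresponding REST endpoint is GET /search/code on api.github.com.
function searchUrl(pkg) {
  return 'https://api.github.com/search/code?q=' +
    encodeURIComponent(dependentSearchQuery(pkg));
}

console.log(dependentSearchQuery('left-pad')); // "left-pad" filename:package.json
console.log(searchUrl('left-pad'));
```

Results would still need filtering, since a string match in package.json doesn't distinguish dependencies from devDependencies or forks.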
D
And for forks, you could sort them by stars. But resolving the dependents is, I think, one workstream in terms of automating this. The very initial version, the MVP, does not need automatic resolution of the dependents. You can start off the way CITGM works: have a hard-coded JSON which says these are the dependents that I want to test with. Then comes the second part, which is running the actual tests, and here is my proposal for that.
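The hard-coded list described here could look something like CITGM's lookup file. A minimal sketch; the file name `dependents.json` and its fields are illustrative assumptions, not an existing CITGM format:

```javascript
// Minimal sketch of a hard-coded dependents list and a loader for it.
// The shape below is hypothetical, loosely modeled on CITGM's lookup.json.

// Example contents of a hypothetical dependents.json:
const example = {
  "left-pad": { "repo": "https://github.com/left-pad/left-pad", "skip": false },
  "my-plugin": { "repo": "https://github.com/example/my-plugin", "skip": true }
};

// Return the names of dependents that should be tested (skip !== true).
function dependentsToTest(lookup) {
  return Object.keys(lookup).filter((name) => !lookup[name].skip);
}

// Loading from disk would look like:
// const lookup = JSON.parse(require('fs').readFileSync('dependents.json', 'utf8'));
console.log(dependentsToTest(example)); // [ 'left-pad' ]
```

Tooling that "updates the list with an approval" would then just be a PR bot editing this file.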
D
If you own the dependents, then you can set up a thing which would just open up a pull request and insert a git-based dependency into package.json, pointing at your branch. While you have your pull request open and you have your branch, you can insert a git-based dependency instead of the semver specifier, and then it will just download and install that. There are some gotchas there.
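The package.json rewrite D describes, swapping a semver specifier for a git-based one, is mechanically simple. A minimal sketch; the package and branch names are placeholder examples:

```javascript
// Replace a semver specifier with a git-based one pointing at a branch,
// so the dependent's CI installs the unreleased code. Names are examples.
function useGitDependency(pkgJson, depName, gitUrl, branch) {
  const updated = JSON.parse(JSON.stringify(pkgJson)); // cheap deep copy
  updated.dependencies[depName] = `${gitUrl}#${branch}`;
  return updated;
}

const pkg = { name: 'some-dependent', dependencies: { 'my-lib': '^2.1.0' } };
const patched = useGitDependency(
  pkg, 'my-lib', 'github:me/my-lib', 'fix-stream-handling'
);
console.log(patched.dependencies['my-lib']); // github:me/my-lib#fix-stream-handling
```

The gotchas mentioned (compiled addons, prepare scripts) come in when npm has to build the git dependency from source rather than download a published tarball.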
D
If
you
have
some
compiled
stuff,
but
that
all
can
be
solved
right
and
then
you
open
up
a
pull
request
in
the
dependent
saying
that
you
know
if
I
were
to
run
with
this
branch,
that's
still
in
get
what
would
the
result
be?
The
major
benefit
of
that
is
that,
because
you're
opening
up
a
pull
request,
you're
running
in
the
exact
same
Travis
configuration
or
get
up
action
configuration
that
would
that
that
a
normal
master
or
pull
requests
would
run
right,
so
you're
getting
the
exact
same
configuration.
D
Just opening up all these pull requests and running these things, using the git-based dependency in the real CI that the dependent is using, gives the major benefit: you can get the result, you can inspect it, and you can build further automation on top of that. So this is the MVP that I'm thinking about.
C
Anything that requires secrets will just not work, but currently anything that requires secrets won't work on any pull request that comes from a fork anyway. So on any open source project where you expect pull requests from people who don't have write access on the repo, you already can't rely on secrets. Theoretically, it should be fine for the open source case.
D
But I think these are more complex cases which can be covered later, because if your tests depend on using BrowserStack or that kind of thing with an API key, then there is no simple approach. But usually there would then be a command that somebody developing the project can run locally, and then maybe it can be worked around in some way. I think that's a bit further down the road, though.
B
It's really a question of how open people would be to having the PRs. If there was some way to trigger the testing in the original project without a PR, would people be worried about that? I can see the PRs might kind of clutter things up, right? We'd have a bunch of PRs that are opened and then, yeah.
C
Every CI system, including GitHub Actions, lets you trigger on branches and/or pull requests. It's just a question of whether they have branch builds enabled, and enabled for non-master branches. But on top of that, like you said, if you don't have write access to the repo, you can't push a branch, which is why the way Greenkeeper works is that you have to give it write access to your repo, so it can push a branch and get the builds that way.
D
And it could eventually be extended to be an application which does do that, what Greenkeeper does. But this is, I think, a large amount of work, and unless there's funding and budgets and all of that, I don't see myself being involved, because there's only so much time that can be devoted here.
D
Sure, absolutely. That's why I'm thinking about a set of libraries, and starting off not even as a GitHub application or anything, just a CLI, so you can compose them and split them off. That's why resolving the dependents and running the tests are two completely different things in my mind.
B
I think yes, starting with a list, and to Jordan's point, a separate piece of work that would get you to the point where it could automatically help you choose would be good. In terms of the app: what you were saying is that the logic that Greenkeeper has, to be able to run the tests without a PR, you figure that's a fair amount of work to put together?
D
I think putting up an application that has everything in it is a lot: you have to create an application, you have to publish it, you have to get keys, you have to set up all the access and then use the APIs, whereas something as simple as a CLI could even leverage the latest GitHub CLI tooling. You're on a branch locally, you're opening up a pull request, and you run a command locally which opens up a bunch of pull requests and then collects the results of those requests.
C
In order to store the access keys we'd have to have a service, so I think you're right: set it up in such a way that we could add a service later, but we don't need one now, meaning it's a bunch of CLIs and you provide your own access keys when needed and so on. I think that's definitely the better first approach.
D
The tooling could eventually evolve into a GitHub Action, because at the end of the day, with a GitHub Action you can call this CLI and then it can happen automatically as you open a pull request, and it can post a comment. But yeah, that's why I'd suggest a CLI and a library as a starting point.
C
Yeah, I guess I just hope it goes further than that. The current CITGM approach is certainly an MVP in the sense that it would do something, but essentially all it would let me do is set up the five dependents that have complained in the past when I broke them, or something, and it wouldn't actually help me at a sufficient scale.
B
Yeah, and I guess starting with the fork is much more manageable. And yes, that could in the end be a configurable thing that says: for this package, you use this fork; for other packages, where people have opted in, you can actually do something against their repo itself.
A
Okay, so that kind of covers how a module author can check whether their changes are going to impact the modules that depend on them. Is there anything, as module authors, where you have concerns about the health of the tree above you, or does that tool actually solve both? So the dependent module gets to kind of subscribe or opt into this thing, and then they are protected, or they get the updates into that repo.
C
I think that is awesome. A way for me to opt in to having my dependencies' versions checked against me prior to being published would be amazing. I already get that on publish from Greenkeeper and Renovate and all the equivalent services, but being able to do it before publish is amazing. I was very stoked to add resolve to CITGM, because now it means I never have to scramble to fix resolve quickly; I've been notified in advance. So that's really valuable.
C
Then there was a question in chat about test cases. What I like about Greenkeeper's approach is that it just automatically works with whatever my repo already does for tests, because otherwise you hurt people: not every repo is going to be as simple as npm test. Some of them are going to have a build process; some are going to have stages, like running Babel on the latest Node and then using the Babel output on multiple Node versions, and stuff like that.
A
The original tool would also test the dependencies, so the health of the tree above: it would do an install and then run the test suites of each of the modules that are currently installed. It doesn't sound like that's the way we want to go forward. It sounds like we want to do it kind of the other way: the parent module is trying to work out the impact of its changes, rather than the reverse. So there was a PR about this.
C
Okay, actually yeah, I do like that a lot, for Node specifically, not as a general ecosystem thing, but in the sense that it makes sure that, if I have a module that Node cares enough about to pretest, then all the things I choose to depend on, not dynamically, but over time, also get tested against the Node versions I care about. And that would cover things that my tests don't cover within my dependencies. Is that accurate?
A
I guess it's just kind of a shortcut: rather than you having to explicitly list out the ones that you're interested in and add them to the CITGM lookup, you could just run this. Then again, you might want to explicitly add them to the lookup anyway, because maybe some modules in your tree you care about more than others.
B
Okay, yeah, I think what Jordan might have been saying is he'd be happy if Node did that testing: basically we'd validate not only that a version of Node isn't breaking the modules he created, but also any of the dependencies that he's depending on. It wouldn't make so much sense for the package maintainers to run it, because they're probably just running on one of the LTS versions of Node or one of the already shipped versions.
B
It's a starting point, and then there's the fork management he talks about: something that would basically take one of those, create or manage the fork, update the fork, run the tests (that's another small bit), and then something that pulls all of that together for a list of them.
C
I'm pretty confident npm does have an API for this, because if they don't offer something, people are going to scrape their website for it. But they don't show dev dependents separately on the page, and they're not bundled into the dependents either, which kind of sucks a lot if you're an ESLint plugin, for example.
C
Almost nobody directly depends on you, but you might have a hundred thousand dev dependents. That's something there's already an open RFC for npm to add, so I'll make sure at the next RFC call to ask about it, like whether dependents and dev dependents have an API. It doesn't seem like too difficult a thing for them to add.
A
So it sounds like, if someone wanted to try and create a small tool today, they could try to just get that list out and somehow filter the response, maybe by download counts, or potentially even allow people to configure how it's filtered. And then, even just having a list of "hey, these are the modules that depend on me that get a lot of downloads" might be interesting as a module on its own and a good first step.
C
In relation to the constraint mentioned earlier about enterprise stuff: I think enterprise uses not-GitHub or GitHub Enterprise just as often as it uses not-npm or npm Enterprise, and so there are going to be alternative npm servers and alternative git version control servers. So a tool that had any way to combine multiple graph sources, so that I didn't have to make one list from GitHub and one from npm and one from Artifactory and so on, would be really helpful.
D
Yeah, I'm not sure. GitHub itself seems to have a notion of dependencies: it's parsing package.json, it's giving you alerts and all of that, so it has some smarts around it, but I haven't seen any API for it. It also has a graph of forks, so it definitely can do a lot of things there. But if you're not using either GitHub or npm, then I'm afraid I'm not sure, because I'm assuming the GitHub and npm variants here, right.
B
I think, based on the approach of having smaller command-line tools, you can imagine one which uses npm to generate dependents and gives you some sort of prioritized list. You could imagine another separate one which uses GitHub. Basically, you use those as a maintainer to choose the ones you want to include in your manually configured file, for now. That doesn't preclude either one, or even other ones that people think of, other ways of automatically figuring out what you want to test.
F
Sorry, yes, dependents. So the idea is to have a static list you configure, right? Yeah, okay, I'm definitely on board with that. Netflix tried internally a thing that is similar to what it sounded like you were describing, with the API getting all dependents, and it ended up having major problems in our internal infrastructure.
F
Imagine
that
would
have
exponentially
worse
problems
in
you
know
open
source
land,
just
as
feedback
of
like
some
people
actually
tried
that
one
thing
we've
discussed
doing,
which
might
we
might
move
forward
a
little
bit
on
is
out
of
those
hundred.
You
know
dependence
I
mean
I
express
is
much
worse,
but
you
know
smaller
packages.
F
That's a good point. One thing I've been thinking about: I was doing something like this with a GitHub Action, and they have the storage API now, which we could probably use here if we built this on top of GitHub Actions. As long as you continually update the data set, it can store it; I think it stores it for like 90 days.
F
So the way we were talking about getting around some of that is randomly picking five, or some number that you know is within your limit of being able to test in a reasonable fashion, and picking a random five every time you run the tests, so that you can build up a history of which of those tests are flaky. If we know that a particular library fails, we might want to flag it for manual review and not use it as an automated blocker, as opposed to just "oh, those tests fail".
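The random-five idea above is just sampling without replacement. A minimal sketch; the dependent names are placeholders:

```javascript
// Pick a random subset of dependents each run, so flaky suites get
// averaged out over many runs instead of blocking every release.
function sampleDependents(dependents, n, rng = Math.random) {
  const pool = dependents.slice();
  const picked = [];
  while (picked.length < n && pool.length > 0) {
    // Remove a random element from the pool so nothing is picked twice.
    const i = Math.floor(rng() * pool.length);
    picked.push(pool.splice(i, 1)[0]);
  }
  return picked;
}

const all = ['dep-a', 'dep-b', 'dep-c', 'dep-d', 'dep-e', 'dep-f', 'dep-g'];
console.log(sampleDependents(all, 5).length); // 5
```

Persisting which names were picked and whether they passed (for example in the Actions storage mentioned above) is what lets the flakiness history accumulate.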
B
I
think
we,
you
know,
we
were
thinking
very
simple.
Starting
point
would
be
like
you
know
you
like
today.
Even
just
you
know,
you
want
to
figure
out,
you
look
at
the
Express
list,
thirty-nine
thousand,
how
would
you
decide
unless
you
already
have
a
good
idea,
which
are
the
most
important
ones
to
test,
so
the
simplest
would
be
like
you
know,
a
tool
that
says
get
my
dependents.
B
Maybe
just
look
at
the
download
counts
and
pick
the
top
ten
or
something,
and
then
that
would
be
information
for
the
package
maintainer
to
decide
how
to
you
know
which
ones
to
put
into
their
hard
coded
lists
in
terms
of
testing
the
picking
or
you
know,
a
random
said.
Every
time
sounds
like
a
great
way
to
then
add
to
that,
but
wouldn't
necessarily
you
know,
be
needed
to
start
off
in
the
beginning,
yeah
that
makes
sense.
I.
B
Yeah, I guess even that part is maybe something the process or solution needs to address: for the ones you're choosing to include, you need something to figure out whether they're stable enough to include or not, because that can chew up a huge amount of time.
D
So the old GitHub CLI, hub, had CI status: the yellow dot or the green tick mark that you have next to each commit. What I was saying is that if you're running a test against a dependent, then you probably need to pay attention to whether your starting point is at that check mark, whether the current master that you're modifying is green or not. And flakiness, yes, that's a bit more of a problem, because you might not know.
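Checking whether the dependent's master is currently green can use GitHub's combined-status endpoint. A sketch; the owner and repo names are placeholder examples, and real calls would need auth headers for private repos, so only the decision helper is exercised here:

```javascript
// Decide whether a dependent's baseline commit is a usable starting point,
// based on GitHub's combined status payload for that ref. The endpoint is
// GET /repos/{owner}/{repo}/commits/{ref}/status; owner/repo are examples.
function statusUrl(owner, repo, ref) {
  return `https://api.github.com/repos/${owner}/${repo}/commits/${ref}/status`;
}

// The combined "state" field is "success", "pending", or "failure".
function isGreenBaseline(combinedStatus) {
  return combinedStatus.state === 'success';
}

console.log(statusUrl('example-org', 'some-dependent', 'master'));
console.log(isGreenBaseline({ state: 'success' })); // true
console.log(isGreenBaseline({ state: 'failure' })); // false
```

A non-green baseline means a failing dependent run can't be blamed on the new release, which is exactly the signal D is after.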
D
It
will
be
hard.
You
will
get
some
scent
unless
you're
working
with
a
33-day
up.
You
will
not
get
a
good
feeling
what
might
or
might
not
be
flaky.
So
this
will
definitely
be
a
problem,
but
that
doesn't
mean
that
the
release
needs
to
be
blocked
or
not.
Block
I
mean
this
this.
This
can
always
be
sorted
out
with
the
author
of
the
dependent
32,
but
also
if
without
having
data
to
back
up
all
of
these
decisions
as
au
tainer
of
library,
probably
popular
library,
do
you
do
I
think
without
actually
trying
to
do
this.
C
I mentioned earlier in the call that my current approach for this is a GitHub issue search for Greenkeeper's message that tells me that my dependency failed them. I subscribe to all of those, and a large percentage of them are failing for reasons unrelated to my publish, so I usually comment on there and say, hey, just a heads up, this isn't my fault, and then I wait for them to close the issue, but they rarely do.
C
For the other people, the ones that don't complain, I can still fix it for them, so it would still be useful for me to just go over all my dependents and ask: from the last green point, do you stay green with all my new versions or not? And if there is no last green point with npm installing the latest versions or whatever, then I don't care.
C
And npm supports the `--before` parameter, which you may even be able to specify via an environment variable, so you could tell them to install all of their dependencies as they were, with the versions from the last good date. That's a potential technique as well to isolate whether a different dependency update broke them, because you can install everything as it was the last time.
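The `--before` flag is a real npm install option: it resolves every dependency to the newest version published before the given date. A sketch of using it to reproduce a last-known-good install; the date is an arbitrary example, and the actual execution is left commented out:

```javascript
// Build an `npm install` invocation that resolves dependencies as they
// existed on a given date, using npm's --before flag. The date is an example.
function installAsOf(date) {
  return ['npm', 'install', `--before=${date.toISOString()}`];
}

const lastGood = new Date('2019-08-01T00:00:00Z');
console.log(installAsOf(lastGood).join(' '));
// npm install --before=2019-08-01T00:00:00.000Z

// Running it would look like (not executed here):
// const { execFileSync } = require('child_process');
// const [cmd, ...args] = installAsOf(lastGood);
// execFileSync(cmd, args, { stdio: 'inherit' });
```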
A
Yeah, kind of as an aside, we see this problem with CITGM and Node testing as well. When we do a Node release, we have to figure out: did the release actually break the module, or was the module broken anyway? Because CITGM installs from latest, it's possible that we've had both a module update and a Node update at the same time, and then we have to dig in and figure out which of the two caused the breakage in our situation.
F
The complexity can get really high here. But also, to add to this, unfortunately, one thing we were just dealing with yesterday: Node versions do matter here. So, for example, the gRPC library, which is pretty popular, has a bunch of versions that are not compatible with Node 12 in their current version line, their current major line; only the newer ones are compatible.
F
So if you ran your tests against that newer version and it failed, and then you went back to the previous success, and that happened to be a version that didn't work with Node 12, and now you're testing on Node 12, you're going to get a failure that's actually because of Node 12, not because of your module. Does that all make sense? Yeah, yeah.
C
And this also suggests why it would be more beneficial to run a service, as opposed to having this be just CLI tools, because such a service could, for any dependent that's about to be run, basically determine whether that dependent is in a passing state or not, and then that result can be shared across all of the things it depends on.
F
As long as it was a federated system and not some central service, I'd agree with that. Sure, I would like to be able to run from a CLI and say: hey, run this thing, but use whatever cached results you find on the net, on the service, and then it only runs the new stuff that hasn't been run before.
B
I'd be tempted to start by limiting the variations and change in the dependents as a first step, maybe to the latest published version of your dependents, because in theory, when they publish a new version of their module, they ought to be doing some testing against the versions they're going to allow in, and if you've published a new one, they would catch it. Or they might not.
F
Right, okay, that's what I was getting at. Pardon, we've got a lot of threads going on here. One of the threads was: we have a baseline, which is their existing tests running without your change. In theory, that's the most reliably working one, your existing version versus your new version. So a more likely to break scenario would be your new version with an older version of one of their packages that's still within your semver range, one that consumers might actually have, right? Yeah.