From YouTube: Think BIG 11-06-19
Description
In which we talk about remote and virtual repositories and migrating from other universal package managers
A
Okay, thanks for joining the Think BIG session. It looks like we have only a couple of things here on the agenda. We can talk through these, but also feel free, now or at any time, to add ideas here as well; it shouldn't be just Ian and I adding to this. Let's go through it. It looks like Ian, you added an agenda item for interview data and information.
A
Okay, so I had two ideas that I thought would be worth adding and possibly discussing this week. The first actually just came up in Slack and on our co-working day in Colorado. Ian and Steve and I were talking about: why do people buy the product, or buy our stage? Why do they want to use it? A big reason that we hear over and over again from buyers is "I want to cancel either JFrog or Sonatype Nexus," or something like that.
C
I can add one other thing that we discussed in person that might give some good context. Right now all of our packages are connected to a project, so if a company or organization is trying to migrate everything at once, they might not have those projects on GitLab yet, or the projects might be sprinkled across a variety of different places.
A
That's a good add, thanks Steve. Yeah, that was the idea we were talking about: what does an easy migration look like? How would someone migrate a package that's not associated with a given project, and how would we cover repositories or registries that we don't have coverage for yet? So if I'm at JFrog and I have Maven, npm, and Python, how will we handle that in a migration?
B
One thing I want to call out about the experience here: for a lot of companies, when they're switching over to GitLab and want to use the package registries, this is their first experience with us, and since packages are "set it and forget it," this is the memory that will carry the most weight. So I want to call out that it's valuable to make sure the experience is really pleasant as well as, you know, technically capable, while we're thinking about how we handle somebody migrating who has a package manager.
D
An idea that just came out of this: would a possible first step be to have a template project with pipelines inside, specialized for every registry we are bringing in, that just connect to the registry and pull everything into this project? That way, even if they don't have the project, they still have all their packages in one single project inside GitLab, and then maybe we can work on having those packages move from one project to another inside our system, which is much easier.
E
I was gonna say: don't we also need to tackle the problem of the npm scopes and how they're scoped to project names? Obviously, if you're bringing in from JFrog or something like that, the scope isn't necessarily going to match. I know we've got some issues open for loosening the scoping of npm projects, I think.
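As a sketch of what that scoping looks like on the client side (the host, project ID, scope, and token variable below are placeholders, not a specific customer setup), an .npmrc maps one npm scope to one registry:

```ini
; .npmrc -- route only the @mycompany scope to a GitLab project's npm registry.
; Host, project ID (123), and the token variable are hypothetical examples.
@mycompany:registry=https://gitlab.example.com/api/v4/projects/123/packages/npm/
//gitlab.example.com/api/v4/projects/123/packages/npm/:_authToken=${GITLAB_TOKEN}
```

Because the registry is chosen per scope, a package imported from JFrog under a different scope would need its name, and every consumer of that name, rescoped to match.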
B
Would it be possible to use the namespace, and just say you're part of an org, and then we copy the namespace of what you had? If you don't already have a project with that name in your instance, then the namespace becomes the project that just holds all of the associated things, even if the rest of the project is empty. Is that a way to think about it?
C
I think that's one possibility. It feels like a slightly awkward way to put the pieces together, though, and I'm wondering: at the same time, why do we need the project if it's just kind of a container? I feel like there should be another way to group things together that doesn't use a structure that isn't necessarily needed, if that makes sense.
F
Right now, the package registries are tightly linked to projects. What if we, sorry, I'm reading the silence, what if we just take this registry object and link it with a group instead of linking it with a project? Then we can create an empty group and it will have a package registry, and we can use that for anything without having the project first.
G
Yeah, I think the concern there, and I may be misunderstanding some of that point as well, is this: if we think about this as "I have a package that I want to pull in from some other package manager solution like Artifactory," that is one part of it, but in theory the code should be appearing as well.
G
So if we are just pulling in dependencies that are only packages, where they don't have the code, it makes sense to do an import. But if the code is part of this import, then we would need a repository for the code so we're able to generate the package again. I don't know whether that's incorporated in the way people are thinking; that's kind of how I'm thinking about it, but maybe that's wrong.
G
I think static imports of packages that were developed and built in the past is something like: yeah, push them up and save them somewhere. But I don't know how often we would see customers willing to bring in packages that are disassociated from the generating code. Maybe that's an edge case more than something we need to really worry about in terms of how we build the solution, but that's where I'm thinking the repository comes in.
B
If there's a clever way, we can start pulling them apart, maybe by namespace into projects, and give them a way to make the full shift and then start organizing it and connecting it to the different aspects. I do know a couple of users have called out that they have packages that haven't been updated in four years and are still getting used, but they're not really attached to anything anymore. So I know people are experiencing it. It's not my favorite situation, but it is happening.
C
One thing about the overall architecture here: we have a project that has the code that builds a package, and then we have the packages that are built by that code. In most other systems that I've seen, those are not coupled in any way. Once you've generated the package and pushed it, you're pushing it to just a place for storage, where you can then access it in the future.
C
Certainly, the whole benefit of having the code and the package in the same place is that we'll be able to add all of these really cool and beneficial features. But I do think a first step towards that might be: even if they already have their code on GitLab and their project name is foo, but they're publishing a package...
C
...that's called, say, stats, they may want to just change the remote so that stats gets published to GitLab's registry, and then they can worry about connecting it to their project and working with those additional features as step two of that migration process.
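One low-touch way to express "just change the remote" for the hypothetical stats package is a publishConfig entry in its package.json (the scope, registry URL, and project ID here are illustrative placeholders); npm publish then targets that registry with no other workflow changes:

```json
{
  "name": "@foo/stats",
  "version": "1.0.0",
  "publishConfig": {
    "registry": "https://gitlab.example.com/api/v4/projects/123/packages/npm/"
  }
}
```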
G
I hear you both saying the same thing, and maybe there's a bit of a different approach here. I'm not saying we don't need that. I guess what I'm trying to call out is: is that a solution that's valuable for our general customer base as an ongoing thing? We're effectively saying you can disassociate any of the package managers, and in fact the registry that contains them; you can make the same argument that we can disassociate it from the code, that it's not really linked.
G
It's just a part of our system that you can enable and use if you want to, and you can just push stuff up to it; there's no association, it's not related to the code. My main concern there is: as GitLab, what value are we providing that's GitLab-y? So if this is a case that our customers think is useful, great, cool, let's try to figure out a way to make it work.
C
Got it, yeah. A couple of questions would certainly be worth asking, especially of any customers that are interested in importing right now: if they've attempted to move anything over, what is prohibiting them from doing that? Is it the naming change? Is it that they have to have a project connected?
B
Just to jump in: there's a cool technique during UX research where you can ask them to tell the story of the last time they tried to do it, and ask how they would solve the problem. After you've gathered all that information and gotten the unbiased view, then you give them the "free cookie," where it's like: well, what if this was here, would that solve your use case? So you ask all the unbiased questions and then you get the free-cookie answer. It is totally possible.
C
What if we publish them to the GitLab remote, but once a package is there, it's not really associated with that project? At that point, it's just there. You can still push and pull using that same package name to that same remote, but it's not associated with the project. Does that create any problems or concerns, and is it even worth doing?
B
Another thing I just want to add about that same story: it may not necessarily be dangling packages forever. It could also be "alright, my boss told me we need to switch to GitLab by the end of the year; I have a bunch of packages that aren't gonna fit anywhere in the architecture, but I still need to put them somewhere." That's kind of the story that I've been hearing so far, and I definitely want to dive in and explore it more, but it does feel less about:
B
"This is an archive of our package manager from the past, and we just want it forever." It's more "I don't have time right now to put every package into its place yet, but I still need to put it somewhere." Part of what they're trying to solve is not needing to have all of the answers at the beginning of the process, I think.
G
As was called out earlier, if that is on GitLab.com, then we have all of the considerations around naming conventions and how they go to find things. If they've got build scripts that they're using to actually go and generate stuff that rely on certain paths, then that's just not going to work.
C
I think that's totally right. GitLab.com is a strange situation, because if two companies want to import the same, or same-named, project, they can't do that on GitLab.com. On a self-managed instance there's no reason why we should prevent it, but on GitLab.com it can't be allowed, because at some point someone might say "npm install this package name" or "mvn install this package name," and how do we determine which package it is, or who it belongs to, just by the name?
G
I think the thing there is: how do we solve that problem? In theory, if they're already publishing somewhere public, they've already run into some scoping concerns anyway; they're associated with an account or a project on whatever side that is in a shared environment. Unless I'm misunderstanding the way JFrog does it, unless they've got walled gardens for all of their customers where they can do all of their own unassociated naming conventions, then maybe they've already dealt with this problem.
G
But how do we solve it? What's the solution? Do we force scoping based on what our structure is, and then they just have to adapt their four-year-old processes for pulling that package down? I don't think there's any concern with that; I'm good with that as a way to move forward. But we just have to understand what steps are required to say to a customer: hey, cool, it's in here, but your scoping changes.
C
I think this came up, Ian might have brought it up in Denver last week: could we build a pipeline that goes through all of an organization's repositories, maybe with some rules in place, and creates a commit or some sort of merge request in all of the places where a package is identified, so that if you have imported this package name, it will be updated to the new scope? Is that something that would be feasible?
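A toy sketch of the rescoping step such a pipeline could run against each repository (the scope names and the helper are hypothetical, not an agreed design):

```python
# Hypothetical sketch of the "update to the new scope" step: rewrite npm
# dependencies from an old scope to a new one inside a package.json document.
import json

def rescope(package_json: str, old_scope: str, new_scope: str) -> str:
    """Return package.json text with dependencies moved to a new npm scope."""
    data = json.loads(package_json)
    for section in ("dependencies", "devDependencies"):
        deps = data.get(section, {})
        for name in list(deps):
            if name.startswith(old_scope + "/"):
                # Move the version spec under the rescoped package name.
                deps[new_scope + name[len(old_scope):]] = deps.pop(name)
    return json.dumps(data, indent=2)

before = '{"dependencies": {"@acme/stats": "^1.0.0", "left-pad": "^1.3.0"}}'
# "@acme/stats" becomes "@acme-gitlab/stats"; unscoped "left-pad" is untouched.
print(rescope(before, "@acme", "@acme-gitlab"))
```

A real pipeline would open a merge request with the rewritten file in each affected repository rather than printing it.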
A
Well, this was really helpful. It sounds like, Ian, this is definitely worth opening up a problem validation issue. We could do some research on this and get some more answers, and once we have those answers we could talk through a bit more of this. Yes, all of these things that we're talking about will be turned into issues; we just haven't done it yet.
A
Well, we will open this up as a research issue and we'll talk through it over time. The other thing I wanted to discuss, another thing we talked about in Denver which we do need to open an issue for, is an idea that originally came from Dan, your friend, I think, Greg, that you introduced me to. When he was talking about the value of JFrog, he was saying one of the most valuable pieces for him is the Maven settings file.
A
I guess it's settings.xml. You can set the priority order for where you look for packages: first he wants to look at their JFrog instance, then they want to look at the central Maven server, then they have another S3 server set up somewhere that they want to pull things from. So the idea is having one remote server in front of different repositories.
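A sketch of that kind of ordering in Maven's settings.xml (all URLs are placeholders); Maven consults repositories in the order they are declared in the active profile:

```xml
<settings>
  <profiles>
    <profile>
      <id>registry-order</id>
      <repositories>
        <!-- Checked first: the company's JFrog instance (placeholder URL). -->
        <repository>
          <id>jfrog</id>
          <url>https://artifactory.example.com/maven</url>
        </repository>
        <!-- Then Maven Central. -->
        <repository>
          <id>central</id>
          <url>https://repo.maven.apache.org/maven2</url>
        </repository>
        <!-- Finally an S3-backed repository (placeholder URL). -->
        <repository>
          <id>s3-mirror</id>
          <url>https://s3.example.com/maven-repo</url>
        </repository>
      </repositories>
    </profile>
  </profiles>
  <activeProfiles>
    <activeProfile>registry-order</activeProfile>
  </activeProfiles>
</settings>
```

The virtual-registry idea discussed here would move this ordering out of each developer's settings file and into the server.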
A
So
we
can
have
you
could
point
to
multiple
locations,
so
you
could
say,
like
okay,
pull
first
from
NPM,
then
from
get
lab,
then
from
J
frog.
So
that
was
another
idea
that
we
were
talking
about.
Is
the
idea
of
a
virtual
and
a
local
server
that
you
could
push
packages
and
pull
packages
from,
and
then
we
would
proxy
requests
and
and
according
to
the
priority
order
that
you
said
so
that
was
another
thing
that
we
talked
briefly
over.
A
I probably have a picture on my phone, if I knew where it was, but I'll pause there for a second. Steve, do you think I missed anything?
C
No, I think that was the general idea: being able to set the priority of remotes that are checked as you are trying to install or find repositories or packages. One thing that would be beneficial, and this ties into that first item we discussed, is that this could also be a first step in the import process as well.
C
Right now they've got their JFrog remote, their npm remote, this other remote; so switch everything to the GitLab remote, and then the logic in GitLab will go through and identify which registry to pull from. The first step is to update all the code to only pull from GitLab, GitLab then pulls from the other locations, and eventually, after the full import, they don't have to change any more code.
A
So it's kind of a step in that direction. It's like: we'll just have it all in GitLab. Even if you don't migrate anything, you could slowly migrate it once it's there. You could still see it, still see details, and maybe copy a pull command to download it, or something like that. But you don't have to worry about migrating everything on day one.
C
So if you have a bunch of private packages for JavaScript, and you're also using packages that are hosted on npm and GitHub, someone has to manage all of that. Whereas if we have this priority-listing type of deal, one person can set that within GitLab, and then for all of the projects, as long as you're using the GitLab remote, it will pull based off of that.
B
You know: "eighty-four percent of you have moved over to the GitLab registry, which is awesome, and then there are these ten packages that are still going to npm, so we should check on those." They would like to see that kind of data, to know if everything is actually making it over, or to see, to your point, whether this is a temporary solution to a permanent problem or the other way around. So actually showing them that could be a way we work around it.
G
Yeah, I was going to say my concern in this particular area is the authentication problem. If we are hitting other registries on behalf of customers, we need to have their auth info to get to npm or JFrog or anything like that. So I have a concern that our code then has to have the correct auth information to be able to do those things. There may be other things, but in my head at least that's the biggest technical issue.
A
Can
they
use
environment
variables
to
do
it
like
they
were?
Just
we
could
have,
we
could
document
the
process
and
as
part
of
it,
they
can
create
environment
variables,
that'd
be
like
a
frog
user,
ID
and
password,
and
then
they
wouldn't
have
to
put
their
actual
they
would
they
would
they
wouldn't
have
to
have
their
tokens
in
code?
They
could
just
have
the
environment
variables
even.
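As an illustration of that flow (the variable names, URLs, and job name here are hypothetical examples, not an agreed design), a documented process could be a CI job that reads masked CI/CD variables instead of committed credentials:

```yaml
# .gitlab-ci.yml fragment. JFROG_USER and JFROG_PASSWORD would be defined as
# masked CI/CD variables in the project settings, never committed to the repo.
migrate_package:
  script:
    # Fetch the artifact from the old remote using the variables...
    - curl --user "$JFROG_USER:$JFROG_PASSWORD" --remote-name "https://artifactory.example.com/repo/my-lib-1.0.0.tgz"
    # ...then publish it to this project's npm registry on GitLab.
    - npm publish my-lib-1.0.0.tgz --registry "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/npm/"
```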
C
It sounds like some of what we're describing here overlaps with the idea of the dependency proxy, where we're able to pull packages from other places, but with the authentication aspect it's pretty much the dependency proxy for private packages. If I have a private package hosted on npm, but I wanted to use GitLab's dependency proxy for it, would we run into that same problem? Probably, yeah.
G
We would, and I think as we start trying to integrate, the steps in my head, and again, none of this is meant to be "we have to do it this way" or anything like that, are: right now we're only working with our container registry on what layers it's getting from upstream, and then, when we go to implement this for the package managers, we're going to have to start thinking about, okay, how are we proxying this, how is it working, and what is the actual main use case?
G
Is
it
just
the
CI
pipeline
builds
where
we're
trying
to
have
the
local
thing
on
the
machine
that
then
they
can
go
call
it
doesn't
have
to
keep
going
and
getting
it.
If
it's
in
an
external
world
posit
ori,
then
we
have
to
work
out
how
to
do
it
off
anyway.
So
I
think
you're
right
see
that
we
have
to
solve
that
problem
anyway.
But
if
it's
in
my
head
at
least
it's
a
couple
of
steps
out
from
where
we're
at
right
now
I
may
be
wrong.
A
Yeah, but you know we're talking long term, thinking big, so I love these discussions; this is really helpful. That's a good call-out on authentication. We do authenticate now for the dependency proxy to Docker Hub, not from the GitLab registry, so I think we are doing it, but I agree there are going to be a lot of gotchas there. For npm, for example, we had to learn that we could only use OAuth tokens, and we had to change our personal access tokens.
C
And that can be a good starting point: first you fetch from GitLab, and then, if it's not there, just fetch from the main public repository. The idea of setting up additional private repositories to fetch from could come later, if at all, because that seems a lot more complicated.
G
Then we should actually be able to communicate with the other repositories using that same protocol, without having to think of a whole new way of talking to, you know, Sonatype as opposed to Maven or npm, or as opposed to, say, GitHub's implementation of it, because they should all support the same protocol. At least in my head, I'm thinking about it like: if we can support one, we should be able to support the others using the same thing, right?
A
Maybe
we
could
try
it
for
NPM,
since
that
it
seems
like
NPM
might
be
the
most
straightforward.
If
maybe
that's
true
I,
don't
know
if
maven
is
more
straightforward,
but
we
have
the
most
experience
with
NPM
on
the
team.
Maybe
it
would
be
a
good
use
case
to
try
and
solve
the
firend
p.m.
first,
and
then
we
could.
We
can
go
from
there.
D
Probably this was already mentioned, but just to reinforce it: isn't this actually a very good way to allow users to bring all their stuff over, just by running it through the dependency proxy, with a nice interface that says: "you managed to import 87% of your packages; up until now you never touched these, so maybe you don't really need them; click here to import the rest"?
G
So
I
think
my
main
concern
with
the
dependency
proxy
is
it's
intended
to
be
time
box
as
far
as
I
understood,
it's
not
meant
to
store
stuff
forever,
because
then
we
run
into
the
same
problem.
We
always
have
so
I've
worried
that
if
that
would
be
kind
of
going
against
the
basic
idea
of
the
proxy
of
saying
we
pull
this
thing
in
the
proxy
and
then
keep
it
forever.
I,
like
the
idea
of
using
that
logic
and
saying
we
have
this
external
repository
to
sort
of
build
on
what
you
and
Steve
both
said,
I.
G
Think
I,
like
the
idea
of
saying.
Oh,
this
is
our
get
labs,
our
primary
and
the
absolute
moved,
but
we
have.
This
is
our
original,
so
say
it's
NPM,
because
that's
where
religion
comes
from
we'll
say
it's
on
a
type
or
concern
on
some
type,
J
frog
or
some
other
system,
and
then
we
just
start
pulling
them
in
and
saying.
How
do
you
want
to
save
this
I?
G
Think
that's
a
good
idea,
but
it's
I
think
the
only
like
I
said
the
concern
there
is
whether
that
really
matches
the
model
of
what
a
dependency
proxy
is
supposed
to
be,
because
we
should
be
caring
about
the
life
of
those
packages
and
how
much
we
end
up.
Storing,
because
that
will
become
a
problem.
I
think,
but.
D
Would
it
be
this
a
good
way
to
like?
Please
we
tell
the
user
okay,
we
grab
it.
We
have
it
for
three
months
and
then
like
that
two
months
and
the
reminder
today,
you
can
take
this
package
from
the
dependency
proxy.
It
like
correctly
put
in
the
right
registry
because
it
interests
you
or
not.
It's
not
just
trash
it
and
the
dependency
drugs
is
for.
Let
it
go.
B
It
will
be
really
cool
from
a
UX
perspective
if
we
could
tell
them
that,
instead
of
asking
them
and
say
like
hey,
we're
pulling
this
package
from
the
dependency
proxy,
you
don't
have
it
in
an
actual
storage,
but
we're
pulling
it
all
the
time
it
is
getting
like
regularly
downloaded.
It
seems
like
you
should
move
this
one
over
because
that's
where
it
should
belong
with
the
rest
of
your
stuff
and
kind
of
give
them
that
information
ahead
of
time.
Does
that
make
sense,
or
does
that
end
up
being
more
confusing
from
a
developer
perspective?.
E
To
us,
why
would
the
dependency
proxy
be
a
manual
thing?
Wouldn't
this
be
automatic?
So
say
your
pipeline
is
pulling
a
bunch
of
packages
down
from
IVA,
locally
or
MPN
or
whatever.
Surely
the
point
of
the
dependency
proxy
is
to
cash
those
automatically
so
that
the
next
job
that
runs
five
minutes
later
would
use
those
cash
assets
and
the
user
like
user?
Doesn't
care
user
doesn't
want
to
be
clicking?
Yes?
Yes,
yes,
yes,
yes,
250
million
packages
that
they
MPM
packages
that
they've
got
in
store
their
massive
JavaScript
projects.
E
They
just
want
to
know
that
their
next
build
in
the
future
is
going
to
be
faster
than
the
first
build.
And
that's
surely
that's
the
point
of
the
dependency
proxy
and
then
you
could
do
things
like
you
could
track
when
a
particular
package
is
pulled
from
the
proxy
and
then,
if
it
hasn't
been
pulled
in,
say
30
days,
then
you
could
start
garbage
collecting
packages
that
aren't
being
pulled
regularly
and
then
that
way,
it
just
becomes
a
completely
automatic
process
that
the
user
doesn't
care
about.
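The pull-tracking and garbage-collection idea above can be sketched as follows (a toy model with made-up names, not GitLab's implementation):

```python
# Toy model of the automatic cache: record when each cached package was last
# pulled, and garbage-collect anything idle longer than the retention window.
from datetime import datetime, timedelta

class ProxyCache:
    def __init__(self, retention_days=30):
        self.retention = timedelta(days=retention_days)
        self.last_pulled = {}  # package name -> time of most recent pull

    def record_pull(self, package, when):
        self.last_pulled[package] = when

    def garbage_collect(self, now):
        """Drop and return packages not pulled within the retention window."""
        stale = [p for p, t in self.last_pulled.items() if now - t > self.retention]
        for p in stale:
            del self.last_pulled[p]
        return stale

cache = ProxyCache(retention_days=30)
cache.record_pull("left-pad", datetime(2019, 10, 1))
cache.record_pull("lodash", datetime(2019, 11, 5))
# Only "left-pad" has been idle longer than 30 days, so only it is evicted.
print(cache.garbage_collect(datetime(2019, 11, 6)))
```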
G
Yeah, I think that's a good summary of what I was thinking; thank you for doing a better job of it than I did, Nick. I like the idea of having reports for people on what the dependency proxy is storing, so they could go look at it. I think you called that out earlier as something customers find valuable: "this came from this location, and you're using it all the time," and stuff like that. That sort of report or interface could be a place where customers could say:
G
Oh
wow,
we're
still
using
this.
Let's
pull
it
into
our.
You
know.
Previous
topic,
you
know
anonymous,
unconnected
storage
for
this
type
of
package,
but
I
think
you
call
that
a
good
point
here,
Nick,
which
is
that
if
we
are
looking
at
this
as
an
automated
pipeline
process,
that
the
interaction
isn't
a
real-time
interaction,
it
is
something
that
happens
later
where
a
customer
might
be,
or
even
an
engineer
is
like
built.
This
thing
I
looked
at
my
pipeline.
A
Cool. Anything else? Steve, want to jump in?
C
I was just gonna add that, because we do have this interesting aspect where a lot of these things can overlap and be plumbed together in very cool ways, it is important to always remember the primary reason for any given feature or system, so that we don't start to mutate it in a way that wasn't the initial goal.
A
A good point. So for these two things that we talked about today, well, there were probably a bunch of ideas, the way it will work now is that we'll put them on the problem validation track of our product process. That means we open up issues, they go into our problem validation backlog, and from there, to validate them, we go through a process similar to what we did recently for the survey.
A
So
we
would
talk
to
users,
we
could
survey
users
the
end
output
as
we
create
this
opportunity,
canvas
that
gets
reviewed
by
product
and
design
leadership
and
then
once
they
say,
okay,
you've,
you
validated
that
this
is
a
problem
now
go
validate
the
solution,
so
then
we
would
could
test
like
is
this
with
this
work
and
then
it
would
get
moved
to
the
build
track
where
we'd
actually
work
on
it.
So
I
guess
I'm.
A
This
forward
would
be
to
answer
these
questions
from
a
user's
perspective
and
then,
when
we
get
to
the
build
track
to
start
thinking
about
it
like
how
do
we
actually
solve
this
down
technically
is
feasible
and
things
like
that,
so
I
think
these
are
both
worthy
of
validating
with
our
users,
that
this
is
something
that
they
a
want
to
do
and
that
it
would
be
useful
and
that
it
makes
sense
for
gitlab
I.
Think
those
are
the
three
questions
that
I
have
for
for
myself.
B
I really like that. Just to keep going on Steve's point, because it was a really good one: part of the process on the design side that we go through is actually testing our product based on what we know the primary things are, to make sure that none of our new features actually made them harder. So we do have that final backstop of: all right, everyone is still able to do the primary function, we didn't get in anyone's way. That's also a last-ditch chance for us to say:
B
Oh
no,
we
did
make
a
primary
job
harder.
We
have
to
adjust,
so
we
have
several
warning
systems
in
place
in
case.
We
accidentally
do
it
and
then
I
think
on
top
of
that,
adding
a
Lee's
idea
would
be
great
of
it's
not
meant
to
do
this.
It
is
only
meant
to
do
this.
I
think
those
two
together
will
keep
us
safe
from
making
users
jobs
harder
when
we're
trying
to
make
them
better.
G
Do we feel like it's worthwhile adding a section to our issue descriptions to say, "hey, this is what we are not trying to solve here"? Tim, is that something, I feel like that's something I've seen already, but I'm not sure that's true?
A
Usually
when,
when
the
something
is
complicated
or
new,
we
put
in
what
what
it
will
not
do
or
other
considerations
like
that,
but
it's
not
part
of
the
standard
template,
I,
actually
I.
Think
it's
a
good
idea
to
talk
about
that.
Always
like
one
of
the
things
we
won't
do.
We
do
it
in
our
strategy
and
Direction
pages,
yeah
I
think
for
any
new
issues
where
we're
breaking
ground
I.
Think
that's
really
important
to
cover
like
that
MVC
or
the
epic
for
something
new
yeah
I.