From YouTube: Package ThinkBIG: August 19th, 2020
A: Today we have a full agenda, so I will jump right in, if that's okay. Okay, so the first item on the list: I pasted an issue that was shared with me, and with the whole product team, from the leadership, the product leadership team. This issue is really about... actually, this whole team has been here long enough that, for a long time, we've been focusing on breadth over depth as sort of the GitLab strategy.

A: We put a huge emphasis on adding net new features and categories. With this issue that Scott shared around last week, which we reviewed yesterday in the product meeting, we want to shift towards depth over breadth. So we should be focusing on usability and, depending on which group you're in, either driving group monthly active users or paid monthly active users. So for us, we are one of the stages that should be focused on just driving non-paid usage activity, or GMAU.

A: So that's kind of going to be the big shift that we make. The implications of that... I think we've been kind of going this route for the past several milestones already, but I bring it up here so that we're all aware that this is the corporate direction, and also to maybe bring up a topic: what should we improve? We all use the product every day.

A: Are there particular areas or use cases that, when you're using them, are quite frustrating, and should we resolve those? I'll pause there.

A: Cool. The next thing that I was thinking about is the problem that I'm having, and maybe the other reason that I brought this up: we could track monthly active users, but that doesn't really give us a sense of whether the changes that we're making to the user experience are working. So I opened an issue to measure some additional data points, things like...
A: ...why the error failed, or something like that. Or: it's really frustrating if I'm in a merge request, I try to apply a suggestion, and it just doesn't work. So I think if we could start to measure how frequently those events occur, then we could try to reduce the number of times that they occur, although we'll never get to zero, I'm sure. And then we could test different things.
D: I just wanted to call out that the work we've been doing on the UI and refining it, the cleanup policies and the work we did with Suzanne, as well as what Nico and I have been working on, the flat UI, has already been used as an example of how to switch from breadth to depth and of refining and iterating.
B: One thing I would be kind of curious about, and I don't know if you've looked into it yet: there are the frustrations over something not working as expected, or being more difficult to figure out, but I'm also curious, for certain package types, about the level of frustration of not having the full implementation. Like, there are missing commands for npm, or whatever package type.

B: You know, stuff that maybe not every user uses, but that, if you use packages a lot, makes you ask: why can't I do this thing that you can do on npm, or on one of the other package types? I'm curious about that as well.
A: I think that, within reason, would fall under us trying to improve our product and drive more usage, by increasing the usefulness or the number of use cases of the product. There's probably a line there, though; we don't want to recreate npm. But I think those are the kinds of issues that we should be investing in.

A: We have this concept of virtual registries, and we could add more functionality around that, so I think that falls under things we would want to do that would drive usage to our existing features and categories. What we wouldn't necessarily want to do is open up whole new categories at this point. We wouldn't want to say, "well, let's tackle RPM right now," for instance. Cool.
A: Okay, I can move on to the next agenda item, which is a quick one. I think everybody knows there's this corporate push to get away from direct messages and move everything into public channels. Prior to that push, Ian and I would have a sort of running conversation in our private channel, which we've since tried to move to the package strategy channel. Now that it's been going for about a week, we'd like to share it with everybody else.

A: If you'd like to join, feel free. It's not obligatory, but if you want to see some of the UX and product conversations that Ian and I are having, feel free to join. Okay, Ian, you want to take the next item?
D: Yes, our traditional research update for ThinkBIG. We have two big research initiatives in flight right now. The first is unmoderated testing; that is, solution validation for the cleanup policies: changing up the verbiage and the UI, and making sure it actually makes sense to our users, compared to before, where there were some struggles. I've included in the agenda the research issue itself, as well as something that's pretty cool: in this research initiative, our team is going to pilot a new tool called usertesting.com to facilitate unmoderated testing.
D: This is really great in a lot of ways. It opens up our schedules, and it's more in line with GitLab's asynchronous nature. The basis of it is that, instead of me scheduling time with a user, asking them questions, having them do things, and responding in real time, we put out a test that is guided through the tool. They perform the tasks, it's all recorded, and then I just have to synthesize the recording.
D: I actually just checked: all eight of them have already done their tests, so it took less than eight hours to get eight tests done. And I think Tim can agree with this: usually, scheduled, moderated testing is like a three-week process to schedule and get done. So to get that feedback in under a day is pretty awesome.
D: Yes, I shared a recording of me taking the test myself, so you can kind of see what it's like. It pops up with a little UI that says, "here's a task we want you to complete," then you go and do it and hit next, and it guides you that way. I can definitely share that out. Any other questions or thoughts?
D: This is a big deal, because this was validating an entire category, and the direction we want to take it, and making sure it actually works for our users. We heard from several different customers that the lack of these features is stopping them from moving over to GitLab's package tools, or is creating restrictions that they're not sure how to deal with. So finding out whether what we want to put together will actually help them was really important.
D: One of the reasons we stopped at five is that we got insanely consistent results. Specifically, the languages and package formats that they used, the size of the organization, the size of the individual teams, and the number of teams didn't seem to have any impact on how they felt about the feature. That's really great, because it means the feature is designed in a way that helps all of those combinations of users. So that's another reason we stopped at five: we just kept getting the same results.
D: Some of the highlights that I wanted to go over. I'm still in the middle of synthesizing all the results and creating the insights in Dovetail; the unmoderated testing kind of popped up, so my timing is a little bit off, and that's why it's not done yet. I will get them done and share them all formally with everyone. The big ones: overall, the solution was positively received. All of the participants, at the very end, after being given the scenario and having gone through the tasks, got the blanket question of, "if this solution was available today, would it help your organization?" All of them said yes, and they all explained in detail why they thought it would be helpful. Some of the reasons were that we make it so that individual developers don't have to worry about where their packages are; we solve it for them, and we solve it consistently.
D: The user interface for the dependency proxy, and the data that we were presenting, was consistently well understood. At each screen, I asked each user to walk me through everything they saw. All of what they responded with was accurate in terms of what the data meant, how it was relevant, and how to work with the interactions, so that part was really sound.
D: One of the most important things I learned from the users is that caching is incredibly important. I think four of the five participants called out specifically that when npmjs.org goes down, their pipelines stop building, and they have to wait until the registry pops back up. That's not the only example, but it was the one they kept using.
D: The registry limit: right now, you are only allowed to add 10 registries to the virtual registry. This is for performance, and David has a lot of really great insight as to why that is. But we talked to our users, and they all said it would probably work except in certain situations, and for one of them it flat-out wouldn't work; they needed more registries involved.
D: However, when we dug into it a little more with some of the participants, we learned that if hosted registries were excluded from that count, or if they could use a group-level registry endpoint instead of each individual hosted registry, that would likely prevent the problem. All of their teams have their own registries, which is great, and then they have an archive, and some registries over here, and then they have the default that they need to pull their public packages from, and that broke the number a lot.
D: The story we usually heard was: "I have already created 150 projects, and if each one of them created a package, with the goal of being able to share those packages throughout the entire organization, I blew that 10 number right out of the water." So I think there's some play room there. I've already shared this with David, and he has some ideas and thoughts around it.
D: This could just be that they didn't have any context; two of them even called out, "before I would get started, I would go read the documentation to understand how GitLab solves this problem." So it could be just the test not giving them that context, but we should be aware, and very explicit and clear, about what each of those terms means and how they impact the result.
D: Two of the users expected the dependency proxy to simply have one URL that you could pull anything from. If I understood correctly, this is similar to how Artifactory works; I think that's what they said. So if I pulled from npm, it would be intelligent enough to only check npm registries to get their packages, and there would be only one URL for every package format, which is a little different from how we set it up.
D: The vanity URLs were a little feature that we included just to find out how users would feel about them and whether they were important to them. They thought the idea was cool and they understood it, but it certainly wasn't a deal-breaker or a heavily sought-after feature for any of them. I will say that the larger the organization, the more interested they became in the vanity URL, which I thought was an interesting correlation. For the participant from the largest organization that we talked to, it was more of a: "we have 2,000 engineers, and they don't want to have to think about or remember where this URL came from, so if I could add a vanity URL, it would make sense." For smaller organizations, it seemed like they could just go directly to wherever this was and find it themselves, so the vanity aspect wasn't as important.
D: I will have several more insights and specifics to share, and I will add them to the research issue and publicize that, so everyone can see all of the details. But these were the big highlights that stood out.
D: That was a lot of information. Does anybody have any questions or comments about the research around this awesome feature?
E: I would actually... oh, sorry. As a new person, I would love it if you could help me define those terms, because when Tim has mentioned "virtual registry" too, I've gone, "oh, I wonder what that is."

E: So I don't know if I should open an issue, or what the best way is, because I'd like to make sure the documentation has those definitions, like you said.
D: For sure. The next step, once I'm done synthesizing the results, is to partner with David (and, I'm assuming, David, the engineers; I should be a little more open), as well as yourself, Nico, and me, to figure out how to break this down, because this was very large, way out of the scale of an MVC test, and so we need to break it down into its smallest pieces. So there may be an issue dedicated to creating a virtual registry, and we should definitely have a conversation about defining this in the documentation.
D: I really wanted them to, but they didn't. The ones that were confused really didn't expect the virtual registry and the dependency proxy to be two different entities, one inside the other. They expected the dependency proxy to be the one URL, so the idea of a virtual registry just confused them. They thought a virtual registry was an artificial connection to a remote, posing as a hosted registry, is how they explained it. And then, when I offered up the remote registry, they were like, "well, that's nothing! That means nothing to me." However, once we went through the tests, these were pretty easy things for users to overcome, and as we went through the tasks and got to the end, it all made sense. So, as I mentioned earlier, this could be a problem that is simply solved by being very clear, having sound documentation, and even just marketing and talking about the features very accurately as we introduce them, to teach our users as we start building. But, no, none of them were like...
F: A few words on the limit of 10. I used that limit in the analysis just to have something; the idea is to not allow an infinite list of registries within the virtual one, so that we have an application limit. I'm pretty confident that we can scan hosted registries really quickly with a few SQL queries.
F: So I guess we can bump that number. The most important limit would be on the remote registries, because each one is a network request within another network request: this whole list scanning, or searching for packages, happens when a CLI tool tries to pull a package, so you are already within a request. So, yeah, I guess we can do something around that, where we analyze the list, and if we have several hosted registries together, we can just query the database in one go.
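The lookup described above can be sketched roughly as follows. This is an illustrative sketch, not GitLab's actual implementation; all class and method names are made up. The point is that hosted registries resolve with a cheap local query, while each remote registry costs a network round-trip, which is why the application limit matters mostly for remotes:

```ruby
# Hosted registry: lookups stand in for a fast SQL query over local data.
HostedRegistry = Struct.new(:name, :packages) do
  def find_package(pkg)
    packages[pkg]
  end
end

# Remote registry: lookups stand in for an HTTP request to an upstream.
RemoteRegistry = Struct.new(:name, :fetcher) do
  def find_package(pkg)
    fetcher.call(pkg)
  end
end

class VirtualRegistry
  MAX_REMOTES = 10 # the application limit discussed above

  def initialize(registries)
    if registries.count { |r| r.is_a?(RemoteRegistry) } > MAX_REMOTES
      raise ArgumentError, "too many remote registries"
    end
    @registries = registries
  end

  # Scan the ordered list; the first registry that has the package wins.
  def resolve(pkg)
    @registries.each do |reg|
      version = reg.find_package(pkg)
      return [reg.name, version] if version
    end
    nil
  end
end

hosted = HostedRegistry.new("team-a", { "left-pad" => "1.3.0" })
remote = RemoteRegistry.new("npmjs", ->(pkg) { { "lodash" => "4.17.21" }[pkg] })
vr = VirtualRegistry.new([hosted, remote])
```

With this ordering, a package published in the hosted registry never triggers the remote call, which matches the point that only the remote-registry count really needs a tight limit.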
D: Thanks for sharing that. I think if we can either find a way to exclude the hosted registries, or count them all as one, and then limit the number of remote registries, and have a default that will meet, I'm going to say, 95% of what our users are expecting, we can stay within that realm. The big one that I heard multiple times is: "my team is moving over to microservices, we have a more dispersed team, and so our organization will just have more registries inside it."
D: One user brought that up as, "if I just had one endpoint for every hosted registry inside my organization, that would solve this." That could be an easy solution. I don't want to get technical, because that's not my wheelhouse at all, but one of the concerns that comes up in my mind is: if you have packages with the same name in different hosted registries inside a group-level endpoint, how do we handle that?

D: So if one team produces a package with the same name as one a different team produced, one of them is supposed to take priority. How do we identify that? It could be a non-issue that nobody actually has, and I'm creating problems out of nowhere, but that's my one hesitation with that group level.
B: I'd be curious, because I feel like it's one of those things where, right now, all we hear about is "we want you to drop the naming restrictions." So I'm curious whether, when we do that, we would see an increase of "oh, we need to handle conflicts." Right now, nobody is saying they would have a problem with conflicts, but it's also not an option at this point.
F: For the vanity URL, they could also have a custom vanity URL within their organization. So imagine that I'm working at the company fubar.net: I could create an alias, registry.fubar.net, that would just be a redirect to the virtual registry URL, but within their infrastructure, not ours.
D: That's a really cool idea, and it fits with the observation that the larger organizations, which have the infrastructure and staff to set up those aliases easily, are the ones most interested in that endpoint. So that could be a way for us to not actually implement the feature, but still give them that flexibility.
F: It's really similar to... I'm sure GitLab Pages does this too, where you have documentation on how to set up your own custom URL for your blog, but your blog is actually hosted on GitLab Pages. So it's a simple alias, I guess, in your DNS server, and that's it, and they could do the same here for the virtual registry.
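As a rough sketch of that alias, using the hypothetical fubar.net company from above, a single record in the customer's own DNS zone could point the vanity name at the GitLab-hosted endpoint (the GitLab hostname here is a placeholder):

```
; in fubar.net's zone, managed by the customer, not by GitLab
registry.fubar.net.   IN   CNAME   gitlab.example.com.
```

In practice the server behind the alias would also need to serve TLS for, and recognize, the vanity hostname, much like the custom-domain setup that GitLab Pages documents.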
A: Yeah, well, it's kind of a continuation of this conversation, I guess. A couple of milestones ago, we conducted an investigation and the design issue, and that's powered a lot of the insights that Ian just shared. So I'm wondering, as we start to think about how to actually start building this, maybe we could just have a conversation about the best way to get started. That could be opening issues; that could be talking through the MR plan. I'll sort of leave that up to everybody.
F: So the models and all the logic in the back end would be the same for all the package types. What would be different is the URL, well, the API endpoint for the virtual registry, because each package manager has its own set of URLs you need to implement so that you can support the pull-package action.

F: But apart from that, everything will be shared and centralized, so that we don't have many... I don't know what we would call that, virtual registries and joins or whatever; it's just a single one, because the logic is always the same: you just scan through the whole list and look for packages.
F: Well, the plan I laid out in the analysis was more about starting from the database: you start building the blocks you will need and then build things on top of them, and the last step would be to implement this endpoint for virtual registries for each package manager. Obviously, we will choose one for the first iteration, but if you implement the API without having all the logic and all the models behind it, you are just implementing an empty shell, an empty endpoint.

F: So that's not super valuable, I guess. That's why it would be better to start with the database and build what you need there; then you jump into the services layer, where the logic of the whole thing would live, and then these services would be used by the endpoint.
A: What would it be like for someone to help contribute? Like, let's say we were working on Maven first and someone wanted to help add support for NuGet. How far...
F: Ideally, we would need to have the first virtual registry for a given type implemented, so that we have all the pieces in place, and then it would just be a matter of, "okay, I will implement the virtual registry for Maven, so I need to implement those URLs and connect the endpoints to the existing services."
F: Yeah, perhaps I'm wrong, but I'm not sure there are ways to parallelize everything in a vertical way. You really need to do it in a horizontal way: first the database layer, then the service layer, then the API layer, and that's it. Rather than something like, "okay, let's implement the model, the service, and the API for this one side of the feature."
F: I'm not sure that would work, because if you create a virtual registry and you implement the logic, you will need to handle hosted and remote registries so that you can implement the scanning loop, and if you don't have the models, well, what are you going to handle within the loop? So, yeah, it's not a small feature, and that's a lot of pieces, but we need all of them.
B: When it comes to... I mean, I'm looking at the MR plan in that issue, and there are a lot of MRs, and obviously a lot of work to get that initial big structure. Can any of that be parallelized within the team, or do you think one person is just going to have to really dig in for a month or two and crank all the MRs out to get everything initially set up?
F: The GraphQL MRs could be left to one side, I guess. Well, not to one side, but they could be implemented in parallel, so someone can be working on the models and the services. Well, the models would be the base of everything, so we need the models first; once we have those, we can have someone on the services and someone on the GraphQL APIs, and then the virtual registry API can also be implemented at once.
F: So there are a few ways within the team to parallelize things, but we are not going to cut scope: the models, we will need all of them; the services, we will need all of them before implementing the virtual registry endpoint; and, obviously, the GraphQL API can't work without the models, so the models are really the base of everything.

F: Having said that, we can implement the models, the services, and the virtual registry endpoint, and just start playing with that by creating the virtual registries, hosted registries, and remote ones within the Rails console, and start looking at whether the endpoint works properly or not, and whether we can add some things or not.
A
It
sounds
hard
to
get
to
an
mvc
in
terms
of
like
value,
and
it's
the
and
you
know
it's
not
important-
to
deliver
value
first,
but
it
sounds
like
there's
a
lot
of
groundwork
or
like
foundational
work
that
needs
to
be
done
before
we
can
get
to
giving
the
user
something
right.
A
One
thing
that
we
could
do
that
would
be
valuable,
is
just
to
share
the
process
of
what
we're
building
and
just
like
how
we're
building
it
and
communicate
openly
about
that.
I
think
that
would
be
some
a
way
that
we
could
show
value
and
while
we're
working
on
that
stuff-
and
it
could
be
a
good
way
to
help
people
contribute
in
the
future
too.
If,
if
we
feel
like,
we
are
interested
in
sharing
that
content.
F: Yeah, I was thinking, though perhaps it's a crazy idea: once we have everything on the back-end side, we could have some customers that are interested in, I don't know, alpha testing the feature, and we could have a virtual registry created through the Rails console on production, so that they can start using it.
B: You know, a portion of documentation, like "set up some virtual repositories," with some steps for how to do that through the console, and just invite users to try that out.
F: Well, in this case, all the GraphQL MRs can be seen as part of the front-end support. So if we go that route, we don't need them in the first iteration, because we would just need the models, the services, and the virtual registry endpoints, and that's it. We could start using it with that; you don't need a GraphQL API for that.
F: That's a good question. Yeah, I'm pretty sure we can centralize and have one set of logic to look for packages and container registries. But then the models would be different, the GraphQL APIs would be different, and the virtual registry endpoint would be different, because we would need to implement the Docker client API.
F: I think it should be one for Maven; for NuGet, it's two, because you have the service index first. So it depends on the package manager. And, well, the challenge here, and I guess that's why I put a three there, is how you put all these endpoints within the same Grape class.
F
Now
that
I'm
saying
this,
I
think
I
recall
I
put
the
virtual
registries
with
the
package
type
in
the
url,
so
it
can
be
mapped
to
different
grape
classes,
so
that
would
you
don't
have
a
class
handling?
All
the
virtual
registry
is
endpoint.
F
Which
is
similar
to
what
we
have
for
for
package
registry
at
the
project
level,
where
you
have
packages,
slash
npm,.
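That routing idea, the package type embedded in the URL selecting a per-format endpoint class, can be sketched in plain Ruby. In the real code these would be separate Grape API classes; the paths and class names here are illustrative only:

```ruby
# Each package format gets its own endpoint class instead of one class
# handling every virtual registry endpoint.
class MavenVirtualRegistryAPI
  def pull(path)
    "maven package at #{path}"
  end
end

class NpmVirtualRegistryAPI
  def pull(path)
    "npm package at #{path}"
  end
end

ENDPOINTS = {
  "maven" => MavenVirtualRegistryAPI.new,
  "npm"   => NpmVirtualRegistryAPI.new,
}.freeze

# The package type segment of the URL picks the class,
# e.g. "/virtual_registries/maven/com/example/app" -> the Maven class.
def route(url)
  _, _, type, rest = url.split("/", 4)
  endpoint = ENDPOINTS.fetch(type) { raise "unsupported package type: #{type}" }
  endpoint.pull(rest)
end
```

Keeping the type in the path is what lets each format implement its own URL set (NuGet's service index, Maven's repository layout, and so on) while sharing the models and services underneath.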
A: It would be good to do, and I won't hold us to this date at all, but let's say an MVC for Maven, where you can set up a few, say three, remote registries and some hosted registries in a virtual registry, and it works.
F: Yeah, I was thinking about whether there are some things that could be, well, hidden surprises, but I don't think there is a lot of complexity. It's more the amount of work, well, the amount of code, we have to implement, like the GraphQL MRs.
F
They
are
not
hard
or
complex,
they
are
quite
simple
to
implement,
but
you
need
a
lot
of
code
to
implement
a
graphql
api
and-
and
probably
here
I
just
put
one
mr
per
per
model,
but
on
the
quad
operation.
You
would
split
each
operation
in
one,
mr,
but
other
than
that.
B: I figure the eventual addition beyond this will be to add a cache for the virtual repositories, pretty much the dependency proxy. I was just kind of thinking, and I'm wondering if we need to keep that in mind with how we structure anything.

B: If there is a user that says, "I don't care about virtual repositories and all that, but I do want, you know, npm forwarding with caching," would that have to be separate, or is that something we can combine in a way that is still friendly for users? I'm just kind of thinking out loud.
F: First, I will need to check the cache to see if there is something, and if there is nothing, I will need to fetch it from the external registry and then cache the response. This is always the same thing, and that's why I'm mentioning the dependency proxy for the container registry, because it's really the same thing. Like, "go fetch this blob," and the logic is: okay, let's check the cache. Is the blob present? No? Okay, I will just fetch it from the container registry.

F: "Oh, I know I have this blob. I will put it in the cache and forward the response." So, yeah, I think we can do something to have the caching feature centralized in common logic, and we can then add this caching aspect to the simple request-forwarding feature that we have, which is the application setting.
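The read-through logic just walked through (check the cache, fetch from the upstream on a miss, store, then forward) is the shared shape behind both the container dependency proxy and a cached package forwarding. A minimal sketch, with the upstream stubbed by a lambda where the real code would make an HTTP call:

```ruby
# Read-through cache: the common logic F describes for both the
# container dependency proxy and cached package request forwarding.
class ReadThroughCache
  def initialize(upstream)
    @upstream = upstream # callable standing in for the external registry
    @store = {}
  end

  def fetch(key)
    # 1. Check the cache first.
    return @store[key] if @store.key?(key)

    # 2. Miss: fetch from the external registry.
    value = @upstream.call(key)

    # 3. Cache the response, then forward it to the client.
    @store[key] = value unless value.nil?
    value
  end
end

calls = 0
upstream = lambda { |key| calls += 1; "blob-for-#{key}" }
cache = ReadThroughCache.new(upstream)
first  = cache.fetch("lodash") # miss: goes to the upstream
second = cache.fetch("lodash") # hit: served from the cache
```

Because only the upstream callable differs per format, the caching itself can live in one place, which is the centralization being proposed.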
F: I had this very same question. I think I asked Tim, in an issue, which path we take. Do we go full bananas and implement the virtual registry thing, or do we take a look at the request-forwarding feature and just support more package managers there, add more support to that feature, and, on top of that, have caching? That's also a...
A: I think my issue with that was: the reason we did it for npm is that, for GitLab, you have to scope your packages specifically for the npm registry, so it made it difficult for people to add npm.com as a remote. Whereas with Maven, and with the other remotes, you can pretty easily specify multiple remotes; it's not really as much of a challenge.

A: That's why I didn't schedule the Maven issue, and I had a couple of users that mentioned the same thing about PyPI as well. So that's why I was thinking we go full bananas, though methodically. But I'm open to feedback and change, of course.
F: But if you are at the package level, at the project level, you already know where you are, so the back end can just check the packages. So I guess it would be possible to drop the scope restriction for the project-level API.
A: We went over; thank you, everyone, for staying patient. Good thing Dan wasn't here; he would have appreciated that. Okay, so next steps: I could create a bunch of issues based off of the investigation thread that we have open. I'll basically just go through the MR plan and do my best to fill in the details, and then I'll share those issues in the channel once they're created.
B: Just one thought I had: it might actually be good to also do the whole Rails-console, back-end-only implementation with some docs at first, because then you could even put in the docs all of the screenshots of what the front end will look like, as kind of a "coming soon," and then we can start to get feedback of, "you know, I'm using this, and this front end is going to be missing this thing," or something to that extent, before we even build it.
A
All
right,
thank
you,
everyone.
This
is
great.
It's
really
helpful.
Another
action
item
that
I
have
is
just
to
create
an
issue
for
understanding
how
prevalent
duplicates
are
in
the
type
of
registry,
and
I
think
that's
it
thanks.
Thanks
again
and
we'll
talk
to
you
all
soon,
bye.