From YouTube: Think BIG session 10-23-19
Description
In which Iain presents findings from our recent round of user research and we all ask questions.
B
I wanted to kick off today to share some results from the user research. As many of you know, I did interviews last week and the week before that with some of our actual users, to learn about how they think about package data and Docker data, how they organize it, and just kind of how they think about the registries. In general, it was really successful.
B
Oh my goodness, okay: the who of who I talked to. There were seven users. We tried to schedule eight, with the realistic goal of five, so getting to talk to seven people is exciting and ahead of where we wanted. They were all over the world. The highlight reel of the people we talked to: they were from Sweden, the United Kingdom, the USA, South Africa, and the one I was most excited about was from Sri Lanka.
B
When we first started that interview he started outside, and I was excited because I thought this was going to be like a tropical virtual vacation. Well, I got to talk to him, but then he went in his office, so it was kind of sad. But for that brief moment it was quite tropical. We had a solid mix of engineers and DevOps.
B
There was a bit of a spectrum, where there were senior engineers who often acted like DevOps or had a role in DevOps, and so that was a bit of a hazy line, but they kind of identified as one of those two. One person said they were a system admin, but that really meant they were a DevOps engineer; it was just a fancier title. So that's one thing we may want to look at. They all used GitLab in some form; not everybody used the container registry offering or the package offering.
B
So we got perspectives on our product as well as JFrog and just the standard NPM perspective, which was great. Something about that: their perspectives were pretty universal, which is interesting, in terms of how they use it and what they think about data. So there isn't currently a strong correlation between whether or not they use GitLab for their solution and how they think about their data, which gives a nice universality to it.
B
It means that when we present something, it's going to feel familiar, because it's kind of what they're used to outside of it. So it's a good feeling. When we did the survey there was a bit of a hiccup because of the way I worded the question, where I couldn't tell if users were part of a five-person team that belonged to a larger organization, or if they were identifying that they only worked for a small group. We got some clarity from the seven people that we talked to.
B
Almost all of them gave an "I'm part of a small team" answer, so 7 to 15 was around the number, and that team belongs to a much larger 100-to-200-plus workforce; that was kind of the common answer. So it's nice that we have that qualitative response to the quantitative question we ended up with, which is what we were expecting. So that's a feel-good. The most common architecture we talked about was what I'm calling the classic, which is: we have kind of older code.
B
That's in this big monorepo, and we are attempting to eventually transfer over to microservices. All of our new stuff is microservices, and eventually we're going to pick apart this monorepo. That was the most common story, so that may be something where, down the road, we see if there's a way to facilitate that transfer from mono to microservices. Yeah. Sorry, yes, good.
B
That's a good callout: I have to do the formal synthesis. So when you see insights, it'll actually say "5 of 7 users said". When I say "most", it's in that four-or-five-people range, so at least a majority, but I'll get an actual number to you by the end of the week for sure. Cool. Thank you. Yeah, you bet.
B
Where was I? That threw me off. Languages were a pretty good variety. JavaScript was the most common, so NPM packages were the most common, just because that's the only language to use in a client; I don't think anyone's surprised that that's the most popular one. And almost everyone used Docker. A couple of users didn't know Docker themselves well enough to have the conversation, but they knew their company used it.
B
So the fact that we focus in on that container registry kind of continues to validate that that's a good place for us to focus, which always feels good. And then almost everyone... one person had one manual push, but they had a unique situation, so I wouldn't really count them. Everyone else used some form of the GitLab CI/CD offering. There was a common trope between probably three of them.
B
That said, we're working on automating even more, so it's kind of "we're starting with CI to eventually move to a bigger CD idea", and it's kind of interesting to see that a lot of the people we're working with are mid-transition and a lot of their focus is in that effort. So those are areas we can explore: how we can facilitate them moving faster. Awesome. Diving into Docker, the image creation and update cadence: a lot of images are being created weekly if not daily; they're getting updates all the time.
B
Most organizations seem to fall into either a one-month or an every-other-month cadence for creating those new images. And as I'm going through this, if you have questions, feel free to jump in and ask them; I just want to call that out, because I'm talking a lot. They tend to actively manage between 10 and 40 images. Nobody broke the 50 mark.
B
This is where we were kind of struggling with the terminology. They were saying "images", as in one image may have many versions being updated, but they're only managing between 10 and 40, and that was directly correlated to the size of their team. So the more projects involved and the bigger their team, the more images they were managing. I would love to explore later if there's a direct number we can cite, so for a team of a hundred people it's, I don't know, 0.2 images per engineer. It would be great to come up with that number, so that we can kind of directly tell how many images versus how big the workforce.
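The images-per-engineer ratio floated above is just an average over (team size, managed images) pairs. A minimal sketch, where the sample pairs are purely illustrative placeholders and not numbers from the study:

```python
# Hypothetical (team size, actively managed images) pairs, invented for
# illustration only; the study did not produce these exact figures.
samples = [(10, 12), (40, 25), (100, 20), (200, 40)]

def images_per_engineer(samples):
    """Average ratio of actively managed images to engineers on the team."""
    ratios = [images / people for people, images in samples]
    return sum(ratios) / len(ratios)

print(round(images_per_engineer(samples), 2))  # 0.56 for the placeholders above
```

With real interview data plugged in, a stable ratio here would let us estimate registry size directly from workforce size.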
So that's something I want to explore later. When we talked to them about the UI, I asked them: walk me through the last time you went to the UI. What brought you there? What were you hoping to accomplish when you were there, and what was the outcome?
B
Both the package offering and the container registry were a "set it and forget it" feature: users are happiest if they don't have to deal with us, because everything is working like it's supposed to. Everyone had that similar kind of vibe. They only need to investigate when something is wrong or when something doesn't work, and so a lot of their attention is trying to make sure they capture when things aren't working. So that could be an area we could explore: error handling, or maybe checking vulnerabilities.
B
So there's an area that we can explore later. Again, when users were asked to define metadata, they stumbled, which is something that we've kind of explored before, between two different things. They were strong, but started struggling with the difference between an image and a tag, because a tag is theoretically one complete image, but an image has many tags, and nobody quite had the same answer there. This is an area where we should probably define a strong opinion to help our users navigate this successfully in our environment.
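To make the image-versus-tag distinction concrete: one repository (often loosely called "the image") can hold many tags, each naming one buildable image. A rough illustrative parser, assuming a simplified `[registry/]repository[:tag]` reference format (it ignores digests; the example reference is made up):

```python
def parse_image_reference(ref):
    """Split a container image reference into registry, repository, and tag.

    One repository can hold many tags (e.g. app:latest and app:v1.2),
    which is exactly where interviewees' terminology diverged.
    """
    registry = None
    rest = ref
    # Treat the first path segment as a registry host only if it looks
    # like one (contains a dot or a port colon), mirroring Docker's heuristic.
    first, _, remainder = ref.partition("/")
    if remainder and ("." in first or ":" in first):
        registry, rest = first, remainder
    # Split off the tag; default to "latest" like the Docker CLI does.
    repo, _, tag = rest.partition(":")
    return {"registry": registry, "repository": repo, "tag": tag or "latest"}
```

For example, `parse_image_reference("registry.gitlab.com/acme/app:v1.2")` separates the registry host, the `acme/app` repository, and the `v1.2` tag.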
B
Nicko opened an issue to kind of just explore this as a team; I've added some of the notes that I've had from the experience. It would be great if everyone could explore that, but it's good to know that those are the areas where our users are stumbling. The rest of the terminology was very straightforward. They were very quick to say "this is this": you know, this is the Dockerfile, that's where the instructions are; this is the last commit date.
B
You know, this is the date of the commit related to the most recent change to the image. One thing that was really nice: as we are exploring the vision of package and what we should do, one of our thoughts is having a much deeper connection to the code itself, so the repository in the project and the pipeline data of how it was built. Nobody stumbled on the idea that metadata associated with a project was related to the image.
B
I did a blind test: as we walked through, they got a list of random metadata points, and I asked them to define what each piece of metadata meant to them. And this is where we got the image versus the tag, where people kind of had different answers or had to double back. Same with the registry: most people actually thought the registry meant the project, the code, which is an interesting insight, that that's the way they went. And then a couple of users called out specifically: "this is something I am unsure about".
B
"I've read the documentation, I've read Docker's, and I'm still not sure what each one means, especially when I'm navigating the API documentation." When it says they're looking at their Docker repository, they kind of get confused. So that's something we should investigate. I think Axel actually already has an issue to explore that documentation.
B
So it's good that we're getting validation that that's something we need to work on. Sorry, I'm reading my notes; I just did this and I should have it all memorized. When users were asked to categorize their data: the way that we did the activity is we had that list.
B
You could drag items over and form little categories, and while they were doing that, I asked them to organize all the data however they wanted. Almost always we got the important package data (sorry, image data), which I would define as "this is the information I need to use the image": so the image name, or the tag ID, or that kind of stuff. And then there was the code area, where it's like: this is where the image's Dockerfile is, and this is the Git side, the commits it was built from.
B
The next area was the pipeline data: so, "this is how my image got built". This area actually fluctuated in importance. If you were a DevOps manager, you were more likely to want to explore the pipeline information; that would be your go-to, and there's a troubleshooting thing there. Engineers gravitated towards the project; they wanted to see the code and the commits more. So that's an interesting distinction. And then there was just kind of another junk area.
B
When we looked at it, we asked them, just overall: what are the most important pieces of data for you? It was the installation instructions, so the information I need to actually pull down this image and use it in my code base; that was number one, which makes sense. And then the next was the commit SHA, and when I asked them to kind of explain why the commit SHA was so valuable, the commit SHA turned out to be like the string that ties everything together.
B
It was the one unique identifier that carried from the final image that we're looking at, to the pipeline, to, you know, the actual commit, which is tied to a branch, which is tied to code, which is tied to a change. So that was the one piece of data that they're using to kind of string everything together, so this is useful for them.
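That "one identifier strings everything together" workflow is commonly set up by tagging the image with the commit SHA in CI. A minimal illustrative GitLab CI job sketching the idea (the job name and image versions are made up; `CI_COMMIT_SHA`, `CI_REGISTRY_IMAGE`, `CI_REGISTRY_USER`, `CI_REGISTRY_PASSWORD`, and `CI_REGISTRY` are GitLab's predefined variables):

```yaml
# Sketch: tag and push the image by commit SHA, so the registry entry,
# the pipeline, and the commit all share one identifier.
build-image:
  image: docker:24
  services:
    - docker:24-dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
```

With this in place, the SHA shown on an image in the registry can be pasted straight into the project's commit and pipeline views.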
B
For most people. One person called out that a lot of people do it, they just don't need to. That one person has far fewer images, so he actually called out on his own that if he worked for a larger organization this might be a thing, but because it's just him (he was a freelancer and a consultant) he already knew, so it didn't make as much sense, and he had his own system. But he did call out, "if I was at a larger organization...".
C
While you were doing this, did you investigate how people were actually tracking it, whether it was just noting down the commit SHA and then following it manually? That was just some of the methodology. You don't have to answer the question; I'm just wondering if we asked that question of all the respondents.
B
For a few of them that called out the commit SHA as especially important, I asked them to describe why, and the story that I told you is kind of what they gave me. I did not actually break down and investigate how they're using the commit SHA. I think it would be very interesting to figure that out, because that could be something we surface in the UI.
D
B
That is an idea we've explored previously. During the survey I asked, just as a research effort, whether that data was relevant, and it tended to rank higher in the stack-ranking activity. So we've definitely had the conversation; now we're getting the qualitative story behind why pipeline data is useful, or why they want to connect back to the project. That makes sense.
B
The reason being is that some people started the story about editing the Dockerfile, which would mean sending them to the codebase makes more sense than using the Web IDE in that kind of area, and others were referencing the Dockerfile to see what's going on and then would jump into pipelines. So I don't have a strong feeling about whether linking to the actual code blob in projects, or just showing and displaying the information, is more valuable.
B
This could be something where we display an expandable item, so you can click on it and it expands to show the Dockerfile, but it takes that manual step so we're not cluttering the space; that might be a good option. And then maybe you can directly click through to "edit this page" and do that Web IDE flow that kind of moves you into the merge request and passes you along; that might be another option.
B
For a Dockerfile, a readme was not necessarily as interesting. When I asked them, you know, "what kind of data are you thinking about, are there any other things that could be useful?", Docker seems to be one of those that's self-described because of the Dockerfile; that technical aspect is there, and the way you use Dockerfiles is pretty consistent, so they weren't really looking for that. In the package realm, however, there was a much stronger want for that. A few people called it out.
B
They liked the fact that NPM had that readme file, and even if it just took the readme from the project and displayed it on the package, that would be fine enough. A lot of people were saying that's what they do for NPM: they just kind of paste their readme into the project description, because it tends to have all the same information. So there should be a connection there.
B
Now I actually have to think on my feet, and I wasn't prepared for that. Could you ask your question in a different way?
B
Sure. So some organizations are set up in a way that, depending on the environment you're going to be working in, depends the image you should be using, and so they want a way to describe that. If you're expecting to run Alpine, and it has this thing in this certain setup, and you're working on a Java application, so you need this nice Docker image that's Alpine and Java, then use this image; that's the primary one for that. Versus...
B
...if it is for a different language or a different environment, they want a way to identify that. Currently there's really just "this is the primary image and this is not", and after that it becomes naming conventions, and so they're trying to figure out and solve for ways to do that, because there isn't a better option right now.
B
So if we explored a way, it could be as easy in the UI as saying every package that's the latest package built from master should be called the primary, and then they can set new rules, or they could manually say "this is now the primary one". So that they want to explore that is more interesting than necessarily how to use the specific image.
D
So that kind of makes me think, because I feel like that can all sort of be tied into tagging and labeling, but it sounds more like they're looking for a way, when there's these pipelines just spitting out these images, to have that sort of automated in a better and more digestible way. Am I hearing that right? Definitely?
B
Cool, for sure. And a lot of users tied this back to retention policies. So, instead of trying to identify what was primary, they were taking the opposite approach of removing everything that isn't. So that could be a more elegant solution, where we just get rid of most of your tags and declutter it, so it's easier for your organization to use simple structures to determine it.
B
We can probably say yes to both of those things. They would like the ability to identify that "this is our base Alpine image: it has passed security, it has gone through all the things, every image can use this base, everyone in the organization uses base". They also want the ability to say "this is the final Dockerfile, because it is from master and it has passed all of our tests and it went through QA; this is the final Dockerfile we should use". So they're looking for both.
E
B
Perfect, I will dive into package then. Package data, I will admit, is shockingly similar to the image data, with just a few nuanced differences, which is really great because that means we don't have to hold too many models in our heads. But it does mean this bit's a bit boring, so I'll try to speed through it.
B
The cadence for updating and creating new packages was just pushed forward one level, if you would. So instead of being created daily, packages tend to be created either weekly, or up to monthly if you have the larger release cycle; it tended to be more tied to a release cycle, versus the images, which seemed to be every commit.
B
The number of packages being actively managed fluctuated a lot. Some organizations had two or three custom private packages that solved a unique case for them, and they used them in a couple of different places in their code base, which tends to be the open-source NPM model, right: I created a small thing that solved a problem, I created a package, and I used it in my product. And then on the opposite end...
B
So oftentimes individual teams only had to manage two or three product packages at most, so a single user really only has to pay attention to a couple, and that's an important thing to know, especially as we consider a group view, because that's going to be our DevOps people trying to manage all of the packages being created by each team. So keeping that kind of mental model in mind may be helpful for how we organize and show the group package level.
B
Some teams were organized in a way that their packages applied universally across many things. So if they wanted to surface certain pieces of data that multiple products would be using, then they would have maybe a package that manages that communication, and then the teams themselves would just manage their codebase that uses the shared package.
B
That seemed to be a struggle: connecting the dots, if you would, to know that other teams are going to use a certain piece of the codebase. And so that was a part of his job, saying "this should become a package, this big effort should become a package, because multiple people are gonna use it", which was a unique case and flow from what I had really considered. And then there are other teams that are just producing it and they're not really touching anybody else; it's just their final product.
A
Makes me think of Nick's idea of creating packages as projects as being like a good workflow that we could support. So, like, when that person says "oh, this should be a package", they can create a project, and then people can track and contribute and things like that. So I'm more and more in favor of that idea as time goes on.
B
It is really cool. One person brought up the idea of a way to pull out code into a project to become a package. So if you can create it like a... I'm diving too deep into dev terms that I don't know very well, so please forgive me if I use the wrong word, but there's a directory and it has the different attributes of a function, right, that has, you know, the MVC model.
B
Then they want the ability to pull that directory out and turn it into a project that becomes a package, and that was again to address that unique flow of "this looks like a universally needed thing; you should turn it into a package". So I'm curious if there is a convert-folder-to-package feature in the future. It could be just this one use case, though, and so there's the amount of effort it would take to actually do that.
D
B
It's not very helpful, and they called out some pretty basic things that could make it more helpful. Nick's MR adding sorting to the package list made almost every person that brought it up really happy, because that could have saved them so much time: if they could sort by the created date, they could see when the last one was and just jump in. So once they see the data, it's helpful; getting to it is kind of a difficult path right now.
B
Here, even more so than Docker I would say, there was no hesitation with connecting a package to its project, which is really good news: that connection just makes logical sense. Again, we presented them with data, and it included projects and pipelines data, and when I asked them to describe it, it was very clear: what is the pipeline name? It is the pipeline that built this version of the image. So very straightforward, easy to jump in, and it helps us know that that's not a cognitive leap we have to overcome.
B
Some users thought that was the most important thing, and others thought it was kind of a useless piece of information, because you shouldn't install without it; you should install using NPM or package.json or yarn. When I asked kind of a follow-up, why this file was so important, a lot of the answers were: "well, when I downloaded it and opened it, I see exactly what the package is going to be, and that's how I know that code is safe, or is at least what I know I'm dealing with". So I'm curious, for that person, if connecting them directly to the code would have been more successful than having them download it, and there was this uneasiness that the code in the tarball (tgz) was different from the code base he had. So that is an interesting idea.
B
So, yes, all good, everything is lovely. For most important data they almost always responded with the install instructions, and then the idea of a changelog also kind of got floated around a few times. So that could be something that we surface, like "this is what's happened with our package", which could make the troubleshooting process a bit easier. After the card-sorting activities, we wrapped up with some blue-sky conversations. These are: if you had a magic wand and your package manager could suddenly have a feature...
B
...what would it be? And on the opposite: if you had a magic wand and it could change anything about your package manager, what would it be? And then the last one is: if you could have anything in the world that would help you manage your packages, what would it be? Something that was really interesting just kind of popped into my head...
B
...people kept coming back to how cool it would be to surface the project data and different ways that they could use it, and how that just felt natural. And that was something they struggled with when they used JFrog or when they used NPM: there is no connection. So it's fun that, when they had the blue-sky open field, they wanted to come back and talk about the relationship between code and package. That made me feel very validated in that aspect.
B
One person brought up that they tend to use the CI pipeline views as their dashboard of the health of their overall ecosystem, so surfacing package errors or Docker image errors in that screen could be really beneficial. I think it catches the build errors, but it could be other ones. One example that I thought was really cool was: if a package or image gets added to the registry that is zero kilobytes, raise an error, because that is likely a mistake.
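The zero-kilobyte idea could be sketched as a simple pre-publish check. This is a hypothetical helper for illustration, not an existing GitLab feature; the function name and message are made up:

```python
from pathlib import Path

def check_artifact(path):
    """Return a list of problems found with a built package/image artifact.

    Hypothetical pre-publish check: a zero-byte artifact almost certainly
    means a broken build or upload, so flag it before (or as) it lands
    in the registry, rather than when a consumer's pipeline breaks later.
    """
    problems = []
    if Path(path).stat().st_size == 0:
        problems.append(f"{path}: artifact is 0 bytes, likely a broken upload")
    return problems
```

The same shape of check could run in the registry itself and feed the pipeline-dashboard view mentioned above.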
B
One thing that kept coming up was: how is this image performing, and what is its usage? So they wanted to know: how long did it take for an image to fire up into a container? How many times is a Docker image being pulled? How many times has a package been pulled? How many things is my package dependent on, and a dependent of? (I'm not good with that one.) So a lot of the "after the package is built, what's happening with it?".
A
B
Sure, there was definitely some just kind of vague "it would help me troubleshoot". So if it's taking ten minutes to fire up a container, there's probably something we should look at; they shouldn't be that slow. So that was one aspect. One that I thought was really clever was: "I want usage data on everything, but specifically, when I have something that is outdated, or is vulnerable, or is not safe to use anymore, I want to know if it's still getting used". And, you know, with the DevOps hat on my head, I'm just like...
C
B
Yes, I think this was more in the general evolution idea. So we don't want to break suddenly, but I do want to see that everyone is moving over. So I think it's less maybe the extreme version that I presented, and more of the "I'd like to know how successful I am in getting everyone to move over to this new version". Maybe that's a better way to explain it.
C
And that's a good point. What I'm hearing is there's a security concern to these particular things: are people still using it, can I track it? And yeah, that's an extreme example, but then I go to some more of a policy mindset of, like, you know, a checkbox: if anything is flagged as not secure anymore, you can't pull it down; it's basically disabled.
B
So it was a really exciting idea for them if they could look at the homepage of a package and just see that it passes basic security standards, or the opposite: there's a red flag that this may be vulnerable, or may have multiple vulnerabilities tied to it. There was the idea of connecting them to the issues active in the project associated with the package, specifically the bug queue. They wanted to know, a, how many bugs were there, and, b...
A
And what do you think (and maybe this is a question for the whole team): what are the implications of that idea, knowing that our users have this mindset of "it should just work", and "the only time I'm going there is if something's wrong"? I was just wondering what people thought the implications of that were.
C
Yeah, I think my sense of that is: if we had tools that proactively let people know there's a problem, they wouldn't be hunting around in the actual application to figure out what was going on. You know, so whether that is some security concern, or whether there's some other consideration going on, if the application did that for them and notified them, they wouldn't be hunting around all the time. And so if there was a clear list of packages, maybe when something came off they were notified; you know, if something was dead.
C
We have this; it's that conversation I think we've had a few times before, which is: the product. Yes, they use the package, but the package is of the repo, and so if I'm happy working in my repo, and it's passing my pipeline and everything's going well, then the package is just like, whatever, it's fine, I don't need to care about it. That makes total sense to me, because we engineers write code; we don't write packages.
B
So that's one aspect and avenue to pay attention to. The other one would be: how do we gently surface problems in a way that they're going to digest it? Just like Dan was saying: how do we get them ahead of the curve, instead of "my whole pipeline broke a couple of days later because this package was zero kilobytes"? How do we surface that this probably is not a good package? Those are kind of my two perspectives on that "set it and forget it" idea.
C
Policy can support that, right? Like, if we had the concept of policy, you could say anything that fails security, anything that has a build problem in the pipeline, anything that's zero kilobytes. And we don't have to be prescriptive about what those policies are, but some of those are inconsistent across users, and therefore we could stop there and have someone sign off before we keep going with it.
B
Sure, I think that is another problem area to address: "this is the good package, use that; this is the good image, use that; and this is not". Making it very easy and clear for users to be able to automatically say what is good and what is not, as well as options to manually do it, because there are some users that have the perspective that "if I checked it manually, my human brain now knows it is safe".
B
Other terminology... oh, the terminology. I will say Docker labels people got confused about, just because they don't know that Docker has labels. So they initially would say "oh, it's the tag", and then they would see the tag and be like "oh, scratch that, I have no idea what that is". So some users are asking for it, but we should kind of make sure we explain what it is for those who aren't as familiar with Docker.
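For anyone unfamiliar with the distinction that tripped users up: a Docker label is key/value metadata baked into the image itself at build time via the Dockerfile `LABEL` instruction, while a tag is just a name for a pushed image in a registry. A small illustrative Dockerfile (the label keys here are made up):

```dockerfile
FROM alpine:3.19
# LABEL embeds metadata in the image; it is NOT the ":v1.2"-style tag
# that names the image in a registry.
LABEL com.example.team="platform" \
      com.example.approved-base="true"
```

Labels can be read back later with `docker inspect`, which is what makes them useful for the kind of "this is our approved base" marking discussed earlier.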
A
If that doesn't work, or it's, you know, leaving me super frustrated... So I think it's the same thing: if we're taking the responsibility of creating this, it should just work, and when you come here, we're gonna surface the problems. If we're not surfacing the correct problems, or not pointing them to the correct package or image issues, it's gonna lead to a lot of frustration. So I think we need to... one of my old CEOs, I remember him telling me, "don't be objectionably wrong".
C
But I think I agree with that. I don't want to be more objectionable than I normally am, but, you know, I would say GitLab has a tendency to go in the other direction, which is to make small changes and, if they're wrong, just fix them quickly, right? But having a "set it and forget it" process that wasn't necessarily moving them out of the tools they're in, where they only have to establish whether there's an issue and announce it: that doesn't change that workflow necessarily, it just adds another layer...
C
...on top that's a little bit more of a DevOps/pipeline-based view, one that just takes away some of the pain points from creating these things: whether that's auto-labeling, or there's a manual labeling process, or whether they can change the label after the fact. There's a bunch of things we can do there that I think still leaves the bar low, so they can pick a manual or an automated system. We're at time; I'm just going to call that out, two minutes over.
A
Hey, yeah, you bet. I have one quick question, sorry: I think you mentioned that you're gonna take some of the data and actually synthesize it a bit more, and say, like, "five out of seven users say this", and we'll have those insights. Did you mention that was going to be done at the end of this week?