From YouTube: 📦Package Managers SIG Weekly Sync May 14, 2019
D
So this week I set up labels for the GitHub project. It is an absurd number of labels; if you look at the link, there's a ridiculous number of them. Part of that was actually a sort of proof of concept for how we might want to scale that labeling methodology out to things outside of the package manager world.
D
I think Andrew's got his headset back now, so... yeah. I also took a pass at assigning labels to all of our open issues. Part of that was pretty straightforward; part of it was just me making stuff up. So again, if you happen to be sitting in one of those issues, if you don't mind taking a look at the labels just to see if they make sense, and like I said, any feedback is more than welcome.
D
Yes and no; the good and the bad are sort of switched around. So if you look at the bottom of that, there's also an alternative box for another way of expressing the same thing. I had an open question out to Eric as to whether that made sense to him, but that's sort of my intent. This is kind of one of those things that, you know...
D
Obviously it's helpful to us to get our thoughts together, but as part of tying this into Andrew's effort to put things in a docs directory, the more useful to the outside world we can make any of those artifacts, the better off we're going to be as a whole in socializing the package managers effort. So watch that space; I'm going to have that as a background task: reworking as many of the things we've got in our issues list as we feel could be helpful.
D
Documentation
wise
for
the
greater
community
as
a
whole
ditto
also
went
through.
You
know
all
those
outcomes
workshops
that
we
did
like
in
February
and
March
before
I
worked
here
and
I
got
to
sit
in
on
some
of
those
I
took
a
stab
at
consolidating
that
information
into
from
everybody's
conversations
into
one
document
that
really
separates
everything
out
into
outcomes,
requirements
to
get
to
those
outcomes
and
then
ideals,
ideals,
meaning
like
the
entire
world,
uses
ipfs
for
all
package
managers
by
default
yay.
D
So
that's
really
a
first
batch.
That's
probably
gonna
inform
some
additional
work
just
in
terms
of
doing
other
sort
of
user
journey
and
segmentations
heck
stuff
in
the
future.
It's
maybe
more
of
a
working
document
from
me
and
people
I
might
ask
questions
to
them
for
anything
else.
Anyone
else,
but
it
also
might
be
interesting.
Just
for
you
to
drop
in
on
that
and
see
how
all
of
those
things
are
consolidated
from
all
of
our
discussions.
Yes,
Andrew.
C
It feels like it'll be interesting to connect those different blocks you have to, like, which docs we have in the repo and which ones we don't, and to give a to-do list of things. Like: there are no details here for publishers, there's only stuff for... like, there's no documentation for that bit. Yeah.
D
Exactly, exactly. I think especially the requirements section of that mural is going to give us that. Obviously that's going to feed technical requirements, and I think by and large we're aware of many of those technical requirements from having talked about them amongst ourselves, but there's also another spin to those requirements, like you're saying: generating requirements for knowledge sharing, or outreach, or just general level-setting among the community as a whole.
D
So
yeah
very
good
point:
it's
pretty
messy
right
now,
there's
some
duplication
in
the
back
says
again:
it's
you
know
it's
a
working
document
for
me,
but
it,
but
it
is
calling
it
might
be.
A
good
thing
just
to
you
know,
make
yourself
cup
of
coffee
and
glance
at
it
and
see
if
it
brings
up
any
thoughts.
D
Additionally,
I
also
started
using
some
of
those
to
generate
persona
documents
for
our
key
audience
segments
that
opened
up
sort
of
a
rabbit's
hole
of
you
from
from
UX
perspective.
There's
a
couple
of
different
ways
that
you
can
like
do
personas
and
what
goes
in
the
sort
of
information
boxes
for
each
persona
is
gonna,
be
very
different.
Based
on
the
type
of
thing
that
you're
building
and
I
realized
you're
sort
of
standard
persona
methodology,
wasn't
gonna
be
real
helpful
for
us.
D
So
that's
how
my
list
for
next
week,
but
that's
probably
going
to
detour
back
into
UX
strategy
toolbox
land.
Well,
I
am
figure
out
some
some
best
methods
for
doing
for
sodas
for
product
in
this
thing.
Chair
next
is
kind
of
more
the
same
and
blockers
is
just
like
I'm
I'm
in
like
moving
hell
right
now.
So
I
am
my.
D
Blocker
at
the
moment,
so
if
I'm
sporadic
I
really
apologize
in
advance,
hopefully
this
will
be
all
wrapped
up
by
like
at
least
it.
The
boxes
will
be
in
the
new
place
and
kind
of
unpacked
by
like
the
end
of
next
week.
So
then
everything
will
be
back
to
normal,
but
if
you
think
it's
really
bad,
you
know
just
like
send
me
DM
son
slack,
because
it
pings
like
my
arm.
So
if
you
need
me
in
a
hurry
funny
that
way
so
he's
exactly
what
all
he
did.
C
Okay, headphones plugged in, I can hear again. What did I do? So I wrote up a little review of the GitHub Package Registry that was announced on Friday, without having beta access, but as far as I can tell I'm not missing anything. I highlighted some of the areas where they could do better, or where they've only done as well as existing package managers, in the sense that things can be deleted.
C
There's no connection between the tag of the source code that's released and the thing that's published as a package, as much as some people on Twitter seem to think that was the case. You can publish anything; it's literally just pushing a tarball connected to the release. There's no guarantee of anything. I should ask for beta access; I did not ask.
C
Similarly with Maven. NPM is not affected by that, but going forward other integrations will probably run into that same problem. So I posted that, and it got a really good response on Twitter, and then GitHub reached out. I just got off a call with them to talk about some of those bits more, and their line is that they don't want to replace package manager registries. It's primarily focused on private things, like GitHub Enterprise setups, but they want to work towards improving package managers, or to act as a forcing function for package managers that have got very stale. NPM recently, for instance: security has just been disappointing to GitHub, and people blame GitHub for NPM problems, so they're trying to take ownership of that a little bit more and thinking about ways they could have more reproducibility.
C
Maybe a GitHub Action is the only thing that would be allowed to publish a package, or if a GitHub Action published a package without any human interaction, it would get something like a verified badge, or some provenance trail of "here is the log that shows you what happened to generate that package." There's still potential for some nasty stuff, because you could have your GitHub Action go and fetch a URL from an arbitrary place and insert that inside the package.
C
"You can no longer download this because of a security vulnerability," whether or not that applies to your particular usage. From an IPFS point of view, that's an interesting, kind of opposite stance to: if you've already downloaded the package, then you should keep the package locally and it should continue to work, as on IPFS. They also want to put on a package manager conference, so I've kind of...
C
Ooh... only as a co-organizer, I guess. No idea of the date; it'd be in San Francisco, most likely in their office, for about a hundred people. And we're possibly doing a podcast episode with them. They also seem quite open to potential IPFS integration around the package registry, so that might be a case of giving security in the immutability aspects, as well as adding more layers of... what's the word? I don't know what the word is, but they seem quite interested in it. I basically said I would put together some kind of a doc for them as a primer for the different ways that IPFS could be used, or the different areas where we could essentially try and align.
C
That would be really interesting. I mean, we can start with just the releases, but potentially having any content from GitHub be content-addressable, with GitHub running a full suite of IPFS nodes, would be super. So that's interesting, cool, and I need to follow up. There's also some stuff that I can't really talk about on the public call that I can save for later.
C
What else did I do? So I started reorganizing the package managers repo content a little bit. There's a lot of stuff in issues that's good for discussion, but once the discussion has died down, it gets lost in a never-ending sea of new issues scrolling past. So I've been plucking the interesting content out into docs as a pull request.
C
That's open, and it's kind of moving towards being almost like a book, or a guide, with references and reading materials for different areas. I have not had the headspace to try and imagine what that looks like in total, but maybe I can make some time with Jessica to organize thoughts around that and almost end up with a skeleton, rather than trying to create it from what we already have, to make it a little bit more intentional.
C
And then the other area I've been working on follows on from the previous decentralized package manager research we did, which is coming towards some thinking around how to make, or one approach to making, decentralized package management more resilient. This would even work for centralized package management as well. The idea is that for every registry you depend upon for fetching information, if any one of those has availability issues, whether something is deleted, the registry is temporarily down or broken, or you're offline, it will actually cause problems: if any one of them doesn't work, your whole dependency resolution strategy fails, because you need a piece of data from each one of those registries. And with the advent of this GitHub Package Registry, every individual organization is its own registry. There's not one giant registry; there are many small ones, each with their own little bits of dependency metadata. So I've been calling it resilient resolution.
C
It's essentially saying: rather than having three entities involved in a dependency change... let's say you've got your application that has some dependencies, and you've got the new dependency that you'd like to add, which could be an updated version or even a removed version, but let's say it's just a new one that you want to add. You currently need the registry as a third piece, at least one registry, and potentially many more in the case of go-ipfs.
C
You need 125 of them by my count, one for every different git repository it references, and you can find that from the go.mod file in the repository. If any one of those goes away, obviously you've broken the ability to resolve. And it means that you could have gone to Mars, and then you're going to have to talk to 125 different servers on Earth, which is just not going to work.
C
So
if
you
can
take
if
you
can
store
more
of
the
dependency
information
involved,
that
was
involved
at
the
time
that
you
did
the
previous
resolution
or
that
you
published
that
package.
So
this
is
basically
listing
all
of
the
possible
acceptable
versions
of
each
one
of
the
dependencies
involved
in
this
package
has
one.
C
Here
are
all
the
versions
that
it
sees
are
acceptable
at
publish
time
this.
This
application
has
a
list
of
dependencies.
Here
are
all
the
possible
versions
that
can
be
used
can
be
accepted
when
successfully
completing
a
dependency
resolution,
but
the
usually
only
pics
like
say
the
latest
one.
For
example.
If
you
store
all
of
that
available
version
data,
you
don't
need
to
go
back
to
the
registry
to
be
able,
in
the
case
of
a
conflict
or
some
other
dependency.
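The resilient-resolution idea described above can be sketched in code. This is a minimal illustration only, not any real package manager's format; all function and package names here are hypothetical, and versions are simplified to single integers:

```python
# Hypothetical sketch of "resilient resolution": alongside each dependency,
# record every version the registry reported as acceptable at resolve time,
# so later resolutions can run entirely from local data.

def satisfies(version, constraint):
    """Toy constraint check: '>=N' means the version is at least N."""
    if constraint.startswith(">="):
        return int(version) >= int(constraint[2:])
    return version == constraint

def record_resolution(requirements, registry):
    """At resolve/publish time, snapshot all acceptable versions per dependency."""
    return {
        name: {
            "constraint": constraint,
            "acceptable": [v for v in registry[name] if satisfies(v, constraint)],
        }
        for name, constraint in requirements.items()
    }

def resolve_offline(snapshot, prefer="latest"):
    """Re-resolve from the stored snapshot, without any registry access."""
    resolved = {}
    for name, entry in snapshot.items():
        if not entry["acceptable"]:
            raise RuntimeError(f"no stored version satisfies {name}")
        versions = sorted(entry["acceptable"], key=int)
        resolved[name] = versions[-1] if prefer == "latest" else versions[0]
    return resolved

# The registry is only needed once, to build the snapshot:
registry = {"left-pad": ["1", "2", "3"], "is-even": ["1", "2"]}
snapshot = record_resolution({"left-pad": ">=2", "is-even": ">=1"}, registry)

# Later, offline (or with the registry gone), resolution still works:
print(resolve_offline(snapshot))  # {'left-pad': '3', 'is-even': '2'}
```

The registry is consulted once, when the snapshot is recorded; any later re-resolution, including falling back to an older acceptable version after a conflict, can happen without contacting it again.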
C
Where
it
comes
into
ipfs
land
is
like
then
you're
like.
Oh
actually,
some
of
these
problems
become
less
problems
because
they're
always
available
for
everyone
who's
sharing
that
content,
but
it
means
that
you
can
still
like
lose
particular
access
to
the
network
while
still
being
able
to
work
on
what
you
have
so
most
of
the
way.
E
I assume that when we're storing these things, we're also storing the CIDs of them, so that we can also do this validation. Like, for example, if I'm offline but I find someone else who's offline who has an updated... you know, one of these options that I would like to use, then I could also be like...
C
So I've tried to not imply any kind of implementation to begin with, but when you add in IPLD... treating IPFS as the tarball store is like a done deal; having a CID per tarball is very reasonable and easy. But storing the data in IPLD would mean that you're even deduping: where we've got similar bits of dependency tree across many different packages, they're all referencing the same things. So actually, oh, I've probably already got half of this dependency data, bits I've already got locally, before I need to go out to the network to fetch stuff. Or if, as you say, I'm connected to a local peer that has some of it, I don't need to do the work. It feels like there's a good proposal here for any package manager, and implementing it on IPFS is the easiest way to do it.
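The deduplication point above can be shown with a toy content-addressed store. This is only a sketch of the principle (the hash here is a stand-in for a real CID, and the node layout is invented, not actual IPLD encoding): identical pieces of dependency metadata shared by many packages hash to the same identifier and are stored once.

```python
# Toy content-addressed block store: identical nodes dedupe automatically
# because the key is the hash of the node's canonical encoding.
import hashlib
import json

store = {}  # node_id -> node

def put(node):
    """Store a node under the hash of its canonical encoding; return the id."""
    data = json.dumps(node, sort_keys=True).encode()
    node_id = hashlib.sha256(data).hexdigest()[:16]  # stand-in for a CID
    store[node_id] = node
    return node_id

# Two packages that share part of their dependency tree:
common = put({"name": "left-pad", "acceptable": ["1", "2", "3"]})
pkg_a = put({"name": "app-a", "deps": [common]})
pkg_b = put({"name": "app-b", "deps": [common]})

# The shared subtree exists once in the store, referenced from both packages:
print(len(store))                                    # 3, not 4
print(store[pkg_a]["deps"] == store[pkg_b]["deps"])  # True
```

So before going to the network, a client can check which referenced node ids it already holds locally, which is the "I've probably already got half of this dependency data" observation.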
C
You
could
do
it
otherwise,
but
if
you
use
IP,
FS
and
IP
LD,
then
you
get
like
boost
from
it
or
it's
just
easier
to
do
the
other
thing
that
I'm
talking
to
Eric.
Although
he
said
he's
inputting
right
now,
I
think
and
trying
to
find
where
how
much
this
crosses
over
with
his
timeless
stack
work,
because
it
feels
like
it's
kind
of
coming
from
the
opposite
angle.
C
The
time
the
stack
is
all
about
how
the
end
result
ends
up
on
your
computer,
but
it
currently
skims
all
of
the
actual
discovery
of
modules
and
catalogs
and
lineages.
It
basically
just
assumes
that
you
already
know
where
all
of
those
things
are,
and
you
don't
you
don't
try
and
connect
the
dots,
whereas
that's
like
the
whole
network,
effective
package
managers
is
people
sharing
stuff
and
you
being
able
to
find
it
and
then
to
be
able
to
like
resilient
lis,
use
it
to
actually
get
work
done.
F
Yeah, I just put a couple of things up this week. I did some brainstorming with you while I was in Portland, and then I went to csv,conf, and I've dropped a few interesting links into my notes there. There's not much time to talk about them, but I think there was really interesting stuff at the conference. The conference was mostly science-background, academia people, and their interest is reproducibility, so having immutable packages is a big part of that.
F
There's,
a
lot
of
work
involved
and
actually
packaging
data
which
I
wasn't
really
aware
of,
and
also
I
met,
with
Geoffrey
askin
from
the
chrome
team
and
learned
a
bit
more
about
web
package
and
web
package
is
actually
intended
for
JavaScript.
So
it's
actually
pretty
relevant
to
the
packaging
discussions
here.
So
that's
I
can
talk
more
about
that.
If
Amy's
got
questions.
E
So I know that Chrome has... I think Chrome has this idea of maybe caching a lot of common JavaScript. Or maybe this is Cloudflare. Someone caches a lot of the common JavaScript libraries that many people use, to make it more efficient for people to load websites. Is this ringing any bells for anyone?
E
One of the big guys, one of the big central chunks of the internet, actually does this on the back end somewhere, such that it serves really fast and you can load a lot of web pages really quickly. I think... yeah, it's like Google caches jQuery and all that stuff, no?
E
So this is a model whereby I visit a website, and we've talked about Companion doing this, right? But I visit a website and I cache everything I load in order to browse that website, I cache it offline in my IPFS node, and if I ever go to that website again, I'm like, oh wait, I already have all this data locally, I'll just load it locally. And if I want to re-find that piece of text that I already saw, well, that's great: you already loaded it, you don't need to do it again.
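The browsing model just described amounts to a content-addressed cache in front of the network. A minimal sketch of that lookup order (everything here is illustrative, not how any browser or IPFS node is actually implemented):

```python
# Sketch of "check the local content-addressed cache before fetching":
# content is keyed by the hash of its bytes, so anything already cached
# never needs to be re-downloaded on a later visit.
import hashlib

cache = {}    # local content-addressed cache: cid -> bytes
fetches = []  # record of network fetches, to show the cache working

def content_id(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def fetch_from_network(cid: str) -> bytes:
    """Stand-in for a network fetch (e.g. from a peer or gateway)."""
    fetches.append(cid)
    return network[cid]

def load(cid: str) -> bytes:
    """Serve from the local cache when possible; fetch and cache otherwise."""
    if cid not in cache:
        cache[cid] = fetch_from_network(cid)
    return cache[cid]

# Simulated network holding one piece of page content:
page = b"<script src='jquery.js'></script>"
network = {content_id(page): page}

cid = content_id(page)
load(cid)            # first visit: one network fetch
load(cid)            # revisit: served entirely locally
print(len(fetches))  # 1
```

The second `load` never touches the network, which is the "I already have all this data locally" behavior described above.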
F
Well,
it
doesn't
really
so
the
only
part
that
exists
right
now
is
the
signed
exchanges
which
is
being
used
by
a
MP
but
MP.
Isn't
the
primary
customer
of
this
so
I
think
that
they've
been
doing
some
work
around
like
extending
JavaScript,
so
JavaScript
will
actually
have
a
standard
library
going
forward,
which
will
be
so
I
have
a
feeling
that
this
will
fit
quite
well
with
that,
so
they'll
be
able
to
upgrade
the
standard
standard
library
in
the
browser,
but
by
signing
everything
with
SSL.