From YouTube: Working Group: 2020-07-21
Description
* GCR
* Onboarding: https://github.com/dwillist/PaketoOnboarding

B
Awesome, okay. So starting with the oldest, community charter: do we know what the status on this is, anybody?

E
What's the word... I'm looking for a working group meeting — let me just pull it up really quickly.

B
You
can
like
represent
it.
If
you
want
to
talk
about
it,
I'll
stop,
sharing.
E
No, actually, this is totally fine. You pretty much got everything that I would want to point out. So basically, all the changes since the last time I presented this: I have a little bit more discussion around what it actually looks like to be, quote-unquote, actively maintaining a project. It's left pretty open-ended, because I think that's something that should be improved as we sort of look into this sort of thing. So I wanted to leave it with...

E
...you know, some suggestions, but not necessarily completely lock it down in my first RFC for this. The implementation section — I don't know that that's changed a lot either. This is generally the same RFC that I presented a couple weeks ago. There's a couple of unresolved questions on this that I need to reach out to community members about, and I guess I should probably do that before we completely ratify this. But I moved this from draft to ready-to-publish, essentially — I guess, barring contention on those.

E
So
if
y'all
have
any
strong
feedback
either
way
on
some
of
this
stuff,
I'd
be
more
than
happy
to
address
that.
E
Yeah
pretty
much
I
I
yeah,
that's
pretty
much
it
there's
just
some
discussion
and
even
even
something
like
what
the
organization
should
be
called
could
maybe
actually
be
pushed
off
into
something
later.
So
I
might
actually
remove
that.
But
I
just
I
want
to
sort
of
chat
a
little
bit
about
what
the
cfs
cff
allows
for
licensing
for
things.
That's
inside
of
its
project.
D
For RFCs that are project-wide RFCs — there are RFCs that are per, you know, individual language group, that are just approved and submitted by me, or they're submitted by anybody and approved by maintainers of that subteam — but for project-wide things, which are usually just changes to governance, or like committing to switching everything wholesale to something else...

D
It's
it's
requires
a
hundred
percent
of
the
steering,
or
I
think,
if
the
steering
committee
were
more
than
two
people,
it'd
be
super
majority
of
the
steering
committee
which
at
this
point
is
just
being
done
for
now,
and
so
I
think,
just
ben
and
I
need
to
check
off
this
one.
D
Sounds good. I had a question, actually: what's the plan with the word "community"? Because we got feedback from the CFF that, you know, "incubator" or "sandbox" or something might be better.

E
Yeah, I am more than happy to change it to either "incubator" or "sandbox". I was gonna just not worry about it for right now, and then we could change it later. I think that the structure in general is something that we want to move forward on, and what the name is — what color the bikeshed is — doesn't really matter to me.

D
Got it. I'm happy to approve without making a decision on the name, but I think — whether the IP belongs to the CFF or to, you know, the original person really changes what the use of the org is, if that makes sense. So let's reach out to Chris and figure that out, yeah.

D
No worries, no rush — just want to figure that out before we kind of position it. I think it makes a — it's a big difference for the use cases for it, for end users.

B
The idea is that when people are using the Paketo buildpacks, they might be behind a firewall that stops them from getting dependencies from the URL that's in the buildpack.toml.

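For readers following along: a Paketo-style buildpack.toml pins each dependency to a download URL. A minimal illustrative excerpt — the id, version, URL, and digest here are invented for this sketch, not taken from any real buildpack:

```toml
# Illustrative buildpack.toml excerpt (hypothetical values): each
# dependency entry pins an id and version to a download uri, plus a
# checksum and the stacks it supports.
[[metadata.dependencies]]
  id = "node"
  version = "14.5.0"
  uri = "https://example.org/node/node-14.5.0-linux-x64.tgz"
  sha256 = "0000000000000000000000000000000000000000000000000000000000000000"
  stacks = ["io.buildpacks.stacks.bionic"]
```

It is that uri field that a user behind a firewall may be unable to reach, which is what the mapping proposal works around.
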
D
Have you seen Dan Thornton's proposal for offline buildpackages? He has a HackMD doc — it's a little more up to date than the RFC upstream — that breaks the dependencies out into a special structure inside of the buildpackage.

B
In my mind, those two formats wouldn't go together, because that's sort of mapping to layers in a registry, and this is more mapping to — this doesn't have that restriction, right? This can be a URI where you put a JDK on some internally hosted server somewhere. But I can look at that, if you feel like there is synergy there.

E
I might be misunderstanding this — would this allow us to completely, like, decouple dependency information from the buildpack.toml? Because right now, as it stands, we have, like, a dependency section in most of our buildpack.tomls. Could we just use, like, this mappings construct no matter what, and sort of make it very generic, and then have it hook into this mappings?

B
It doesn't, because when a buildpack is deciding whether to pull a URI out of the mapping, the way it's going to look for that override URI is: if it has a dependency in its buildpack.toml with the right id and version, then it would come to this file and look up a different URI. So this is really just a way to override URIs — a way to map a dependency to a new URI.

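In other words, the mapping file would key on those same coordinates and supply only a replacement URI. A hypothetical sketch — the RFC was still in draft at the time of this meeting, so the file name and field names here are illustrative, not a ratified format:

```toml
# Hypothetical dependency-mappings file: each entry matches a dependency
# declared in some buildpack.toml by id and version, and overrides only
# the uri it is fetched from (e.g. pointing at an internal mirror).
[[dependency-mappings]]
  id = "node"
  version = "14.5.0"
  uri = "https://artifacts.internal.example.com/node/node-14.5.0-linux-x64.tgz"
```

A buildpack that finds no matching entry would simply fall back to the URI in its own buildpack.toml.
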
D
One use case for the offline buildpackage RFC is allowing a platform to decide what dependencies — what offline dependencies from the, like, buildpack store — get put on the node that does the build, right? Like, it can dynamically, when a build starts, make a decision of: does it copy the whole gigabyte of all the dependencies over, or just some of them? That, like, makes the performance characteristics of having a whole offline buildpackage always transferred to the edge node a bit better.

B
I feel like the effort that goes into then packaging all of them and creating a builder is a lot to ask of people, versus just creating a single file where they're remapping things they know they want. You know, just upload — add a file that gets mounted into every build, and then you're off to the races, rather than sort of repackaging your environment and then updating all those packages every time the upstream updates.

A
I have a couple questions. The first one is: how does this interact, or what is the expected behavior, if the buildpack itself is an offline buildpack?

A
So it isn't — it's no longer that, like, canonical external-facing URL; it's, like, an internal tarball reference. So is the expectation, if a buildpack is offline and, like, knows that it's offline and doesn't need to make a network connection to download the dependency — is it allowed to ignore this behavior? Like, is that expected? Allowed?

A
Okay. The second question I have is about stacks. I don't know if this exists today, but I can imagine scenarios wherein we have a dependency version for a specific dependency, but then we have two different dependencies depending upon which stack you're actually trying to download for. I don't see in an example that you're including stack as, like, an extra identifier for how we would do that mapping between the two sets — is that something you've thought about?

B
Not
something
I've
thought
about
particularly
hard.
Let
me
ask
a
clarifying
question
that
maybe
I
should
know
the
answer
to
when
I'm
looking
at
a
build
pack,
tommle
and
there's
a
list
of
dependencies.
D
I
think
we
have
that
even
today,
with
maybe
the
cf
linux,
fs3
old
cloud,
foundry
stack
id
versus
the
new,
I
o
build
packs,
bionic
stack
id
and
some
build
packs
for
the
migration
strategy,
so
I
think
it
would
need
to
be
in
there
before
it
could
get
approved.
In
my
my
mind,.
B
I
feel
like
different,
build
packs,
there's
something
to
stop
different,
build
packs
from
having
using
the
same
name
to
mean
different
things
in
their
dependencies.
So
I
think
we
have
to
scope
it
so
that
a
build
pack
can
look
up
things
that
are
really
intended
for
it.
In
the
case
where
two
different
build
packs
were
trying
to
use
the
same
dependency
name.
D
So
I
know
I
keep
bringing
up
dan's
offline
build
package
rfc,
but
in
that
there's
no
scoping
per
build
pack
and
all
the
build
packs
that
run
in
a
build
together,
live
in
the
same
builder
and
so
dependencies
need
unique
ids
at
that
point,
and
so
I
wonder
if
we
could,
because
that
has
the
offline
thing,
has
the
same
requirement.
We
could
do
something
symmetrically
over
here
and
an
idea
there
is
you
could
have
if
you
want
to
have
a
short
name
and
a
long
name.
D
You could put that in your buildpack metadata — that's kind of what Dan was proposing, I think: use different id fields. That way you don't have to spec — like, it's really nice on the offline thing, because it means the layers of the same dependency, when they're used by different buildpacks — you know, they actually do get de-duplicated. Otherwise the path names would be different; they wouldn't get de-duplicated. So it's kind of necessary to do in that case, unless we use the UUID or something for it.

B
Yeah,
do
we
want
to
like
add
to
this
something
like
as
part
of
this
rfc,
we
hereby
state
forever
and
ever
that
no
two
paketo
build
packs
will
have
the
same
dependency
id.
We
want
to
do
that.
D
I would support that. Given — given the progress of the offline buildpackages RFC upstream, I think we kind of want to make that requirement, but that would probably influence the metadata format here.

A
Yeah, I think that's totally reasonable — to say, like, if you don't specify it, then the expectation is the dependency that you're talking about is able to run on all stacks, or there really is only one stack that's in question. But if there is the same dependency with the same version that has multiple entries, because it may be available on different stacks, that should fail in that case, and the end user should have to specify a more specific mapping in order to get the behavior they want.

D
Because we're moving to stack ids that are, like, "ubuntu bionic" — that are very generic — and, in general, like, a language runtime would need to be rebuilt against a different version of dynamic libraries or a different operating system, I worry a little bit about...

D
We
made
it
so
currently
build
techs
have
to
specify
exactly
what
stacks
they
support.
Because
of
that,
like
you
know,
sort
of
risk,
I
worry
a
little
bit
about
allowing
users
to
say.
Oh,
I
want
you
know
this
and
then,
if
there
are
several
stacks
available
it
it
just
breaks.
You
know
other
stacks
in
in
some
confusing
way.
I
think
it's,
I
think
it's
the
right
thing
to
do,
because
it's
an
override
and
the
user
is
explicitly
saying
for
these
builds.
This
is
exactly
the
dependency
I
want
to
use.
D
One
thing
I
noticed
is
this
is
coming
from
the
platform
directory.
Are
there
platforms
that
intend
to
support
this
yet
or
like
what
is?
What
is
the
platform
side
of
this?
Look
like.
B
So this would already work in pack — in pack, you can mount random things into the platform directory if you want. One of my open questions was: should this actually be a binding, with kind "dependencies", instead of an arbitrary dependencies file? It doesn't totally fit the canonical example of a binding, but with just a tiny bit of moving strings around it could be made to fit, and it might be easy because there's already support for that feature everywhere.

B
Instead of having a single mappings.toml, you could have, like, a mappings per stack. That's making it required, but then it would be easy: if a new stack came out, it wouldn't break things — then you could go and, like, add a new file for the new stack.

D
Sometimes
the
same
dependencies
or
sometimes
the
same
dependency,
runs
on
multiple
stacks.
Sometimes
you
need
to
re-specify
the
same
dependency
with
a
different
url
for
different
stacks.
So
if
you
broke
it
out
into
separate
directories,
you'd
have
to
you'd
be
specifying
the
same.
Url
or
like
you'd
have
a
lot
of
repeats
of
the
same
data,
whereas
if
you
put
it
all
in
one
file,
you
can
list
the
urls
just
once
and
then
list
the
stacks
they
support
with
them.
I
don't
know
if
we
care
about
that.
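To illustrate D's point: in a single file, each entry can carry a list of stacks, so a URL shared across stacks is listed once, while a stack-specific build of the same dependency gets its own entry. The values below are purely illustrative again:

```toml
# Single-file variant: the stacks array scopes each override, so one
# entry covers every stack that shares a URL...
[[dependency-mappings]]
  id = "ruby"
  version = "2.7.1"
  stacks = ["io.buildpacks.stacks.bionic", "org.cloudfoundry.stacks.cflinuxfs3"]
  uri = "https://artifacts.internal.example.com/ruby/ruby-2.7.1.tgz"

# ...while a stack-specific build of the same dependency gets its own entry.
[[dependency-mappings]]
  id = "ruby"
  version = "2.7.1"
  stacks = ["io.paketo.stacks.tiny"]
  uri = "https://artifacts.internal.example.com/ruby/ruby-2.7.1-static.tgz"
```
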
D
Another question about stacks. So there's — I think dependencies are keyed, in the way the current buildpack.toml is set up, dependencies are keyed to a stack id, and there's a list of stacks at the bottom that then has the mix-ins required for that stack. I think that's how it works right now, and the mix-ins aren't per dependency. These overrides kind of, like — is there some way, do we need to specify mix-ins here somehow too? Like, if it's...

B
Dependent
everything,
except
for
the
uri,
is
just
a
set
of
coordinates,
so
we
can
match
the
uri
to
the
right
thing
in
the
billbag
tunnel.
I
think
the
thing
in
the
bill
peg
tom
wall
describes
everything
about
the
dependency,
and
this
is
just
trying
to
override
just
the
uri
and
other
fields
only
exist
as
a
lookup.
D
Yes,
are
we
worried
about
a
ruby,
that's
compiled,
that's
statically,
linked
to
a
package
versus
a
ruby
of
the
same
version
for
the
same
stack
for
the
same.
You
know
everything.
That's
not
statically,
linked
to
the
binary
and
needing
to
specify
different
urls,
depending
on
the
mix
and
set
for
the
same
thing,
because
I.
B
So if I'm working in an organization where my firewall doesn't let me get an artifact off GitHub, I can tell my operator, and they go download the things that I would normally want and then put them someplace that's accessible internally. Like, I think it should be the exact same file that you would normally get from the other URI — like, notice I'm not putting a digest in here; it has to be the same thing.

D
I
think
this
is
also
a
problem
with
having
unique
ids,
but
imagine
you
have
a
two
different
build
packs
that
depend
on
the
same
dependency
on
the
same
version,
but
with
different
the
same
stack
with
different
mix
and
sets
and
there's
two
different
build
packs.
They
have
two
different
sha's
for
the
same
thing.
If
you
had
this
mapping
and
it
wasn't
specific
to
build
pack
ids
then
you'd.
You
know
that
could
be
a
problem,
but
it
seems
like
if
we
agree
that
we
shouldn't
have
more
than
one
dependency
with
all
those.
D
This
combination
of
stack
id
version
and
dependency
id,
then
that
solves
that
problem
anyways.
So
it's
probably
not
an
issue
just
trying
to
think
through
all
the
educations.
B
I
guess
the
other
thing
you
could
do
here
is
take
id
version
and
stack
out
and
just
put
digest
in
right,
so
we
put
in
the
build
pack
tom
on
the
dot
of
the
thing
you
download
from
the
uri.
So
if
we
had
digest
as
a
way
to
key,
then
we
need
to
worry
about
all
the
different
permutations
and
it
doesn't
break
when
new
stacks
come
out.
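Keying on the digest would collapse all of those coordinates into a single field, since the buildpack.toml already records a checksum for every artifact. A sketch of that alternative, again with invented values:

```toml
# Digest-keyed variant: the sha256 already listed in buildpack.toml is
# the only lookup key, so id/version/stack permutations never need to be
# enumerated, and a new stack doesn't invalidate an existing mapping.
[[dependency-mappings]]
  sha256 = "0000000000000000000000000000000000000000000000000000000000000000"
  uri = "https://artifacts.internal.example.com/node/node-14.5.0-linux-x64.tgz"
```
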
B
It could be — it could just be, you know — you could think about it, like, Kubernetes-y: a ConfigMap, where each key is a file and each value is a — that actually might be the more analogous way to do it, so people aren't, you know, editing the same file and reapplying it; they can just add things — add keys — to their ConfigMap.

B
Okay,
I
know
I
just
suggested
it,
but
I'm
going
to
say
the
reason.
The
argument
against
that
now
that
I
think
about
it
more
is
that
right
now
my
bill
pack
can
well
maybe
there's
an
argument
for
it
right
now.
My
build
pack
can
change
the
meaning
of
the
id
and
version
to
be
a
patched
different
thing
right.
D
Yes. Next one: GCR. So I was looking through the Paketo GCP dashboard, just kind of reviewing infrastructure stuff this morning, and GCR is a lot more expensive than I think we assumed it was to start. It charges for both storage and for transfer, and we're already — without any offline buildpackages, right, just, like, a bunch of Go binaries and some stack images — we're already at, like, $400 a month.

D
I
wonder
if
we
should
think
about
going
back
to
docker
hub.
We
already
published
the
stack
images
there,
just
not
the
builder
image
there
and
it
kind
of
like
talked
to
marty
a
little
bit.
I
think
it'd
be
easy
to
you
know.
Put
those
images
over,
but
also
that's
like
might
be
a
lot
of
work.
I
wanted
to
see
what
people
thought
we
already
have
potato
build
packs
user
on
docker
hub
too.
B
I would say — the first thing, like, since we moved to GCR, it's bugged me a little bit that the searchability is lacking, like, the user interface compared to Docker Hub. I realize the reliability is better, but just sort of, like, looking up what images exist — it's not as easy if you don't know the exact URLs to go to, to just find that in GCR.

A
As
far
as
answering
the
complexity
question,
I
don't
think
it's
that
difficult
for
us
to
change
where
these
are
pushed.
I
think
it's
merely
like
getting
a
new
set
of
credentials
to
be
able
to
push
to
that
registry
and
then
changing
the
like
path
that
we're
pushing,
I
think,
it'd
be
pretty
straightforward
in
terms
of
like
actual
complexity
of
making
the
change.
D
What
about
migrating
users
over
the
thing
I
worry
about
most
is
like
we
have
the
pax
address
builders
command.
That
suggests
a
builder.
That's
on
gcr.
D
I don't know what happens if they had it. I don't know if, when you — if the suggested builder is changing, whether that untrusts the previous one, because, like, the suggested builders are all trusted by default. But I don't know what happens if you change the suggested builders and then your builder is set to the default builder that was a trusted, suggested builder before — if the trust stays. I kind of think the trust would stay, but I'm not sure; it's just hard to...

D
Over
see
ya,
we
got
ben
here
we're
talking
about
so
gcr
is
kind
of
expensive
compared
to
what
we
thought.
I
sent
you
some
stuff
over
slack
thinking
about
what
it
would
look
like
to
migrate
to
the
docker
hub.
Dicato
buildpanics
account
instead.
F
Okay,
how
am
I
actually
the
person
who's
responsible
for
all
the
expense?
Do
you
have
an
itemized
expense?
Is
it
specifically
the
potato,
build
packs,
or
is
it
also
the
other
stuff
that
we
do?
I
I.
D
It's
a
reminder:
this
is
a
recorded
now
working
group
meeting.
It's
just
talking
about
open
source
stuff,
the
I'm
just
looking
at
the
potato
bill
tax
account
in
gcp,
okay,
it's
up
to
it's
up
to
four
hundred
dollars
already
and
we
don't
even
have
offline
build
packages
than
it
is
just
just
go
buying
areas
and
some
stack
images.
A
Also
factor
into
that
the
community,
the
paqueto
community
project
as
well,
there
are,
although
I
don't
think
as
many
there
are
a
non-insignificant
amount
of
images
getting
pushed
there
as
well.
F
I
am
also
reasonably
sure
that
there
is
a
bunch
of
offline
build
packs
there,
a
bunch
of
my
offline,
build
packs
specifically,
including
and
like
all
of
the
cnbs
before
we
had
images
and
stuff,
I
think
there's
a
possibility
that
we
can
do
some
pruning.
If
you
are
worried
about
this,
I
can't
I
like
I
refuse
to
accept
the
idea
that
all
of
our
online
build
packs
actually
occupy
that
much
storage
space.
I.
D
I think that, just for the stack images, though, it's going to increase a gigabyte a week or more, definitely, into the future. It wouldn't be hard to move later — but yeah, it might be worth doing that sooner.

D
We
were
talking
about
ways
to
migrate
users
across
and
pack
like,
could
pack
rewrite
the
old
old
suggested
builder,
the
new
one
automatically
when
you
grab
a
new
version
if
it's
set
as
a
default,
but
the
we
do
have
it's
nice
that
we
already
have
potato
build
packs
on
docker
hub
docker
hub
has
infinite
storage
for
free
forever,
so
it's
pretty
attractive
compared
to
paying
for
it.
If
you
think
the
400
includes
some
offline
stuff,
we
could
take
that
to
account
too,
but
even
even
that
it
seems
like
it's.
D
I
think
if
it's
not
so
bad
right
now,
though,
that
gives
us
yeah.
It
means
that
it's
probably
okay
to
keep
everything
in
gcr
and
keep
it
up
to
date
for
a
good
transition
period
in
between
and
then
you
know,
but
switch
the
suggest
builders
over
to
the
new
one.
It
seems
like
it's
good
to
know
that.
C
I'm
just
sharing
that
on
behalf
of
dan.
I
don't
know
if
he's
coming
to
these
working
group
meetings
anymore,
but
he
spent
a
lot
of
time
last
week,
just
putting
together
general
sort
of
onboarding
for
pocato,
which
I
thought
was
super
neat,
and
I
wonder
if
we
could
pull
that
in
something
into
like
the
cato
community
or
something
like
that.
Just
felt
like
a
good
way
to
you
know
just
get
started,
learn
about
packet,
tooling,
stuff
like
that,
and
how
to
create
builders,
build
packs
all
that
sort
of
stuff.
D
See
for
people
that
don't
notice
the
branches
get
you
to
different
things.
I
thought
it
was
just
one
readme,
but
the
different
readings
on
different
branches.
D
If
we
pull
this
into
pocket
community,
is
that
using
potato
community
in
a
different
way,
then
for
us
what
you
were
thinking?
Maybe
this
should
go
into
the
pocato
or
proper,
like
okay,
build
packs,
work.
E
Have
a
bit
of
a
read
through
it
and
if
it's
more
of
a
if
it
looks
like
it's
more
of
a
teaching
tool
for
the
project
as
a
whole,
maybe
we
just
pull
it
into
the
whole
community.
But
if
it's
more
focused
on
how
you
might
want
to
get
started
developing
it
might
make
more
sense
to
actually
just
keep
it
in
the
community
where
more
people
who
were
trying
to
look
to
develop
for
it
might
look.
I
have
no
idea,
I
don't
have
any
strong
opinions.
D
Cool
is
it
worth?
Should
we
open
an
rfc
to
add
this
repo
to
the
one
of
the
orgs.
D
I
was
wondering
if
someone
should
open
an
rfc
to
propose
adding
it
to
one
of
the
two
orgs
like.
I
can
do
that
just
so,
we
can
have
some
place
on
github
where
we
decide
where
it
lands.