From YouTube: Working Group: 2020-08-19
Description
* New Implementation Contributor
* How is KubeCon going?
* Stackpack Deep Dive Tomorrow
* Mixin Contract
* Offline Buildpackages: https://github.com/dwillist/rfcs/blob/offline-buildpackages/text/0000-offline-buildpackages.md
A: So great — so easy to do everything. I like it when I can't invite other people to meetings I create; that's just a great experience.
A: Yep, totally, I'll take it off. I guess — any new faces this week? I don't see anybody who hasn't been on the call before, so: release planning and updates. Any updates? We have Javier here — yeah, I could speak to pack, which released yesterday.
A
Everything
went
well
with
the
exception
of.
I
think
we
discovered
an
issue
on
the
spring
boot
side,
where
they
were
relying
in
on
a
deprecated
sim
link
which
we
removed,
and
I
think
this
the
goal
right
now
or
the
the
strategy
that
we're
going
to
go
with
is
to
roll
back
the
removal
of
that
deprecation
and
kind
of
plan
it
out
or
phase
it
out
for
a
later
time
but
yeah.
So
we
should
have
a
patch
here
for
pack
most
likely
today.
B: Yeah, just a little bit more background on that, Steven, since you missed it before. This stems from the fact that builders don't actually have a specification, and the Boot integration came about because they ported the Go implementation of pack from a year ago or whatever it was, and pack treated this as an internal change when it really probably needs to be considered an external change. Now I assume kpack will have very similar problems with builders in the same way, if they use pre-built builders.
A: kpack doesn't use pre-built builders anymore. Yeah — but does Spring Boot plan to stick to the pure Java implementation? I think it's good to have more platforms; I'm just curious. Yeah, cool.
A: Cool. Any lifecycle updates?
A: Awesome. Next thing on the list is reviewing outstanding RFCs.
A
Can
we
see
that
cool,
so
first
thing
relax,
mixing
contract?
I
put
on
the
agenda
for
today.
I'm
gonna
just
do
a
quick
overview
of
after
the
label.
Rfc's
with
spec
is
next
forrest
open
this?
I
don't
see
him
here.
B
It
was
a
pretty
short
rfc,
just
fact,
creating
labels
to
help
kind
of
focus
and
target
the
rfc
overall.
I
think
it's
a
good
idea.
I
think
there's
potentially
a
little
more
detailed
news
flashed
out,
but
I'd
like
to
see
this.
A
Just
keep
discussion
open,
no
change.
There
looks
like
rfc
authors
create
repo
issues.
This
is
about
rfc
authors,
creating.
B: David — yes, sorry. I think I basically agree. I made a comment, and I think Emily also had a very similar kind of response. So let's see if you'd be willing to... yeah.
A
I
I
also
have
that
concern.
I'm
not
trying
to
get
out
of
you
know
creating
issues
for
rfcs,
but
I
also
have
concerns
about
you
know
what
this
means
for
people
who
aren't
on
the
core
team
so
happy
to
chat
about
that.
Next
on
the
list
is
deprecate
service
bindings.
This
is
an
fcp
anything.
This
is
just
going
to
be
closed
or
leave
it
closed.
Today,
yup
19th
today
is
the
19th,
so
that
can
go
in.
A
Do
we
need
to
create
any
issues
for
this?
One.
A
This,
I
think,
is
when
does
fcpn
today
also,
so
I
should
open
issues
for
this.
I
guess
before
it
gets
merged
in
and
it
stays
in.
Fcp
is
our
label.
We
want
to
create
for
when
it's
it's
been
accepted
and
scp
is
closed,
but
we're
not
going
to
merge
it
until
we've
created.
B: Just leave it open; it'll force us to review it next week if you haven't yet done it. I haven't yet done the merge — or the issues and the merge.
A: There was a question here about the ordering — Ben and I were chatting about the way we defined which order they get added in, but I think that can go into the spec PR. If there are any kind of minor changes to what order we source versus execute exec.d scripts, we can agree on it there. Next thing on the list: any-stack buildpacks. This is approved and in FCP, and I don't think it needs any — no, no, we do need some changes so that we accept the... yeah.
A: The spec, basically, yeah. So, same thing, right — it just stays in that state. Yeah. Next: the RFC for project descriptor flexibility.
A: Request review from the core team for this? Yeah, let's open it — I don't see that on there. Cool. And "layer origin metadata" is a draft.
A: Sorry, yeah.
B: So, hi — author of this one. I just popped in because, you know, I was here a few weeks ago, and we were... I think I mentioned...
A
Then
that
I
kind
of
didn't
know,
I
felt
that
was
kind
of
at
the
limit
of
of
my
knowledge.
With
with
what
I
could
do
for
this
rfc
and
that-
and
we
were
talking
and
saying
that
it
might
be
good
to
have
some
of
the.
B: ...up-and-coming maintainers, like, help me along with this. So I just kind of wanted to come in and see where that...
A: ...might be at. Those people were on vacation at the time, so maybe see if they're around, or if we can start organizing that.
B: That made sense to me. I don't remember the specific people we were talking about.
B: I think we mentioned — voluntold, in their absence — that either Natalie or Jesse might be interested in this. Also, sorry, I was turned distractingly pink; I don't know how to undo it.
B: So maybe we could sync up after this. I'm not sure — it seems like Natalie couldn't make this meeting today. Natalie is once again on vacation, but it is just a one-day thing. Oh okay, great — thanks for the context, Simon. Jesse's also here, so yeah. I mean, I don't know; I'd have to reread this to kind of reorient myself, but...
B: And this was based off of, as well — I think at the time you were writing the original one, waiting for report.toml to come through, which has been finalized, if I recall correctly.
A: Okay, I'm going to note Natalie and Jesse to assist on this. That's good, yeah. The implementation Slack channel might be a good place to have those conversations.
A: Sounds great. And then — I'm going to update this from... we skipped over this this time, just because we said we skip drafts during these meetings. Do you want me to pull this out of draft, or is it still kind of early days and you want to keep it in? We'll put it on the agenda if you want to talk about it.
A: ...just to not be presumptuous and keep this out of the way. I don't know — draft means you don't want people to immediately review it for acceptance; you're still working on things. You want to get it out there if people are interested, but, you know, kind of no pressure, and we skip them over unless you put it on the agenda.
A: Understood — that sounds good. Cool, sounds good, thanks. No problem. Pack sub-commands.
B: Yeah, I think I was saying I'm not really going to try and drive consensus on this. So if you care about it, vote on it.
A: So is there any action that needs to be taken? Just vote, I guess — yeah, got it. Application mixins: this has maybe the most activity of any RFC right now. I think we're going to do a deep dive on this — it's on the agenda; Emily was suggesting that we do a deep dive on this on Thursday. Any short-term actions to take, Joe? No? Cool. And then the rest are all drafts. Is Joe doing the deep dive?
A
All
right
sure
I
I
wasn't
volunteering,
but
there
are
some
very
particular
things
we
want
to
talk
about.
One
of
them
is
related
to
that.
I'm
interested
in
I
opened
109
in
order
to
unblock
some
aspects
of
that
one
and
I'll
kind
of
talk
about
that
today.
But
then
there
are
some
big
questions
about
that
that
we
probably
really
want
to
talk
about
tomorrow.
It
might
be
better
with
a
smaller
group,
all
right.
A: Okay, that's cool — sounds good. Thanks for sharing. The next thing on the agenda is a new implementation contributor.
B: I'm here — sorry, my video's just turning off, turning pink, and turning back on again. I wanted to officially congratulate Yael on becoming a contributor on the implementation sub-team. She's done a lot of work on the lifecycle over the last couple of weeks, helping us get everything ready for the 0.9.0 release, adding acceptance tests, and doing very thoughtful reviews of lifecycle features on both lifecycle and spec PRs.
A: We gave up on email after they switched us to Outlook, right? So, next on the list: how is KubeCon EU going? Yeah, we...
B: I feel like a lot of people have stepped up to do booth duty, and then Ben and I did our talk as well. It's not quite done, but I'm curious how other talks have been going, how attendance has been, and — for those of you who have helped out with the booth — how that has been going.
A: So the booth didn't get a lot of traction — or interaction, I should say — as far as visitors. I don't know that we have that visibility, but we had a couple of interactions; nothing too great. But then, if we look at other Slack channels and social media, there is a lot of really good stuff coming out in regards to buildpacks, so at least there's that. — Would you mind linking some of the social media stuff in the doc? — Yeah, I'll do that.
B: Yeah, and then I know we got some good questions out of Ben's and my talk, and the organizers are going to — I guess that's the benefit of the virtual conferences: we were answering questions throughout the talk, but they're going to give us the list of those questions.
A: To answer that: we do have that feature page — I just created an issue for it this morning — where it'll add X amount of other tools that other people might bring up in those conversations. So hopefully that'll help answer that.
A: Most of those tools are just other ways of building Dockerfiles, so at least we have a pretty simple answer for Docker images. Yeah — well, I mean, like, you know, kaniko, Buildah — all those things are basically tools that build Dockerfiles, or things that look exactly like Dockerfiles. I don't think they're... jib and ko are still the only things I know of that, you know, look like us.
A
Cool
next
thing
in
the
list
is
stack
pack
deep
dive
tomorrow.
I
think
emily
put
that
on
there.
B: Yeah — on here we mentioned wanting to go into depth on the app mixins / stack packs RFC, and also the mixin contract RFC, which sort of relate to each other in some ways, tomorrow. I just wanted to announce that to everyone, so that people come prepared and make sure they don't miss it if they are passionate about stack packs.
A: I think part of the reason we pushed it to Thursday is so that people can self-select out if they don't want to sit there for an hour of working group about it.
A: And the next thing on the list is the mixin contract. I put this on there — I opened an RFC last week that helps support that, and I think I'm going to try to just talk about it, and not argue the merits of whether we should implement it, just so folks get an overview in this group. So I will share my screen really quickly.
A: Basically, for a given mixin — so right now, a mixin named "my-package", for instance, can take three forms: prefixed with "run:", prefixed with "build:", or without a prefix. And the way the spec is defined right now, those are separate mixins: they don't necessarily have to imply the same change, and they're not really related to each other.
A: They just mean that the thing with the "run:" in front of it doesn't need to be on the build image for the stack images to validate against each other. This RFC changes it so that those mixins are related to each other: if a buildpack requires "my-package", and "run:my-package" is on the run image and "build:my-package" is on the build image, that's all satisfied.
A: It just means that the contract between the two stack images doesn't have to be so strict — so that you could remove one stack image, not have the "build:" version of the mixin, and everything would still work. It kind of changes the power that the mixin interface gives you, because before, "run:my-package" could imply a different set of changes from "build:my-package", and this relaxes that restriction. So buildpacks don't have to put "run:" or "build:" in front in order to select the one that the stack image has, and the stack image doesn't have to list both the "build:" and the "run:" versions, or both the "build:" and the unprefixed, or both the "run:" and the unprefixed versions, in order to satisfy all the ways that buildpacks could request mixins. So it cleans up the contract a lot.
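[Editor's note: the relaxed matching described above can be sketched with a hypothetical example — the mixin names and TOML layout below are illustrative only, not the spec's actual schema.]

```toml
# Hypothetical stack image metadata (illustrative names).
# "run:" / "build:" prefixes restrict a mixin to one stage;
# an unprefixed mixin is present on both images.
[stack]
mixins = ["run:libfoo", "build:libfoo-dev", "mypackage"]

# Under the relaxed contract, a buildpack requiring "mypackage"
# with no prefix is satisfied whether the stack lists it
# unprefixed, or as both "run:mypackage" and "build:mypackage":
# the prefixed and unprefixed forms refer to the same package
# rather than to unrelated mixins.
[buildpack]
mixins = ["mypackage", "build:libfoo-dev"]
```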
A: This is important for application mixins, because without it there are situations where we couldn't validate that a stack pack...
A: ...or, like, if a buildpack requires a prefixed mixin, a stack pack couldn't necessarily satisfy it, or a stack pack may try to reinstall that package if the non-prefixed version is already on the stack image. I think that's one of the edge cases — I did this last week and I'm forgetting all the details. So this just relaxes that contract, so that stack packs can... I think the biggest benefit is stack...
A: The more general thing here is that mixins on the stack images imply two contracts: if they don't have the prefix on the stack images, they imply a contract between the images, and they also imply a contract between the buildpack and the stack — and mixing those contracts together is, I think, kind of confusing.
A
If
we
completely
redid
mix-ins,
I
think
I'd
make
them
objects
and
then
let
you
specify
when,
when
they
have
those
requirements,
but
until
we
make
a
breaking
change
in
this,
the
idea
is
just
just
relax
the
restrictions
so
that
we
can,
you
know,
build
packs
can
satisfy
stacks.
Stacks
could
satisfy
you
know
or
so
that
we
can
yeah.
So
if
a
build
pack
requests
something
with
or
without
the
stage
specifier,
it's
always
satisfied
by
stack
images
that
may
or
may
not
have
it,
regardless
of
the
prefix.
A: A quick question, as it relates to — I believe — how mixins are presented when we do an inspect-builder.
A
If
I'm
understanding
this
correctly
with
this
enable
where
we
could
collapse
a
mix
in
that
is
in
both
build
and
run
scopes
to
then
just
be
listed
only
once
listed
only
once
where
in
in
the
list
of
mixins
in
the
build
pack,
when
it's
requiring
it
or
in
the
stack
in
just
in
the
presentation
of
the
available
well,
actually,
no
in
the
mixins
themselves,
right
in.
A
Correct
this
would
make
it
so
that
so
this
this
doesn't
get
rid
of
the
stage
specifiers
on
the
in
the
mixins
list.
In
the
stack
image
they
can
there's,
they
can
still
be
used
to
imply
a
contract
between
the
stack
images.
A: It's like, if a buildpack needs ImageMagick and only needs it during build time, but ImageMagick is on both the run and the build image: if the buildpack requires "build:imagemagick" right now, that would fail even though the stack images both have "imagemagick" on them, because the build image doesn't have "build:imagemagick" — and so all the stacks now have to have both "build:imagemagick" and "imagemagick". Does that sort of make sense? Right. And going forward...
A: This also implies — and this is the part that's potentially controversial, that I'm kind of willing to change — that in the stack pack case, when stack packs receive mixins, they receive them with the stage specifier, and we still kind of treat that stage specifier as representing half of the mixin, or something like that. And I think I'm willing to just say: nope, they just receive the package names.
A
We
just
strongly
imply
that
you
know
the
specifier
really
just
means
this
mix
and
it's
restricted
to
you
know
it
really
just
defines
the
contract
between
the
stacks.
It
needs
to
be
on
the
build
image
or
it
doesn't
need
to
be
able
to
build
image,
and
so
there's
a
lot
of
comments
here.
I
think
this,
I
think,
I'm
okay,
you
know
making
that
change,
but
then
there's
like
maybe
a
bigger
question
about
whether
we
should
make
larger
changes
to
make
sense.
We
can
talk
about
tomorrow,
but
my
goal
here
is
just
iterative
change.
A
If
that,
if
that's
complicated,
if
people
you
know,
I
I'm
not
sure
how
worth
it.
That
is,
if
that
makes
sense,
but
I
wouldn't
stand
in
the
way
of
a
larger
change
like
that.
I
still,
I
still
think,
the
underlying
functionality
of
being
able
to
imply
a
contract
between
images
so
that
you
can't
use
a
run
image
that
doesn't
have
image
magic
with
the
build
image
that
has
image
magic
and
all
the
dev
headers
for
linking
things
against
image.
Magic
right.
A: I just changed how it's specified, so it's clearer in the definition. But I don't want that large UX change to block application mixins, and so for now I just want to get the underlying functional change into shape — so that a package that has "run:" before it just means the part of this mixin that's on the run image, and when you require it that way, it doesn't matter how it's specified in the stack image.
B: So, is this required to move the other...
A: ...RFC forward? I would block the other RFC on the underlying functional change in this one, because otherwise there are some really not-fun edge cases, where your buildpack requires a mixin and it's either already there and the stack pack runs over it every time, or you require a mixin and — I think there's even a case where you don't end up installing the package, strangely. It's like the way we've...
A: ...the assumptions that stack packs make about mixins don't create a great interface for buildpack authors who would require mixins, without making mixins a little more like OS packages, and the contracts about OS packages — because OS packages are the only real use case we've seen. I think I started too generically with the mixin interface; I'm happy to relax those restrictions and make the mixin contract less powerful, because I don't think we necessarily need the power.
B: This goes as far as just pulling those dependencies all the way out into their own images that people can then use — they can combine them with builders, or someone can run pack build and explicitly specify these asset images to use during a build. So, walking through this: we have the definition of what one of these asset images looks like — the layer format, some metadata, some rules about making it reproducible when you build it.
B
It
outlines
a
pack
interface
to
kind
of
create
one
of
these,
and
this
will
kind
of
propose
some
additive
changes
to
the
package
tamil
file,
to
figure
out
where
these
dependencies
are
and
finally
outlines
how
you
would
specify
this
in
both
the
pack
build
and
create
builder
settings
as
well
as
some
like
resolution
cases.
We
have
to
worry
about
case.
There
is
conflicts.
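[Editor's note: purely as an illustrative sketch — the field names below are hypothetical, not the RFC's actual schema — an asset entry added to package.toml might look something like this.]

```toml
# Hypothetical asset entry in package.toml; every field name
# here is illustrative only.
[[assets]]
uri = "https://example.com/example-jre.tar.gz"
sha256 = "<sha256 of the original download>"
# Optional behavior discussed below: unpack the archive into
# raw files instead of storing the archive as-is.
unarchive = true
```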
B: So I made some comments on this five days ago, but I see that you've made some changes to it that I have not had a chance to look at. What was your final resolution of the idea around — you originally had it that create-asset would unarchive archives?
B
Is
that
still
in
play?
So
I
think
it's
an
optional
setting
now,
so
we
have
this
like
boolean
value.
If
you
want
to
have
a
package
that
gets
unzipped
and
all
the
raw
files
be
splatted
out,
you
should
totally
be
able
to
do
that,
but
yeah
you're
totally
right.
We
don't
have
to
worry
about
doing
this
for
every
archive
format.
Okay.
Now,
how
would
you
calculate
the
sha
256
of
that
new
artifact?
B
So
I'm
sort
of
thinking
about
this
in
the
context
of
let's
just
theorize,
that
there
could
be
another
component
inside
of
the
build
pack
registry.
That's
an
asset
registry
and
two
different
build
packs.
One
wants
an
unzipped
jerry
and
one
one
to
zip
to
jre
or
and
untard
like
how
do
you
resolve
a
conflict
now
between
of
that
shot,
256,
yeah,
yep
and
so
kind
of
the
original
way?
This
is
laid
out
is
that
this
sha256
is
just
based
exclusively
on
the
original
uri
that
you
like
pass
in
to
create
this
asset
package.
B
So
this
is
going
to
give
you
a
path
that
shaft
256
is
just
used
to
address,
whatever
the
bits
are
that
you
put
there.
So
if
you
specify
the
same
thing
in
two
formats,
that
would
be
erroneous
right,
yeah
and
that's
my
concern
is
imagine
a
universal
repository
of
this.
It's
not
erroneous
right.
Two
different
build
packs
may
want
the
same
dependency
in
two
different
forms,
and
that's
my
that's
my
concern.
There.
A: I'm a little concerned about the underlying image you're creating. The layer is going to be a tgz, and in the format in the example here — where you wouldn't unzip the tgz — you're going to end up with a tgz inside of a tgz, stored in the registry double-compressed.
A: I think that could make sense for, like, a jar that's in zip format — yeah, you're going to apply double compression to it, but you want to see it on the other end. What's the use case for having — or, like, given that we're going to store it as a compressed file inside of a compressed file anyway, would you prefer not to have to double-compress it initially?
B: Sorry, but I don't think so. I don't think we can prescribe universally — to all buildpack builders and all asset builders — that for a given dependency occupying a sha, you must use that thing either untarred or tarred.
A: I'm just saying: can we make the requirement that if you want to use a tarred JRE, you do a tgz of the JRE, and then when you consume it — that way we have a really consistent thing that we're taking a checksum of at the beginning. And also, as a user, when you do that weird operation, it makes it clear to you that this is going to happen on the other end. Like, I would feel weird seeing a tgz, right — making me do that, and the...
B: But you're effectively breaking a sort of chain-of-custody usability thing there, right? Like, today, when a JRE is published, a sha256 sits next to it; you can go to the website and figure out what that is. Today we put it in our buildpack.toml, so you can inspect it there.
B
If
we
wanted
to
also
put
that
in
as
the
asset
itself
like,
we
can't
read,
we
can't
or
double
tar
explicitly
because
now,
all
of
a
sudden
that
asset
appears
to
have
a
different
shot
and
sure
you
could
go
inside,
but
a
lot
of
the
places
you'd
most
commonly
look
to
try
and
verify
that
value
are
now
wrong
or
not
like
you'd
have
to
look
inside
of
them.
Somehow
you
want
to
reuse.
A: ...the checksum of the original artifact, yeah — that's the use case.
B: Right — like, anything is possible here. We can add these kinds of indirections in all of the places you might look, but I've considered it to be a plus that someone can crack open buildpack.toml and verify a URL and a checksum, right? Straight up, you can say: oh, this is exactly where I would have downloaded this from.
B: It doesn't have to occupy the same sha256 — we could use, you know, just like we do a layer digest or something for it, and address it by its layer digest instead. But then — so you put in a little flag and we do the sha calculation one way, but if you don't put in that flag, we do the sha calculation another way — and it doesn't really hold up when you think about...
A: ...it, yeah. There are a bunch of weird cases there, I think — yeah, cross-platform support — I can think of a couple of things.
B: The rest of this stuff, Dan, looked really good, but this was the one thing that kind of stuck out to me. We certainly have a lot more focus internally...
B: ...at VMware on this sort of chain of custody — doing bills of materials and verifying checksums throughout an entire software delivery lifecycle — and it just happens to be front of mind for me at the moment that we have a discrepancy here. — Yeah. So, is what you're looking for just a mapping between them that we could make? It seems like, if I want to explicitly provide something unzipped — like, the unzipped format...
B: ...that seems like an option I should be able to have. It does make the delivery of this chain of custody a little bit more difficult, but buildpack authors should probably have the option to have this behavior if they really need it. — Yeah, yeah. One possible thing might be that the untarring and unzipping is handled late — that might be an option — as the lifecycle builds these things, or something. — I know — I guess you don't get to share the layers then. Yeah, that's a problem.
A
To
me,
it's
weird
that
you'd
have
a
single
artifact
inside
of
a
directory
and
then
the
checksum,
the
directory
name,
is
the
checksum
of
the
single
artifacts
contents
or
like
that
that
interface,
given
that
you
could
have
other
things
in
the
directory
too
and
the
file
name
doesn't
matter,
could
an
interface
be?
If
things
are
single
artifacts
you
get
a
checksum
and
then
you
know
as
a
as
a
file
check
some
dot.
You
know.
B: Well, so yeah — the question is more of how exactly we'd go and do the lookup. When we had the big meeting a while back, we toyed around with the idea of "oh, you just generate a UUID", right? Putting the sha there had other problems, and I think, actually, the other problem has been solved by the additional-paths configuration, to guarantee that you can mount any one of these dependencies at more than just its sort of unique-identifier location. So it may be sufficient to dump that sha completely, right?
B: ...that problem, right. What if we did something where — okay, so let's say we always explode it, so you double-compress if you want a compressed artifact, and we use the sha just to validate as we're creating the file. But then we have a different lookup, where you can look up individual files by the sha of the file, rather than by the sha of the download. So let's say someone else's tgz didn't contain another tgz, but it contained three files. If it's the exact same file with the exact same digest, we can just write a new symlink that points to one or the other — doesn't matter, it's the same file.
B: ...in turn contains another tgz, but in our asset layer we've written metadata that describes the digest of the particular file in that layer. So now we can create a different lookup table to read files out of layers by digest. For ours, in this tgz case, there's only one file and one digest; you're going to look it up the way you want, with the canonical JRE digest. For someone else's asset layer, maybe there are multiple files, and you have multiple lookup points. Yeah — but, like, in the case of multiple files...
B: Well — unique, right? Like, in this particular example the sha wouldn't be unique, right? I am not a fan of Emily's suggestion yet — anyway, she has a way of convincing me — but trying to wrap that and have a level of indirection, so that the original sha can resolve to it, is a good idea, because, logically, we still come back to this idea that there are two artifacts, both of which occupy a sha...
B
I
want
to
be
able
to
say
in
my
build
pack
when
you
build
when,
when
you
build
a
build
package,
I
would
also
likely
like
you
to
include
the
asset
for
shop.
One
in
my
build
package
right,
add
the
asset
layers.
What
exactly
does
that
mean
right?
Does
it
mean?
Go
get
me
the
one,
that's
the
doubly
version
of
the
jre,
or
does
it
mean
go,
get
me
the
one.
B
That's
expanded
and
who's
to
say
both
of
those
possible
choices
exist
at
any
given
time,
and
that's
where
good
actually
comes
into
play,
the
the
advantage
of
a
good
is.
We
can
assign
a
unique
identifier
like
ignoring
for
a
moment
the
path
problem.
We
can
identify
a
unique
identifier
for
every
single
artifact
right.
We
don't
end
up
with
the
same
hash,
representing
both
an
exploded
and
non-exploded
version
of
a
given
artifact.
A
If
you,
if
you
had
the
flag,
mean
wrap
the
individual
artifact
in
a
in
a
tgz
and
then
take
the
checksum
of
that,
then
you'd
have
unique
art,
so
you
could
specify
a
single
artifact.
Then
you
pass
a
flag
with
that
saying:
hey
it's
single
artifact
mode
or
vice
versa.
You
pass
a
flag,
in
the
other
case,
it
wraps
in
the
tgz
and
then
gives
you
the
shot.
You
have
the
same
benefit
as
exactly
the
same
benefit
as
a
good.
A
It's
still
a
you
know,
random
number,
there's
no
overlap
and
we
get
the
deduplication.
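[Editor's note: a minimal sketch of the idea being discussed — always wrap the single artifact in a reproducible tgz and hash that, so the identifier is unique to the wrapped form (like a GUID) while staying content-addressed. This is an illustration assuming deterministic tar and gzip metadata, not code from pack or the RFC.]

```python
import gzip
import hashlib
import io
import tarfile


def wrapped_tgz_sha256(name: str, data: bytes) -> str:
    """Wrap a single artifact in a deterministic tgz and return its sha256.

    Hypothetical sketch: the flag discussed above would mean "hash the
    tgz-wrapped form", so compressed and expanded variants of the same
    download never collide on one identifier.
    """
    # Build the tar archive in memory with fixed metadata so the
    # resulting bytes (and therefore the sha) are reproducible.
    tar_buf = io.BytesIO()
    with tarfile.open(fileobj=tar_buf, mode="w") as tar:
        info = tarfile.TarInfo(name=name)
        info.size = len(data)
        info.mtime = 0  # fixed timestamp keeps the hash stable
        tar.addfile(info, io.BytesIO(data))

    # mtime=0 pins the gzip header timestamp for the same reason.
    gz_buf = io.BytesIO()
    with gzip.GzipFile(fileobj=gz_buf, mode="wb", mtime=0) as gz:
        gz.write(tar_buf.getvalue())

    return hashlib.sha256(gz_buf.getvalue()).hexdigest()
```

As discussed above, this could even be done streaming, purely in memory, throwing the wrapped bytes away and keeping only the digest.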
A: So, that idea aside — using a GUID is exactly identical to using a sha, as long as you wrap the artifact in the final layer tgz. When you pass the flag, you get a unique sha every time, you get perfect deduplication, and you have a random number that's only the same if all the bits are exactly the same, every time.
A: If it's a single file, then you just do it all in memory — you do the compression live, right: you put tar headers around it, gzip it up, and run it through sha, purely in memory, throwing away the bits completely, and that's the sha you use in the end, in the single-file case.
A: I'm just kidding, but you get the idea. It's functionally equivalent to a GUID; it just looks like a sha. I think it looking like a sha bothers you, and that makes sense, because you would think that it would correspond to some artifact — but that's the closest idea I have that meets the requirements, if that makes sense.
A
The
tar
compression,
oh
like,
like
you
know,
if
you
compressed
your
tar
ball,
yeah.
A: Exactly. And also, how often is someone going to try to take the same artifact — like, with the GUID it's different every time, but in this case it's only different when someone references it once with their own compressed thing, and then someone later references it with expand mode on, right? So it's a pretty edge case where it doesn't get deduplicated; you really do get the deduplication benefit in a lot more cases.
A: Something to think about, Dan — is that some good feedback? — Yeah, that's something I can...
B: ...reconsider: the usage of shas in this — or, more likely, in the case where there are two logical versions of the same artifact, figure out how you can disambiguate between the two of them. Yeah.