From YouTube: CNB Weekly Working Group: 2021-12-09
A
Okay, release planning. Want to start with platform?
B
This is an interesting one, because I haven't been fully in the loop, so I'm hoping maybe someone can fill in the gaps. But my understanding is that David is planning to do a pack release at some point today. I don't know in what time zone that is, but yeah, that's something on the horizon.
C
Or a draft release is up, I think.
A
Okay. No updates from distribution or the buildpack team? Any release planning updates?
B
We took care of that yesterday at the core team sync, but for the sake of the people—
B
In this meeting: there are two spec releases that we're working on, distribution and project descriptor. I think project descriptor is probably at the forefront, and Joe, you might have a little bit more context on exactly when we're trying to ship that. But then soon thereafter we're trying to get the distribution spec out, so we can get the builders all in line and compliant.
A
Should we jump into the rest of the agenda?
A
Cool. First one is the proposal to move to the CNCF Slack.
D
I think Terence is here, but there isn't much. I think it was just an FYI that that RFC exists and we're looking into moving to the CNCF Slack.
D
There are still some open questions, and we filed a service desk ticket on our side to figure out some of those questions. For example: what the experience of past projects has been when they've tried to migrate to the CNCF Slack, what will happen to our users, and how much of the history would be preserved. And other things — like, we're currently on the limited plan; would we get all of the history from the past, or will it just be the limited history of ten thousand messages? Things like that. But yeah.
E
I mean, I don't think it's the end of the world if we don't get it, but it would still be good to have the info.
E
But definitely, compared to some other RFCs, this one directly affects the people here in this working group. So if you have an opinion, please chime in.
A
Cool. Anthony, something you want to do a show-and-tell with?
G
Absolutely, thanks. So I did want to talk about interactive mode for pack. I don't know if you all recall the RFC that I put up months ago. The whole idea was basically that our community, I think we would say, has a lot of experienced engineers on board — our customers, our platform developers. I think we have a certain target audience already, and I was hoping we could broaden it up to more entry-level, more casual developers.
G
That way we could get growth in a different market, and the hope was that by introducing a visual mode — something that lowers the learning curve a little bit, that introduces all the abstractions in a more engaging way — we could get there. So I've sort of been working on it in isolation.
G
For months now. And I was hoping to demo it, just to see if it's worth investing in as a community. Believe it or not, I don't feel strongly about anything here; this is really to see where it can go and to solicit feedback. So I'm going to share my screen, run the commands, show you what it's about, and leave it up for questions. That's really about it. All right!
G
This is a simple Go application — just a really, really simple Go application — so we're expecting a go build of sorts here, just to re-familiarize with what the default looks like. This is what, let's say, a normal person who's introduced to buildpacks will see: they run this pack command, they're building their thing, they get some logs.
G
The build is going through the phases. Okay — keep in mind, some of us are intimate with what's going on here, but someone who's new might not get all the steps being shown here. But we know we have an export, there was a build phase that did some things, and then at the end of the day we have this built image. So I was hoping we could try the same command now, but with — you know, it's an experimental flag.
G
But let's say you add this interactive flag — which is still experimental, but it is there. Let's try it and see what it looks like now. What is the user journey like?
G
That's a good question — it actually isn't, because of a recent update with the SBOM work. We're now reading the previous layer of the last image, and because of that it's actually downloading the last previous image, and the analyze phase is now first. But that's sort of a technical question. What you're seeing is really just because this thing, which doesn't print out logs, is showing up, and because the text happens after it, it's sort of waiting for it. That's the reason.
G
No, there's no problem. Anyway, this is what the dashboard looks like now. Right off the bat we have two concepts: we have our app and we have our builder images — you know, in case I forgot what builder you're using. I was using the Paketo full builder, my favorite builder. You see the run images, and sort of put in front of us are these buildpacks here. If we—
G
We can cycle through these; we can select these. I'm not going to explain anything — it's supposed to be figured out as you go — so I'm just going to click one and see what happens.
G
Okay, we should have a new view here. We have a dive-esque view, sort of broken out by phases, and in between the build phases we have buildpacks — these are implementation buildpacks. We can cycle through these. Let's say I go to this one.
G
I can hide things — I can go through here and hide folders as I wish. Let's go back; I go here, I can hide more folders. All right, so here's what I'm hoping is going to happen: somebody who's smart, but not necessarily well-versed in buildpacks, is going to say, "Okay, I think I get what's going on here. We have three buildpacks that were selected for me."
G
Whatever these buildpack things are, the first one installed CA certificates for me, the second one downloaded Go for me, and then the last one did the go build — and that's what happened to my container image. Ideally, I'd still really love for the actual descriptions to be written here, but that's just some metadata mishaps. But that's the gist of it right now. And the reason I'm demoing—
F
Is the intent, maybe, to repeat the list, or show something under detect that shows, like, here are all the dependencies matching up, and then you can view information about those? Or show something under export, like, here are all the layers that got exported and here are their digests? Are you planning to build each phase's view out more in the future? Because I really like how the information is organized here. I feel like that's the clearest visual explanation of what the buildpack process is that I've ever seen.
G
Thank you for the question. To be honest, I wasn't planning on much more with the phases. There's the support-Dockerfiles RFC, which I think would introduce a new sort of extend phase — I was really hoping to depict that in here: okay, once you do your Dockerfile stuff, we could show what's happening there. Also, I had ideas for the SBOM — and people, Paketo, have been asking, begging for this: "Hey, can I see my actual SBOMs?" I think this is the perfect place for it.
G
I really just wanted to show what is happening, in order, here — that's really about it — and then whatever you can sort of drill into is the stuff that would be highlighted. Those are the only two things on my mind. To be honest, I didn't have many plans for detect or restore or anything like that, but—
F
Yeah — I think this would be a great place to show, like, here's the grid of dependencies that got matched with other dependencies. I think, one, we have a problem with transparency in the build process — people don't understand how the image is built, to some extent, even though it's very clean and contractual. But I think we also have a problem where people don't understand how their buildpack got detected, and this would be a great interface for showing, like: this buildpack had these provided dependencies that matched; and then, maybe in red or gray, these didn't match — they're optional; and then this other buildpack required these dependencies, and these other ones didn't match because they're optional. Then you could get an instant visual representation of the plan for your build, if that makes sense, through that detect interface. I mean, I'm not saying we shouldn't ship this immediately — this is great. I just, you know.
G
No, this is good — I'm begging for ideas at this point, so I have noted that. Sam, I see your hand is up. What's up?
D
Is it reliant on Docker, or is it just going to work with any of the lifecycle input/output options?
G
Great question — again, which is sort of my point: everybody here is really smart, but I don't know if that's a good representation of the average developer. But yeah, great question. It would work with the publish flag; it really doesn't matter, because what's happening is I'm streaming in the layers directory and the workspace directory before the final thing happens. Does that make sense?
G
So basically, whatever the layers directory looks like will get dumped in here — but I have the ability to manipulate the data beforehand. I can depict certain files here if I want; I can split it on the screen. So if you think about what publish does, it doesn't really interrupt this flow, because it happens before that code path. Does that make sense?
G
So you have to think of it like this: those things matter in the final app image. This looks like it, but it's not the final app image — this is the layers directory. So if you're talking about launch layers and things like that, which matter in the final app image, those would only show up in the final app image, not here. But things like build layers and cache layers will show up here too. So it's—
B
Yeah, I think that brings up my — not necessarily a concern, but just a point I wanted to bring up: when we're looking at this build view and the filesystem information it shows, it gives me the idea that these are the things that might be in my final app image. So what if we were to expand on export, and in the export view actually show the app image information — essentially a dive filesystem view of the final app image?
G
Oh, I like it. I guess that's the internal discussion I've been having — I really don't know. To me, there are already good tools for looking at the final app image. To me, the big mystery here is the build process: what's happening. So I guess I wanted it to be more about making the learning curve for the build process easier, and if we need to add some explanation or documentation to say, "hey"—
G
No, that's fair. But think of the pros and cons, right? Like this Go thing, which downloads Go — there's no reason for the Go source code to show up in the final app image, but as a person trying to learn about buildpacks, I want to know how it works. I want to know about this intermediate step, about this thing. I guess it's just about UI, right? Like, I'm not a — you know, I can—
C
I think this looks great; I'd love to see us continue to iterate on this — great, great stuff. Yeah, I'd love to see the cache, and whether the previous image existed or not, with a simple check mark or something, at least just to know: was there a cache image? Was there a previous image?
C
Just so I can start to put those pieces together in my head. Seeing something under the restore phase — at least the image name that we restored from, or the cache directory that we restored from, just like you have for build here. It'd be kind of neat to be able to see what was actually in those committed cache directories or something.
E
Yeah, this is great, Anthony. I'm definitely in line with Steven on some of the other phases. For me, even more so than build, detect is one of the most opaque parts of the build process, especially when things fail. If you're streaming data from the lifecycle on some of that, it would be neat to see the build plan, or some more data around detect — it's great when it works, but it definitely is, I feel, one of the harder parts to debug, because we, I guess, silently swallow some of that output.
G
It's just good — I'm thinking about it. I like it; I'll think about it as well.
F
I think the screen with the phases is the best opportunity to have as much information as possible in a single place that branches out to the different things that you show on the pane. I really like the way you set up the organization of the data.
A
Yeah, I was thinking the same — there's a similar thing in the way that you present the concepts. That's something important to think about, and we should align on that as we're talking about project descriptor and things like that — just that way of communicating to a user what's going on. Can you go back to the previous view?
G
Yeah, I don't feel too strongly about it. When I first started this, I thought it would be great to show buildpacks as a bunch of steps — like, "hey, this is writing your Dockerfile for you." That was how I had it in my head: this is run step one, run step two. And again, I really wanted to pull the descriptions out of this, but the metadata wasn't working for me.
G
So
you
know
to
me
that
would
be
a
great
way
to
like
you
know
it
wouldn't
be
accurate
right,
but
it'd
be
a
great
way
to
describe
it
to
a
newcomer
like
hey.
This
is
writing
your
docker
file
for
you,
so
that
was
that
was
what
I
meant.
It
didn't
mean
anything
with
the
the
actual
plan
file
that
livestream.
F
I — you know, I don't feel as strongly. I think users will realize quickly that it works like that. But that is something that, kind of on Joe's point, makes me feel like maybe this thing could be even higher level. At the same time, I don't want to waste screen space on generic metadata about the image — or you could use it to do this kind of exploratory—
C
When I first saw your first screen, where you had "plan" there, I thought it was going to be kind of like k9s or dive, where you press 0 or something and you'd see all the other buildpacks that weren't selected — because I thought that was just a reinterpretation of detect pass/fail, which it kind of is. But that's what I thought was probably going to happen there.
G
I'm not offended — I like it. Please, please, these are great. I did promise to keep it at 15 minutes, but please — this can be an ongoing thing. Slack me, or make pack issues, or whatever. I'm not doing it for myself; I really want things to work out for this community. So that was the whole — that's my whole presentation.
A
Cool. Looking forward to posting a cool tweet with a GIF of this — that'll go well.
A
All right, I'll move on to the next one. Thanks, Anthony. RFC for cosign integration.
D
As of last week, GitHub also announced integration with cosign, so you can now transparently sign your images — you don't need to provide any keys or anything. If you're using cosign in your workflows, you can just do cosign sign with the image name, and it will automatically generate ephemeral keys and push this out.
D
So this RFC currently suggests integrating cosign into the lifecycle directly, instead of into a platform like pack — the main reason being that each platform would otherwise have to implement something like this for image signing. Rather than that, if the lifecycle provided it as an optional flag, platforms could have a consistent way of signing things and exporting them out.
D
But
if
they
want
to
implement
their
own
logic,
they
can
still
just
skip
the
flag
entirely
and
do
anything
else
they
want
with
coastline
if
they
wish
to
bypass
it
or
change
how
they
do
it.
Primarily,
this
flag
is
introduced
to
the
analyzer
and
exporter
phase.
The
analyzer
phase
uses
this
to
verify
registry
access
and
a
bunch
of
other
things.
The
exporter
takes
it
in
to
figure
out
what
additional
oc
images
it
needs
to.
Export
cosine
currently
uses
a
convention
to
attach
signatures
and
s-bombs
to
the
main
image.
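The attachment convention being described can be sketched in a few lines: cosign derives a tag in the same repository from the subject image's digest. The helper below is a hypothetical illustration of that naming scheme, not cosign's actual code — the ".sig" suffix is the signature case, and other suffixes (such as ".sbom") are used for other attached artifacts.

```python
def cosign_triangulated_tag(digest: str, suffix: str = "sig") -> str:
    """Derive the tag cosign uses to attach an artifact to an image.

    The subject digest "sha256:<hex>" is rewritten into the tag
    "sha256-<hex>.<suffix>" in the same repository -- ".sig" for
    signatures, ".sbom" for attached SBOMs. Illustrative sketch only.
    """
    algorithm, hex_digest = digest.split(":", 1)
    return f"{algorithm}-{hex_digest}.{suffix}"


if __name__ == "__main__":
    digest = "sha256:" + "ab" * 32  # hypothetical image digest
    print(cosign_triangulated_tag(digest))
    print(cosign_triangulated_tag(digest, "sbom"))
```

This is also why the discussion below turns on digests: without a final digest there is no tag to attach anything to.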
D
The reason I wanted to bring this up is that, because of cosign's convention, this can only work if you have the digest of the output manifest — which means that if we used this in daemon mode, there wouldn't be anything to sign. The other issue is that these are technically OCI artifacts, not images that Docker expects, so you can't load them into the daemon; if you try to load or pull one into Docker, it complains.
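The digest dependency mentioned here follows from how registries address content: an image's digest is simply the SHA-256 of the raw manifest bytes as stored, which is why nothing stable exists to sign before the final manifest is produced. A minimal illustration (the manifest content below is made up):

```python
import hashlib
import json


def manifest_digest(manifest_bytes: bytes) -> str:
    """An image's digest is the sha256 of its raw manifest bytes.

    Any change to the serialized manifest -- different layer digests,
    different compression, even reordered JSON keys -- yields a
    different image digest, so there is nothing stable to sign until
    the final manifest bytes exist.
    """
    return "sha256:" + hashlib.sha256(manifest_bytes).hexdigest()


if __name__ == "__main__":
    manifest = json.dumps({"schemaVersion": 2, "layers": []}).encode()
    print(manifest_digest(manifest))
```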
D
So
that's
the
other
issue,
but
yeah
the
the
idea
would
be
if
you're
running
back
with
a
published
flag
or
and
provide
the
course
and
arguments
to
pack,
it
will
publish
your
image
sign
it,
export
the
s
form
if
you
want
and
sign
the
s1.
Also,
if
you
want
in
the
cosign
format,
this
does
not,
as
of
yet
remove
the
s1
structure,
we
have
in
the
image
right
now
for
restore
and
analyze.
D
It's just an additional way to export the SBOM — if you want it exported this way, you can. The other thing is, we did discuss some parts of this RFC yesterday in the implementation team. One alternative, if people are hung up on the daemon support, is that pack could potentially introduce a pack publish command.
D
If
we
added
like
a
pack
published
flag,
we
could
at
least
guarantee
that
if
you
do
a
pack
build
with
a
published
flag
or
we
did
a
pack
build
loaded
it
into
the
daemon
and
then
did
a
pack
publish
the
output
would
be
reproducible
at
the
same.
It
also
means
that
pack
will
then
have
to
figure
out
a
way
to
store
all
of
these
artifacts
somewhere
and
do
all
the
necessary
operations
when
it
does
publish.
D
I
have
not
included
the
back
publish
parts
to
the
rfc,
yet
I've
just
kept
the
rfc
to
conditional
on
the
fact
that
this
will
only
apply
when
lifecycle
is
run
in
registry
mode.
But
if
people
have
strong
opinions
on
the
published
part,
you
can
add
that.
F
This publish-flag question, I think, brings up an age-old question that has come up every few months for the last — I don't know, probably two years: should we get rid of the daemon support in pack and replace it with a local registry, or a local image store of some sort that's not the daemon — with support for loading into the daemon the same way we have support for exporting, but with a different source of truth? And also, does BuildKit help us here at all?
F
From a user's perspective — I just mean, when you do a pack build locally, regardless of what ends up happening, yes, it is the lifecycle that ends up talking to the daemon to export the image. But should that experience really use the daemon as a source of truth, or should it use some other mechanism to store the image locally? Yeah.
D
So this, combined with a bunch of other RFCs I've put in, all face the same limitation: they're trying to use the OCI specification, and Docker doesn't follow all of it, and then we always hit a snag where we're trying to do some good things but we can't, because we're being held back by the difference between the daemon use case and the registry use case.
D
The other alternative is exporting this whole thing out in an OCI layout format. The good thing is that GGCR directly supports layout as a sink, so we would be able to add support for reading and writing it without, potentially, any changes, and the lifecycle would just be dependent on the schema of the input and output references. So if the schema says — I'm forgetting the schema name — something like oci-layout:// and then you give the output, we'd just load and export that.
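For context, the OCI image layout being referred to is a small on-disk structure: an oci-layout version marker, an index.json entry point, and content-addressed blobs under blobs/<algorithm>/<hex>. A hedged sketch of just the skeleton (an empty layout, no real image content):

```python
import json
import os


def write_empty_oci_layout(path: str) -> None:
    """Create the skeleton of an OCI image layout directory.

    Per the OCI image-layout spec this is: an `oci-layout` file with
    the layout version, an `index.json` image index, and a `blobs/`
    tree holding content addressed by digest. Tools such as podman
    (and GGCR's layout package) can consume such a directory.
    """
    os.makedirs(os.path.join(path, "blobs", "sha256"), exist_ok=True)
    with open(os.path.join(path, "oci-layout"), "w") as f:
        json.dump({"imageLayoutVersion": "1.0.0"}, f)
    with open(os.path.join(path, "index.json"), "w") as f:
        json.dump({"schemaVersion": 2, "manifests": []}, f)


if __name__ == "__main__":
    import tempfile

    out = tempfile.mkdtemp(prefix="oci-layout-")
    write_empty_oci_layout(out)
    print(sorted(os.listdir(out)))  # ['blobs', 'index.json', 'oci-layout']
```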
F
Way,
I
think
that's
a
nice
feature,
but
I
I
don't
think
it
solves
the
underlying
problem
which
is
like
what
should
the
experience
be
when
you
pack
build,
and
you
end
up
with
an
image
in
your
docker
demon,
and
it
has
all
the
wrong
kind
of
metadata
on
it,
and
you
know
there's
no,
no
way
to
store
information
that
you
could
store
in
a
registry
case
right
and
so
like.
I
think
it.
F
D
It's because there's no equivalent for it in the daemon. The daemon has no equivalent for the manifest; it has an equivalent for the index — which is a manifest list, in the daemon use case — and there's a config. So it dynamically constructs the output layers when it pushes to a registry, so you also don't know which compression algorithm it will use, so you don't know what the digest of the final image would be. All of that information is required if you want to do anything with, like—
B
I guess I'm aware that, basically, if you create an image in the daemon, there's no place to put that sort of information. I guess what I was more curious about is: if you pull something from the registry into the daemon and then push it into another registry, does it persist the additional metadata?
D
No. I think crane, as far as I know, is probably one of the only tools that has a way of migrating things from one registry to the other while preserving the digest.
F
The layers aren't stored in any compressed form locally, and so every time you push, you could use a different compression algorithm. There are many things that could change — for every single blob it could be different, not just some preset metadata on the manifest.
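The compression point is easy to reproduce outside any registry: a blob's digest is taken over the compressed bytes, and the same layer content compressed with different gzip settings produces different bytes, hence a different digest. A small demonstration using only the standard library (nothing registry-specific):

```python
import gzip
import hashlib


def blob_digest(data: bytes) -> str:
    """Registries address layer blobs by the sha256 of the bytes as pushed."""
    return "sha256:" + hashlib.sha256(data).hexdigest()


# The same uncompressed layer content...
layer = b"identical layer content " * 1024

# ...compressed at different gzip levels yields different blob bytes, so
# the blob digest (and therefore the manifest and image digest) differs.
# mtime=0 keeps the gzip header deterministic so only the level matters.
fast = gzip.compress(layer, compresslevel=1, mtime=0)
best = gzip.compress(layer, compresslevel=9, mtime=0)

if __name__ == "__main__":
    print(blob_digest(fast))
    print(blob_digest(best))
    print(blob_digest(fast) != blob_digest(best))  # True
```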
D
I don't know what the outcome of this proposal would be — whether people think we should make it a requirement that pack works the same with publish versus non-publish, whether we can just say that the cosign flags are only applicable when you do a pack build with publish, or whether we do want to introduce a pack publish command on its own.
F
I think I would approve the RFC with just "you can only use this in publish mode" for now, and then iterate on it later. I think we should encourage more people to use publish, because it's faster and there's a bunch of benefits to it. I think, separately, we should fix the daemon problem.
F
You
know
we
keep
saying
we're
going
to
do
it,
but
one
of
these
days
we
should
get
around
to
it,
the
because
all
the
on
the
s
bomb
topic,
because
the
s
bomb
is
stored
entirely
in
the
image
right
now,
and
you
know
it,
I
think
a
nice
benefit
of
that
is
that
we
don't
have
the
problem
with
going
between
the
daemon
and
the
registry,
because
it's
you
know,
stored
in
image
layers
and
you
can
always
export
it
out
and
so
later,
if
you
did
a
pack
built
locally
and
you
wanted
to
generate
a
coastline,
you
know
you
did
a
docker
push
to
the
registry.
F
You
want.
You
wanted
to
sign
that
and
generate
the
cosine
s
bom,
and
sign
that
as
well
right.
We
could
add
functionality
to
do
that
because
all
the
information
would
be
available,
and
so
that
makes
me
not.
You
know,
I'm
not
concerned
about
figuring
out
how
we're
going
to
patch
up
that
workflow
later,
because
it's
you
know
should
be
very
easy
to
support
very
straightforward.
C
Taking a step back: what's the major advantage of the lifecycle doing this, other than just sparing each platform? I know cosign is sort of the de facto leader right now, but — I know you have it in your drawbacks and alternatives.
C
I'm
just
wondering
like
you
know,
is
it
that
hard
to
do
cosine
for
a
platform?
Is
it
really
worth
us
adding
this
complexity
to
life
cycle
in
general?
What's
the
big
advantage
for
bill
pack's
project.
D
So it was mainly that, between different platform versions and API versions, it's one less thing for the platforms to care about, and it also puts a nice little check mark that, hey, everything that you build out of Buildpacks — if you use the default implementation of the lifecycle — is signed, if you wish it to be. It's fairly easy to integrate, and that's what I was sort of going for here. We can obviously just have each platform implement this.
B
Yeah, the downside with that, though, is — more recently I've been trying to push more stuff into the lifecycle, because from a platform's perspective, the maintenance aspect of it just doesn't scale. We're horizontally having to replicate the same functionality over and over again, and that's very costly.
B
So
if
we're
trying
to
increase
adoption
by
being
able
to
spread
ourselves
wide
as
far
as
the
platforms
that
we
can
provide
this
sort
of
integration,
I
think
it
makes
a
lot
more
sense
and
we've
been
trying
to
push
other
stuff
into
the
life
cycle
like
the
repair
and
the
project,
the
script
and
stuff
like
that.
So
I
think
we've
learned
from
that
that
it's
very
costly
to
have
to
re-implement
everything
at
the
platform
level.
C
Yeah, I don't disagree in theory with that. I just don't know — I guess maybe what the RFC is missing, for me, is the technical details: with that config file, what is the lifecycle's exporter going to do? You mentioned a bunch of stuff, like doing something with the SBOMs. I guess I'm wondering — I haven't done the effort of a platform integrating with cosign, so I don't actually know what the alternative, each platform doing it, looks like.
C
Is
it
as
simple
as
calling
cosines
binaries
with
a
few
flags
that
are
from
that
file,
or
do
I
have
to
go
like
get
stuff
out
of
the
layers
files
or
like
inspect
the
image
to
to
build
what
I
need
to
send
cosign
like?
I
guess,
I'm
not
clear
on
what
that
looks
like
from
a
platform,
direct
integration,
so.
D
I have sort of worked on some of this. In terms of exporting and signing the image: I believe the lifecycle, at the very end, generates a report or something — I'm forgetting — with the output images and the digests.
D
You can use the cosign CLI — I'm not sure if you can do it directly with the cosign CLI alone, but with the combination of crane and the cosign CLI you may be able to do it. The way it was implemented in kpack was just directly using cosign as a library, with a bunch of other things.
D
It may not be present in the canonical locations which are part of the platform spec, because I think the platform will have to find out where all those SBOMs are, and then run cosign attach for each of those files and push them out. Okay.
C
Yeah, it might be worth adding something to the RFC there, just to make it clear: if we don't do this, this is what a platform is going to have to do to do this. Because I guess it just wasn't clear to me what that is. But yeah, if it's digging into all the file locations and doing a whole bunch of work with the cosign library, then I don't disagree that this should be in the lifecycle.
D
They're building it on top of the same API that GGCR provides — the same concepts of references and sinks and sources — so it should play very well with what the lifecycle already does with GGCR.
B
I think Terence called time, or at least noted the time, if we wanted to talk about the OCI layout — which probably ties in really well with this conversation.
D
Yeah — I guess, if you're done with this, I can move on to the other one, which relates to the OCI layout parts. I think we sort of touched on it a bunch already, but we were discussing this — I know there's been work on an experimental PR on pack and lifecycle to support outputting to the OCI layout format.
D
I'm just curious what people's thoughts are on removing daemon support entirely from the lifecycle — the lifecycle just being OCI-centric. Nothing Docker-specific would be in the lifecycle, and then it would be the platform's responsibility to take whatever inputs there are, convert them, and put them wherever they want them to be. The questions I had were mainly around — I saw that the PR currently outputs to a local directory, as opposed to the daemon.
D
I know Eric in the past said that, when he was doing his PoC with BuildKit, it would have been easier if the lifecycle outputted to an OCI layout. I did not find any good reference on whether BuildKit — sorry, whether LLB — has some nice operations for loading an OCI layout image directly into the daemon or not, or whether we'll just have to take all of those things and manually put them back into the daemon, and whether we're happy with the slowdowns it might introduce — because right now, the lifecycle does some things to make the daemon export faster.
D
The other thing is that things like podman directly support loading images from the OCI layout format. So my next question was: how tied are we to Docker in pack, and should we have some native support for podman? I know there's a blog post Javier has been working on for podman support, but would it make sense to just have a config flag that says, hey, use podman directly and natively, versus use podman with its daemon support — since podman works in both non-daemon and daemon modes?
D
I think the blog post we were working on was more around the daemon-specific parts. So — a lot of questions. The thing that I'm really hoping to get out of this is: if we support the OCI spec directly, it's just going to make things easier for us in the future to keep up with the spec, and not have to worry about questions like "how do we deal with the daemon" again — that then ends up being a pack-specific concern rather than a project-wide one.
B
On that — I talked to Emily right before she headed out on leave, and we were definitely on that path where we wanted to get rid of daemon support at the lifecycle level, and I think that's where the OCI layout effort — the PoC, and even some of the discussion about BuildKit — came to be. So I am a hundred percent for it, and I think it should be up to pack to provide that sort of interoperability with the daemon, or with podman and stuff like that, and just push it to the platform level — because we've seen a lot more cost, as opposed to benefit, in trying to support the daemon case. That also simplifies our spec and the lifecycle.
B
The implementation would actually adhere to other specifications, as opposed to a very one-off implementation for the daemon — which isn't really universally supported; podman doesn't even really fully support the Docker daemon socket API and that sort of stuff. Anyways, that's my flavor on it — I'm for it. Anthony?
G
My hand was up; sorry if I'm cutting you off. One, I thought you wanted to speak, but, you know, I did want to say: if it gets removed at the lifecycle level, I'm sort of indifferent, but I would really like to see, you know, daemon support at the pack level. I think you can tell by now I care very much about the onboarding experience, and I do feel like getting your image built directly into the daemon is not only conventional.
G
At this point, it's easier, right, to sort of wrap your head around. I don't think many people have even seen an OCI layout, like, directly, untarred, right? So that was just my two cents there.
F
I have similar worries about, like, as a refactoring, you know, the idea that the lifecycle only supports exporting OCI layout on disk.
F
You know, I think there's a disadvantage from a user's perspective outside of pack there, where, like, if you're using a builder and you're exporting directly to a registry, is that something we're going to drop? I'd say probably not; we would still want to modularize or do something to allow that OCI...
F
You know, output of pack to be exported to a registry in those cases. Or, sorry, that's right: the OCI output of the lifecycle to be exported to the registry in those cases. But, you know, assuming we can patch that, right, or we have some extra module that you use when you create a builder that does the export.
F
I still worry about, you know, the pack CLI case. What's that story for you? You do a pack build and then you can, you know, treat the image just like the build I mentioned, in your local daemon. I don't think we can get rid of that, right? I also worry about caching. Like, imagine we don't continue to support the daemon locally; if, you know, pack exports things to OCI images, how are we going to share base layers?
F
Is it going to be a problem that, if you've pulled, you know, ubuntu bionic locally, you have to pull it again somewhere else, right, in order to share it? I think there's a lot of stuff to work through, like, a lot of details to work through, and, you know, I do see it more as a refactoring, or, like, a way to preserve more metadata locally, as opposed to something that should drastically change the user interface for, you know, someone using Tekton or someone using pack.
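The base-layer worry maps onto the layout's content addressing: blobs are keyed by digest, so a local cache can store a shared base layer once no matter how many images reference it. A toy sketch of that dedupe idea (hypothetical cache path and data; not pack's implementation):

```python
import hashlib
import os

# Hypothetical digest-keyed local cache: two "images" that share a base
# layer store its bytes exactly once, which is how an OCI-layout-based
# cache could avoid re-pulling something like ubuntu:bionic per image.
cache = "layer-cache"
os.makedirs(cache, exist_ok=True)

def store_layer(data: bytes) -> str:
    """Write a layer blob under its sha256 digest, skipping duplicates."""
    digest = hashlib.sha256(data).hexdigest()
    path = os.path.join(cache, digest)
    if not os.path.exists(path):  # dedupe: only write unseen blobs
        with open(path, "wb") as f:
            f.write(data)
    return digest

base = b"pretend this is the ubuntu:bionic base layer tarball"
app_a = b"app A layer"
app_b = b"app B layer"

# Each image is just an ordered list of layer digests.
image_a = [store_layer(base), store_layer(app_a)]
image_b = [store_layer(base), store_layer(app_b)]

# Both images reference the same base digest; the cache holds 3 blobs, not 4.
assert image_a[0] == image_b[0]
assert len(os.listdir(cache)) == 3
```

The daemon gives this sharing for free today; with a layout-on-disk approach it becomes the platform's job, which is part of the detail F is flagging.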
D
So, just to clarify, I'm not proposing we drop daemon support from pack; I'm proposing we drop it from the lifecycle. For pack users, there would be no behavioral differences between these two things. Maybe it will be a bit slower, because we're not doing the optimizations that we were doing before, but apart from that, simply in terms of the user experience that they've come to expect, that wouldn't change: pack would still load the output into the daemon for them to use.
F
Why make that feature only accessible to users of pack, though? Why not make it so that one part of the lifecycle outputs things in OCI layout format, and then there are, you know, other modular parts that could go to a registry or go to a, you know... Like, why take that out of the lifecycle and make it just a pack-specific thing?
D
I'm just proposing we take out the daemon parts, because there's loss of information there. I'm fine with keeping it in the lifecycle if there were no divergence in what the outputs of the lifecycle were. The only reason I'm proposing we drop it from the lifecycle is that there's loss of information, at least from a platform's perspective.
F
I think, going down that track, right, say the lifecycle can only do OCI on disk and registry, something like that, right? Then, you know, in the future, if we think about how pack should use that functionality in order to achieve that same goal of not losing information, we might keep a separate, you know... we might run a local registry, or keep a separate local cache with images, and then have that automatically
F
You know, load into the daemon at the end, so that we preserve the user experience like anthony's talking about. And so then we've taken the daemon stuff out of the lifecycle and re-implemented it in a more complex way in pack. Why not just do that more complex implementation in the lifecycle and keep pack kind of dumb, if that makes sense?
B
Because of the specification piece, right? For me, it's always rubbed me the wrong way on the specification aspect of it. So if we look at the spec right now, it has daemon support, and so what we're saying is that, for another lifecycle implementation to be compliant, right, it has to support the daemon. And I think, instead of saying that, I'd rather see it say:
F
Yeah, but I think you could also spec that the lifecycle, you know, any lifecycle implementation, can contain any number of post-OCI... like, a valid lifecycle outputs OCI format, right? Any lifecycle implementation can provide any number of valid export strategies, post-OCI-format, to different places, and then we can keep the functionality that might be useful for platforms, right, you know, somewhere that's not just specific to pack. Like, I don't...
F
It's like, imagine you're, you know, some SaaS CI runner and you want to end up storing your image in the Docker daemon at the end, right, so that it can be run in a subsequent step, or something like that, right? If the implementation lives in pack, it forces you to do the build itself in a nested Docker container, instead of in whatever container your platform
F
natively provides. Like, I guess, a contrived example: say you're going to do a build on k8s, right, and you wanted to dump the image onto the node and then run it immediately afterwards, right? You know, now you wouldn't be able to do that, because you'd need access to the daemon to run the image, and you're not just using the daemon for storage; those are kind of two different operations.
B
But I think that's very hypothetical, right, where right now it's, like, coming at us as an expense, right? And I think that's where a lot of the alignment has come from, both sides saying that we'd rather not pay that expense until we see a use case that actually requires it, and we'd rather align with the existing specs, because you can do a lot of stuff with the OCI layout, right? Once it's on file, you can use different tooling to load it into whatever you want to, and, yeah, you pay a penalty for that loading.
B
I think that's one of the things that we were thinking about implementing, and I think a lot of this may be better served as a conversational piece if we create, like, a POC, right? Like, we do a POC, see what the performance implications are, see what we can do to remedy some of those, and then say: okay, you know, this is not that bad, right? And then I think we would be a little bit more accepting and less concerned that way.
H
I think I can help with that part of the POC, because currently what I have is: the lifecycle was exporting the image in the OCI format, leaving the image in the layers directory, right? Then pack takes that image, that file, from there and copies the file onto my local machine. As a user, from the pack perspective, that's it; that's the output right now from my POC. Then, in your local directory,
H
You can see the OCI image there, and then I was using another tool to create the bundle to actually run that image, right? So what I can do right now, from that point, is: okay, let me try to add some feature to pack to complete the workflow, right? Take the image and then probably push that into the local daemon, for example.
H
And then we can compare, okay, how much it takes for the process to build the image using that way, right: the lifecycle exporting to OCI format and then the pack tool creating the bundle and pushing that, versus using the normal way, letting the lifecycle push the image to the local daemon. We can compare both, so I can try to work on that.
B
Yeah, there's also a tool called skopeo that lets you do migrations, right, or copies, and maybe that's optimized, or maybe they've optimized that moving into the daemon.
D
For what it's worth, buildah also has, like, a few flags to directly interact with the containers image store and export the image as a runtime bundle. So I am not sure if the Docker daemon does that, but if it's purely, like, the BuildKit implementation, it might be faster. I'm not sure; I just, like, skimmed through eric's POC and through the BuildKit docs to see how things fit, but yeah.
B
I think it might just be the standard method of working with imgutil, but I'm not confident right now on the exact answer to that.