From YouTube: CNB Office Hours - 24 Feb 2022
A
Anyway, so this morning we were discussing these kinds of binaries that platforms want to use, but that don't deal as much with the buildpack API: what's a good place for them, and also where should they go, whether they should go in the platform API or whether they should be some separate extension spec. Hey, Steven.
A
We were just starting, but the first item on the agenda was this whole concept of binaries that only the platform uses, not so much the buildpacks, and where they should live. What should we do with them?
C
The way we're defining it, the buildpack API is exactly what the lifecycle is going to do when it receives a buildpack, and the platform API is exactly what the platform needs to do to the lifecycle in order to exercise its features. So, by extension, the buildpack API is "what does a buildpack author need to create in order to interface with the lifecycle", specifically, and the platform API is "what does a lifecycle author need to do", right?
A
We have the prepare phase, we have this signer phase, and we have potentially this publish phase, which will not be doing anything with the buildpacks, just the inputs that go into them and the outputs that come off, but they're not doing anything in the middle.
C
Can we talk through each of the things, so there's a better understanding of the different touch points? Like, the cosign integration is something that, you know, takes some output of the build process, in a format decided by the lifecycle in the end, right, and uses that information to sign container images.
A
What we just need is a way to publish all of this extra information, like any annotations on the manifest, or the manifest itself that was supposed to be published, or these signatures or attestations, at the end of this whole process, in a way that preserves all of this information, so the exporter can still export to the daemon.
A
We should be able to compute them during the export, because, from our conversations with Jason and the ggcr folks in general, it looks like we can pre-calculate the digest that ggcr will actually push out, and with some certainty: if we use the same thing, we'll get the same output. So when the lifecycle is exporting things to the daemon, it can just calculate what the output digest would be if it were to push it out to the registry, and store all of that information and these signatures.
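A minimal Go sketch of that idea using ggcr (go-containerregistry); the helper name and the daemon hand-off are illustrative, not actual lifecycle code:

```go
package main

import (
	"github.com/google/go-containerregistry/pkg/name"
	v1 "github.com/google/go-containerregistry/pkg/v1"
	"github.com/google/go-containerregistry/pkg/v1/daemon"
)

// precomputeAndExport is a hypothetical helper: ggcr computes the
// manifest digest deterministically from the image's config and
// compressed layers, with no registry round-trip, so the digest a
// later remote.Write would produce can be recorded up front.
func precomputeAndExport(img v1.Image, tag name.Tag) (v1.Hash, error) {
	digest, err := img.Digest() // the digest a registry push would yield
	if err != nil {
		return v1.Hash{}, err
	}
	// Export to the local daemon; the recorded digest can be handed to
	// a signer even though the image never touched a registry.
	if _, err := daemon.Write(tag, img); err != nil {
		return v1.Hash{}, err
	}
	return digest, nil
}
```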
C
So I maybe have architectural opinions about some of that, but putting that aside: have you actually tried that digest reproducibility? Because there are a lot of things that can change, right? Like, the gzip compression algorithm can vary, right, and even the config blob versus its local config file, I'm not totally sure it's preserved bit for bit.
A
We
we
asked
about
that
as
long
as
we're
publishing
things
through
ggcr.
The
output
should
be
reproducible.
I
don't
know
if
one
got
a
chance
to
like
try
out
the
text.
Okay,
but
according
to
jason,
those
things
are
reproducible
the
gzip
compression
is
set
by
ggcr
and
you
can
define
exactly
what
you
want
it
to
be.
Docker
uses
a
different
one
too,
but
you
can.
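For instance, a sketch assuming ggcr's tarball layer options: pinning the gzip level keeps the compressed bytes, and therefore the layer digest, stable across runs. The path is a placeholder for wherever a lifecycle phase wrote the layer tarball.

```go
package main

import (
	"compress/gzip"

	v1 "github.com/google/go-containerregistry/pkg/v1"
	"github.com/google/go-containerregistry/pkg/v1/tarball"
)

// layerFromDisk builds a layer with an explicitly pinned gzip level,
// so re-compressing the same tarball always yields the same digest.
func layerFromDisk(path string) (v1.Layer, error) {
	return tarball.LayerFromFile(path,
		tarball.WithCompressionLevel(gzip.DefaultCompression))
}
```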
C
So, but if you're doing it entirely through ggcr, it means you're gonna put it in the daemon, and then, when you push, you don't docker push: you pull it back out of the daemon, yeah, and then publish it. But then, when you pull it back out of the daemon, you're going to have to recompress everything. And, you know, why not just keep everything... like, why?
A
It
can
just
say
like
oops
something
something
happened
in
the
middle,
and
this
is
not
what
I
was
expecting
and
these
signatures
and
other
things
that
you
have
are
incorrect.
You
can
still
talk
or
push
that's
you're,
not
you're,
not
you're,
not
losing
your
app
image.
It's
just
that
pack
publish
will
have
some
safeguards
in
case.
Something
goes
wrong
or
something
has
changed
in
the
middle
for
whatever
reason,
but
you
will
get
the
same
output
that
you
were
supposed
to.
A
If
you
were
exporting
things
directly
to
the
registry,
bag
doesn't
have
to
care
about
this
whole
thing
again
anymore,
because,
like
so
far,
pack
is
just
like
a
life
cycle
orchestrator.
It
just
runs
things
and
containers
or
imports
logic
from
the
life
cycle.
It
doesn't
actually
have
to
deal
with
all
of
this
logic.
C
I see. So if this were in the lifecycle, it would not be part of the exporter, because the exporter runs after a build. It would be a separate thing, adjacent to the exporter, that takes an image that was exported to the daemon, adds the additional metadata (it was exported to the daemon regardless of where the bits are coming from, in that case), and exports that to the registry. And is it important to version that CLI for that?
C
I
worry
about
not
versioning,
that
with
the
life
cycle,
because
you're
going
to
do
things
like
reading
layers
in
the
cache
format
you
know
or
reading
the
metadata
that
extra
metadata
that
was
generated
during
the
build
process.
Right
like
it
feels
important
to
version
that,
along
with
the
lifecycle,
binaries
that
are
responsible
for
generating
those
artifacts.
C
But I think you need to expose a lot of implementation detail of the lifecycle, right? Maybe not in the case of cosign with report.toml, because that's defined in the spec, right. But in the case of this publisher thing right now, you have to expose the detail of where... where can I find the compressed layers on disk to export them? This is why I feel like... I don't know, sorry about that.
A
That's
again,
a
platform
concern
right,
like
a
platform,
is
responsible
for
mounting
volumes
in
appropriate
places
and
passing
data
along
the
phases.
The
life
cycle
in
this
case
would
just
output
these
layers
somewhere
on
disk.
The
responsibility
of
this
binary
is
to
just
take
whatever
is
on
disk
use,
an
appropriate
compression
algorithm
that
would
ideally
mimic
what
the
exporter
would
send
out.
We
can.
We
can
define
that
compression
level
and
algorithm
somewhere,
sorry
and
and
then
it
just
becomes
a
platform
api
concern.
As
long
as
the
platform.
A
But I think, if you just consider that the exporter exports on disk in OCI layout format, and all the publisher is doing is verifying a few things and publishing that OCI layout, I think it's a very minimal API and we're not exposing anything that's internal to the lifecycle. This is saying: here's something on the disk, push it out.
A
Just
make
sure
that
there
are
like
just
follow
a
few
guarantees
along
like
the
life
cycle,
produce
these
values
just
check
those
values
remain
the
same
and
publish
all
of
these
different
oci
artifacts
out
to
the
register.
So.
C
We're
standardizing
on
oci,
so
if
this
implementation
is
if
this
implementation
you're
talking
about
exports,
the
daemon
but
then
also
just
keeps
the
entire
image
on
disk
and
oci
format,
in
addition
to
exporting
to
the
data
right
and
then
publish
publishes
that
on
disk
oci
image
and
oci
format,
and
so
publisher's
job
is,
is
a
binary.
You
point
it.
You
know
it's
like
50
lines
of
ggcr
that
has
probably
been
written
a
thousand
times
already
right,
take
oci
format
on
disk
and
export
to
registry.
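Roughly those "50 lines of ggcr", as a sketch: it assumes an OCI layout directory whose top-level index holds a single image, and the function and parameter names are made up for illustration.

```go
package main

import (
	"github.com/google/go-containerregistry/pkg/authn"
	"github.com/google/go-containerregistry/pkg/name"
	"github.com/google/go-containerregistry/pkg/v1/layout"
	"github.com/google/go-containerregistry/pkg/v1/remote"
)

// publish reads an image out of an OCI layout directory and pushes it
// to a tag, using the local keychain for registry credentials.
func publish(layoutDir, tag string) error {
	idx, err := layout.ImageIndexFromPath(layoutDir)
	if err != nil {
		return err
	}
	im, err := idx.IndexManifest()
	if err != nil {
		return err
	}
	img, err := idx.Image(im.Manifests[0].Digest) // single-image assumption
	if err != nil {
		return err
	}
	ref, err := name.NewTag(tag)
	if err != nil {
		return err
	}
	return remote.Write(ref, img,
		remote.WithAuthFromKeychain(authn.DefaultKeychain))
}
```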
A
Except the additional checks: like, we want to make sure that whatever digest the lifecycle calculated for it, in order to create the appropriate signatures, for example, or attestations, remains valid. And in order for it to do that, it again has to take the report.toml as input, or something that says "here were the output tags" or whatever was supposed to be published out. Like, let's say, here's...
A
The
list
of
output
tags
that
were
supposed
to
be
published
out
here
is
the
calculated
my
manifestations
like
we
need
those
those
inputs
to
publish
things
out
again
right.
So
when
you,
when
you
did
pack
build
minus
t
whatever
it
stored
all
of
those
images
in
the
daemon
with
the
appropriate
tags.
But
now
you
want
to
do
fact,
publish
and
it
it
should
publish
like
all
of
those
things
out
to
the
tags
here.
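For reference, the platform spec's report.toml already records the tags and digest being discussed, so a publisher could consume just that file. A sketch of the reading side in Go, where the struct covers only the fields mentioned here and BurntSushi/toml is one arbitrary choice of parser:

```go
package main

import "github.com/BurntSushi/toml"

// report mirrors the [image] section of the lifecycle's report.toml:
// the tags the build was supposed to publish, and the digest the
// lifecycle calculated, which the publisher must reproduce exactly.
type report struct {
	Image struct {
		Tags   []string `toml:"tags"`
		Digest string   `toml:"digest"`
	} `toml:"image"`
}

func readReport(path string) (report, error) {
	var r report
	_, err := toml.DecodeFile(path, &r)
	return r, err
}
```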
A
But
yeah
I
mean
that's
one
suggestion
it
doesn't
have
to,
but
that
seems
to
be
part
of
the
platform
api
contract
right
now,
that's
exposed
to
the
platforms
and
right
now,
platforms
use
it
for
other
things.
I
know
kpac
uses
it
again
for
some
other
stuff,
but
that's
part
of
the
platform
api
right
now.
C
I'm very... I'm worried that we have too many APIs and specs and contracts already, and we need to do less of that versioning, at least until we have version numbers that we can bump, at least until we're 1.0 and version numbers are more maintainable, right. But that seems like a good path forward to me, all right. It seems worth it in this case.
D
The platform spec is defining, yeah, it's defining that a platform is compliant. The reason why I would want the prepare operation to be in that document is for that compliance, that says that you do these things, right, you call it prepare. Really, what I'm trying to get at is, I'm trying to, like...
D
I
just
ran
into
this
recently,
where
going
through
google
cloud
functions
when
it
builds
your
application,
it
doesn't
take
into
consideration
the
project
descriptor
right,
so
I'm
trying
to
say
that
hey
in
order
for
you
a
platform
to
be
compliant
with
build
packs
and
say
that
it
supports
build
tags,
that
it
accepts
and
verifies
and
utilizes
the
project
descriptor,
that's
kind
of
more
or
less
my
goal.
A
I'm
so
bit
worried
about
that,
because,
right
now
the
project
descriptor
is
an
extension
spec
and
we've
also
talked
about
moving
the
project
descriptor
entirely
out
of
our
spec
repository,
since
it
doesn't
really
have
anything
to
do
with
buildbacks.
At
this
point,
like
we
have
the
buildbacks
namespace,
which
has
got
everything
to
do
with
build
packs,
but
the
idea
of
a
project
descriptor
itself
doesn't
really.
D
That's, I guess, the way I envision that, and I don't see a problem with that. I think there is a bit of contention on us saying that, hey, in order for you to be a compliant platform, you have to recognize the project descriptor. You know, again, I'm kind of on that hill. I don't know if I'll die there, but it's really painful to use the project descriptor, or to use buildpacks across multiple platforms, without the project descriptor.
C
Project descriptor aside, right, just thinking about our architectural options: if this is a separate repo that's versioned separately, you could still say a builder includes the lifecycle components and the meta-lifecycle components, and then the builder, you know, can be configured to always run whatever, right? Like, that separation doesn't, you know, preclude us from having interactions between those components, right?
A
Yeah, that was the idea: that these are components that have nothing to do with the lifecycle itself. But...
D
Yeah, I'm trying to think through mine, and sorry if I'm being selfish, right. But if I were to say the prepare operation is treated just like the publisher, right, where it's completely separated from the lifecycle and not necessarily even part of the platform API, it just leverages the platform API in some way, I go back to my... I guess the benefit is that there is tooling for platforms to say, like, hey, you know, now Google Cloud Functions, they, like...
D
If
I
throw
a
bug
and
say
like
hey,
why
isn't
it
recognizing
my
project
descriptor
and
I
could
point
them
to
like?
Oh
you
could
do
it
very
easily
by
using
this,
you
know
tool
that
the
build
packs
project
provides
that
might
be
easier
for
them
to
implement
it
and
then
maybe
just
get
on
board,
but
maybe
that's
good
enough
right
like,
and
maybe
we
don't
get
stuck
in
this
trying
to
enforce
a
a
preparer
phase.
Part.
A
Doesn't
that
mean
that
we
are
still
keeping
the
project
descriptor
or
the
prepared
phase
option?
It's
just
that
now
that
we've
provided
it
as
a
tool
platform
should
be
more
likely
to
implement
it
because
they
see
other
platforms
doing
it
and
they
don't
have
to
like
handle
the
maintenance
burden
of
keep
updating.
E
Yeah, I like this concept of having, you know, strong input and output guarantees, and that's sort of what we're interacting with, like...
E
I think the exporter outputting either an image or an OCI layout, those are very well-known outputs that a platform today can integrate with. And I think it's cool that we as a project could add something like a cosigner binary that does the signing and happens to read OCI layout, and someone could use that even outside of buildpacks if they really want, obviously. And having the convenience of reading, like, report.toml or something to get that information, instead of you having to provide it, would be a nice convenience, yeah.
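Such a cosigner component could be little more than a thin wrapper over the cosign CLI. A sketch, where the key path and the digest-pinned reference are placeholders, and credentials are assumed to be mounted by the platform:

```go
package main

import (
	"os"
	"os/exec"
)

// signDigest shells out to cosign to sign an image pinned by digest.
// Nothing here is buildpacks-specific: any tool that knows the image
// reference and digest (e.g. from report.toml) could do the same.
func signDigest(imageRef, digest string) error {
	cmd := exec.Command("cosign", "sign",
		"--key", "/secrets/cosign.key", // hypothetical mounted key
		imageRef+"@"+digest)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}
```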
D
Yeah, I think that's what I'm, you know, bumping my head against. I could definitely see platforms saying, no, I'm not gonna call a preparer, right, and then we really don't have a say in, you know, badging platforms to say, yep, they're 100% compliant, right. Maybe we need a certain, you know, badging of sorts, but that's a separate thing. Good, I mean.
C
You know, I think I'm hesitant to say... if somebody, like, for example, I've been thinking about, you know, what does it look like to run the Cloud Native Buildpacks API on platforms like Cloud Foundry or Heroku as they are today, right? And that's something, like, I think is totally doable with a one-to-one mapping, right, if you just run the builder and the detector but you don't run the exporter, right. And you could have a platform that's compatible with buildpacks but can't do rebasing, you know, or whatever. I think there's a lot of... and, you know, you might want the project descriptor in some very different way.
C
Right,
like
the
way
you
you
make.
A
compliant
platform
would
actually
be
hindered
by
the
tooling
being
really
restrictive,
and
the
best
thing
we
could
do
for
platforms
is
to
say
you
know.
If
you
want
to
say
cloud
native,
build
pack
certified
or
something
like
that,
then
you
got
to
meet
these
requirements.
If
we
really
care
about
that
kind
of,
you
know,
consistency
between
applications,
but
the
tool
that
we
provide
is
flexible
enough
that
you
can
integrate
it.
However,
you
need
to
integrate
in
order
to
make
your
platform
work.
D
So
how
does
how
do
we
envision
this
working?
So
like?
Let's
say:
okay,
you
know
we
we
get
the
preparer,
we
get
the
publisher,
we
get
the
cosigner
right,
we
all
go
into
separate
repos
once
a
j
romero
preparer,
the
other
one
says
sam.
You
know
which
one
are
you
co-signer
and
the
other
one's
juan
publisher
right
hypothetically,
like
there's
no
specification
for
any
one
of
these
components.
A
We can do that. We can figure out the packaging problem separately. We can still keep them as separate repositories and import the main, like the entry point, and create one mega binary if you want to. That's a separate concern. But I think it would also be easier from a purely repository-management point of view, like...
A
Like
people
who
want
to
work
on
the
prepare,
just
work
on
the
preparer
people
who
want
to
contribute
to
the
co-sign
or
signing
stuff,
can
contribute
to
co-signing
stuff
without
having
to
worry
about
affecting
prepared
things
or
like,
let's
say
note,
3v2
comes
along
and
we
want
to
support
notre
v2
signing.
Now.
We
don't
have
to
worry
about
such
deep
integrations
with
a
signing
solution
that
we
can't
create
a
notary
signer
in
the
future.
C
I
worry
about
creating
tested
paths
through
for
platforms
through
the
different
components
we're
offering
so
like.
If
we
end
up
with
you
know,
the
bill
packs
project
provides
you
12
different
tools
for
implementing
the
build
pack
api.
Each
of
those
tools
has
a
separate
version
and
here's
a
giant
excel
spreadsheet,
of
all
the
compatibility
between
all
the
different
versions.
You
can
choose
right.
That's
that
feels
less
good
to
me
than
like.
C
Maybe
we
say:
okay,
it's
okay,
if
pac
life
cycle
and
other
thing
are
separate
and
yes,
we
have
to
manage
the
compatibility
between
those
versions,
but
that's
it
right,
I'm
a
little
hesitant
to
create
a
lot.
Something
that's
going
to
make
platforms
need
to
set
up
a
ton
of
integration,
testing
right
against,
for
you
know,
version
constraints,
but.
E
If I was writing the cosigner one, like, I don't actually care if it comes from the buildpacks project. Like, if it came from cosign, they could write this thing that we're talking about writing, because there's nothing specific about a Cloud Native Buildpack: it's just a published image, and they're gonna sign it with a bunch of credentials mounted by the platform. Like, it doesn't even need to be owned by buildpacks, and I like that ability. I mean, it's cool that we'll publish one, like, it's great.
E
That it reads that report.toml makes sense, but, like, minimizing those places, and making it so that... because you could make a version of the signer that actually reads OCI layout and does the publish as well. Like, it could publish the image and sign it all in one thing, and that would be maybe better from a spec perspective, yeah.
A
No,
I
I
mean
I,
I
don't
really
it's
it's
more
of
a
packaging
and
repository
management
problem,
I'm
more
like
more
interested
in
in
just
the
specific
idea
around
creating
reusable
platform.
Api
components
like
we
have
utility
build
packs.
This
is
just
the
same,
but
instead
of
weld
banks,
it's
platform,
components.
E
So making the exporter write OCI layout is all that's required for someone to go create this, what is it, a publisher, right? Like, I mean.
A
Then you can write a proof of concept around the publisher and then say: hey, this works. I would actually go about it like: just first test if our theory actually works, like the digest stuff, yeah, and, yeah, figure out what would be a good interface for this publish thing to accept as inputs, and then we can mold our lifecycle output RFC accordingly. So, like: do we need any other information apart from report.toml in this case? Exactly.
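A sketch of that first proof-of-concept test, assuming ggcr: pre-compute the digest locally, push, then ask the registry what digest it actually stored.

```go
package main

import (
	"fmt"

	"github.com/google/go-containerregistry/pkg/name"
	v1 "github.com/google/go-containerregistry/pkg/v1"
	"github.com/google/go-containerregistry/pkg/v1/remote"
)

// verifyPushedDigest checks the reproducibility theory: if ggcr's
// digests are deterministic, the registry must report exactly the
// digest we computed before pushing.
func verifyPushedDigest(img v1.Image, ref name.Tag) error {
	want, err := img.Digest()
	if err != nil {
		return err
	}
	if err := remote.Write(ref, img); err != nil {
		return err
	}
	got, err := remote.Get(ref)
	if err != nil {
		return err
	}
	if got.Digest != want {
		return fmt.Errorf("digest drift: computed %s, registry stored %s",
			want, got.Digest)
	}
	return nil
}
```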
E
Yeah, yeah, report.toml was fine for that sort of stuff, but I meant: is there enough information in just OCI layout to do what you need in the publisher? You know, because we talked about having more digests. Like, can you calculate that in the publisher? Is OCI layout enough, you know, just to minimize...
A
We
we
can,
it
was
just
about
so
again,
let's
say:
let's
say
it
calculates
the
digest,
but
it's
it's
slightly
were
for
whatever
reason
different
than
what
the
life
cycle
expected
and
the
signing
binary
used
that
to
create
the
signatures.
B
Sorry, but I mean, if the exporter is doing the same thing in the daemon or in the registry, it's going to use that thing that it's exporting... I mean, the OCI image that it's exporting on disk. What we are saying is: exporter, please put this thing in OCI format there for me, right? So it's not going to produce anything in any other way, I think, right? Or am I wrong?
B
Compression, yes. At the beginning I thought the exporter was not going to save to the daemon or the registry; it was only going to save in OCI layout format, and then publish was going to take care of putting the things in some other place. That was my first idea. But now I'm confused, because what I understood was: the exporter is going to do the same thing that it's doing right now, and also save the image in OCI layout format.
A
Or
so
with
what
natalie
said
around,
we
are
saving
the
uncompressed
layers
on
disk
in
launch
cache
like
we
obviously
don't
want
to
repeat
this
whole
thing
like
we.
We
don't
want
to
put
these
uncompressed
layers
in
the
launch.
Cache
also
export
to
ocl
format,
also
export
daemon
so
like.
We
also
need
to
figure
out
the
middle
ground
between
all
of
these
things
like
if
we
can
reuse.
D
If the lifecycle could internally just start using the OCI layout structure, right, because it's literally the same thing: it's just that the directory structure for the blobs needs to be in a certain way, and then that's OCI layout compatible.
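For reference, the directory shape the OCI image layout spec defines, which is what makes it "literally the same thing" as content-addressed blobs on disk:

```
layout-dir/
├── oci-layout            # {"imageLayoutVersion": "1.0.0"}
├── index.json            # top-level index pointing at manifests
└── blobs/
    └── sha256/
        ├── <manifest digest>
        ├── <config digest>
        └── <layer digests>
```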
A
I think the one thing, one reason why I wanted report.toml as the common format was: let's say you run the signer, it takes in the report.toml, and you say, I don't want to publish it right now, I just want to store it to disk. What would the signer output, for example? It could again just output a report with the list of OCI artifacts and the tags that need to go out, and the publisher could just take them in and put them in the registry.
B
So, okay, yes, because I said: okay, that's what I understood. It's a little different from what I'm understanding right now, but okay, yeah. The one idea was like that, right: the exporter just puts things in the OCI format, as you said, and the publisher will take care of saving that in some other place. But if we do that, then it shouldn't be a different tool, right? Or it's more like: it should be part of the lifecycle. So...
A
Like
if
we,
if
we
want
a
user
interface,
that
is
like
build
impact,
publish
or
pack
push,
we
need
these
two
things
and
if
we
need
something
that
says
pack
build,
minus
mine
is
published
and
we
want
to
optimize
the
output.
Then,
like
the
the
speed
and
performance,
then
we
just
need
to
publish
it
out
to
the
registry.
D
So can I make a maybe slight suggestion? And, you know, please shoot me down if this has already been discussed. But what if the exporter just supported, instead of daemon, you know, registry or OCI layout, right? And then it's really... the publisher just takes OCI layout, which is essentially the same thing. It's just renaming the daemon use case to the OCI layout use case.
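A sketch of that exporter-side target, assuming ggcr's layout package; the function and directory names are illustrative:

```go
package main

import (
	v1 "github.com/google/go-containerregistry/pkg/v1"
	"github.com/google/go-containerregistry/pkg/v1/empty"
	"github.com/google/go-containerregistry/pkg/v1/layout"
)

// exportToLayout writes an image into an OCI layout directory, the
// hand-off format suggested above; a separate publisher (or a
// daemon-loading utility) consumes the directory afterwards.
func exportToLayout(dir string, img v1.Image) error {
	p, err := layout.Write(dir, empty.Index) // creates oci-layout + index.json
	if err != nil {
		return err
	}
	return p.AppendImage(img)
}
```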
D
Like, do we need to solve it in a much better way than saying, oh, the poor performance at this point in time is good enough, right, and have this be a bolt-on approach? Because we don't want to go full, you know, external, and then have that be slow, or slower most likely. And maybe we just have to rethink the daemon altogether.
A
The spec, maybe, but not the lifecycle. Just purely from an implementation perspective, it makes sense for it to just load things from the daemon. When we last discussed this with the old registry pass-through, it was just way too complicated, and we were doing what the lifecycle currently does, but in a less efficient manner.
B
But,
but
if
is
okay,
I
I
like
javier's
idea:
okay,
yes,
I
agree.
We
need
to
pull
from
the
demon,
that's
it
okay,
cool!
We
can
pull
from
the
demon.
But
now
what
I
understand
is
okay.
If
we
take
the
oci
format
as
the
output,
then
something
like
the
publish
like
you
know
that
can
be
used
for
with
from
a
platform
to
then
put
that
thing
into
the
demon
it
will
be
outside
life
cycle.
It's
not
it's!
It's!
It's
not
a
problem
from
the
life
cycle.
A
No, no. So, I think, just to give this... we discussed this whole thing in our last office hours, and the reason why this was so complicated was because, one, we're thinking about it from a pack perspective, but not other platforms that rely on the daemon, like the Spring Boot plugin, for example. So they would then have to figure out a way of concocting this fake registry pass-through that the lifecycle can use, making their lives harder, and...
A
Even with this registry pass-through, the final implementation is literally what the lifecycle is currently doing: calling docker save, which puts the output in the Docker v1 format, and then loading it in ggcr. That's what ggcr does; that's what your registry pass-through will do. It's just moving it from one place to another.
D
Well, I think it's the same. I mean, to me, you know, in my simple mind, it's the same thing that we're doing with the cosigner, right, and the publisher and the preparer. It's like we're removing complexity from within the lifecycle and delegating that complexity to the people or the components that actually care about those things. So if a person or platform cares about the daemon support, then they should be the one somewhat responsible for it. Yeah, we could build it...
D
You
know
a
utility,
you
know
to
help
them
out,
that's
fine,
but
I
don't
know
that
it
should
be
inside
of
the
life
cycle.
I
think
that's
what
we're
trying
to
get
rid
of.
I
mean
another.
Crazy
idea
is
like
what,
if
the
life
cycle
didn't
even
use
registries
at
all
right-
and
it
just
did
oci
layout
and
stuff,
but
that
is
just.
E
Well, I mean, I think, Docker... at least the exporter itself will run faster, because if it already had a previous build, it'll actually be able to skip re-exporting some layers that are already in that registry, and then you'll still do the docker save at the end of the day. But, so, yeah, I don't know, I guess it'd be maybe a little slower. I don't know, I'd have to... I wouldn't be surprised if it's pretty...
A
Much
the
same,
the
issue
with
introducing
another
cache
is
just
inconsistency
like
now
that
fake
registry
has
to
ensure
that
take
care
of
cash
and
validation
problems,
which
is
like
just
yet
another
thing
to
worry
about
from
a
platform
perspective,
the
other
thing
was
so
far.
Platforms
have
never
had
to
deal
with
any
of
this.
D
Unless you really wanted to; then you could do the docker... the pack publish to the daemon, right? So then you know that, hey, my poor experience is really because of the daemon and not so much the build process.
E
If you understand how Docker works locally, right... because, like, sometimes it's pulling images that you don't have when you do a pack build, but if it's there, then it's not... Like, you have to understand how the pull policies and all that stuff work, which can be confusing, but it does allow you to use your, like, local Docker state of the world to build stuff with, and anything we choose would either have to match that or be a complete breaking...
E
Change
like
javier
was
saying
like
like
just
saying,
be
like
we,
don't
really
do
local,
like
you,
can
load
stuff
into
pack
like
here
pack
load
this
builder
like
because
you
could
go
that
way,
right
pack
load
builder
and
you
pull
it
from
the
demo
and
put
it
into
packs
registry
and
then
read
and
then
use
that
when
you
do
a
build
and
but
then
those
are
very
explicit
operations
that
are
like
that
are
very
obvious
to
the
developer,
though
right,
because
sometimes
you
get
bit
today,
you
do
a
pack
build
on
a
builder,
and
if
you
didn't
do
pool
policy,
none
it
gets
wiped
out.
D
Yeah, and I don't argue with that at all, right. It's just that the workflow, I think, needs a little bit more focus, because even as it stands today it's a really poor experience from a performance standpoint. And so I guess I'm not seeing a lot of the downsides for things that might actually improve the sort of speed at which we could deliver additional features.
E
We'll see, that's because of your experience doing it. Like, anyone who does it for the first time, or hasn't done it in a while, you're just going to immediately wipe out the builder you just built if it's named the same as something else, like if you're extending another one. I don't know, I wouldn't say it's like... like I said, it needs work regardless, the local experience, but I wouldn't say it's, like, super...
E
I'd like to give it a thought. I wonder if some explicit URL schemes could make the front end still seem very approachable, but then we could, like, you know, use those as hints to do things through either proxies or through the daemon itself. I don't know. Like, yeah, I think moving faster in the lifecycle is something we want.
E
So, like, I don't know... having the baggage of being able to connect to a daemon, we're gonna get, like, random "oh, this doesn't work in Podman's version of the Docker daemon when I'm running Kubernetes", and you're like: oh, that sounds so complicated. Like, I just want to make my life easier on the lifecycle side, so I want to push that to the platform. Like, it's your job to connect to the daemon or whatever; if you want to use this reverse-proxy registry thing, go for it. Yeah.