From YouTube: CNB Sub-Team Sync: Implementation - 2021/12/08
A
Okay, I guess we can proceed to status updates. Does anyone have something they'd like to share?
B
We have mostly just been reviewing some PRs occasionally and helping keep up to date with what's going on with the extension stuff, the builder and run extensions. So it's coming along nicely.
A
Yeah, I guess I've been focusing on getting the patch release out, and I'm now trying to get up to speed with all the conversation that's been happening around extensions and Dockerfiles.
C
Unrelated to — well, sort of related to the OCI layout discussion, but I'm going to table that for the next office hours. I have a couple, three RFCs or something on the implementation, sorry, that I wanted to talk about.
A
Yeah, I think actually you could go ahead and talk about them. I don't believe we have anything else on the agenda.
C
I think the first RFC is a fairly simple one. I'm hoping it wouldn't require any buildpack changes. There would be a platform API bump, but in order to support this platform API I don't think the platform implementers will actually have to do anything apart from declaring that platform API. The idea is that currently the OCI config has this object known as history, which is what a lot of things — like dive, or Docker Hub, or any container registry — use.
C
Or there was this layer visualization tool recently which displayed images like a GitHub repository, with all the layers. And I noticed some of these tools break when you use buildpack images. For example, that layers website doesn't work at all with buildpack images, because I think it just expects the history key to be defined and to contain some valid values. And this is also useful in general, I think.
C
So the idea is simple: we populate this history key with information we already have. If it's a buildpack-specific layer — I have a template in here, but we don't need to follow that template if people have opinions about it — the idea was to just put the buildpack ID, and if there's a buildpack name, put that in there too. I know name is optional, but the ID will always be there. And put the layer name if it's a buildpack layer, and then the same for the application slices.
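The template described here could be sketched roughly as below — a minimal illustration, assuming a locally defined `History` struct that mirrors the field names of the OCI image config's history entries; the helper name and the exact message format are hypothetical, not the RFC's actual template.

```go
package main

import "fmt"

// History mirrors the fields of an OCI image config "history" entry
// (opencontainers/image-spec); defined locally to keep the sketch
// self-contained.
type History struct {
	CreatedBy  string `json:"created_by,omitempty"`
	Comment    string `json:"comment,omitempty"`
	EmptyLayer bool   `json:"empty_layer,omitempty"`
}

// historyForBuildpackLayer is a hypothetical helper: buildpack ID is
// always present, name is optional, and the layer name is appended.
func historyForBuildpackLayer(bpID, bpName, layerName string) History {
	who := bpID
	if bpName != "" {
		who = fmt.Sprintf("%s (%s)", bpName, bpID)
	}
	return History{
		CreatedBy: fmt.Sprintf("buildpack %s, layer %q", who, layerName),
	}
}

func main() {
	h := historyForBuildpackLayer("example/nodejs", "Node.js Buildpack", "node_modules")
	fmt.Println(h.CreatedBy)
	// → buildpack Node.js Buildpack (example/nodejs), layer "node_modules"
}
```

With one entry like this per layer, tools that only read `created_by` would have something meaningful to display for buildpack images.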
C
I don't know — yeah, I don't have any strong opinions either way. I just thought that if a buildpack is going ahead and making the distinction of actually putting a set of files in a slice, I'm guessing they are somehow correlated, and if they are, it might make sense to name them. We can make it an optional key so that if it's not present, we just put slice 0, 1, 2, 3; if there's a name there, we can put it. That way there are no backwards-incompatible changes.
C
You still get the history stuff, but if you want to put a name in there, you can. I can make that optional or remove it entirely; I don't have strong opinions. It's just that I thought, if someone's gone ahead and made a slice — which I have not seen a lot of buildpacks do — it must hold some meaning.
C
So that was why. Otherwise, I have this thing which is like "application workspace" for the generic one; you can just call it application slice 1, 2, 3, 4, whatever you want to call it, yeah. The other reason was that I recently was using this feature, because I wanted some things in an application slice, and later on I wanted to retrieve that application slice — just that layer — but I couldn't tell it apart from the other buildpack layers.
C
But yeah, I just ended up using a buildpack layer instead; it doesn't require a slice. I thought this might be useful, that's why I added it, but the main motivation was just so that we could use all of these tools and visualize buildpack images more easily.
A
Yeah, I pasted in the chat an issue that Ben Hale opened on the lifecycle over a year ago. I think it's the same thing, right?
A
And it looks like this RFC that you wrote is the one we were waiting for. Okay.
A
Oh, I'll just point out that one of the — not sticking points, but I guess sources of conversation on this issue — was some disagreement over what the format should be: whether it should be JSON or some other bespoke format.
C
So I think there's a field in there that's not used as often — the history entry has a comment field as well as created_by. Most of these tools that I've seen use the created_by field to actually show whatever is in there, like the command; the thing you see here is from the created_by field. I've not seen anything use the comment field, but you could potentially put structured data in the comment field.
C
If you wanted it to be there — I don't know what would require something like this. The other alternative, which I can add, is that these are the default values the lifecycle puts in, in case there's nothing there, and if you want, you can add something to your layer.toml to set that field, which might address both issues. If people want JSON-structured fields, they can do that.
B
Yeah, that's kind of where I'm at: getting something in there is better than nothing at all, and we can iterate on whether someone wants to be able to control this more from the buildpack or platform layer. But I feel like, yeah, I'd be okay with this even without the slice-naming part, and we can circle back around to that later, when someone who uses slices really wants it.
C
Yeah, I can do that — sorry. What I will do is remove this API change and change that to slice index 0, 1, 2, 3, 4, however you want to call it. And once this is added — since this, I'm assuming, will require no changes whatsoever apart from the lifecycle — we can then put up an RFC if we want to introduce API changes to allow users to set these values to some custom strings or whatever.
C
I just didn't want to introduce a bunch of API changes right now, because I know there are so many API changes in flux. But yeah, I also don't know what happens in cases of RFCs like this, which are purely lifecycle-only changes — I don't know whether this would be a platform API bump or whether we can just put it in the lifecycle, given that nothing changes and this would now be part of the spec anyway.
C
The idea is that OCI defines a bunch of annotations on the manifest which we have pretty close values for in a bunch of places. So these are all the annotations that OCI defines as standard keys, and I have put in here a mapping of where they're currently stored, or where they can be sourced from, without introducing anything new.
C
So, for example, there's the base image name — that's currently stored in the io.buildpacks metadata label, under the run image reference. There's the base image digest, which I guess is also technically this, but in the reference we put both unresolved tags and digests, depending on what the run image input was.
C
But I guess we can just resolve the tag if we were given the tag and put the digest there. There's image source, which comes from the source metadata repository; this is also part of the project metadata toml, which is part of the current platform API.
C
Although it's not used much, platforms can put things in here from the project.toml, and there are other platforms like kpack which also put default things in there even if you didn't specify them in the project — so if you're building something from a git repository, it will put the source repository and things like that. I think pack also recently started to do this as part of one of the LFX projects.
C
But the idea is that the platform can choose to use a combination of project.toml and other things to populate this project metadata file.
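The mapping being described — sourcing OCI-standard annotation keys from values the platform already has — can be sketched like this. The annotation keys are the real OCI image-spec pre-defined keys; the function and its parameters are illustrative, not the lifecycle's actual structs.

```go
package main

import "fmt"

// annotationsFromMetadata sketches mapping existing buildpack metadata
// onto the OCI-standard annotation keys, adding nothing new.
func annotationsFromMetadata(baseName, baseDigest, sourceRepo string) map[string]string {
	a := map[string]string{}
	if baseName != "" {
		a["org.opencontainers.image.base.name"] = baseName
	}
	if baseDigest != "" {
		a["org.opencontainers.image.base.digest"] = baseDigest
	}
	if sourceRepo != "" {
		a["org.opencontainers.image.source"] = sourceRepo
	}
	return a
}

func main() {
	a := annotationsFromMetadata(
		"index.docker.io/example/run:latest", // from run image reference
		"sha256:0000000000000000000000000000000000000000000000000000000000000000",
		"https://github.com/example/app", // from project metadata
	)
	fmt.Println(a["org.opencontainers.image.base.name"])
}
```

Omitting empty values keeps the behavior backwards-compatible: platforms that don't supply a source repository simply produce no `org.opencontainers.image.source` annotation.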
D
I don't know enough about the ecosystem — roughly what percentage of pack invocations would you say are done with publish versus daemon mode?
C
Honestly, I wouldn't know either. I think it depends from organization to organization and what their practices are around creating these images. I've seen a lot of people do pack build and then a docker push, even though we recommend doing publish because it's faster — yeah, I've seen people do that. And then there are platforms like Tekton and kpack which don't have a daemon mode whatsoever and rely entirely on the registry. pack, as far as I know, is the only popular platform that uses the daemon mode.
C
But yeah, I think a bunch of the new features or RFCs that I've proposed are conditioned on the same effect: we can do things in the registry mode because it follows the normal OCI spec, and then in the daemon mode those things just go away, because there's no manifest — there's a config, and the daemon, or whatever, just creates the manifest or manifest list on the fly, I guess, when you do a push.
D
Yeah, so I guess this seems to me safe to do, because if you're using daemon mode it just doesn't change anything, and it provides value for people that are doing publish. And maybe it adds to the list of reasons you should prefer publish: doing pack build into the daemon and then docker push, if you're not using the image in the meantime, is like an anti-pattern, right?
D
You lose information, it's slower, it's worse in a lot of ways, and this is another reason you should prefer publish and skip the daemon entirely, right? I think it's a step forward. It might not be a step forward for all users, but it could be a carrot to get some of them to do the better thing anyway, which I think is a good thing.
D
I don't know of any that consume, like, licenses in any useful way, but I wouldn't be surprised if there were image-scanning or policy-enforcement things that do. I mean, it's not verifiable in any way — I could say that this image is licensed under some made-up license, and nobody could really tell me that I'm lying. The newest ones are the base name and base digest.
D
I don't know of anyone using them yet, but it's a bit of a chicken-and-egg problem: registries could do interesting things with these, but there aren't enough images in the wild yet that expose that information to justify building the stuff to take advantage of it. Certainly I've built tools that you can run on an image that will check if a base image update is needed and do a rebase for you, etc., but nothing in production, enforced, in the wild.
D
Yeah, I mean, in the case of the base name and base digest, those were explicitly inspired by buildpacks' automatic rebase support and then explicitly proposed to OCI as a "the fundamental issue at stake is not buildpack-specific; you can apply this to anything, so you should." So this is definitely a comforting next step in the march toward everybody, you know, everybody being able to rebase their images. But yeah, I'm a huge fan of this.
B
...expecting, but maybe if pack did it through, like, an option or something — I'm not sure, yeah.
C
It would be nice if, for all of these use cases, there was a pack publish, so that it loaded things into the daemon regardless, but also stored all this extra metadata somewhere — so that when it does need to publish things out, it can take the things stored in the daemon, take this extra metadata, combine them together, and push it out, so that you get the same behavior.
B
Yeah, I'd much rather have — like, if we want to go with this behavior, and I think I'm fine with it, where the daemon version doesn't match the published version — maybe have an option on publish to, you know, match the daemon behavior as far as outputs, so that you can at least prove they're the same if you need to for some reason. But I think I'm okay with the publish default having different, you know, going somewhere else, having different abilities.
C
Okay, I'll keep this RFC as it is. If people have strong opinions on the publish part, I might put up an additional RFC on pack publish — which is platform-specific — for platforms that are choosing to export things both to the daemon and to the registry, so that they have consistency between directly doing pack build --publish, and doing pack build separately and then pack publish.
A
I feel like there's already inconsistency between the image that you build locally and then push, compared to the one that you publish directly. I think we have people asking in Slack, you know, why are they different? What we figured out is that the compression algorithm Docker uses is different. So it may be that this is not necessarily something people are relying on.
D
Yeah, that's a good point. Actually, not only could Docker's compression algorithm be different, but they can also change it in the future. So even if we did align them both the same, Docker is more than free to change whatever they want about how they push something, and we wouldn't want pack to have to chase it. It's, like, guaranteed to break, actually.
C
So does it make sense, then, to still export to the daemon, and, if we want reproducibility across these things, make publish something pack does on its own? pack would then have to be responsible for tracking all the extra metadata somewhere that's not the daemon, to match the outputs.
D
This is one of those cases where I would wait for a user to bump up against it, see if they actually need that, and then, based on their experience, you can add it. It seems way easier to just have "publish will set these annotations," and in the proposal put an "if this is a problem in the future, here is how we would solve it" — here's a message to myself in six months when this becomes an actual issue. And then, if it never becomes an issue, great, we didn't have to build it. That's always nice.
C
Jason, do you by any chance know what the different fields under history are actually used for? I've only seen created_by used in places; I've not seen comment or other things used.
D
I have also only seen created_by. I mean, it's a freeform field that docker build happens to set with the command that produced that layer. In ko, I think we set history created_by to be the invocation of ko that created this, or something — I could look it up and Slack you the specifics, but I don't think it's load-bearing in any way. I don't think it's useful for anything; it's kind of just to be able to show something nice in docker inspect.
D
I don't think it — I think it assumes something more like the Dockerfile way of building an image, and buildpacks' way of building an image is a bit inconsistent with that, because buildpacks is, you know, sort of willing to do any kind of crazy stuff to make an image as long as it'll make the build faster, and doesn't necessarily try to make the history look pretty. So, yeah, I don't know about that.
C
I don't remember — I remember it was some proprietary thing created by some company. But buildpack images didn't even work on it, because I think it was just assuming created_by was there. If I tried to load a buildpack image to see the files, it didn't work, whereas it works for all the normal Docker images. I was hoping to share that tool with all the buildpack users, and it's like: oh, it doesn't work with any of the buildpacks stuff.
C
With all the push behind cosign — like, with the recent GitHub announcement, and also more people being involved in the whole supply chain conversations — it would be really great if buildpacks could just offer this out of the box, natively, so that all the platforms that want to use it don't have to reimplement it. And if this is part of the lifecycle, the other good thing that I was hoping for is that we can—
C
We can still continue to use what we currently do for things like rebasing images and fetching SBOMs and stuff, but being able to add new attachments to the output image once OCI attachments become a thing — even if we're not following the cosign convention of attaching things — means we could do some really nice things in the future if we set ourselves up for it. So I don't even know if this should be in the cosign.toml.
C
The only reason I have it here is because these are currently cosign-related artifacts, or conventions to export SBOMs in. But the idea is that you — or the platform — can specify all of these fields in a cosign.toml and provide it to the lifecycle, and the lifecycle would be responsible for exporting all of these signatures and SBOMs in the right format alongside the image.
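A platform-provided cosign.toml as described might look roughly like the sketch below. All field names here are illustrative guesses, not the RFC's actual schema — the point is only the shape: the platform fills it in, the lifecycle consumes it and exports signatures/SBOMs in cosign's format.

```toml
# Hypothetical cosign.toml sketch (field names are illustrative).
# The platform constructs this file and mounts it for the lifecycle.

[[signing]]
# Where to push signatures; if omitted, alongside the image
# (the COSIGN_REPOSITORY-style split discussed later).
repository = "registry.example.com/signatures"
# Pointer to a key file, or the key could be embedded inline.
key-path = "/secrets/cosign.key"

[signing.annotations]
"dev.example/built-by" = "lifecycle"
```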
C
I know people have also been really excited about a bunch of these supply chain security features we've introduced in buildpacks recently, and I think this would be really nice.
C
Cosign exports these things as non-tar files, and their media types are also different from what Docker expects to load into the daemon, and cosign currently has a convention where it chooses to name these images based on the manifest digest.
D
Yeah, just a couple comments on this. I think I would recommend not trying to solve SBOM signing just yet — it's a useful scoping to scope down the issue so that you only do signing of the image, and then, as soon as that works, signing the SBOM should be relatively straightforward; at least you don't have to deal with it right away. And the other one is: I think you definitely don't want password data — a password in a file — in this config.
D
You can either prompt the user for it, or, if the COSIGN_PASSWORD environment variable is set, use that, or something else. But it scares me that this file would have base64 password data in it. And then — yeah, sorry, go ahead.
C
I took most of these things from the kubeconfig format. I don't know, because these files, if you mount them as volumes on a pod, it will be somewhere — I guess I just didn't know where to put it. That's why it's optional: you can either provide a pointer to a file, or embed it inline.
D
Yeah, that's interesting. I'll have to think about that more, but I think there's a way to solve it — maybe some mapping of a key to COSIGN_PASSWORD, or something like that, I don't know, designing on the fly. But definitely all the rest of these things look great. Oh, the private key might be another sensitive thing you don't want to include. Yeah, yeah.
B
Thinking about the repository — I'm just thinking about platforms that maybe abstract away where they're actually pushing the image to, like a private repo. We do that: you know, you might have a public registry somewhere that you can push things to, but we are currently overriding that when we actually execute the lifecycle with, like, you know, maybe a sidecar, already-authenticated path to a registry or something like that. How do we think we should handle something that's in a config file like that?
B
So today, when someone pushes, you know, source code up and we're turning it into an image, they don't necessarily know where we're pushing it. As part of the export, we actually pass the image tags ourselves from the platform, and so these are, like, maybe a localhost-colon-something that's going to a sidecar and then pushing off to an actual registry somewhere.
B
And so I'm wondering, if we want to co-locate the cosign data with that image, are we going to have to read in this file and change the repository to be that sort of local proxied repository, or should we have flags to override that?
D
Yeah, I think Jesse's point is that if a user is exporting and thinks they're exporting to x.io, certain platforms would intercept that, change it to something else, and have an intermediate that pushes to x.io — and this is a new place where they would have to have logic for that. I think this might also be something that should just be omitted from this initial proposal and noted as future work.
D
Gaps like this — I think the use case for cosign signing something and pushing the signatures to a different repository is relatively niche, and mostly motivated by ECR having immutable tags. So I think it would still be useful to support cosign without having an answer for the COSIGN_REPOSITORY code path right away. I think there's a lot of value you can get just from signing in buildpacks.
C
So the reason I introduced this was that the kpack cosign RFC currently implements the repository part, and one of the reasons we personally use COSIGN_REPOSITORY is that we want to push the signatures to a different repository than where the images are pushed, so that we control how someone can delete or modify the signatures separately from how they can delete or modify the image.
C
So technically this allows us to ensure the signatures would never be deleted — or, if someone accidentally does something crazy with the image, they're not doing anything crazy with the signature.
B
Yeah, I guess this being optional is fine, but it is interesting, because if, like you said, they think it's going to xyz and it's not really going there, now we have to intercept that in some way. But if they're going to a completely different registry with different credentials, then that sounds like a perfectly valid use case that I would want to allow as well.
B
Yeah
and
that's
probably
what
we
would
do,
yeah
in
the
in
the
short
term,
at
least
okay.
B
As far as I know, right? Because it's the same registry — or it would be the same registry that the primary one is, too. If we make those assumptions, then I think the interface becomes a lot simpler, but obviously we lose COSIGN_REPOSITORY.
D
Yeah, I'd be curious to see how — because kpack already supports this, right? It already supports cosign. I'd be curious how kpack would work with its cosign config and with buildpacks' cosign config under it. At what point should users stop using kpack's cosign config and start using the buildpacks cosign config, or both, or neither, or whatever? I don't know.
C
So this is not exposed to the user; this is only exposed to the platform. The platform can choose to convert its inputs to this value however it wants, so at least on the kpack side there'd be no difference in the way the user interacts with it. The export pod — or whatever is doing the exporting or sets up that container — will have to mount this file somehow and run the lifecycle exporter.
C
That was the hope for this: that any other platform that wants to do this doesn't have to implement all of this cosign logic again and again; they just have to construct this file. The way kpack stores these things right now is in the form of a secret — cosign has a generate-key-pair command that directly uploads to a Kubernetes secret, and we just, I believe, added a few annotations on that secret to add the environment variables.
C
So, if I remember correctly, that's what it does: it just mounts the whole thing as a volume with the private key, the password, and the public key — because cosign also generates the public key as part of that command. But yeah, yeah.
D
Yeah, I mean, like the last one, I am hugely in support of this. I think it might be useful to scope things out — especially anything that has a question about it, like SBOMs — just scope it out for now, get some signing in, get kpack to use it, and then—
C
I mean, we added support for SBOMs, like the newest structured SBOM format stuff, recently. It's released, but it doesn't export them out in the cosign format.
D
This looks cool. I'm going to have to read this more, and I'll have more comments, but this looks great.
A
There are so many things to work on, but it's hard to tell what's the most important and the most pressing. I do want to help where I can with the conversations happening around image extensions and Dockerfiles — you know, there's a lot of people kind of chiming in, and I don't want to duplicate other people's work, but I also want to help. But I think, yeah.
C
Yeah, I also don't know how much of a priority all of this supply chain stuff is going to be over the image extension stuff, but I also think the scope is fairly different. I still don't know how well scoped out, or how large, those image extension changes are going to be, whereas most of these things are pretty well defined — I think at least we know what to expect and what kind of API changes to expect. That being said, I'm happy to help out with these things.
C
But I also want to discuss with the larger working group what their thoughts are on all of these changes. The biggest one that might impact the other extension stuff is the decision whether we want to diverge between the daemon use case and the registry use case, and whether we should keep supporting the daemon use case at all.
C
If we export in the OCI layout format, the other good thing is that ggcr has the ability to preserve the whole thing entirely when moving between registries — including the difference in compression we get between daemon and registry. I'm still not sure how it does that, but it claims it can move images between registries without changing the digest.
C
It would also mean that we are on the more standardized, future-facing OCI specification path, rather than having to support something that's Docker-specific versus OCI-specific — because they diverge, and then you have to do other things to keep them in sync.
C
I saw that pull request; it exports things to a local folder right now, though. I think the piece that's missing is that the lifecycle should export in OCI layout, and pack should take that and load it into the daemon, yeah.
C
But I think, fundamentally, the changes to the lifecycle should hopefully not be a lot, and if we are using ggcr, we might also be able to get away with not introducing any new flags at all. We can just use the prefix convention that ggcr uses to parse references.
C
So if it's local — I think it has some convention for specifying a local OCI layout, like a "layout:" prefix with a path in the tags or whatever — for the analysis phase or the exporting phase we can just look at the local path, and if it's a registry, we just push to the registry, and the lifecycle doesn't have to know anything.
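The prefix convention mentioned here could be sketched as below — an illustration only, similar in spirit to ggcr's "layout:" handling, not the lifecycle's actual flag parsing: a "layout:" prefix selects a local OCI layout directory, and anything else is treated as a registry reference.

```go
package main

import (
	"fmt"
	"strings"
)

// parseTarget is a hypothetical sketch of a prefix convention:
// "layout:<path>" means a local OCI layout directory; anything else
// is treated as a registry reference. No new flags are needed.
func parseTarget(ref string) (kind, value string) {
	if path, ok := strings.CutPrefix(ref, "layout:"); ok {
		return "oci-layout", path
	}
	return "registry", ref
}

func main() {
	k, v := parseTarget("layout:/workspace/out")
	fmt.Println(k, v)
	k, v = parseTarget("example.com/app:latest")
	fmt.Println(k, v)
}
```

With a scheme like this, the same tag argument can carry either destination, so existing flags keep working for both the registry path and a local OCI layout export.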