From YouTube: CNCF Security TAG Supply Chain WG 2021-07-15
C
Oh no, just scratching my beard.
D
All right folks, let's go ahead and get started. A quick reminder that this meeting is being recorded and posted to YouTube shortly thereafter. Your agreement to participate in these meetings is also to abide by our code of conduct. If you have questions about what that is, you can find it in the Security Technical Advisory Group's repository.
D
So we have a relatively light agenda today. Hopefully we've got some presentations — I believe we were expecting to have a presentation from Buildpacks and one other. Is that correct, or am I making stuff up?
E
D
Awesome, excellent, happy to hear it. So before we get started on the presentations, a couple of reminders for things that are outstanding, folks: we've got our Trello board up, and it definitely needs some more feedback on breaking those user stories down into manageable tasks. It looks like we've made significant progress in that area, but we can always use more. So as a quick reminder, if there is content in a user story that needs to be merged with another one, go ahead and make the comment or make the move.
D
If you have just commented and wanted a second set of eyes on it, hit me up in Slack — or Dan, or another member — to go ahead and make the change. Everybody should have admin access to the board. If you don't and you would like it, let me know and I will make the change. And as another reminder, everyone should have gone through and reviewed the best practices document.
D
That was also discussed previously — breaking it apart more into examples. We'd like to be able to take the content of the best practices doc, and the steps by which we're expecting those builds to occur, and integrate them within the reference architecture, so we can take advantage of that within the user stories and get that incorporated.
D
Nope? Okay — Buildpacks, I'm going to hand it over to you to go ahead and start.
F
Can everyone see this? Yep. Hi everyone, I'm Sambhav Kothari, one of the maintainers on the Cloud Native Buildpacks project. Today, myself along with Stephen and Matthew McNew will be presenting a brief introduction to buildpacks and some of the security-focused features that we've baked into the ecosystem.
F
So the agenda for today is just a brief introduction to buildpacks — what they are, how they can help in various image-building scenarios, especially in the context of security and supply chain, and how and where you can use them — and at the end we'll be taking some questions that you may have on our presentation.
F
So, first of all, what cloud native buildpacks are, if you're not familiar: cloud native buildpacks are a way to transform your application source code into runnable images without Dockerfiles. Now you may be wondering why this is helpful. Through these slides we'll focus on three main benefits that this provides us. One, it allows application developers to focus on what they are building and not on how to support it in production — it takes building and packaging off their plate.
F
Two, it gives operators or DevSecOps teams precise control over what build inputs are permitted, using the builder concept that we'll introduce in a few slides. And lastly, the abstraction of building applications as a collection of distinct layers, stitched together into an application image, can allow a system to precisely switch out one layer — for example the base operating system layer — from the image without disturbing any of the other layers. And as we will see, this can have some dramatic consequences for large-scale reactions to vulnerabilities.
F
So, going more into the concepts of buildpacks — and we'll be going over a bunch of these in the next few slides — what exactly are buildpacks? At the core of it, a buildpack is just two executables: one called detect, which detects whether the buildpack is needed or not, and the other called build, which does its part in building the final application image.
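The two-executable contract can be sketched as a pair of shell functions — a hypothetical Node.js buildpack, where the "applies" rule is simply "a package.json exists". This is an illustration of the shape of the contract, not a production buildpack, and the exit codes and paths here are simplified relative to the actual spec.

```shell
# Minimal sketch of the detect/build contract of a buildpack.
mkdir -p /tmp/bp-demo/app && cd /tmp/bp-demo/app
echo '{"name": "demo"}' > package.json   # pretend this is the app source

# bin/detect: succeed (exit 0) only if this buildpack applies to the source.
detect() {
  if [ -f package.json ]; then
    echo "nodejs buildpack applies"
  else
    return 100   # non-zero: this buildpack opts out of the build
  fi
}

# bin/build: runs only when detect passed; a real one would install
# dependencies, contribute layers, and set a start command.
build() {
  echo "installing dependencies and contributing a layer..."
}

detect && build
```

Because detect is cheap and declarative, a platform can probe many buildpacks against one codebase and only run the build steps of the ones that opted in.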
F
For instance, a Java buildpack could look for a .java file, or an npm buildpack could look for the presence of a package.json file, and then run subsequent commands during the build step to install dependencies, set up the environment, compile source, or set up start commands. Interestingly, multiple buildpacks can also work together.
F
So if you have, let's say, a full-stack application that has a JavaScript front end and a Go back end, you could have the JavaScript buildpack compile the appropriate front-end assets for you, whereas the Go buildpack can build the back-end server that serves them.
F
This allows you to combine buildpacks and utilize them in a combinatorial fashion, allowing you to build separate parts of your application as distinct layers. One thing to note is that the Cloud Native Buildpacks project actually doesn't produce buildpacks.
F
Rather, we define the specification, and this is followed by a variety of different vendors who know best how to build these applications. So some of the most well-known buildpacks are produced by Google, Heroku, and the Paketo project.
F
Builders are a convenient way of distributing all of the build logic for buildpacks in the format of an OCI image. The build image provides the base environment for the buildpacks — for example, imagine an Ubuntu Bionic OS image with appropriate build tooling — whereas the run image provides the base environment for the application image during runtime, and the combination of these two images is called the stack.
F
It's very helpful to have these as two separate images, because build-time dependencies can be left out of the application image, making it smaller and lowering the attack surface area. And as a platform operator, you can choose which builders are safe: you can construct builders out of the different buildpacks that you like, define what sort of application languages or base operating systems you want to support, and inject any necessary environment variables, settings, proxy configurations, etc.
F
Finally, taking this builder image and your application source code together, you have something known as a platform that uses both of these as inputs to produce an output image. App developers don't have to know how any of this works — they just push their code and run a simple command that takes as input the application source code and the builder name, and the platform automatically produces the output image under the hood.
F
The platform actually uses the lifecycle — the reference implementation of which is maintained by the project — to run the appropriate buildpacks in order, running the detect and build phases and making sure that appropriate values are set in the output OCI image. This allows us to have a single tool that can take in different builders and build all kinds of applications.
F
Now, rebase allows app developers or operators to rapidly update an application image when its run image has changed, by using image layer rebasing. This command avoids the need to do a full rebuild: in the end, all it's doing is changing the pointer to the base operating system layers, and at the end of the day you have a new app image with the patched OS without having to rebuild your entire application from scratch.
F
Buildpacks can do this because the output layers in the application image are distinct, and we store appropriate metadata about which layers are the base image and which layers are the application, and we can use this metadata to do this kind of rebase operation.
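In CLI terms, a rebase is a single command. A sketch of the intended usage, with a placeholder image name, guarded so the snippet still runs on machines where the `pack` CLI isn't installed:

```shell
# Swap the OS layers of an already-built app image for the latest run image,
# without re-running any buildpacks.
if command -v pack >/dev/null 2>&1; then
  pack rebase my-registry.example.com/my-app:latest
else
  echo "pack CLI not installed; would run: pack rebase my-registry.example.com/my-app:latest"
fi
```

Because no buildpack logic re-runs, a rebase typically takes seconds rather than the minutes a full rebuild would.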
F
So, moving on: we've looked at all of these concepts and features, and I wanted to call out a couple of them which this particular working group may be interested in. These are also features that I think have been talked about a lot recently.
F
So first up, we have SBOM integration. Buildpacks allow you to generate a software bill of materials during the build process itself. What this means is that you can have precise and accurate information about the components of your container, instead of relying on heuristics and container scanners that scan the image after the container is built. This also allows you to have a full inventory of any artifacts that you have compiled, which often cannot be detected by container scanners.
F
This improves the accuracy of the bill of materials generated for the output image, and it also reduces the actual scanning time needed to go through the application image and figure out what components were installed in the first place.
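One concrete way to see this build-time metadata: buildpack-built images carry their build metadata as OCI labels that tools can read without a filesystem scan. The snippet below is an illustration — `my-app:latest` is a placeholder, and the exact label contents vary across lifecycle versions — guarded so it runs even where Docker or the image is absent:

```shell
# Read the buildpacks build metadata label from a locally available image.
IMAGE="my-app:latest"   # placeholder image name
if command -v docker >/dev/null 2>&1 && docker image inspect "$IMAGE" >/dev/null 2>&1; then
  docker image inspect "$IMAGE" \
    --format '{{index .Config.Labels "io.buildpacks.build.metadata"}}'
else
  echo "docker or image unavailable; would inspect label io.buildpacks.build.metadata on $IMAGE"
fi
```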
F
Next up, we have separation of responsibilities and reusability. What that means is it allows different personas to do what they do best: app developers can focus on application code; buildpack authors can provide good, production-quality container-building logic for the individual languages they are experts in; and operators can choose to reuse these buildpacks, along with base images that they approve of, and provide the whole thing as a bundled builder for their users.
F
And finally, we have the rebase operation, which we already talked about, which allows you to mass-update your base runtime images. So what do all of these features enable you to do? These two workflows.
F
After we've identified the images, we can selectively update the application dependencies by updating the buildpacks that were used to create the image. So let's say you have a Go buildpack that provides the Go compiler, and you notice that it was using an older version and you want to update it to the new one. You can just update the buildpack, and it will check the existing metadata and update the dependencies. You can do similar things for other languages — something that Matthew will show us later.
F
Finally, looking forward at some of the features we're focusing on to improve the security around images built by buildpacks: we're looking to tighten and automate our security-focused features even more. We're thinking of moving to and integrating CycloneDX as a standard SBOM format, for a few reasons. One, it's extremely fast and lightweight to parse and analyze, and a large part of the buildpacks project is how quickly it allows you to build your images.
F
The CycloneDX format, we've noticed, is also highly automatable, and it has a very good tooling ecosystem. It comes from OWASP — it has a core working group — so it comes from a security-focused group, and while preserving security and vulnerability information, it can also capture compliance information using SPDX license tags. There's an RFC in place where we propose this as the standard format for buildpacks' SBOM output.
F
If you're interested, you can take a look and comment on it. Apart from that, we're also looking into Sigstore and cosign integration. This has again been a hot topic recently; cosign seems very promising in that it does not require any external dependencies.
F
Apart from a Docker registry, which you would already have if you're building container images — it works with old registries as well. And I think recently it also added SBOM support, so that you can also sign and attach SBOMs to your container images. If you're interested in cosign integration, we have a pack issue and a kpack RFC for the same.
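The cosign workflow under discussion looks roughly like this. The key file, image reference, and SBOM filename are placeholders, and the commands are guarded so the sketch runs even where cosign isn't installed:

```shell
# Sign an image and attach an SBOM to it in the registry.
IMAGE="my-registry.example.com/my-app:latest"   # placeholder image reference
if command -v cosign >/dev/null 2>&1; then
  cosign sign --key cosign.key "$IMAGE"          # sign the image digest
  cosign attach sbom --sbom bom.json "$IMAGE"    # push the SBOM alongside it
else
  echo "cosign not installed; would sign and attach an SBOM to $IMAGE"
fi
```

Because both the signature and the SBOM live in the same registry as the image, no extra infrastructure is needed to distribute or verify them.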
F
Finally, how and where can you use buildpacks? Buildpacks can be used in a bunch of places. Today we will be demoing two popular platforms. One is pack, which is a CLI tool that allows you to build container images; it primarily relies on the Docker daemon. The other is kpack, which is a Kubernetes-native build service, and it can work without privileged builds.
F
It can run without privileged permissions and does not require any daemon whatsoever, and it also allows you to manage your images at scale. So, first up, the pack demo. Can everyone see the docs website? So this is just part of the normal documentation, and I'm running the sample builder and sample app repository that we have.
F
I just want to note that this is a sample builder and not an actual production builder. As I noted earlier, the project itself does not maintain any production-quality buildpacks or builders, so if you're trying to use it, please use it with one of the production-quality buildpacks in our registry.
F
And finally, it's caching some layers which are actually not part of the output app image but are stored locally, and if you do a rebuild, for example, it should take a lot less time.
F
So another thing to note is that all builds produced by buildpacks are reproducible, in the sense that if you have two source repositories which are exactly the same, and you have a builder that's exactly the same, it outputs the exact same digest. We do so by zeroing out the timestamps in the output image.
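The "created decades ago" side effect mentioned next comes from exactly this trick: file and image creation timestamps are pinned to a fixed epoch so the bytes — and therefore the digest — don't vary between builds. A self-contained illustration of the idea (assumes GNU coreutils `touch`/`date`):

```shell
# Reproducibility trick: pin timestamps to a fixed epoch so repeated builds
# produce byte-identical (and therefore digest-identical) output.
f=$(mktemp)
touch -d @0 "$f"        # set mtime to the Unix epoch (1970-01-01)
date -u -r "$f" +%Y     # prints the file's mtime year: 1970
rm -f "$f"
```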
F
So if you inspect your image and you see it being created like 40-some years ago, you will know why: it's because we are trying to make these builds reproducible, and this is a side effect of that. But, as you can see, we have an image — now let's try and run it.
F
Although this seemed like a long journey, an actual app developer already has the code in place, and they can also set a default builder which can automatically handle multiple languages. So most of the time the only arguments they're supplying are `pack build` and the application image name — that's it. And that's it for the pack demo. I believe Matthew is going to show us a demo with kpack that deals with detecting vulnerabilities, patching, and rebuilding multiple images at once.
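The day-to-day developer command being described reduces to something like this. The image and builder names are placeholders (the Paketo builder is one commonly used option, not the project's default), and the snippet is guarded so it runs without the pack CLI or a Docker daemon:

```shell
# Build an app image from the source in the current directory using a
# chosen builder; no Dockerfile involved.
if command -v pack >/dev/null 2>&1; then
  pack build my-app:latest --path . --builder paketobuildpacks/builder:base
else
  echo "pack CLI not installed; would run: pack build my-app:latest"
fi
```

If a default builder has been configured once, the command shrinks to just `pack build my-app:latest`, which is the developer experience described above.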
G
All right, can everyone see my screen with a lot of green boxes? Excellent. So what we're looking at right now is a visualization of a kpack install. What each of these boxes represents is an actual image that kpack knows about and is aware of, and as you can see we have some images here. They're all built from unique source code, and they're built using buildpacks originating from the Paketo project.
G
If you look — you may not be able to see it, it's not important — each of these images actually calls out the exact buildpacks that were used to build it. What the last demo was showing was the ability for buildpacks, with the pack CLI, to easily build an image; kpack is really leveraging that technology in buildpacks at scale. So it's managing all these images, keeping track of the dependencies inside of them, and is able, as we discussed earlier, to automatically rebuild them if there's an update to patch a vulnerability.
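In kpack, each of those boxes corresponds to an Image custom resource. A minimal one looks roughly like this — the names, tag, repository URL, and service account are placeholders, and exact fields differ across kpack API versions:

```shell
# Write a minimal kpack Image resource; kpack rebuilds it automatically when
# its builder, buildpacks, or run image change.
cat > /tmp/kpack-image.yaml <<'EOF'
apiVersion: kpack.io/v1alpha2
kind: Image
metadata:
  name: my-app
spec:
  tag: my-registry.example.com/my-app
  serviceAccountName: kpack-sa
  builder:
    kind: ClusterBuilder
    name: default
  source:
    git:
      url: https://github.com/example/my-app
      revision: main
EOF
grep -c 'kind: Image' /tmp/kpack-image.yaml   # prints 1
```

Applying this with `kubectl apply -f` registers the image with kpack, which then watches all of its inputs for changes.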
G
So in that time, all 30-some images were scheduled for a rebuild and patched with the new underlying run image. As you can see, none of them are red anymore, because they're using that new underlying run image digest. That was really fast across all these images, and the reason we were able to do it so fast is, as mentioned earlier, we're actually leveraging the rebase — we're not scheduling a full rebuild of all of them, we're just scheduling a simple operation that takes the new run image and rebases.
G
Cool. So the other thing I'll show is that, just like the run image, some dependencies are provided by the buildpacks. So what I can do in this visualization, similar to what I did a second ago, is mark a buildpack — a dependency provided by kpack — as vulnerable. For this demo I'm just going to mark the Paketo BellSoft Liberica buildpack, which is a buildpack that provides the underlying JVM, as vulnerable.
G
So we can imagine, if there's a necessary fix in the JVM, we'd want to rebuild the fleet of images built with that dependency. So if I save this — not all the images in the cluster, because obviously some are built with other buildpacks and other language runtimes, but all the ones built with this version of BellSoft Liberica and Java — have now been marked as red, and what I can do is similar to before.
G
kpack is actually going to rebuild the underlying builder image that built these images, and as we can see, it's already detected the handful of images built with that version of BellSoft Liberica as needing a rebuild. As you can see, it's spinning here, which visualizes that an actual build has started up. So in a little bit of time we should see these fixed — and this is pretty fast too, because I was able to leverage the speed of buildpacks and the underlying buildpack cache, but it is a little slower than the rebase operation.
G
C
G
Sure, I'm happy to share a terminal. I'm curious what you want to see — I'm happy to show the underlying statuses update, or show build logs. What would you like to know a little bit more about?
C
Yeah, thanks for this — this was really great. I have one question: so here it shows the vulnerabilities being fixed in the runtime, but what about the user's application dependencies? Say I have a Python application and I'm bringing some dependencies with me — are those also going to get handled? And what is the responsibility of the developer — if they are getting automatically fixed, how are developers getting awareness of those?
G
Yeah, that's a great question. So currently what kpack does is track the underlying stack — which is the operating system and its packages — as well as the buildpacks, which will most likely provide the language runtime, like the Python runtime or the JVM. The application dependencies are still going to be managed by the underlying app developer teams.
G
So at this time the app developer teams would still need to update their underlying application and tell kpack about it to rebuild. But the SBOM we discussed earlier should make it easier for the downstream or platform operators to be made aware of what dependencies are in the app, so they could work with the app teams to update them.
F
I just wanted to add that, since buildpacks can contain arbitrary logic, you can actually just have a buildpack that does some basic vulnerability scanning for, say, Node or Python, and maybe even errors the build out.
F
If it notices some vulnerable components being used, you can also have it auto-fix some of those things, although that may not be desirable to app developers. But if you are the operator, you can choose to do all of these operations and automate all of this. Buildpacks provide a framework to allow you to easily detect and do these sorts of operations, but in terms of what you actually do, it's entirely up to you as an operator.
F
I had the last couple of slides left over — it's not much, I just wanted to highlight some of the platforms that allow using buildpacks to build images. We already know about pack and kpack, but we also have integrations with GCP, GitLab, Tekton, and a bunch of others. You can also take these images and deploy them to various cloud providers — AWS, GCP, OpenShift, etc. And if you're interested or curious about contributing back to the project, here are some community links. That's pretty much it — let us know if you have any questions about the presentation in general or any of the other features we discussed.
C
Great demo. I have a couple of questions. I think one of the first ones is: what can end-user security folks do to validate the platform — the buildpacks, the platform that is being used to actually run the buildpacks? What can we do to make sure, for example, that we didn't download a vulnerable version of buildpacks
F
itself? I think, in terms of the platforms: again, the buildpacks project itself only owns a single platform, which is the pack CLI. So I believe currently we provide digests and checksums for you to validate the binary that you're using for pack; for other platforms...
F
B
F
Definitely. I think we currently also ship the pack CLI as a container image — we can look into signing that. And I believe there's also the sget tool from Sigstore, which provides a wget equivalent for that. But yeah, we will definitely look into that as well.
F
Integrations — Stephen, do you want to take that?
D
...with platforms, with some of the other open source projects — whether or not we're missing specifications or standards of any kind, pretty much anything that you can think of. So, for your awareness, we're working on creating a reference architecture for organizations to be able to adopt, and we're exploring the current technology landscape and trying to fill in those gaps.
E
You know, application layers are contractually separate from the base image layers, but operating system packages are generally arbitrary functions that apply on top of the base image. And so, as soon as an application needs to install additional operating system packages, you lose a lot of the benefits of rebasing for that application, because you have to make a custom base image just for that application, right? You can't do things at scale anymore.
E
...really quickly, like what Matt McNew showed with kpack. So we're working on ways of adding support for adding operating system packages on a per-app basis — for lots of apps at the same time — into the API, for those use cases where you have a Ruby app that uses ImageMagick and your base image is missing the ImageMagick C library, things like that.
E
You have a couple of operating system package options and buildpack options for solving that problem, with different trade-offs — so I think that's kind of one of the really big ones. Deciding on a standardized SBOM format is another one; that's a really interesting problem right now. I think we really like CycloneDX, mostly due to the supported tooling, and the space for kind of automated vulnerability matching against the
E
SBOMs looks really nice. But there are still a lot of questions about which way the ecosystem will go there, so that's something that's in flight. I think there's a big effort right now to reduce complexity in the project — especially, we have a lot of terms like stack and builder and buildpack that make it a little inaccessible for new users coming in
E
who want to understand how the project works and understand the specification. And so we're trying to spend a little time, before bumping all the version numbers to 1.0, to say: let's cut some of the terminology and remove some of the complexity that's built up over time, based on individual features that may not have panned out the way we thought. So we're hoping that in a couple of months everything is a lot more consumable for end users.
E
So, I don't know, those are just some things — I don't know if that kind of answers your question?
D
It does. I'm curious, for the other folks on the call: how are we feeling about buildpacks and everything that's been presented, given our other discussions?
B
So I'm curious about the buildpacks specifically: what languages do they support? Do you run into challenges with clients where they're doing something like a statically compiled binary — Go, pick your language — and does that mean that if they're doing something a buildpack doesn't work for, now we have two solutions for a single client that they have to implement?
E
Yep, that makes sense. So the buildpacks project just provides the specification and tooling for a platform to implement buildpacks, which is a lot of stuff. I think the key thing the buildpacks project creates is the set of binaries called the lifecycle, which knows how to run buildpacks and export layers. In this way, it makes it really easy for a platform to implement running buildpacks — like Tekton, where it's just a single YAML file.
E
There's a project in the Cloud Foundry Foundation called Paketo that provides, I think, probably eight different language ecosystems' worth of buildpacks — probably over 100 buildpacks total — that covers Python, Go, Node, Java, and so on. But Google also has buildpack implementations that cover a lot of languages.
E
No, we haven't seen issues supporting most common languages. Like you said, for Go, where you're building static binaries, the Paketo project has a distroless-like image — it uses a bunch of Bionic underneath for the build process — that works really well for producing small, statically compiled Go apps where you want maybe just glibc on the image. And in addition to that, the run image and build image can be separate.
B
During the — yeah, I'm thinking of the demo we saw. You've got 30 or 40 images out there and you just quickly swapped out a layer, because all that is is a quick little manifest change in the Docker image, and that's easy to do. But if you have to do a build to create a statically compiled binary, right, in a build step, and you're swapping layers out there, you still have to do a compile in there.
E
Yeah, generally buildpacks try to dynamically link their runtimes as much as possible so they can take advantage of that rebasing. It's probably good security practice not to include static libraries in your build image for that reason, if you're going to use rebasing — so that when you do the rebase, you don't end up with the static libraries copied into the statically compiled binaries and a rebase that doesn't have any effect. So buildpack authors and users are responsible for managing that.
E
B
B
E
Yeah, I think the challenge is that the rebase operation, if you're not very careful about how you use it — and this is kind of a reason for the buildpack API — if you're not very careful about how you use that rebase operation, you can actually create security vulnerabilities. So, for example, if you have an operating system package that modifies /etc/passwd, right, you get a copy of /etc/passwd in a subsequent layer, and say there's a vulnerability that requires changing a user.
That's
patched
in
the
lower
layer,
you
replace
the
lower
layer
right
you're,
not
that
that
change
isn't
going
to
permeate
the
higher
layers.
So
the
kind
of
a
big
reason
behind
the
build
pack
api
is
it
keeps
really
contractually.
The
build
happens
as
a
normal
user
keeps
really
contractually
separate
layers
for
different
parts
of
the
application
and
then
uses
path.
E
LD_LIBRARY_PATH, CPATH, things like that, in order to create the environment. And so a lot of it — I'd say most of what it is — is a way to enable safe rebasing of those lower layers, if that makes sense. It's not something you'd want to do with a Dockerfile that installs packages; try to swap the lower things out there and that does nothing.
B
I think people are looking at the OCI annotation there — thinking of adding it on there. We're seeing that exact same problem, so a lot of them were saying: we're not going to try to do a rebase, this is just our trigger to do a rebuild. And if they have that annotation in there, they at least know when they need to do it. Yeah, that...
E
...makes sense. I think the annotation adds the original image digest of the base image. We can't actually use that with the buildpacks project, because we don't want to assume that the previous image is still available on the registry, so we have to add another annotation in addition to that, to say: here's actually the top layer digest. But it's still useful to understand what base image digest your application came from — I think it's a good feature for OCI.
A
I think what he was just saying about the sort of separation of the rebase may help answer some of my questions, but I was wondering, in a supply chain security context: one of the things we're worried about is if something that we've brought in from upstream has been compromised, and that compromised upstream dependency has been stored in the cache, and we just keep rebuilding off of that cache — are we just reusing the compromised package?
A
If that makes sense. Or, if we want to build out of a registry that we own and know — so that we have built everything from source and have a little bit more assurance about what's in there, to avoid some of those issues — how hard are those sorts of things to manage and manipulate inside of buildpacks?
E
Yeah, that's actually a really good point. So, as Sam said earlier, buildpacks result in reproducible images. What that really means is: if the buildpacks produce reproducible artifacts, the buildpack API does as much as it can to ensure that the image at the end is reproducible.
E
So your buildpacks have to produce the same output given the same input. The timestamps will get fixed automatically for files, but aside from that, it does rely on the buildpack supporting that functionality too. And so, if you include a buildpack that produces an artifact that's not reproducible, you won't get the same thing at the end — so that's definitely a limitation.
C
E
So there are two SBOMs that get produced. Currently, there's one that gets put on the image itself, and that one is reflective of the contents of the image. And there's one that — for reproducibility reasons, actually, because we want to be able to produce the same digest even if, say, the version of curl that you used to download the thing changes — gets produced as a file at the end.
E
That's a build-time SBOM that contains all the build tools. We're working on moving both of those to CycloneDX — there's a current RFC to do that — and we may actually allow you to put the build-time one on the image if you want to. Also, some people feel that if build tools change, then you should always get a separate digest, since your inputs weren't exactly the same.
E
It makes a lot of sense to me. So — speculating here for sure, because it's still an RFC — you could imagine you do a build and you end up with two CycloneDX-formatted SBOMs, one with build-time dependencies and one with runtime dependencies, baked onto the image, and you can use tooling to, you know, look.
A
F
I think one other good thing is that, since buildpacks contribute layers which make some semantic sense, rather than just an arbitrary collection of commands, you can also specify that this dependency came from this layer or this command. So that's also something we've talked about in our RFC.
F
So if we swap out that layer in the future, or if in a rebuild process some buildpack is excluded for whatever reason, we can be more clever about what bill of materials finally gets merged together and generated — so we are only merging together and generating the parts that were present in the final image, while still being efficient about it, and also knowing the source of where it came from: which buildpack contributed it and which layers those dependencies live on.
C
Yeah, thanks. I was actually going to ask the same question: if we do the rebasing, how do those changes get reflected in the SBOM? Because you are not actually rebuilding the whole application, right? So I think, as you said, if you have those references to the layers, then you can easily update the SBOM — and is that the same way you are doing the updating on rebasing? Okay.
F
Yeah, so as I said, since the bills of materials that are currently produced are attached to layers and a buildpack, instead of to the whole container image, we know exactly, based on the references in the manifest, which layers are present. So, let's say in the rebase operation the base operating system images are changed: we can take the newest ones for the base images and the ones for the app images, and try to combine them together and put them out.
F
So the only operation we're doing is a merge. This is still very much an RFC; we're still figuring out how to handle these kinds of operations, but that's the sort of direction we are thinking. Since we already have all of these things in place, it should be very easy for us to implement this.
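The merge described above could, under these assumptions, be sketched roughly as follows: each layer carries its own SBOM fragment, and after a rebase only the fragments for layers still referenced by the image manifest are combined. The layer digests and fragment contents here are made up; this is not the project's actual implementation, just an illustration of the idea:

```python
# Hypothetical per-layer SBOM fragments, keyed by layer digest.
layer_boms = {
    "sha256:base-old": [{"name": "ubuntu", "version": "18.04"}],
    "sha256:base-new": [{"name": "ubuntu", "version": "20.04"}],
    "sha256:app":      [{"name": "my-app", "version": "1.0.0"}],
}

def merge_sbom(manifest_layers, fragments):
    """Combine only the fragments whose layers appear in the manifest,
    preserving manifest order."""
    merged = []
    for digest in manifest_layers:
        merged.extend(fragments.get(digest, []))
    return merged

# After a rebase, the manifest references the new base layer plus the
# untouched app layer; the merged SBOM reflects that without a rebuild.
rebased = merge_sbom(["sha256:base-new", "sha256:app"], layer_boms)
```

Because the app layers are untouched by a rebase, only the base-image fragments change, which matches the speaker's point that the SBOM can be updated without rebuilding the whole application.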
C
E
But by "build this," do you mean, like, you know, take buildpacks and create an internal platform that uses buildpacks and offers those to developers, something kind of like that? Correct. That's a good question. I'd say maybe Sam and McNew would actually have a better take on this.
F
So, like, I'm personally an ML platform engineer at Bloomberg, and we had to adopt this platform internally. I think, in terms of considerations, the major thing to think about is building some of these things from source. Some of the buildpacks download things from the internet, so it's figuring out the right proxies for them, or, if you have private registries, figuring out how to replace them. That's a major part of it.
F
I think some of the buildpack projects and ecosystems allow you to easily replace, or provide a proxy for, some of the artifacts they use.
F
So it's still very much manageable, but those are the typical considerations: figuring out what kind of policies you have in place in your company, and trying to adopt this whole ecosystem internally — figuring out trusted mirrors, or building from source where you have to.
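One way to picture the trusted-mirror policy described above is as a simple URL rewrite: upstream hosts that the company has mirrored get redirected internally, and everything else is left alone. The mirror hostnames and the mapping here are entirely hypothetical; real buildpack platforms expose this differently (proxy settings, registry configuration):

```python
from urllib.parse import urlparse

# Hypothetical policy mapping upstream hosts to internal mirrors.
MIRRORS = {
    "github.com": "mirror.internal.example.com/github",
    "pypi.org": "mirror.internal.example.com/pypi",
}

def to_mirror(url):
    """Rewrite a dependency download URL to point at a trusted mirror,
    leaving URLs for unknown hosts untouched."""
    parts = urlparse(url)
    mirror = MIRRORS.get(parts.netloc)
    if mirror is None:
        return url
    return f"{parts.scheme}://{mirror}{parts.path}"
```

The fall-through for unknown hosts is the policy decision the group debates next: whether to allow direct internet fetches at all, or require every upstream to have a configured mirror.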
C
D
I was just going to mention that that's an open question we still need to figure out: whether or not our architecture is going to refer to a configuration file for pointing to internal mirrors, or if we're going to recommend that folks just go out to the internet for grabbing everything. I have my own opinions on this, but I'm interested to see what direction we end up taking.
C
D
Okay, if no one has any other questions: take the rest of this week, up until Thursday, to continue to go through those user stories and those tasks. You can begin assigning yourself some tasks. Also put the association of any tooling projects' points of contact against those particular tasks and user stories, to capture a lot of this.
D
If you happen to come across something that we, as a group, need to make a decision on, go ahead and drop it in the Slack channel. That way we can bring it up as a topic of conversation at the next meeting. But expectations for next week, unless Dan Pop overrides me: we're going to be doing a review of all those stories and tasks, determine what's missing, and begin prioritization and assignment.
C
I'll add one quick additional item: if you think of any projects that may also be good to include but are not necessarily part of the current discussion, if you can add those as well, we can try and reach out to those projects.
C
Hey, a super quick, last-minute thought. Stephen, you talked about the Tekton integration in passing; I'm googling for that integration and found some great documentation and the link in the write-up. Do you have a sense of how mature and stable that is?
E
So the Tekton integration, it's, you know — it uses the Tekton catalog; it's a template for Tekton. It's not, you know, extra controllers or things like that, right? And so it has some limitations. Like, you know, kpack has controllers that are built against buildpacks specifically, for example, and so it can understand when one buildpack version, out of all the buildpack versions your app uses, is old, and know to do a rebuild, right?
E
You know, it's kind of like if you had the pack CLI integrated into a Tekton pipeline, but instead of using pack and Docker, it uses Kubernetes containers to do the build; it doesn't require any nested containers, if that makes sense.
C
E
The integration, you know, uses mature tooling; the project's lifecycle is part of it, and it supports all the existing buildpacks in the ecosystem. As a thing that does a build, it does a great job as far as I know. But does it give you some of the, you know, kind of fancier features of the API — rebasing, things like that? You'd have to implement that separately, if that makes sense.