From YouTube: Cloud Foundry Community Advisory Board Call [April 2021]
A: Good to see you're here, that's good. So welcome, everybody, to the April CAB call; it should be a fun one. We've got a presentation a little bit later from Emily Casey on the current state of Cloud Native Buildpacks and Paketo, which is very cool. Also, shortly thereafter, conveniently, Ram and Shedrack are running a live stream this morning at 9:30 AM PDT on cloud native CI/CD pipelines on Kubernetes with Tekton and Paketo, so that's topically relevant.

Aside from that, Cloud Foundry Foundation updates: Summit registration is going quite well, with a couple hundred registrations already, which is nice. The CFP closes in one month, on May 21st, so if you haven't put something together, it's a good time to start thinking about that. And voting for track co-chairs closes on Monday; let me put a link in the chat here for everyone who has not voted on that yet. That goes right to the SurveyMonkey voting. I'll also put the link to the live stream at 9:30 today. Here it is; it's a long link, yeah.

So that's all I've got from the foundation, so I guess we can move into PMC updates. Eric, you wanna run down some App Runtime updates, please?
B: Yeah, happy to, Chris, thanks. Just a few highlights from the past month or so across the App Runtime teams. Continued updates from the integration teams: we've had a minor update to cf-deployment, to 16.11.0, and then another major release of cf-for-k8s. I believe they had to add some certificate properties related to injecting the instance index into the application pods, and that required the major version bump. And then KubeCF has been doing some work; I know a little while ago they had removed the CC mode that updates the app deployments, but they're reintroducing that after understanding how it fits into that set of CC jobs. They're also doing some things to improve the operation of the log-cache job so that it stays within its memory limit inside the containers. I think some of that depends on some work in the gosigar library, which is kind of in the BOSH realm.

Across some of the component teams: CAPI has been doing some work on app manifest diffs, for setting those app manifests server-side and having them apply. And Eirini has been doing some really interesting work. They've been filling in some gaps they've had between their CRD representation of their LRP resource and the REST APIs that have manipulated those before, and then they're planning on starting some work to explore the consequences of running apps as Deployments instead of StatefulSets, to see exactly what we might need to change to accommodate that mode of running apps. They've also been doing some work to move their backlog and icebox into GitHub Projects from Tracker, so I dropped in a link to that if anyone's curious about how that's going, or even wants to see things they'd be interested in contributing to there.

Then a couple of other updates around networking. For the networking team, I think we've mentioned in the past there have been some issues with the stricter validation of metadata in certificates that started in Go 1.15. So they've been building gorouter to operate in the more permissive mode, where the hostname doesn't need to show up strictly in a DNS SAN in a certificate that the gorouter is interacting with; it's okay for it to be present in the common name. Hopefully at this point all the internal certificates, certainly, have been updated to have that metadata in the right place, so this might just be an issue for any client certificates that are interacting directly with gorouter. They've also been fixing some bugs around, I think, TCP router port allocations, and some cases where there are issues with the policy server when it's running against a MySQL database. And then, finally, HAProxy has had another minor update in its BOSH release.
A: All right, no questions for Eric? Do we have anyone from the BOSH team here today for updates on that? It doesn't look like it, okay. As far as Extensions, I don't have any significant updates; of course, Paketo and buildpacks, being an extension, will be kind of the main feature today, so we've got that going on. Is there anything else anyone wants to bring up or talk about before we move on to Paketo and buildpacks?
C: I have one question. Oh, sorry, it's Stefan from SAP. There's one project called cloudfoundry-community/cf-python-client, and we did some contributions in the past and also have one PR open, and the question came up in our team: what is actually the contribution model for this cloudfoundry-community project?
A: So that's kind of a loosely organized community; I know that Dr Nic from Stark & Wayne started that a long time ago. So as far as the cf-python-client goes, yeah, I don't know if there are any official contribution guidelines for the cloudfoundry-community org; the CFF doesn't really manage that. But Norm or Tyler, do you all know what the best plan is for getting pull requests and attention on that?
E: I don't know about that particular one. Stefan, just let me know and I will make sure things get merged in. Usually when we see activity, that's what we try to do, yeah.
C
We
have
one
request
that
hurts
us
a
bit
regarding
logging.
That's
why
I'm
asking
oh
okay.
C
Nothing
revolutionary
but
would
help
us
in
debugging.
B: Oh, and one other thing I wanted to mention: a couple of weeks ago some of us had a document around the vision for CF on Kubernetes that we published on cf-dev, and we've been soliciting comments on it for alignment with you all and the broader community. There's been some great discussion on the document itself, and then we also had a discussion yesterday, in the morning Pacific time, afternoon in Europe, around some of the comments on that. So we're planning on continuing that at the next CF-for-K8s SIG call in about two weeks; I believe that's on May 4th. But certainly, if any of you are interested in commenting on that doc or participating in that discussion, you're very welcome to. I'll drop a link to that document as well.
A: Yeah, the CF-for-K8s SIG call yesterday was really informative. I don't think the video for that has been published yet, but if you check, probably later in the day, the CF-for-K8s Slack channel will have a link to that video, so if you want to catch up on that conversation, you can.

All right, any other last bits of business?
F: Thanks for inviting me, I'm happy to be here. Let me share my screen here. Oh, hopefully... I want to go back to the beginning of it; that's usually a good place to start. Okay, so my name is Emily Casey, and I'm here today to talk to you all about Cloud Native Buildpacks and Paketo, to sort of remind everyone of what our high-level goals are and tell you how we're progressing towards meeting those goals.
F: So usually, when I talk to people about Cloud Native Buildpacks or Paketo, we start off with a comparison to Dockerfiles, because this is the common touch point across the ecosystem. People associate container images with Dockerfiles, and it's really hard to create a Dockerfile.
F: So let's jump right into buildpack history instead. Heroku originally invented the concept of buildpacks. At the time they weren't called v1 buildpacks; they were just buildpacks. At Cloud Foundry we thought that was a pretty good idea and we borrowed it, and the two specs drifted from that point, such that there were Heroku buildpacks and Cloud Foundry buildpacks, and they were not interoperable between the systems.
F: So we came together a couple of years ago, a couple of employees from Pivotal and a couple of folks from Heroku. We decided that the buildpack spec needed an update for the modern era, and we founded the Cloud Native Buildpacks project and started it as a sandbox project in the CNCF.
F: These are going to be things you all are familiar with: buildpacks receive application source code, or an artifact like, in the Java case, usually a JAR. They generate a droplet, or slug, which is a tgz containing the application and its runtime dependencies, and then it's the platform's job to create a container that runs this droplet. So it creates a container from the base image, or rootfs, streams in the droplet, and starts your application.
F: So there are a lot of really good things about buildpacks in Cloud Foundry. They raise the value line, so application developers can focus on writing their application code instead of spending a lot of time reinventing the wheel in the DevOps space. They provide a lot of security guarantees, so your dependencies are always up to date: as long as you're installing the latest buildpacks, the buildpacks will keep your runtime dependencies up to date.
F
If
you're,
installing
the
latest
root
ifs,
any
cves
and
your
operating
system
packages
are
rolled
out
automatically
so
things
that
were
a
big
problem
for
the
rest
of
the
industry
like
heartblade,
for
example,
to
patch
that
for
an
entire
cf
instance,
all
you
have
to
do
is
upload
a
new
root
fest
and
every
application
is
updated
and
rolled
forward.
F
They
produce
consistent
results
and
they're.
Very
robust
folks
have
been
using
these
for
a
lot
of
years
and
we've
run
into
every
wall
that
a
person
could
run
into
containerizing
a
lot
of
these
typical
applications
and
we've
developed
a
lot
of
best
practices
that
now
folks
can
just
consume.
Instead
of
reinventing
some
of
the
cons,
which
have
been
a
bit
more
brief
here,
because
we're
going
to
dive
into
them
more
in
detail
are
platform
portability.
So
can
you
run
the
artifact
that's
generated
somewhere
else?
F
The
getting
started
on
ramp
experience
the
speed
of
developing
in
the
inner
loop.
So
how
can
I
iterate
on
changes?
Modularity
of
build
packs?
It
can
be
a
bit
hard
to
because
build
packs
were
originally
monolithic.
F
It'd
be
hard
to
inject
a
small
change
or
a
small
piece
of
logic
or
switch
out
a
piece
of
logic,
visibility
into.
What's
going
on
in
the
build
process,
there
are
some
edges
around
performance
that
could
probably
be
improved
and
finally
ubiquity.
So
folks
in
cf
and
heroku
are
familiar
with
build
packs,
but
there's
an
education
burden
when
you're
bringing
folks
onto
the
platform.
It's
a
new
model
that
needs
to
be
explained
to
people.
F
So
the
goal
of
cloud
native
build
packs
was
to
keep
all
the
good
stuff
in
v2
build
packs
because
we
couldn't
imagine
a
world
without
it.
While
addressing
some
of
these
drawbacks,
you
want
to
bring
this
to
a
wider
audience
and
hopefully
make
it
an
industry
standard.
So
it's
not
just
a
piece
of
the
cloud
foundry
platform.
It
is
a
standard
for
builds
that
is
familiar
to
folks
across
the
industry.
F: So one of the first things we did in Cloud Native Buildpacks: we were certain that the output of the build had to be an OCI image, not a droplet, so that it could be runnable by any container runtime and any container scheduler orchestrating that runtime: K8s, Docker, Diego, containerd, Nomad. Once you've moved from creating a droplet to creating an OCI image, the world opens up for the possibilities of how you can use that output and how you could move it and promote it between systems.
F: You require an instance of CF to get started, or you can push to Heroku, but again, they're using a slightly different specification and different buildpacks. For a while PCF Dev helped a bit with this, but the startup time for PCF Dev has gotten slower as Cloud Foundry has grown, and the demands of running all of Cloud Foundry in a single VM are extensive.
F
Looking
into
the
inner
loop,
it
can
be
hard
to
iterate
on
changes
in
your
code
as
an
application
developer
using
built,
packs
and
cf
and
see
them
run
in
a
product
container.
F
So
one
of
the
promises
of
build
packs
is
that
you
don't
have
to
do
the
work
of
figuring
out
how
to
set
up
your
environment
to
get
your
application
to
run
like
I
push
it
and
it
just
runs,
and
that
works
well
until
I
want
a
a
tighter
inner
loop.
So
I
want
to
you
know
edit
a
couple
lines
of
code
and
then
see
those
changes,
live
and
test
them
out,
because
you're
waiting
for
an
entire
upload
staging
droplet
creation
run
in
cf.
F
This
is
still
a
work
in
progress,
so
I've
highlighted
things,
including
build
packs
that
I
think
are
areas
that
we
still
need
to
focus
on
improving
in
order
to
meet
our
future
goals.
So
the
pack
plus
docker
workflow
makes
this
inner
loop
quick,
iteration
a
bit
faster,
but
we
don't
think
it's
fast
enough.
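As a rough sketch of that pack-plus-Docker inner loop she describes (the app name, builder name, and port here are illustrative, not from the talk):

```shell
# Build an OCI image from local source with the pack CLI, using a
# sample Paketo builder (builder and image names are examples).
pack build my-app --path . --builder paketobuildpacks/builder:base

# Run the resulting image locally with Docker to test the change.
docker run --rm -p 8080:8080 my-app
```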
F
In
terms
of
modularity,
the
original
cf
build
packs
are
pretty
monolithic,
so
I
spent
a
couple
months:
maintaining
the
java
build
packs
and
the
way
folks
typically
modify
the
java
build
pack
is
to
create
a
fork
of
the
entire
build
pack
and
modify
the
one
or
two
things
that
they'd
like
to
change
and
then
keep
that
fork
to
date
for
a
long
time
in
cloud
native
bill
pack,
land
there
are
things
called
composite,
build
packs
that
are
made
up
of
smaller
component
build
packs,
and
you
can
add,
remove
or
replace
a
component
in
this
larger
build
pack
system
that
allows
for
more
modularity,
so
build
packs
that
used
to
be
you
know.
F: We have a variety of build systems in the buildpacks. I put Gradle here; all of the buildpacks can't fit on the slides, these are just examples, but there's Maven, sbt, Leiningen. There's a variety of build-system buildpacks that exist in our Java buildpack that you could compose into your own buildpack.
F: We have buildpacks that know about specific types of applications. For example, we have the Spring Boot buildpack that will look at the version of Spring Boot that's in your application, add some metadata to the outside of the image so you can inspect it, and maybe do a little bit of tuning on the application. So it says: oh, I know this is a reactive app, I'm going to reduce the number of threads. So we have modular buildpacks like that.
F
So
thinking
about
how
this
architecture
affects
you
as
a
user,
you
can
imagine
that
if
you're
a
big
azure
user,
you
might
want
to
swap
out
the
bells
off
liberica
build
pack
with
a
azul
zulu
build
pack.
So
you
get
a
different
jdk
and
jre
down
here.
I've
taken
out
the
google
stackdriver
build
pack,
because
I
know
everyone
using
this
wants
to
use
azure
application
insights,
because
this
is
an
azure
build
pack,
so
you
can
by
adding
removing
replacing
components.
You
can
compose
your
own
vision
of
what
the
ideal
java
build
pack
would
be.
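One way that kind of remix can be expressed today is by handing pack an explicit buildpack order; this is a hedged sketch, and the buildpack IDs (especially `my-org/new-relic`) are illustrative rather than taken from the talk:

```shell
# Compose a custom order: swap in a different JDK component and
# append a hypothetical in-house integration at the end.
pack build my-app \
  --buildpack paketo-buildpacks/azul-zulu \
  --buildpack paketo-buildpacks/java \
  --buildpack my-org/new-relic
```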
F
This
happens
to
be
a
build
pack
that
paquetto
provides.
We
have
an
azure
flavor
of
the
java,
build
pack
where
we've
remixed
our
own
components,
but,
as
you
can
imagine,
you
could
do
this
yourself
right.
So
maybe
you
want
the
amazon
coretto
build
pack,
you've
written
your
own
new,
relic
integration
that
you'd
like
to
stick
at
the
end.
Here
you
can
see
how
this
modularity
allows
folks
to
design
a
build
system
in
a
more
intuitive
way,
and
then
you
can
obviously
take
components
of
these
and
use
them
in
a
totally
different
context.
F
So
the
ca
search,
build
pack
and
the
image
labels
build
pack
are
not
specific
to
java
or
paquetto
at
all.
Really
so
you
could
have
a
totally
different
system
that
runs
a
build
pack
for
running
functions
written
in
rust.
But
maybe
you
know
you
want
to
stick
these
two
components
that
paquetto
creates
in
your
build
pack
to
get
these
features.
F
Moving
into
another
pain
point
that
we
have
seen
in
our
experience,
running,
build
packs
and
cf
that
we
would
like
to
improve
in
the
world
of
cloud
native.
Build
packs
is
visibility
into
what's
happening
in
the
build
folks.
Sometimes
talk
about
how
build
pecs
are
too
magic
and
magic
is
a
good
word
and
a
bad
word.
F
To
some
extent,
people
want
magic.
They
would
like
it
to
just
work
and
they
would
like
to
not
have
to
put
a
lot
of
effort
into
getting
it
to
work
right.
So
magic
can
be
really
good,
but
we
found
in
our
experience
that
it's
not
always
true,
that
people
are
okay.
F: So we found that folks want more visibility both into what's happening in their buildpack and the build process itself, and into the output. They want to know exactly what we've installed in this image, so they can make sure it meets their open source licensing requirements and audit requirements, and figure out which images have CVEs in them.
F
We
have
created
a
bill
of
materials
which
gets
applied
as
an
image
label
to
every
image
created
by
cloud
native,
build
packs.
So,
as
the
build
packs
contribute
things
to
the
image,
they
add
descriptions
of
what
they've
contributed
to
the
bill
of
materials
and
at
the
end,
you
should
get
a
build
materials
describing
everything
in
your
image.
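As a minimal sketch of reading that label back off an image: the label name `io.buildpacks.build.metadata` is what the CNB lifecycle used around this era, but treat both it and the image name as assumptions to verify against your lifecycle version:

```shell
# Extract the CNB build metadata (including the bill of materials)
# from an image label; label and image names are assumptions.
docker inspect my-app \
  --format '{{ index .Config.Labels "io.buildpacks.build.metadata" }}' \
  | jq .bom
```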
F: We're still doing some work to improve that. Right now this works for buildpack-provided dependencies, and there's a different format where you can get the dependencies that came on the base image; we're working on creating everything in one place that's totally comprehensive, and we're thinking a lot about the format of this bill of materials.
F: The complaint that folks don't know exactly what the build system is doing is not a totally solved problem yet, and we've just started looking into ways to improve this, either by creating better visualizations of the build, or by creating interactive builds where you could hijack a particular component buildpack or run them one at a time. So this is a goal that is in progress, but we think we could do even better.
F: On performance: it's not terribly slow, but it can be a little bit slower than it should be if nothing has changed. In Cloud Native Buildpacks land we've been trying to use the layers in the OCI image in an intelligent way to reduce duplication and unnecessary data transfer. So if you're building an app with the Paketo Java buildpack, and it has a certain JRE in it, that layer is going to look identical to any other app built with that buildpack using the same JRE.
F: So this means that you only need to wait for the parts of your system that have changed to rebuild and upload. You're never going to wait for the same JRE to upload to storage again because you changed a single app file; we can just reuse that JRE layer. And we've broken up your application into what we call slices, which are layers where we group things that we think change in unison together, so you're going to get one tiny layer upload that contains your change.
F
Instead
of
rebuild
and
re-upload
of
the
whole
system,
there
are
still
areas
where
we
could
be
improving
performance.
Cognitive
bill
packs
is
great
on
a
rebuild,
but
in
the
case,
where
you're
not
using
an
offline
version
of
a
build
pack,
we
do
have
a
bit
of
a
performance
problem.
On
the
first
build
where
let's
say,
I'm
building
10
different
java
apps.
F
It's
still
going
to
go
ahead
and
download
that
jre
10
times
once
for
the
first
build
of
each
artifact,
and
that
feels
like
an
unnecessary
performance
penalty
to
us
or
looking
into
better
ways
to
handle
asset
caching,
so
that
even
that
first
build
can
be
really
fast.
If
you're,
if
you're
using
components
that
you've
used
before
and
they
they
should
be
available
to
you.
F: Ubiquity: so we're all familiar with CF buildpacks, and Cloud Foundry and Heroku users are very familiar with the concept of buildpacks. But for folks who haven't used either one of these platforms, most people are not deeply familiar with buildpacks. Therefore, if you want to get someone excited about Cloud Foundry, you have the burden of education: teaching them about this whole new build system and, you know, walking them through concerns and talking about the potential upsides.
F: We still have a ways to go. We feel really good about the progress we've made so far, but average developers are still unfamiliar with buildpacks, so there's more work to do here. Diving into this a bit deeper: on the left you can see an incomplete list of platforms that have adopted buildpacks, that is, incorporated a Cloud Native Buildpacks build as a feature in their platform, and sort of a graphic that can help us understand more what incubating in the CNCF means.
F: So the lifecycle is a Cloud Native Buildpacks component that we ship. You could implement your own by implementing the specifications, but that's generally not how it's done; most people are just using our lifecycle. But there are a variety of folks who have implemented either buildpacks or platforms to participate in this ecosystem.
F: But since then, interesting developments that we didn't foresee have allowed this technology to reach more users. So the Spring Boot team was pretty excited about Cloud Native Buildpacks, and also the teams at Spring that are building native images, like Java native images, where the build is a difficult process, wanted to provide a more standard experience for how to get a container that implements all the best practices for running Spring. So they implemented a platform that drives the lifecycle using the platform API, and it's a Gradle and Maven plugin.
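For reference, those Spring Boot plugins expose that build as a single task (available since Spring Boot 2.3); the image name here is illustrative:

```shell
# Maven: build an OCI image for the current project via the CNB lifecycle.
./mvnw spring-boot:build-image -Dspring-boot.build-image.imageName=my-org/my-app

# Gradle equivalent.
./gradlew bootBuildImage --imageName=my-org/my-app
```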
F: Another platform of note is kpack, which runs Cloud Native Buildpacks builds in Kubernetes in a fully declarative way. So, whereas pack is a great platform for getting started, kpack is very intuitive for folks who are familiar with K8s, and it is great for building images at scale. Because it's using the declarative engine in K8s, it will rebuild your image when a buildpack has changed with new dependencies, or when a new base image comes out; you just sort of describe what you want.
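That declarative description is a kpack `Image` resource; this is a sketch only, and the `apiVersion`, builder reference, and names should be checked against your installed kpack version:

```yaml
# Illustrative kpack Image: "describe what you want" and kpack
# rebuilds when the source, buildpacks, or base image change.
apiVersion: kpack.io/v1alpha1
kind: Image
metadata:
  name: my-app
spec:
  tag: registry.example.com/my-org/my-app
  serviceAccount: kpack-sa
  builder:
    kind: ClusterBuilder
    name: default
  source:
    git:
      url: https://github.com/my-org/my-app.git
      revision: main
```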
F: When the latest build is out of date, either because you got a new base image or there's an update to your source code, it spits out a new build, which puts a new image in the container registry, and you get this declarative engine for building images that are always secure and always the latest. And then you can compose, even building on top of other platforms: so cf-for-k8s is using kpack to create an entire PaaS experience. So, while kpack is a great K8s-native build experience...
F: We all love the cf push experience, so something like cf-for-k8s can use kpack as a build system to create an even higher-level platform that meets the needs of different types of users. And this works across the board; there's a CircleCI orb that uses the pack CLI. So we're seeing this ecosystem of tools that build on tools start to emerge, and that was our goal, and it seems like that piece of how we saw the ecosystem evolving is on track.
F: These are implementations of the Cloud Native Buildpacks buildpack API, but we are not the only ones who've written buildpacks: Google has buildpacks, Heroku has buildpacks, there are VMware buildpacks, tons of buildpacks. We know of end users who are doing some combination of all of these, like using Paketo buildpacks, but maybe they've written one for themselves.
F: So implementing a buildpack is a constrained problem, and someone who wants to extend the build system only needs to worry about solving that problem. Someone who has a vision for a platform, where that platform needs build functionality, only needs to worry about that end of the problem, and they can just plug in the Cloud Native Buildpacks lifecycle as an engine.
F: So we have implementations for all of the most commonly used major language groups, using our experience in CF to predict what will be the most popular languages. They're open source and free to use, and we're drawing from years of experience developing the v2 buildpacks for Cloud Foundry.
F: And we're trying to incorporate new learnings to create new features that are enabled by the Cloud Native Buildpacks specification. So, just to dive deeper into one buildpack: I feel like the Java buildpack provides a good example of how we're combining sort of the best of what we've done on CF with new features enabled by the new specification.
F: So, a lot of these runtime helpers: things that link your application to the local DNS, the component that calculates memory, the components that load the CA certificates from the system trust store into your JVM trust store, a component that kills your JVM correctly when it's out of memory, which is really important in a containerized ecosystem.
F
All
of
these
were
things
we
developed
for
cloud
foundry
that
we've
now
ported
over
to
the
kettle
world,
and
this
is
why
we
think
paquetto
build
packs
a
real
chance
of
being
the
best
build
packs.
We
have
so
much
experience
with
all
of
these
little
nitty
gritty
details
that
you
would
need
to
have
the
best
containerized
java
application.
F
We
have
robust
command
lines
for
multiple
types
of
deployments
again.
A
lot
of
this
is
coming
from
our
experience
in
cloud
foundry
and
then
we're
taking
advantage
of
some
of
these
new
features,
so
we're
adding
the
version
of
boot
in
an
image
label.
So
you
could
do
a
query
across
your
whole
case
or
cf
and
figure
out
exactly
what
version
of
spring
boot
is
in
each
application.
Stuff
like
that,
and
then,
of
course,
the
bill
of
materials.
F
We've
talked
about
before
we've
put
a
lot
of
effort
into
making
this
really
thorough
in
the
java
build
pack.
So
folks
can
do
things
like
security
audits
or
open
source
license
gathering
stuff
like
that.
C: Can I start with one question? What are my options as an application developer running on a traditional CF deployment with the standard Java buildpack in v2? I'm curious whether my application would actually also run with the new type of buildpacks on my existing Cloud Foundry landscape.
F
I
think
the
the
lightest
weight
way
to
start
testing
that
out
is
with
the
pax
cli.
So
one
of
the
nice
things
the
way
the
specification
works
is
that
it
shouldn't
matter
what
platform
is
orchestrating
your
build.
If
you
have
the
same,
build
packs
in
the
same
application,
you
should
get
a
totally
identical
output,
so.
F: We've been thinking a lot about migration from v2 to v3 buildpacks, and there are two ideas that have been floated. I can't make promises about which of them will be implemented, if they'll be implemented, or in what order, but folks are definitely interested in it. It would involve writing a shim, or two shims: one that would allow v2 buildpacks to run in a v3 system.
G: And to Stefan's question: we would just treat the output from the current pack CLI as just a Docker image and run it in CF that way. Yeah, you could cf docker push it.
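That flow might look like this sketch (registry, org, and app names are illustrative); it assumes the CF deployment has Docker image support enabled via the `diego_docker` feature flag:

```shell
# Build locally with pack, publish straight to a registry, then run
# the result on Cloud Foundry as a Docker-image app.
pack build registry.example.com/my-org/my-app --publish
cf push my-app --docker-image registry.example.com/my-org/my-app
```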
F
I
didn't
mention
this
in
the
presentation,
because
it's
getting
a
bit
into
the
nitty
gritty,
but
you
might
be
aware
that,
like
a
rudifest
update,
wouldn't
work
for
something
cf
docker
pushed
in
to
cf,
but
we've
sort
of
like
reimagined.
What
a
routerfest
looks
like
in
cloud-native
build
packs
with
an
operation
called
a
rebase,
so
you
can
swap
out
the
base
layers
with
new
base
layers.
Using
impact
could
be
the
pack
rebase
command,
but
a
platform
like
kpac
would
just
do
a
rebase
if
it
saw
that
your
base
image
was
out
of
date.
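The rebase operation she mentions is a single command in the pack CLI (the image name here is illustrative); it re-points the base layers without rebuilding the app layers:

```shell
# Swap the run-image (base) layers under an existing app image for
# the newest ones, leaving the application layers untouched.
pack rebase registry.example.com/my-org/my-app
```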
G: I'll jump in with another question: how is your team making the product management decisions of what to prioritize in the v3 buildpacks? From my perspective at cloud.gov, with the potential compliance advantages it would bring, letting small teams identify outdated dependencies and potentially rebase them would be huge. So I'm kind of hoping you can get it running with the cf push experience sooner rather than later, but I know there are other priorities involved too.
F
There
is
a
a
crushing
amount
of
priorities
and
asks
in
different
directions
from
different
people.
So
I
can't
but
there's
there's
a
lot
of
interest
in
making
the
v3
build
packs
available
to
cloud
foundry
users
without
requiring
cloud
foundry
to
implement
the
new
api.
So
I
think
if
we
were
going
to
build
one
of
the
two
trims
I
talked
about,
it
would
be
a
v3
to
be
e2
shim.
E: I like the focus of where the buildpack is going. My experience with the original Paketo was that it was actually very painful to build things, because you had all these different layers and stuff like that; you know, trying to use it in another community, it was making it very difficult. So hopefully this new stuff that you're coming up with is simplifying something.
F: We're looking for help from a designer as well in that area, because it turns out that you might be really good at building buildpacks, but making a comprehensible and beautiful website is something we are only okay at, so we're trying to get some help in that area.
A: Well, awesome. Well, thank you so much, Emily, for that very thorough, informative presentation. That was great, and I believe that's probably a wrap. Anyone else have any last-minute things they want to bring up?
A: All right, well, thank you all very much, and thank you again, Emily, and yeah, we'll see you next time.