Description
In the sixth meeting of the Kubernetes/CMS SIG, DDEV CTO Kevin Bridges dives into Buildpacks, sharing a presentation from BADCamp 2019.
Full slide deck: http://bit.ly/2KjrCBw
Catch up with the group on GitHub: http://bit.ly/338dXC5
A: I'm going to do everything on one computer, so hopefully that helps with the outages we had last time when I tried to record. Just to kick it off: this is the regular Kubernetes meeting that we have to talk about all things Kubernetes as they relate to Drupal and, you know, other platforms out there. In this session I think it'd be good to talk a little bit about buildpacks and help people understand what buildpacks actually are, what they mean as far as Kubernetes is concerned, and how they can apply to communities like Drupal. I think there's a lot of value in them, and they're something that might have a lot of forward-facing ramifications. A lot of what I'm going to be talking about today is based on the v3 specification of buildpacks. I'm not going to be showing any code, but I'll generally go over some of the base concepts behind what a buildpack is and how it can be useful in the day-to-day. So let me go ahead and work through the technical difficulties of getting everything shared properly.
One of the motivating factors for doing this is something called advancing developer communities. By that, what we basically need to do is work together across different communities to understand the patterns and the tools that are necessary to solve common problems in a way that's very effective for people. So essentially, we need an enduring way to describe Drupal to modern infrastructure so that we can address both day-one and day-two operations, and I believe that buildpacks can help address that specific topic.
Additionally, containers have become the de facto means of distributing and running applications on modern infrastructure. Dockerfiles are popular, and they've been the go-to way to describe what a container actually is, but they do have a significant number of limitations, especially when it comes to day-two operations.
Exclusive, proprietary approaches don't help the community. So those are the four main bullet points of why we're even having this conversation right now. Let me dive right in and start talking about what buildpacks are. Buildpacks provide a higher-level abstraction for building apps compared to Dockerfiles. Buildpacks are pluggable, modular tools that translate source code into OCI images.
OCI is the Open Container Initiative, which, interestingly enough, was co-founded with Docker. It is an open source initiative to help people understand what a container is and how it should function both at build time and at run time, and those are two different concepts that we'll get into in a little bit. Buildpacks provide a balance of control that reduces the operational burden on developers and supports enterprise operators who manage apps at scale. They ensure that apps meet security and compliance requirements without developer intervention.
They provide automated delivery of both OS-level and application-level dependency upgrades, efficiently handling day-two operations that are often difficult to manage with Dockerfiles. We'll get into a specific example where I talk through a security update that illustrates that a little bit better. They rely on compatibility guarantees to safely apply patches without rebuilding artifacts and without unintentionally changing application behavior.
So I guess the first question worth talking about is: why not Dockerfiles? What are the business and technology challenges we're addressing by even doing this, and why might Docker not be the right fit? Interlayer compatibility does not have an interface in Dockerfiles, so base dependency image layers are not clearly defined in a consistent manner.
Aside from the image declared in the FROM directive, a layer that performs, say, a composer install expects PHP and Composer to be installed in a previous layer, but there's no interface or interlayer compatibility check to validate that that has actually happened. Layer compatibility needs to be defined in a clear interface in order to enable an ecosystem where layers maintained by different authors can declare compatibility requirements using unique IDs and versioning.
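To make the implicit dependency concrete, here is a hypothetical Dockerfile sketch (the image name and steps are assumptions for illustration, not from the talk) where the composer install layer silently relies on an earlier layer:

```dockerfile
# Hypothetical example: the "composer install" layer assumes PHP and Composer
# were installed by an earlier layer, but nothing declares or validates that.
FROM debian:9

# If this layer is removed or reordered, the build below fails with no
# interface-level warning; the dependency between layers is implicit.
RUN apt-get update && apt-get install -y php-cli composer

COPY . /app
WORKDIR /app
RUN composer install --no-dev
```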
So through an interface and interlayer compatibility, you have the ability to understand what's in a container that's being built, and you can rely on that being there repeatedly. What that gives you is the ability to offload separate layers to different maintainers. An example of that might be a PHP layer, or a base Debian image layer; those could be maintained by different authors and work in unison.
Another problem with Dockerfiles is that conditional layers and environment detection are difficult. The syntax of Dockerfiles tends to make it difficult to perform conditional steps within a layer. For example, you may not wish to install the PHP APCu module due to a limitation in the target Drupal site. That may force you to maintain a separate Dockerfile for sites with that specific limitation, or to include the step of installing this module using shell code that checks for the conditions.
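A sketch of what that workaround tends to look like (the build argument and installation steps are assumptions for illustration): the conditional has to be smuggled in through shell code, since Dockerfile syntax has no conditionals of its own.

```dockerfile
# Hypothetical workaround: a build argument plus shell logic to conditionally
# skip the APCu module, because Dockerfiles have no native conditional syntax.
FROM php:7.2-apache
ARG INSTALL_APCU=true
RUN if [ "$INSTALL_APCU" = "true" ]; then \
        pecl install apcu && docker-php-ext-enable apcu; \
    fi
```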
Large container images can negatively impact site deployment and, on a platform like Kubernetes, lengthen the time it takes for a site to self-heal due to the overhead of a Docker pull. And finally, there's ease of use. In order to build a container locally, you must use a Dockerfile located on your local file system. There are definitely ways of distributing Dockerfiles, or recipes for building a site (you can use GitHub), but there's really no clear way to find the specific Dockerfile that would be appropriate for building your particular site.
There's no awareness of the environment that the Dockerfile is being run in. There's no mechanism to recommend Dockerfiles based on the environment you're targeting or the application you're actually trying to run; you need to piece those parts together yourself. That kind of problem has been addressed elsewhere with well-known staples such as npm, Docker Hub, and Symfony; they all try to do something similar.
But what if the user only needed to have Docker or Kaniko installed, along with one other CLI tool: the pack tool, which comes from the buildpack specification? It can inspect the target site's file system, recommend a build recipe for the site based on its observations, and perform the build for the user. So basically, the distribution and ease of use of Dockerfiles are a little bit lacking. Let me give a quick example of one of the specific issues there; I touched on security as being one of them.
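The workflow that enables might look roughly like this (illustrative invocations; the builder name below is an assumption, not a specific published builder):

```
# pack detects the app type from the source tree and applies a matching
# build recipe from the chosen builder, producing a runnable OCI image.
pack build my-drupal-site --builder example/drupal-builder --path .
docker run -p 8080:80 my-drupal-site
```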
Patching upstream layers is very cumbersome in Docker. Let's say, for example, we've got a Debian image at version 9.8.10. It's a base image; it's not maintained directly by the developer who might be building a Drupal site or another application. In that layer stack they also introduce PHP 7.2.11; they don't maintain that image either, as it's pulled in from somewhere else. And then finally they get to building out their site, so they have, you know, version 0.0.1.
That's where you add your web root files to the container, e.g. something along the lines of ADD . /var/www/html, and then that collectively gets pulled into a container. But that also introduces a problem. Let's say, for example, there's a security vulnerability in the Debian package and we need to move from version 9.8.10 to 9.8.11. Because of the way those layers are stacked in a Dockerfile, you have no real way of doing that.
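A minimal sketch of that pinned chain (versions and image names are illustrative, echoing the example above): the base OS version is pinned inside an upstream Dockerfile that the site maintainer does not control.

```dockerfile
# Hypothetical upstream Dockerfile for the PHP image (maintained elsewhere):
#   FROM debian:9.8.10
#   ...installs PHP 7.2.11...
#
# The site's Dockerfile can only reference that prebuilt result. Moving the
# base OS to 9.8.11 requires the upstream maintainer to rebuild and
# republish, or a fork of the whole chain.
FROM php:7.2.11
ADD . /var/www/html
```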
One solution to the problem would be to take the upstream Dockerfiles, if they're available, and add your specific steps to your own Dockerfile. But let's say that base PHP image still references Debian 9.8.10 instead of the secure 9.8.11 release: if you wanted to fix that, you'd essentially have to fork that Dockerfile, or wait for its maintainer to make the change.
A
It
effectively
means
that
you
will
need
to
take
on
the
additional
burden
of
maintaining
each
of
those
layers
which
becomes
overhead
in
most
organizations.
A
solution
which
defines
the
life
cycle
on
contract
between
layers
would
need
to
address
this
shortcoming
and
I
believe
that
you
know
in
a
lot
of
ways:
that's
what
build
packs
end
up
doing
if
this
is
gonna,
be
a
little
difficult
without
the
graphics
behind
it,
but
I'm
gonna
go
through
some
of
the
definition
of
build
packs
and
kind
of
how
they
work
together.
Here are some of the concepts associated with them. A buildpack represents a building block of executable programs or scripts, identified using a buildpack.toml file. It's used to auto-detect characteristics of an application. One-to-many buildpacks compose the build lifecycle and are used by a builder to generate the image layers. What that means is that, in the example I gave, I could have a buildpack for the Debian base OS image, a buildpack for PHP, and a buildpack for Drupal, and they would all be executed together to create an app image, which is essentially the container output. Buildpacks can declaratively indicate which stacks they are compatible with in their configuration.
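As a rough, hypothetical sketch (the IDs, stack name, and api value here are assumptions; the v3 specification defines the exact schema), a buildpack.toml might look like:

```toml
# Hypothetical buildpack.toml for a Composer buildpack.
api = "0.2"

[buildpack]
id = "example/composer"
version = "0.0.1"
name = "Composer Buildpack"

# Stacks this buildpack declares compatibility with, by unique stack ID.
[[stacks]]
id = "example.stacks.drupal8"
```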
A buildpack should be designed as a self-contained step that performs just one task. "Install PHP" and "composer install" should be two separate buildpacks instead of being implemented as a single buildpack. That gives you the flexibility to update PHP, or update Composer, independently, without having those intermingled. This allows separate builders to optionally use the install-PHP and composer-install buildpacks, which is essentially what I just described.
Each buildpack is given an opportunity to perform discovery logic in the detect portion of the build lifecycle. A composer-install buildpack's detect step executes logic to discover whether the site being built needs Composer installed in the build image and a composer install executed at build time, for instance by checking for a composer.json file. So these things can have conditionals built into them, and they can react to what's actually being built based on something that is unique to that specific buildpack layer.
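The detect step described here can be sketched in shell. This is a simplified illustration, not the official composer buildpack: the function name is made up, and per the v3 specification a detect executable signals participation with exit code 0 and declines with a non-zero code such as 100.

```shell
# Hypothetical detect step for a Composer buildpack: participate in the
# build only when the application source contains a composer.json.
detect_composer() {
    app_dir="$1"
    if [ -f "$app_dir/composer.json" ]; then
        echo "composer"   # contribute an entry to the build plan
        return 0          # detected: the build executable will run
    fi
    return 100            # not detected: this buildpack is skipped
}
```

Because detection is just a program, it can express richer conditions than Dockerfile syntax allows, for example inspecting lockfiles or framework markers.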
If the buildpack does not report that it has action to take, its build executable will not be invoked. This means a builder that includes a composer buildpack could be used to build Drupal sites regardless of whether Composer is actually being used. That's what's happening on the left side of the diagram, when we're building out the image.
The pattern can be used by organizations or hosting providers wishing to add their own custom buildpacks and stacks while leveraging upstream, community-maintained buildpacks. So basically, using buildpacks as a paradigm, you have the ability to open things up so that a broader community can participate in each layer of the buildpack stack as they deem appropriate. That gives hosting providers or local development environments a conditional ability to do things they can't necessarily do now.
Let's say, hypothetically, that there's a reference implementation of a buildpack for Drupal. You can get all the way up to the point where Drupal is being built properly with all of its dependencies: it understands Composer, it understands the modules that need to be available inside of PHP, that type of thing. A local development environment might want to use Apache specifically with it, so it would introduce its own variant of a buildpack into that larger stack and then have it export appropriate code for that local development environment. Conversely, the same can happen with hosting providers, if hosting providers were to agree: "hey, this is the specification that we use that defines what Drupal actually is."
"We can add our own independent buildpacks in here to customize Drupal to be specific to our hosting provider as necessary." This opens up the door for a lot of reusability across the board, and it helps realize some of the vision behind what Kubernetes actually is, in my opinion. Another concept that we have with buildpacks is the builder. A builder is a special container image that includes a collection of buildpacks, which are executed against an application or a site in the order defined by the builder's configuration file.
So this is where you're putting buildpacks together in a specific order to be able to do a specific thing. Buildpacks are executed not at the time the builder is created, using a command called pack create-builder, but when the builder is used to execute a pack build. They are identified with a unique stack ID, which can be used by other builders and stacks. So essentially you can reuse these in a lot of different scenarios.
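A rough sketch of such a builder configuration file (the buildpack IDs, URIs, and image names are all assumptions, not a published builder):

```toml
# Hypothetical builder.toml: the buildpacks a builder contains and the
# order in which it runs them during the build lifecycle.
[[buildpacks]]
id = "example/php"
uri = "buildpacks/php"

[[buildpacks]]
id = "example/composer"
uri = "buildpacks/composer"

[[order]]
  [[order.group]]
  id = "example/php"
  version = "0.0.1"

  [[order.group]]
  id = "example/composer"
  version = "0.0.1"

[stack]
id = "example.stacks.drupal8"
build-image = "example/build:base"
run-image = "example/run:base"
```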
A stack represents a buildpack lifecycle that a builder executes in order to create a runtime image for the site. A stack can be identified using a unique stack ID. An example stack for Drupal 8 could be maintained by the Drupal community: at the base we have an operating system layer, then a PHP runtime, then a Drupal installation from a distribution, and so on.
The next concept is something called a build image. A build image is a base image in a stack that buildpacks use to install dependencies needed in order to perform the build tasks required for the run image. So basically there's the concept of a build image and then a run image. For example, the composer buildpack, assuming detect returns true, will install the Composer binaries as a layer in the build container, and it will use that binary to perform the composer install for the run container.
That doesn't mean the build container is included in the output of the run container. It means those binaries are available during the building of the runtime container, which helps reduce its size, reduce your overhead, and make things a bit more efficient.
A run image is the image composed of a base image and the layers added by buildpacks, in order to provide a runnable container image of the application.
That's essentially it. It wasn't as much fun as if I had slides, and I know that was a lot of information. It looks like Jeff has a question in the chat: are these the Heroku ones? Yes, they actually are derived from Heroku; Heroku and Pivotal worked together on that specification.
Heroku had buildpacks initially in their hosting platform, and then they figured out that it would be better as a cloud-native project. So the buildpack concept, which is located at buildpacks.io, is directly related to the Heroku buildpacks.
Pivotal came into the mix at some point, and I'm not exactly sure of their exact contribution, but those are the two founding partners behind the buildpacks initiative inside the Cloud Native Computing Foundation, and the foundation has buildpacks as an incubating project right now. So it is a very valid option for people to consider.
C: What I'm wondering... well, there are a few different things that kind of work around those issues. In our case, we are not actually doing the whole build process inside of the Docker image: we're not running npm and Composer and so on inside of Docker, mainly because that has no caching, and downloading dependencies, whether it's npm or Composer, is probably the slowest part of the build. So what we do is our CI platform actually does the building, and then the only thing our Docker images do is pretty much copy the code base into the Docker images as a new layer. I mean, it would be nice to have a standard specification for how to build Drupal; that would be pretty cool. I'm wondering, would it be able to have any kind of caching, for example?
A: I think so. As a result of being able to break the build into different components, each one of them would become a stack. So essentially you would achieve your caching paradigm by referencing the output of one of those stacks: a version of a stack that contained the same kind of thing you're doing with the Composer dependencies. That would create the same effect as having that caching layer in place for you.
C: Okay, that makes sense. Pretty much what we have at the moment is that if the composer.json doesn't change, then... well, we still rerun composer install, but with the dependencies already in place. So, and I'm not sure if I'm using the right terminology, the builder would be taking care of not doing things twice?
A: Correct. There would be conditional logic that would evaluate that composer.json against, say, a set of checks, and those checks can be defined. These buildpacks can be written in any language: they can be done in bash, they can be done in Go, they can be done in PHP, whatever people are comfortable with. That logic would be built into the buildpack, and it would determine whether or not it needed to do something. So you would have a nicely defined buildpack with that conditional logic inside of it.
Yeah, they're pretty neat, and it sounds like there's a lot of possibility with them. We're in the process of going through some proofs of concept with them, and I have a business objective of getting those out into the public before the end of this quarter. So I will be posting some additional links as we have some more concrete examples for people to look at, but I feel like that was a pretty decent overview.
We have DrupalCon Amsterdam coming up. I know that we have a Kubernetes panel that is going to be happening there. We do need to talk about potentially starting a BoF to discuss Kubernetes options, so I'll be looking at getting something on the agenda this week and onto that BoF board. I know the organizers of DrupalCon have that pretty automated at this point, so we just need to get that into place.
So if you're going to DrupalCon, make sure to stop by and check out that panel; it'll be very interesting. I know Michael Schmid and Amazee are helping to organize it, and it has some very interesting and knowledgeable people on it, so I expect it should be pretty informative. I know some of the things we'll be talking about are some of the horror stories that we've had with Kubernetes.
You know, some of the design challenges with Kubernetes, things that you should consider if you're looking at implementing Kubernetes into your day-to-day workflow, and whether or not your organization should make the leap to something like Kubernetes. There are a lot of gotchas and things to consider before going down the path of a new technology.
C: Actually, one of the fun challenges that we have, and there must be somebody else out there that has this challenge: for development environments, running as many pods as possible in a cluster while keeping that cluster as small as possible. We have something that kind of works.
We've definitely run into the issue that you can only have a hundred and ten pods per node, at least on Google Cloud. I'm not sure if that's universal, I haven't looked more deeply into it, but that limitation is definitely one that we're running into. So on one hand we've been looking into a few different scale-to-zero options.
Osiris is one of them. It works fantastically for a simple nginx container, but we haven't gotten it to run for a Drupal setup with multiple pods and a database and so on. But even if we were only able to turn off part of the application, or scale part of it to zero, for example scale PHP and nginx to zero and leave the database, that would still be a good benefit. But yeah, I don't know if anybody else has done that.
The other problem is that if you request too little and keep too much open, then Kubernetes will happily pack everything onto a single node, because it simply says "oh well, you don't need much, so we'll put everything together," and it will never need to scale the cluster. So these kinds of challenges are pretty much what we're facing.
A: That might be a good topic for the next session that we do. Let me put a little bit of research into that, find out how we're handling it, and see what I can come up with from our side. We'll also post that out to the community and see if we can't get some additional feedback.
Well, great. I don't think there's any reason to keep us longer than necessary. I definitely thank you for your attendance, and I look forward to getting this out there so that the rest of the community can look at it in video form. If anybody watching this has any questions, feel free to hop into the Drupal Slack; we're in the #kubernetes channel, and we're happy to address any questions that you might have. Otherwise, thank you very much and have a good day.