From YouTube: TGI Kubernetes 091: kpack
Description
Episode notes are up at https://github.com/heptio/tgik/blob/master/episodes/091/README.md.
kpack content starts at 00:19:43.
Come hang out with Joe Beda as he does a bit of hands-on hacking of Kubernetes and related topics. Some of this will be Joe talking about things he knows; some of this will be Joe exploring something new with the audience. Come join the fun, ask questions, comment, and participate in the live chat!
This week we will be looking at kpack: a Kubernetes-native way to build container images with buildpacks.
Hello, hello, hello, welcome to TGI Kubernetes. I'm your host, Joe Beda. I'm a principal engineer here at VMware, and I'd like to welcome you. So this is our weekly-ish YouTube livestream, where I go over what's been happening in the Kubernetes world, and then we deep dive into a specific topic and learn a little bit about something together.
A long-time watcher joined us asking about what we should do for the 100th episode. I was talking to George about that. If you all have ideas on what we should do for the 100th episode, that would actually be amazing. My ability to plan is limited, but I think it would be fun to actually plan something fun.
Let's see what everybody is saying. A nice and happy Friday; Shahar, good to see ya, and everyone. You know, I actually met one of the folks who invented YAML. It was an interesting conversation, because I think the way that we're viewing YAML, and what we're using it for, was different from what he was thinking and using it for.
I think there's some interesting history there. So, Rory from Wroclaw; I assume that's in Scotland. Sandeep from Poland, Martin from the Netherlands, Raphael from Poland, also. I'm not sure how you pronounce the L with the dash in it; I should learn that. Marcin from Krakow, in Poland, also. Rowan from San Francisco, excited to hear about kpack; that's going to be our topic today. Mike from New Jersey, Paul from Austin, Wilbrod from South Africa, welcome.
Walid from Libya, good to see ya. Yatin from Ashburn, Virginia. James Watters from San Francisco; how's it going, sir. Sean from Birmingham, Christoph from Dusseldorf. Lemann is asking about India; I'll talk about my India trip in a little bit. Sayid from London, Glenn from New York, Jeremy from Colorado, good to see you, Jeremy. Jeremy is one of the folks working at Microsoft on the CNAB project, which hit 1.0. I didn't have that in the notes. George, can you put a CNAB 1.0 pointer in the notes so that we can actually get to that? And then Bogdan from Romania, Stevo from Germany, Amin from Strasbourg, Mohammed from Atlanta. Wow, okay, I can't keep up with you all; that's crazy. Bangkok, Nigeria, awesome. Okay, the L with the dash has a W sound; okay, that's good to know. Paris; Philippe, good to see ya. Florida, more Germany, DC, New York, awesome! Houston, good to see you. Okay, crazy, crazy, crazy. So thank you all for joining me.
It was one of these things, actually, where the CEO of the company, Pat, was meeting with some folks in India, and he promised that I would go out there and talk with them. He was like: hey, go to India. I'm like: all right, I'll go to India. And so I met with a bunch of customers, talked about Kubernetes, gave some talks to audiences out there, and it was pretty crazy; it was great. Met a lot of great people.
A
Countries
and
go
to
conference
rooms,
and
but
no
the
people
were
wonderful,
gave
some
great
talks,
some
great
questions,
some
great
interactions
and
then
I
had
a
chance
to
swing
by
Bangalore
and
visit
the
the
VMware
office
there
and
hang
out
with
the
team
working
on
that.
We
have
folks
working
on
project
Pacific
there.
A
We
have
folks
working
on
Tom,
Zhu,
Mission
Control
there
and
just
a
lot
of
really
great
work
going
on
out
of
that
out
of
the
office
there
in
Bangalore,
so
very,
very
cool,
ok,
so
I
don't
want
to
I
want
to
jump
into
it.
Let
me
switch
to
my
screen
and
oftentimes
what
we
do
early
on
in
these
in
these
things,
as
we
I
go
through
sort
of
what's
been
happening
over
the
last
couple
of
weeks,
since
we
had
the
last
tea
GI
k.
Yes: tgik.io/notes. If you go to that, you'll actually come to this document, and if there are notes or questions, or if you want to say "hey, at this time code this interesting thing happened," we can crowdsource here. It's a good way to participate, and then I eventually end up putting these things into GitHub so that we keep them forever. So, all right. The big news over the last couple of weeks is the launch of Kubernetes 1.16, our every-three-months-ish release.
We have a new release, and it's a lot of work to actually put that release together. The release team does an amazing job, and it's a rotating cast of characters that do that. There's a lot of good stuff in 1.16; I'm not going to go into exhaustive detail. etcd 3.4 is there, with a lot of good scalability improvements around that.
A
Old,
the
mo
files
old
howl
charts
that
you
might
have,
and
so
it's
something
to
be
aware
of,
and
let's
see,
there's
a
blog
post
here,
but
you
can
also
watch
the
TGI
K
episode
that
Duffy
did
where
he
went
into
that
in
a
lot
of
detail.
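As a concrete illustration of the kind of breakage involved: a Deployment manifest that still uses one of the removed API groups needs its `apiVersion` bumped before upgrading to 1.16. A minimal before/after sketch (the name and image are placeholders):

```yaml
# Before 1.16 this may have been: apiVersion: extensions/v1beta1
# In 1.16 that serving path is gone; Deployments live under apps/v1.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:              # required under apps/v1 (optional in the old group)
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx:1.17   # placeholder image
```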
Other highlights from this, just real quick: custom resource definitions, CRDs, went into GA. Along the way, a lot of the extended features that make these things closer and closer to the built-in types were merged and are no longer optional.
It's not going to end there; we have ideas to make it even better, but there we go. Let's see: we continue to see a lot of work happen with Windows, including some security binding into Active Directory and stuff like that. And then endpoint slices. Leonetti was asking: was that my idea, endpoint slices? I don't think so. I mean, in general, when we talk about how we scale Kubernetes, a big part of that is that we want to find ways to shard things.
That will also help with things like IPv6, so that's super cool. Hello, Alexandra from Paris, and Mike, good to see you from New Jersey; Ashwin also. And then the other big news is IPv4/IPv6 dual stack. Really what this is doing is taking the places where, honestly, we made a mistake early on in Kubernetes, where we had one IP address per pod, and actually turning that into an array. So this is not just going through and saying we have the IPv4 address and the v6 address; it's really saying: well, we went from one address to an array of addresses. That's a more future-proof change, but it's also a more complicated change to actually make happen. So I'm really excited that it's an alpha now; it's starting that march towards GA. All right, so that's what's going on there.
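Concretely, the change shows up in the pod status: the singular `podIP` field stays for compatibility, and a new `podIPs` array sits alongside it. A hedged sketch of what a dual-stack pod status looks like under the alpha feature (addresses are made up):

```yaml
status:
  podIP: 10.244.1.4          # singular field kept for backward compatibility
  podIPs:                    # new in the dual-stack alpha: one entry per IP family
  - ip: 10.244.1.4           # IPv4
  - ip: fd00:10:244:1::4     # IPv6
```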
There's also a CVE in kubectl. This probably doesn't impact most folks; it's essentially a symlink-traversal type of thing. If you're not doing kubectl cp with untrusted paths, you're probably going to be fine, but there are some issues there.
So, okay. Borough asks: will this support multi-NIC? The answer is, I don't know. I think out of the gate the real scenario there is going to be IPv6, but I know that there are a lot of folks thinking about how we deal with more advanced networking situations, and so this might be one way to get there.
But I think it's also worth looking at a sandbox project in the CNCF called Network Service Mesh, which is another approach to dealing with multiple interfaces and secondary interfaces, and being able to hook these things up between different pods and different infrastructures. So that's something that I think is coming, and it definitely does start to address some of these network-intensive and NFV types of situations. But if you want to do something like, say, "hey, I want this pod to have VPN access"
at the IP level, back to my corporate network, then for something like that, Network Service Mesh is probably the project to look at. Okay, cool. So yeah, we have the note there on episode 84 with the v2 APIs. The CVE probably won't hit you unless you're doing some funky stuff with kubectl cp.
The KubeCon schedule is up; a lot going on there. I haven't been able to find time yet to actually go through and pick the talks that I want to see, but I encourage you all to start digging in and seeing what looks interesting. And one of the things I love, and this is totally a grassroots thing: the Kinvolk folks started it at the last KubeCon.
There are so many good talks submitted, and not all of those can be shown, so they essentially created this sort of pre-conference thing (at least last time it was before the conference) called the Rejekts conference, where if your talk was rejected from KubeCon proper, you can submit it to Rejekts. It's essentially an extended KubeCon, which is really, really cool to see come together. Let's see.
Is this just for existing contributors, or is this for new ones also? Just new? Yeah, so there's a new contributor workshop. It's a great way for folks who say "hey, I want to get involved in Kubernetes in a deeper way" to start getting to know folks and start learning how to get moving in there. So there's content for both new and existing contributors.
So Lemann is suggesting that for episode 100 we might want to do a live Q&A with everyone who has done a TGIK episode. Yeah, I was actually talking with George about that. It might be fun to see if we can get a Zoom call with everybody online and we can talk about it; that might be cool.
So Kubernetes is not like languages, which often guarantee not to deprecate APIs after 1.0? Just to be clear: there is a deprecation policy, and this is something that we agreed upon really early on. Once an API hits v1, I don't believe that we've deprecated it, or at least we haven't done any of those. There's at least a twelve-month deprecation window for GA; for beta it's nine months; and then for alpha
it's like YOLO, you know, whatever happens. But you can read through the deprecation policy here. I think there is this idea that Kubernetes does need to evolve. I haven't heard of any v1 deprecation, to be honest, so I don't think we're there yet. Let's see: George is saying there's also mentorship content on the KubeCon schedule. So if you're looking to find someone to help you get started, that's a good way to do that too. And then, finally, let's see: we are looking for folks to help with Discuss.
What don't I like about CNAB? To be honest, I think there's a lot to like there in terms of what it's trying to do. The short answer is that I don't like the fact that it's self-executing, that there's essentially that invocation image, but that's a longer discussion. Let's see. And then George pulled together some really cool blog posts that I think are interesting and worth looking at. So, somebody reached out on Twitter
A
You
know
one
of
the,
so
you
know
some
ideas
around
sort
of
like
hey,
you
know,
collect
data
using
something
like
Prometheus
and
then
look
across
the
the
sort
of
the
last
day
and
figure
out
sort
of
like
what
your
what
your
history
is
on
this
stuff.
They
are
suggesting
setting
the
request
and
limit
to
actually
be
different
things.
My
experience
is
that
you
generally
four
critical
workloads.
You
want
the
most
predictability
and
you'll
get
to
get
the
most
predictability.
If
you
actually
set
those
things
equal
to
the
same,
but.
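In manifest terms, that advice is simply setting the two numbers to the same values, which also gives the pod the Guaranteed QoS class (the numbers below are placeholders; profile your own workload):

```yaml
resources:
  requests:
    cpu: 500m      # example values, not a recommendation
    memory: 1Gi
  limits:
    cpu: 500m      # equal to requests: most predictable behavior
    memory: 1Gi    # (all containers like this => Guaranteed QoS)
```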
But this is a really good guide about how to think about this and how to profile things. Let's see. Gergely has a really interesting post about using a Kubernetes cluster for their own personal infrastructure, which is really cool. They're running on DigitalOcean, and it talks about all the things they're running, then goes through infrastructure concerns, starting from something simple like a website and building up to more complex hosting on top of their own personal cluster.
There's also a CKAD exam prep checklist, which is really cool; a bunch of resources there. And then feloy has been working on bringing the man pages to the web, which is really cool too. You'll see it breaks out all the various kubectl commands and stuff like that, so you can look at, say, kubectl set, which I didn't even know was a thing. But apparently it is a thing, with all the various options and stuff like that.
So that's actually pretty cool to see that stuff coming. Yeah, there was a TGIK on requests and limits. I didn't actually have a lot of advice at the time in terms of how you pick the right values for those things, and I think that's the missing piece. It's like: how much memory
is my process going to use? I don't know. And so people usually say, I don't know, 4 gig, right? That's probably going to be enough, and it can be wasteful. So it is tricky to actually pick the right thing there, because if you're too small, you could have performance issues or you could find your workload getting killed; if it's too big, then you may err on the side of being wasteful. And this is a problem in cloud in general, right? I mean, it's like: how big a VM do
I need? I don't know; let's just err on the side of too big. So that is a quick run through of what's been happening, with some good pointers. Oh, and here's a talk on "Kubernetes Moment" episode 10, on endpoint slices. What is "Kubernetes Moment"? I don't even know. But they're on episode 10, and it's numbered 0010. Now, when I started TGIK, I numbered it 001, and I thought that was pretty ambitious; I'm like, if I get to a hundred episodes, that's great.
These folks are planning big, so that's cool. I want to watch this at some point; this looks really interesting, and I'd like to learn about endpoint slices. So cool, all right. The plan today, and we can mark it down, we're about 20 minutes in (I try to keep it short; I'm so bad at it), is that we're going to be looking at kpack. What kpack is, is a new open source project from our friends at Pivotal.
It is really a Kubernetes-native system for using buildpacks. So what are buildpacks, you might ask? Well, that's actually a great question, because I think that's part and parcel of what kpack is. Buildpacks are a more structured, opinionated way to build container images. I think most of us in this world
understand what a container image is, and the relationship between, say, something like a Dockerfile and a container image is something that I think we generally have a good handle on, or at least a lot of folks in the Kubernetes world have a good handle on it. But Dockerfiles themselves are a little bit YOLO: essentially, you can run any command you want.
You can override anything, and so they're incredibly flexible. That's one of the things that has made Docker so successful: if you have, say, a text file that says "hey, whenever I set up my machine, I run these twelve commands," you can essentially take those twelve commands, put them in a Dockerfile, build a Docker image, and you have something that approximates what you do whenever you actually stand up a new machine. That translation, from "here's
what I do when I'm setting up a machine in a manual way" to "let me automate it with a Dockerfile and create an artifact," has been an incredibly useful thing for making Docker successful, because there's that direct translation. But if you're coming at this not from the operations point of view of "here are the commands I run on a machine, but I want to run them over here," but more along the lines of "I have some code,
how do I get that code into production?", then for that code-to-production path, Dockerfiles are a little bit too flexible sometimes. Especially when you're dealing with a corporate environment, you may want to actually say "thou shalt use these particular decisions" in terms of the technology stack that we're building on. I think a lot of times there are requirements around that.
There's that policy of "here are the buildpacks that apply to this thing." You then go through (there's a bunch of caching so that you can get efficient rebuilds and updates) and then you do a build. The build could be just copying files if you're doing a simple Python app; it could be going through and downloading some stuff, like Ruby gems or other dependencies; or it could be something like building a Java app or building a C++ program.
And so that's essentially what a buildpack is, and there's this idea that there's a lifecycle of activating a buildpack. Yeah, like OpenShift has s2i; I mean, these ideas have all been swimming around for a while. And I think if you actually click through here to Cloud Native Buildpacks, which is the buildpacks project, there's the platform-to-buildpack
specification. One of the things I don't know if they have is the history at some point. On the webpage here there is a nod towards this idea of optimizing source-to-deployment image, which I think probably dates back at least to Heroku in 2011, probably even before that. And if you look at the lineage of OpenShift:
in OpenShift there was this idea of cartridges, which was very similar to buildpacks in some ways, and so a lot of those ideas have carried forward. I think the interesting thing here is that, I mean, I think OpenShift s2i is great, but it has OpenShift in the name, right? What I love seeing, and I think this is part of what has made Kubernetes successful, is that these things are decoupled.
What I love seeing, and I think what's great about this, is that buildpacks.io is a CNCF project, and kpack is an open source project from Pivotal that is essentially separated out from the more opinionated Pivotal products like Enterprise PKS. Seeing these things have an independent life of their own is, I think, a really good thing for the community as a whole.
Now, here's the thing: if you look at this spec for buildpacks, this is some pretty in-depth stuff, and I've got to be honest with you, the first time I saw this was today. That's how much prep I actually do for these TGIKs. There's a lot here. It's a pretty complicated process when you actually look at how buildpacks get built, and I've got to be honest, I don't totally grok all of this.
We all know that Docker containers are a bunch of layers that stack on top of each other. I think what we see here is that with the buildpack building stuff, you execute code, and that code has access to edit those layers directly. So instead of actually editing layers by running commands, and having the composition of those layers be done by essentially extracting data from the file system
after the fact, there are programs, as part of the buildpacks, that can explicitly go through and start inserting stuff into different layers, which I think is really interesting. So there's a level of access that the buildpack builders have that you generally don't get when you're doing a docker build type of thing. It's a fundamentally different way to think about building container images.
One of the other things that I'm going to be looking at as we play with kpack here (and there's a tutorial that we're going to go through) is this: when running anything that builds an image on Kubernetes, the old-school way of doing it was to run docker build itself, and you had to have a Docker daemon running. To get a Docker daemon running inside of Kubernetes, you had to do one of two things. You either had to give access to the underlying Docker daemon that is running on the node.
There are two problems with that. The first is that you may not be running with Docker; you may be using containerd or CRI-O, so there may not be a Docker daemon running on the node underneath you. And the second is that now you're essentially accessing resources out from under Kubernetes, so Kubernetes doesn't know how to track those things: you can leave stuff around that wouldn't get cleaned up, and there are security issues around that. So essentially, having direct access to the Docker daemon is obviously a problem.
The other old way of doing this was to relax some of the security mechanisms so that you could essentially do a nested Docker, and in general, relaxing things that way also introduces security issues. So what we'll be finding here is that there is a set of projects which operate cleanly inside of Kubernetes. Someone here is suggesting Kaniko is one of those, from Google; that's essentially a purely user-mode
builder. I think it does work. Does it require root? It may require root, but it runs inside of Kubernetes in a cleaner way, and it essentially evaluates Dockerfiles. I'm going to be looking to see whether kpack does a similar thing, where it actually operates in a clean way with no root access. I believe Kaniko does require root in the container, but doesn't actually require privileged containers.
At least, that was the case the last time I looked at it; maybe I'm wrong, or maybe I'm thinking about a different one. We're probably going to need something like fully user-namespace-supporting Kubernetes to be able to do that without root entirely. So Steven here is saying the CNB buildpack platform interface is an abstraction for generating OCI images using modular build stages (buildpacks) in a way that maximizes layer reuse between builds. Yeah, so
that's a good summary. The fun thing here is that we actually have programs that know how to operate natively on the set of layers. So instead of just blindly building layers up, you have programs that know how to actually deal with those things, and that's the inversion that happens with buildpacks that is different from the Dockerfile way of building stuff. Steven also says no root or privileges needed for CNB, and kpack doesn't need root perms; that's really good.
And yes, I'll cover why we need a different build tool, because I think this is fundamentally a different way of constructing container images in a more structured way, and the fact that it runs cleanly on Kubernetes is a good thing. I think there's room for this to actually be fundamentally more controlled and more efficient than the traditional way of building images.
And then there's a question about how this compares to buildah; buildah can now build images without root, I believe. Does buildah actually run natively on Kubernetes? I don't know; I need to dig into buildah at some point also, but that's another newer Dockerfile-evaluating build tool. Okay, so let's actually go through and set kpack up. The first thing we need is the GitHub release from seven days ago: in anticipation of me doing TGIK
last week, before I got sick, the folks actually pushed a new release. I'm going to go ahead and download that now. One of the notes says builder images must use the required lifecycle v4; is that something different that I have to download? I don't know. The things that I need to download are the release YAML and the logs utility.
A
B
A
There's
the
gamble
that
we're
gonna
apply,
we're
gonna,
actually
dig
into
that.
Let's
see
I'm
gonna
catch
up
on
the
comments
here.
What
is
path
the
rule
is
containers
on
kubernetes
container,
runtimes
I,
don't
know,
I
haven't
actually
kept
up
to
date
in
terms
of
I.
Would
love
to
see
us
get
to
rootless
containers,
but
I
think
there's
there's
a
couple
of
things.
There
number
one
is
being
able
to
actually
run
containers
without
having
root,
which
means
that
at
this
point
you
know
at
least
some
parts.
A
Some
of
the
major
parts
of
sort
of
the
cubelet
tank
chain
can
be,
can
run
without
root
and
then
I
think
there's
also
supporting
you
know
this.
It
gets
bound
up
with
supporting
user
name
spaces,
which
means
that
you
can
look
like
you
have
root
inside
your
container,
but
you
really
don't
have
root
because
there's
a
sort
of
a
mapping
going
on
there.
Let me call this something like kpack-logs. So: tar xzvf. I just want you all to be impressed that I know that tar command line out of the gate. This is a tar file mostly just so that you get a binary with the execute bit already on it.
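The point about the execute bit can be seen with a tiny round trip. This uses a stand-in tarball, not the real kpack release artifact:

```shell
# Build a stand-in "release" tarball containing an executable
mkdir -p /tmp/kpack-demo/in /tmp/kpack-demo/out
printf '#!/bin/sh\necho kpack-logs-stub\n' > /tmp/kpack-demo/in/logs
chmod +x /tmp/kpack-demo/in/logs
tar czf /tmp/kpack-demo/logs.tgz -C /tmp/kpack-demo/in logs

# x=extract, z=gunzip, v=verbose, f=file; tar preserves file modes,
# so the binary is runnable immediately after extraction
tar xzvf /tmp/kpack-demo/logs.tgz -C /tmp/kpack-demo/out
/tmp/kpack-demo/out/logs
```

Unlike a plain download (e.g. via curl), the extracted binary keeps its execute bit, so no extra chmod is needed.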
I'm going to run that, because why not? And it's saying... I don't know what's going on. Okay, one of the things I did is I did start: we're going to be working against a kind cluster, and I have my kubeconfig set up. I don't know what this logs binary does; I ran it and nothing happened. Can I do a help? There's also log-to-standard-error, which means it's using glog (I'm sorry that everybody's using glog). It takes the build number to tail and the image name. Okay, so this is actually very specific to kpack. Okay, cool, so there we go. One of the things that I'm wondering about is builders.
The buildpacks have to be of a certain vintage to be able to do it, and that includes the builder images. We're going to be using the Pivotal-provided builder images, but buildpacks themselves are extensible, and you can do fancy things there.
Okay, Lerman says: as it relates to container image types, wasn't part of the Docker hype circa 2015 that everyone using a common container image format was better? Yes, and we are using a common container image format: that's OCI, and it's evolving.
It's actually great to see that be an industry standard. But for the way that you build that image, I think it's great that we're seeing that there are multiple ways to skin the cat, right? It's the question of: we have the thingy, which is the image, and then we have the way that you build the thingy. For an analogy, let's say that you have an executable on your system.
You could have multiple compilers, and those compilers can all produce, say, an ELF binary. You can have the Go compiler, you can have GCC, you can have an LLVM-based thing, right? Those are all different ways of building that one binary, and that's the sort of difference we're talking about here.
Okay, so there are some breaking changes; I don't care about those, that's in their release notes. Okay, so, cool: I have cluster-admin, and then I need an accessible Docker v2 registry.
You know what, should we use Google Cloud or AWS for our registry? Which one should we use? AWS is a little bit more complicated; let's use GCP to do that. What do you think? GCP? Yeah, okay, we'll do GCP. Give me like three seconds here to make sure that I'm not showing stuff on the screen that I don't want to show. Going to the cloud console: I think I have a project called TGIK here. Let me just make sure that I'm switched to that. Yeah.
Is it under Compute Engine or Kubernetes Engine? No, it's under Storage, maybe. Where is Tools? I guess it's under Tools: Container Registry. There we go. We're going to pin this puppy; now that I pin it, it shows up here, and we'll look at images. Okay, cool. So I have some old stuff in there already. Okay, so:
Let's go through, and oh, the other thing that we're going to need here is to actually create a service account for GCP; we'll get to that, because that actually ends up being a little bit tricky. Okay, so now we're in the install step. We're going to do the kubectl apply of their release.yaml. You all know that one of the things I do is I do not blindly install YAML, so let's read the release YAML. What all do we have going on here?
All right, so here's what we're going to do. This is great; this is one of the reasons I love kapp: it actually gives me a way to see what I'm going to do before I install it, and it's a trusted tool on my side. This is something that we can't do with CNAB, where we actually have an executable image.
We have a bunch of CRDs. So what do we have? We have builders, builds, cluster builders, images, and then source resolvers. Okay, so those are the custom resource definitions. We have one thing called a controller that's going to be in the kpack namespace, and then we have a deployment for that. Also, I wonder why the kpack controller doesn't actually have a namespace associated with it.
Okay, so I do this. Now, one of the interesting things about kapp is that it actually waits until everything is up and running. If we go back to the instructions here, what we see is "watch until you actually see Running for the pods in the kpack namespace," and that's one of the things that kapp actually does.
It actually waits until things are running. So now, if we actually go through and look at the pods here, we'll see that things are up and running; very cool. So that's getting that up and running. Now, one of the things that I think we need to do here is create a ClusterBuilder resource. A ClusterBuilder is a reference to a Cloud Native Buildpacks builder image. The builder image contains the buildpacks that will be used to build images with kpack.
Okay, so these things can either be built in or they can actually be downloaded. So a builder is lifecycle plus buildpacks; okay, so it's the thing that implements the whole lifecycle, and you have one and only one. So it's the full meal deal: it essentially runs those steps and has the opinions about what is actually going to be applied. So it's essentially the build-time environment.
Okay, sounds good. All right, so what we need to do is create one of these things. The tutorial says to call it "name-of-cluster-builder"; when you're writing tutorials, it's great just to say "name it this thing," right? So what we're going to do is touch a file, and this is the builder.
A
And if I go do this... I did this? Okay, let's see, I have multiple windows up and I'm using window management here, give me a second. There we go. Okay, so we have builder.yaml and we're gonna copy and paste this stuff in there. Alright, so the API group is build.pivotal.io/v1alpha1, ClusterBuilder is the type, and I think what ClusterBuilder means is that this thing is a cluster-level resource that's available to everybody. This is something new in the 0.4 release.
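For reference, a minimal ClusterBuilder manifest of the shape being pasted here might look like this; the builder image name is an assumption (the tutorial of this era pointed at the Cloud Foundry Cloud Native Buildpacks builder), so check the kpack docs for the current one:

```yaml
apiVersion: build.pivotal.io/v1alpha1
kind: ClusterBuilder
metadata:
  name: default
spec:
  # assumed builder image: a Cloud Native Buildpacks builder
  # published by Cloud Foundry; verify against your kpack release
  image: cloudfoundry/cnb:bionic
```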
A
Okay, did you see what I did here? This is something I did wrong, which is a caveat emptor. What I actually did is I re-executed kapp, saying hey, this is the kpack application, and I gave it just the new YAML file, not the old file, and one of the things that you're gonna see here is that a bunch of stuff went missing.
A
This gives you some confidence, and so right now I can do two things. First of all, I'm gonna say no, because I don't want to delete all that stuff. I could actually have the builder be a different sort of app name, so I could call it kpack-builder, like that, or I could actually specify the original YAML here and say, what did I do?
A
That's an annotation, there's the label. I don't know, yeah, I thought that it actually put the name of the label in there. Yeah, multiple -f also works, but anyways. This is probably, I'm guessing, a hash of the app name that I gave it. Okay, so.
A
Did this actually... is there a flag that gets me the dead ones? But anyways, did this actually go through and run the image to get metadata about it? I'm wondering, because what we got out of this is essentially metadata about the builder, and so either it downloaded the thing and actually cracked it open, or it actually ran it, and I'm wondering which one it is, because this thing might have actually wanted to run a pod to be able to do that.
A
I,
don't
know
so:
let's
see,
okay,
here's
what
I'm
gonna
do
what
we're
gonna
I
know.
Maybe
somebody
can
answer
it
out.
We
can
dig
into
that
later.
I
think
that's
super
interesting
that
it
actually
went
through
and
actually
is
able
to
collect
a
whole
bunch
of
metadata
about
that
particular
builder
image
to
understand
the
different
things
that
it's
able
to
do,
and
so
this
is
a
bunch
of
version
metadata.
A
Now, one of the actually fascinating things here, and I'm just going to call this out, is that this has conditions. This has a condition here of type Ready that went to true. This is a pattern actually borrowed from the Knative folks, and it is really cool. So it fetched the metadata by querying the registry; it did not need to crack it open. Okay, so this is actually metadata that's in the registry.
A
Not necessarily something that's from within the bundle; that's actually from the metadata around it. Okay, cool, so that's good to know. And then Bogdan is like, maybe I want to try k9s at some point. Okay, at some point we can give it a try for sure. Okay, the buildpack metadata is a label on the image, sounds good. Shut up, Slack. Okay, cool, so we got that installed, we got the log utility, we are good to go. Oh, now we need to create a secret.
A
Are there ways that we can actually start creating the idea of default private registries with Kubernetes clusters? Because this type of thing is something that we just have to do all the time, and it's a total pain in the butt. And so what we have here is that we're gonna actually create a secret for dealing with this, and we're gonna use GCR.
A
And there's a certain amount of exercise left to the reader for this. So the first thing to recognize is that there is a type of secret which is the docker-registry secret. So we know kubectl create secret docker-registry, right? We have one of these types of secrets, and what this ends up doing is it actually creates the dotfile that's needed by docker and puts that as the key into the secret. That's different than this secret here, because you can see the type on this secret is basic-auth.
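For contrast, the secret that kubectl create secret docker-registry produces has this shape (the name here is a made-up placeholder):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: regcred                        # hypothetical name
type: kubernetes.io/dockerconfigjson   # the "docker registry" secret type
data:
  # base64 of a .docker/config.json-style auth file
  .dockerconfigjson: <base64 of a .docker/config.json auth file>
```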
A
There are other things that do this, but my guess is that this probably should not be using... I don't know, look, we're trying to actually control where people go through and start using kubernetes.io in these types of namespaces, unless there's official support. So that's something that's worthwhile. And then the other thing to keep in mind is that you have to have this annotation here, that actually says: this is for docker images that start with this particular prefix. So, let's make this happen.
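As a sketch, the annotated basic-auth secret being described looks roughly like this in the kpack docs of this vintage. The names are placeholders, and the annotation key comes from the build.pivotal.io API group shown earlier, so verify it against your kpack release:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: basic-docker-user-pass   # placeholder name
  annotations:
    # tells kpack which registry prefix these credentials apply to
    build.pivotal.io/docker: gcr.io
type: kubernetes.io/basic-auth
stringData:
  username: _json_key
  password: <contents of the service-account JSON key>
```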
A
This is gonna be gcr.io, basic auth. We need the username and password. Okay, now this is where we're sort of off the rails here. So here's what we're gonna do. You have to, in Google, and every single one of these has their own funky way of doing this stuff, and none of it's easy. But what we essentially do is we need to go to IAM in Google. We need to go to...
A
Admin here, I think, is what we need, at least Storage Object Admin. Let's give it Storage Admin, because it actually turns out that under the covers GCR uses Google Cloud Storage instead of having its own store. You know, one way to do this is that you actually get permission for GCR, and then GCR has very specific permissions within the Google ecosystem to talk to Cloud Storage, or maybe it uses Cloud Storage that you can't see, using its own creds.
A
But
the
way
that
this
was
done
is
that
it's
actually
a
pass-through
to
cloud
storage,
but
now
you're,
giving
it
to
all
store
this.
This
particular
service
account
now
has
access
to
all
storage
in
your
project,
which
kind
of
sucks
and
and
if
we
were
doing
something
like
AWS
I
am
you
could
write
a
document
that
says
no.
Only
this
bucket
only
needs
path,
but
as
far
as
I
configure
Google
I
am
doesn't
do
that,
and
so
this
is
just
one
of
the
reasons
why
I
don't
like
Google
I
am
sir
okay.
A
So the first thing here is that we have the username, and so we're going to copy that; that ends up being this email. But we can't just do that. You'd like to be able to do that, and we should be able to do that, but we can't. One of the things we did early on with secrets is assume you're going to put binary stuff in there.
A
So
what's
base64
encoded
what
we
should
have
done
and
it's
one
of
those
things
where
maybe
at
some
point
we
can
clean
it
up,
is
that
we
should
have
had
the
value
and
then
we
should
have
had
an
encoding
thing
so
that
you
could
have
the
encoding
actually
be
something
like
utf-8
or
you
could
have
or
Unicode
or
you
could
have
the
encoding
be
BAE.
64,
so
the
binary
thing
was
actually,
it
was
actually
actually
optional,
but
we
didn't
do
that
so
now.
What
you
need
to
do
is
you
need
to
do
a
base64.
A
This thing, the -n, means don't actually put a newline after the fact. We do that, and then here's the string that we actually put here, which is like, okay, fine, right? But then you're always going to forget what that string is, and so you want to put this into a comment so that people can figure out what happened. If you actually create the secret using kubectl, it does some of this stuff for you.
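The encoding step being described is just this; the -n keeps echo from appending a trailing newline, which would otherwise end up inside the decoded secret value. _json_key is the literal GCR username mentioned in the chat:

```shell
# base64-encode a value for the data: stanza of a Secret
echo -n '_json_key' | base64
# → X2pzb25fa2V5

# for the key file itself, on a Mac, you can pipe straight to the clipboard:
#   base64 < gcp-sa.json | pbcopy
```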
A
And I called this thing gcp-sa, so that's the input, and then it goes to standard out, and I'll pipe this on the Mac to pbcopy, which puts it on my copy buffer, and now I can go through and actually paste that thing in here. So there we go. "Isn't the user _json_key?" Maybe it is _json_key, maybe you're right. Okay, so.
A
We have to do this thing, and this is one of those places where things are non-standard. If you actually watch the episode on Tekton CD, it was very similar. This is the thing where you'll watch people do demos of this stuff and they'll be like, look, magic, the registry just works, and it's always a pain in the butt. So now I got that. Now I can do... oh, and then, so I'm doing this. Okay, so this is the namespace this is going into.
A
When was that added? I am out of date here; I would have done it with a type, but I guess this is another way to do it. Okay, so we're gonna change this to data now, because we did do the base64 encoding and this thing is too long, and so we'll go ahead and do that. Thank you, Cole, that's actually awesome. My bad there.
A
Maybe
we
can
get
the
folks
working
on
google
config
connector
to
provide
a
way
to
do
it
foresee
your
knees
yeah,
so
that
would
be
actually
cool.
Okay,
good
catch
I
was
gonna
screw
that
up.
If
I
wasn't
careful,
okay,
so
now
we
have
that
and
then
then
the
other
thing
we
need
to
do
is
we
need
to
create
a
service
account
here,
and
so
I'm
gonna
actually
append
this
to
the
same
file
here.
A
One string; so this is not actually a quoted thing, we should be fine there. It's just the word wrapping in the UI, yeah, good catch. And then you also need the Kubernetes service account. Yeah, I did the service account down here, it's actually in the same file, so we actually did create that service account.
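The service account stanza being appended to the file is small, roughly this, with placeholder names:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tutorial-service-account   # placeholder name
secrets:
  # the registry (and, if needed, git) credential secrets kpack should use
  - name: basic-docker-user-pass
```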
A
Okay,
so
now
we've
created
this
I
assume
that
I'm
gonna
want
to
clone
it,
get
clown
boom.
Okay!
So
now,
let's
go
back
to
our
instructions
here,
and
this
isn't
Java
and
I'm
a
dummy
when
it
comes
to
Java,
but
so
this,
if
I,
can
build
and
deploy
java
application.
You
know
something's
going
right.
Okay,
apply
kay
back
image
configuration
image
configuration
is
the
specification
for
an
image
Oh.
Actually
you
know
what
I
want
to
do
here.
Give
me
a
second
here
so
I.
A
So there we go, okay, cool, because that's going into the default namespace. "An image configuration is a specification for an image that kpack should build and manage. For more info check out the image documentation. We'll create a sample image that builds with the default builder set up in the installing documentation." So this is the builder that we're talking about here, and this is fascinating because of where it actually goes. I'm going to call this image.yaml; okay, so we're gonna create a new thing here called image.yaml.
A
Alright, so now this is a URL. Do I put the GitHub URL like that? Do I actually put the git URL, because that's a URL also? Do I want to use the HTTPS one here? That's probably what I want to do, I'm guessing. Unclear, though, what the form of that URL is, because it could be any number of things. Your buildpack sample app fork.
A
Public
accessible,
github,
URL
and
then
there's
a
way
to
actually
do
a
private,
git
repo.
Alright,
we'll
see
if
that
works.
Ok,
so
if
we
need
the
dot
git
or
whatever,
ok
and
then
I
can
just
go
ahead
and
just
apply.
This
and
image
usually
works
without
the
suffix
bogdani
saying:
okay,
so
we'll
take
the
dot
get
off
here
and
we'll
see
what
happens
so
I
I
do
that.
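Putting the pieces together, the Image resource being written here looks roughly like this. The tag, fork URL, and names are placeholders, and the apiVersion matches the v1alpha1 group used for the ClusterBuilder:

```yaml
apiVersion: build.pivotal.io/v1alpha1
kind: Image
metadata:
  name: tutorial-image              # placeholder
spec:
  tag: gcr.io/<your-project>/kpack-demo   # where built images get pushed
  serviceAccount: tutorial-service-account
  builder:
    name: default                   # the ClusterBuilder created earlier
    kind: ClusterBuilder
  source:
    git:
      url: https://github.com/<your-fork>/sample-java-app
      revision: master
```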
A
All right, so there's caching going on here also, so this is good. This is fascinating. So I want to look: skip the archive expanding. But then there's a bunch of these buildpacks that say yeah, we want to do these things, like OpenJDK and stuff like that, but then there's a bunch of other stuff that's like, hey, do you want to do Stackdriver, do you want to do all this and that? Those things are being skipped, and then it passes.
A
That was the discovery phase, I think this was the analysis phase, and now I think we're into the build phase at this point; don't quote me on this. So now we're actually running the OpenJDK buildpack, and what it's doing is downloading from OpenJDK, verifying the checksum, and then writing that into a layer, okay, and that goes into /layers, blah blah blah. Then it's doing the same thing for the JRE, which is also going into a layer. Is it the same layer?
A
One
don't
tell
them
that
I,
don't
know
what
I'm
talking
about
replacing
main
artifact
with
repackage
art,
so
there's
some
voodoo
in
terms
of
getting
maven
sort
of
installed
and
maybe
run
to
actually
download
all
the
stuff
that
needs
to
get
downloaded.
There's
an
application
build
pack
which
is
essentially
just
writing
the
classpath
stuff
in
there's
a
spring
boot
okay.
So
this
is
interesting.
So
there's
a
lot
of
like
some
of
these
things
are
just
modifying
like
class
paths
and
stuff
like
that
to
share.
A
Yeah, Maven is not, you know, lightweight. So what we're actually doing here, and I think what's interesting, is... let's see, we're going through, we're caching. Ok, so now that's done and now we're doing the export phase, and the export phase is actually looking at the layers that were created.
A
So we have an app layer, a config layer, a launcher layer, and the OpenJDK; there's an executable JAR layer, there's a Spring Boot layer, and then there's an auto-reconfiguration layer, and it's turning those things into layers for the eventual docker image that we're actually pulling down. And then it's caching those things. I wonder how it's cached, you know, where is it cached?
A
So
builder
is
basically
a
pod
of
containers
for
creating
the
build
pack
actually
I
think
what
a
builder
is
is
actually
it's
a
it's
a
thing
that
actually
runs
the
process,
essentially
those
four
steps
that
are
part
of
build
packs.
So
it's
a
single
container
image
that
can
that
essentially,
has
all
the
build
packs
in
it.
So
if
you
want
it
to
like,
go
through
and
say,
I
want
to
do
the
you
know
the
VMware
specific,
build
pack
or
my
enterprise
specific,
build
pack.
A
Of
that
layer
thing
hey,
you
must
be
using
up
to
this
level
to
be
able
to
do
this.
Okay,
so
we're
doing
this
caching
layer.
So
I
think
this
is
uploading
to
GCR
now
like
if
I
go
through
and
if
I
look
at
my
my
container
registry
here,
okay,
so
here's
kpac
demo-
and
we
see
that
we
actually
have
tags
latest
and
a
build
thing
here,
and
this
thing
was
created
three
minutes
ago,
and
so
it's
done.
Okay,
so.
A
Okay,
so
that
looks
good
I
didn't
know
it
was
done.
It
would
be
great
to
actually
call
out
in
the
output
here,
like
you
know,
hey
doing
phase
blah
blah
blah
and,
like
all
that,
because
I
think
it's
Xander
saying
its
capac
all
the
way
down
to
it.
Okay,
so
you're
gonna
actually
be
using
kpac
to
create
the
builders,
so
Pat
Creek
builder
creates
builders,
but
k
pack
will
do
this
for
you
soon.
A
Okay,
so
pack
for
those
who
didn't
see
it
and
I
only
know
this,
because
if
we
go
to
do
build
packs,
X
dot,
IO
pack
is
essentially
a
toolkit
for
essentially
doing
this
stuff
directly,
which
is
part
of
the
upstream
project,
whereas
k
pack
is
sort
of
a
I
think
maybe
like
a
kubernetes
native
version
of
pack,
because
maybe
the
right
way
to
actually
say
it.
Okay,
so
now
we
have
an
image,
that's
awesome!
So
now
we
can
actually
do.
A
Awesome-
and
here
we
go
all
right,
so
it
looks
like
that
pack
is
pretty
slow
compared
to
basic
docker,
builds
well,
I,
think,
Dennis
and
I
think
this
is
the
thing
that
we're
gonna
see
is
that
you
know
there
was
a
bunch
of
caching.
There
was
a
bunch
of
sort
of
one-time
thing
going
on
there
and
if
you
actually
went
through
and
did
a
maven
based
docker
build
with
a
docker
file,
it
would
be
very
similar
in
terms
of
the
amount
of
time
that
it
would
take.
A
So it would be nice if kapp could actually go through and recognize, maybe based on a whitelist of API versions or kinds, that hey, this thing is condition-enabled, and when something is condition-enabled, then we know that we can actually wait for it to be done based on those conditions. That would be pretty cool. Okay, but we're done there. So the output should look like this.
A
"You can't run the image." I know, I know, but I want to pull it, and I don't have permission to pull it. "Error response from daemon: pull access denied", so I don't have permission; "repository does not exist or may require 'docker login'". Permission denied for latest from the request. So I need, you know, docker...
A
Make it public: a bad idea, but let's just do that, because that'll get us moving; something's screwed up with my docker login. Okay, so what we're seeing here, which is interesting, is that we have a whole bunch of layers now. The interesting thing is that the way building from a Dockerfile works is that these things are essentially a stack, and I'm not actually sure which way they go when they're listed here.
A
But
if
you
made
an
edit
in
this
layer,
it
would
actually
invalidate
all
the
downward
layers
that
are
going
on
here.
The
interesting
thing
with
build
packs
in
the
way
that
kpac
works
is
that
it
knows
how
to
responsibly
be
able
to
edit
layers
in
the
middle.
So
the
idea
here
is
that
it
can
be
much
more
surgical
when
it
actually
goes
ahead
and
does
updates
g-cloud
off
was
the
answer.
I
was
g-cloud
offed.
Oh,
you
know
what
g-cloud
config
configure
age.
A
We actually built an image, and that image is actually runnable, and it's a Java image, so you know it's not going to be small. But how big is it? Let's look at that, kpack-demo. Is there metadata about this? So this thing is a hundred megabytes, which for a Java image is actually relatively reasonable, and I think what we're seeing here is that the run image versus the build image are different, right?
A
So
this
thing
has
the
GRE
installed
but
doesn't
have
the
JDK
installed,
and
so
that's
one
of
the
things
that
you
actually
get
out
of
this
and
so
and
then
I
think.
One
of
the
things
that
we'll
see
here
is
that
it's
a
v2
docker
manifest
there's
a
bunch
of
this
stuff
here,
but
then
is
there
like
labels
or
other
metadata
on
here?
No,
not
yet.
Ok,
so.
A
One of the things with OCI registries is there is a lot more room to actually have rich data and different types of objects in the registry, and so it's going to be really interesting over time, as we start seeing more support in OCI registries, for this stuff to actually be utilized.
A
...by things like kpack. Okay, so kpack rebuilds with source code updates. Ok, so we can actually do kubectl get builds, and each build actually ends up being a thing. Does it actually GC these things, or do we see builds that go on forever? At some point, is there a place where we're going to have problems, like what we had with Deployments early on in Kubernetes?
A
And on rebuild you can see how I'm running this, and then you can tail the logs, and it also reads those. Okay, so let's actually make a change to the app. I know nothing about Java. We have tests, we have main; okay, so I assume the code is under source. Let's say that's the case. I go through: sample application, Java.
A
sample.yaml... I want to change it so it says something different. This is my problem with Java, I have no idea what's going on. src/main/resources, okay, src/main, no, resources, there we go, okay: banner.txt has the buildpacks logo; application.properties has nothing in it; index.html, okay, so I'm gonna edit this thing: hello, Joe. "It will garbage-collect builds; you can configure that on the image." Okay. So in resources... you know my joke about Java? You want to hear my Java joke?
A
"Worth noting, there's a ton of metadata about the layers on the image: docker inspect." Okay, so if we do that, we will see the metadata on the images. Okay. So this is something that... okay, did I configure a PVC? I didn't; nobody told me I needed to. How do I configure a PVC? Is that in the release?
A
Nothing talks about a PVC here. "You can add a cache to the image." Is that on the image? It is on the image: cacheSize, and then, okay, "the size of the volume claim that will be used by the build cache". So one of the things I do like is that all the building is happening in my particular namespace, so that's good. Okay, and the number of builds that's retained. Okay.
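From the docs being read here, the cache and build-retention knobs live on the Image spec, roughly these fields, though the exact names should be checked against your kpack version:

```yaml
spec:
  cacheSize: "1.5Gi"            # size of the PersistentVolumeClaim used as the build cache
  failedBuildHistoryLimit: 5    # failed Build records to retain before GC
  successBuildHistoryLimit: 5   # successful Build records to retain before GC
```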
A
Okay, same thing. Okay, so this should work. So now let's go through, we're gonna do logs here, all right, and we're gonna pick up where we left off. We're gonna see how fast it is. Oh, I'm gonna have to do one and then do another one. Okay, let's do another change here, and then this will catch after this one: hello, TGIK. Commit changes.
A
There's a digest there. Okay, so when you do a docker build, what it does is it takes all the stuff that's in your current directory, or at least the directory that the docker build points at; it tars it up into a file and then it transmits that over to the docker daemon, and that's called the build context, right? And so when you're building in something like Kubernetes, there's this question of: where does the build context come from?
A
One of the things, and I think this is it, is the source. Okay, this is fascinating. So here the source is actually a git repo, but the source could also be just a tarball or a jar or something else that you're getting from somewhere else. The different sources: you can do git, you can do blob, where the source code is a blob in the blob store, or a jar, because why not, it's just a zip file, right? And then sub-path.
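So the source block on the Image can take a few shapes; git and blob are the ones mentioned here, and the URLs below are placeholders:

```yaml
# a git source, optionally pinned to a revision
source:
  git:
    url: https://github.com/example/sample-app
    revision: master

# or a blob source: a tarball/zip/jar fetched from a blob store
source:
  blob:
    url: https://storage.example.com/source.tgz
```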
A
Now, let's actually see. Okay, so I'm assuming that if you configure this thing with a blob, the way that you actually do an upgrade is you say, hey, take the new blob, or something like that. You have to actually change the Image resource to be able to kick off a new build, in some way similar to changing...
A
Yeah, because I love the idea of source-to-image, I think it's great, but I also like the idea of being able to be more explicit about when you do that and what source you're grabbing. I think that's one way to look at this. Okay, so, cool: we're reusing the cache layer, boom, reusing cache layers. We only had to sort of... let's see this.
A
Does it support things like GitHub webhooks? If you had something like a larger pipeline, you could do something like, well, you take the thing, you upload it, it could be something local, and then you actually point it at that. So, thinking about how do we actually work this into a larger workflow?
A
Does this just retain the CRDs, or does it then actually go through and GC the builds that are sitting in your registry also? When it goes through and actually deletes a build, does it just delete the local record of that build, or does it clean up the stuff that's in the registry? Because then there's the question of, what is the lifecycle of stuff happening in the registry?
A
If
I
have
my
like
a
bunch
of
like
crap
builds
and
then
I
have
my
official
build
I
want
to
make
sure
that
I
keep
okay.
So
it's
only
ashlynn
saying
it's:
only
the
local
records
are
deleted
so
essentially
having
a
GC
mechanism,
and
your
registry
is
important
also
because,
if
you're
doing
this
thing,
where
it's
like
every
change,
every
check-in
I
do
a
rebuild
you're
going
to
have
a
lot
of
builds
and
at
some
point
you
want
to
trim
those
and
there's
some
builds
that
you're
gonna
want
to
keep
there.
A
One of the things that I think might be interesting, looking at other pipelines, things like Argo and Tekton, is it might be interesting to have the source configuration also perhaps be a persistent volume. How would you bind this into something like Argo, where you actually have a whole other way of getting things, and maybe you're pre-processing or doing some stuff to your source before you go off and build it? So I think that would be interesting to think about; that would also be a cool feature. Okay.
A
Do
the
final
thing
here
so
this
thing
is
built:
I
could
run
it
in
the
in
the
the
remote
cluster,
but
I
can
go
through
and
do
let's
see
dr.
Paul
one
of
the
things
that
we're
going
to
see
here-
and
this
is
also
is
that
because
it
was
reusing,
image
layer,
caching,
it
didn't
have
to
pull
everything.
It
was
a
mock,
a
lot,
much
smarter
than
that
and
I
believe
let's
actually
go
through,
and
so
this
was
the
latest
one.
So
I
pulled
a
bunch
of
layers.
Some
of
them
already
existed.
A
All right, so this thing already... let's see it. All right, no, it's just actually streaming all this stuff. Okay, so the logs actually go all the way back to the start. I wonder when logs get pruned and how logs get stored; is it just in RAM? We're storing the caching layer, boom. We have a bunch of layers; some of the layers are changing, some of them aren't. "kpack doesn't support GitHub webhooks yet; hopefully someday soon." All right, so, reusing a bunch of layers, boom, pushing it. We have a new digest.
A
It's still downloading quite a few images. Is this a problem with docker? I wondered: does docker have to download all of them? One of the things that I was expecting to see here is that just one of the layers in the middle would actually change, and I wouldn't have to pull more stuff. I'm surprised to actually see that.
A
Then that invalidates the hash and all sorts of stuff, and it starts re-downloading from the first changed layer. Okay, so this is docker being docker, which, you know, is not insane, because you could imagine that this idea that you do a surgical thing is very rare in the docker world, right? And so it actually throws off the diff IDs, I think.
A
Logically, what we have is a stack of layers, with, you know, scratch at the bottom, and then you have all of this; each of these layers is essentially a COPY or an ADD or a RUN. These things essentially end up mapping to my Dockerfile, right, it ends up being all of these things, and then even things like setting an environment variable turn into a new layer from the Dockerfile.
A
So this is like where an OCI image could go. Essentially, what you could say is, you can have a tar file here, and you want to mount this thing into /, right; and then you have another one, into /usr maybe; and then you have another tar file here, another layer, and you want to mount this thing into /foo/bar; and you have another one here, and you're mounting this one into /data, right?
A
We
get
much
more
flexible
about
how
we
think
about
composing.
These
things
now
I
actually
don't
know
if
OCI
supports
this
at
some
point
this,
when
we
were
talking
about
this
I,
don't
know
if
it
actually
landed
there,
but
now.
The
idea
here
is
that
you
start
viewing
this
as
a
tree
logically,
instead
of
actually
as
a
linked
list
right.
A
So
like
a
docker
image
today,
when
you
use
a
docker
file
to
essentially
a
linked
list
of
of
layers
that
stack
on
top
of
each
other-
and
it's
like
you
know,
it's
essentially
free-range
in
terms
of
each
layer
can
modify
any
files,
whereas
when
we
started
viewing
it
as
a
tree,
then
it
ends
up
being
a
much
more
structured
way
about
thinking
about
stuff,
and
you
can
think
of,
like
oh
I'm
gonna
modify
my
data
directory
without
modifying
/user
right.
These
things
are
totally
separate.
A
I want this buildpack to be sort of cordoned off to only do a certain set of things, right? And so that actually creates much clearer scoping and less danger of the buildpacks interfering with each other in unexpected ways. And then it does use a Merkle tree for the checksums, right? So the idea is that there's a hash here for the source, and then the hash for the next layer is essentially over the content here.
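The content-addressing being sketched here is easy to see in miniature: an ID is just the SHA-256 of the bytes, so changing one byte anywhere below a layer changes every digest that depends on it.

```shell
# a content-addressed ID: the hash is derived purely from the bytes
printf 'hello' | sha256sum
# → 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824

# change one byte and the digest is completely different
printf 'hellp' | sha256sum
```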
A
The question is, do you use the hashes as the identity for the intermediate layers, or do you have an independent ID for the intermediate layers, so that you can actually keep them without having to look at the previous ones? Oh, you don't see what I'm drawing? So I can show you... oh, I bet... I'm sorry. So there are hashes that sort of depend on the different layers. Yeah, I can undo some of my stuff here.
A
So
I,
you
know,
I
I'm,
still
figuring
out,
like
I,
haven't
dug
into
the
details
of
the
OCI
stuff.
Yet,
but
regardless
I'm
really
excited
to
be
seeing
it
make
progress.
So
alright,
so
I'm
gonna
wrap
up
now,
I've
gone,
really
long.
I
think
it's
a
great
super
interesting
project,
I
love,
taking
the
source
to
image
stuff
in
new
directions,
taking
build
packs
into
account,
breaking
this
out
from
being
sort
of
tied
to
a
specific
distribution
to
something
that
anybody
can
use
for
kubernetes
anywhere.
I.
A
Think
that's
super
super
exciting
using
crts
to
be
able
to
represent
this
stuff
is
also
super
super
cool,
because
that
means
that,
like
one
of
the
things
we
didn't
do
is
like,
we
have
k-9s
or
we
can
do
like
octant
as
a
solution
here
that
pulls
up.
We
actually
start
seeing
things
like
custom
resources
and
I
can
see
the
builds
that
got
built
here,
which
I
think
is
really
really
cool.
One
suggestion
by
the
way
is
that
it
might
be
cool
to
actually
have
the
images
actually
be.
Oh
I
they
do
have.
A
Fact
that
these
things
are,
you
know,
CR
DS
means
that
we
actually
get
oh
shoot.
I
didn't
actually
did
it
again.
I
don't
have.
My
screen
means
that
we
actually
see
sorry
about
that,
see
that
we
can
actually
start
using
the
the
fuller
set
of
tools
that
we
have
in
the
kubernetes
world.
So
that's
super
cool,
really
exciting
stuff
I'm
excited
to
learn
more
about
build
packs
and
thank
you
for
everybody,
from
the
team,
Stephen
and
and
others
so
who's
from
the
team,
so
Stephen
looks
like
Andrew.
You
were
on
the
team
also.
A
Cole
you
know
thank
you
so
much
for
joining
in
and
and
helping
make
sure
I
stay
at
Hunt
trap
and
it
didn't
didn't
get
myself
into
too
much
trouble
and
we
will
see
you
all
next
week.
I
think
I'm
gonna
I'm
at
least
free
to
do
it
next
week,
so
I'm
going
to
try
and
do
a
TGI
K
next
week,
the
week
after
that,
I'm
actually
going
to
be
at
spring
one,
and
so
we're
gonna
have
to
trade
off
a
little
bit.