From YouTube: TGI Kubernetes 072: Kustomize and friends
Description
Come hang out with Joe Beda as he does a bit of hands-on hacking of Kubernetes and related topics. Some of this will be Joe talking about the things he knows. Some of this will be Joe exploring something new with the audience. Come join the fun, ask questions, comment, and participate in the live chat!
This week we will look at Kustomize! A (now built in) system for managing and Kustomizing your Kubernetes manifests via patches.
If we have time we'll examine some other up-and-coming solutions in this space.
See https://github.com/heptio/tgik/tree/master/episodes/072 for notes and code.
Hello, everybody, and welcome to TGI Kubernetes. I am Joe Beda. I'm a principal engineer at VMware; I was the CTO of Heptio. We're now part of VMware, which is very exciting. TGI Kubernetes, for those who aren't aware, is a weekly live broadcast that we do where I take a technology, a new thing happening in the Kubernetes world, and play with it live. A lot of times I don't have a lot of grounding in it, so I'm learning along with everybody else, bringing some context and exploring.
We'll see what's going on, that type of thing. So we are on episode 72, which kind of blows my mind. When I started this I had no idea that we'd get this far. One of the things I like to do is start off by saying hi to everybody who's joining from around the world. When we first started doing these, I didn't realize that 1 p.m. on a Friday afternoon on the west coast of the US means that for a lot of other folks it's either going to be Friday night or Saturday morning.
So if you're spending your Saturday night with me, thanks for doing that. A quick shout-out to a lot of folks. Let's see, we have Baz Bank, who's saying hi from Lagos and wondering what the topic is today. We're going to be talking about Kustomize, and I'll explain the relevance of the image that I picked. Martin from the Netherlands, Olaf from Denmark, welcome, and Deep from Pune.
The image here is a bunch of patches, like the patches that you put on a uniform or what have you, because the whole idea with Kustomize is that we move away from templating toward something where you actually have deltas on top of YAML. You have a base YAML, and then you have ways to modify it with patches, using some of the patch strategies that are built into Kubernetes. Now, as this has grown a little bit, there is a little bit of specialized stuff
that's gotten mixed in. We'll dig into where patches fit, where other things fit, and the pros and cons of templating. That's some of the stuff we're going to dig into, but that's why I have the image here. So the first thing: we have a HackMD, which is a way that we can collaboratively edit a Markdown file. If you want to take notes as we go, that's super appreciated, especially if you want to take time codes and say, hey,
at this particular time something interesting happened. That'll help folks who come later if they want to find a part of the episode. So yeah, if we can crowdsource that, the link should be in the comments there, if you want to find it, and then we check these into GitHub later as we go. Alright, so... oh, we have Hillary from South Africa. My co-founder for Heptio, Craig,
he grew up in South Africa. Let's see, and Mohammed from Morocco, good to see you. Right, one of the other things that we like to do as we start these things out is go through a little bit of what's been happening around the Kubernetes ecosystem: announcements, new things. So that's what we're going to start with.
So, the first thing. All these links are in that HackMD, and they're all things that we'll be able to talk about and that you can find later.
The first item is about Contour and the load balancing situation, with a focus on working in multi-team environments and being respectful of namespaces. One of the things that we're introducing is a set of CRDs that move beyond Ingress, called IngressRoute. I think we've talked about those before, but here Steve went into some detail; he has a great video talking about how you can use some of the features of Contour to do blue-green deployments in a very easy-to-understand way. Very cool.
Next is CRI-O, which implements the Kubernetes Container Runtime Interface using Open Container Initiative runtimes. And oh yeah, Steve's saying that we just patched Contour for the new Envoy; there was a CVE out there. We're going to talk about that with respect to Istio in a second. So CRI-O is an alternative to using Docker for your runtime, and what we're seeing emerge in this space among the big players is that we have legacy
Docker, sort of, you know, driving Docker, which is really a full experience with an engine buried underneath it. We're moving toward containerd and CRI-O being the common runtimes. They're both maturing. containerd is sort of that engine underneath Docker without a lot of the other user-facing stuff on top of it, and then CRI-O is based on a set of other technologies, a lot of it driven by Red Hat, and they're kind of peers to each other.
The other player in the space a while ago was rkt; rkt is kind of fading into the background, and it's unclear exactly what's going to happen there. And then, let's see, solo Maddie asks: do you see a rosy future for Docker? I mean, I don't know. It's so hard to tease apart who's making noise in the community and who's popular.
Turning that into something where people are paying you money for goods and services over a long time is really a different thing. I think what we're seeing with Docker now is a real shift to focus on building a business out of it, and the rumors that I've heard are that they're building a real business, they're finding customers, and they're being successful there. So I think that's good for them; I'm happy.
It's definitely been an exciting ride to get to where we're at. Alright, so that's CRI-O, which has now joined the CNCF at the incubation level. There are three levels with the CNCF. There's sandbox, a very low bar to get in, just providing some support so that people can have neutral ground to develop and kindle community. There's incubation, which is sort of more support. And then there's graduated, which is a fully graduated project of the CNCF.
Let's see, so we have Rory from Scotland, Dare saying hi, and then Nick from just down the way; Nicholas is here with us also. Alright, so, let's see. Here's a great article talking about benchmarking CNI over very fast networks, looking at different CNI implementations and some of the pros and cons, and talking about things like MTU. This is an updated version, and it looks like there's lots of great info here.
Let's see. This is another project that has just joined the CNCF at the sandbox level; it's called Network Service Mesh. It's different from sort of the L7 service meshes; this is actually a service mesh at the L2/L3 layer. The idea here is that Kubernetes tries to make networking super simple, but in some ways perhaps that's an oversimplification for some use cases.
So the idea with Network Service Mesh is that you can start creating sort of virtual wires, you know, patch cables, between containers, and as you do so, those actually manifest as new interfaces inside the containers. This layers on top of regular networking in an extensible way. We're not reinventing networking for Kubernetes; instead, this is a way to start taking advantage of: hey, I have this pod, it needs its regular environment, but it also needs access back to, say, a corporate VPN.
You can then use this to say: well, create another interface, create another wire back to the corporate VPN. A lot of telcos have very specialized needs in terms of dealing with data at a more primitive layer than just TCP streams, and that's where this comes in. It's coming in at that sandbox level, that very low bar to get in, but it's a project that I sponsored into the CNCF, and I think it's really interesting. Let's see, other stuff: SIG Cluster Lifecycle.
This is something where a lot of folks at VMware are super involved. This morning we had an onboarding call, sort of welcoming new contributors into the SIG; here's a really bad screen capture of Tim. That was really successful. A lot of people joined, both on YouTube and on the Zoom call, and it was really great to see. So if you're interested in seeing what it's like to contribute to Kubernetes, get involved in a SIG.
This is a great place to start to get an idea of that. Let's see, there's a survey here on sort of Windows use cases. I haven't actually gone through the survey, and it just starts by asking for your name, but George found it and has some context on it. There's a great article here talking about pod security policies. This is not something that we've done on TGIK yet, and I think it would be a great topic, something I'd really like to get to at some point.
There is an update to Istio because there were a couple of CVEs, a couple of security vulnerabilities, in Envoy, and the way that Istio was using it tickled them. So if you're running Istio, this is something that you want to take pretty seriously: go ahead and update. And then, like Steve said, we've updated Contour for this also. Let's see, so let me keep moving; we're moving fast here.
So you can go here and see all the different folks who are talking, all the things that we're looking at doing. Super exciting stuff there, and I'm really excited about how this page came together nicely. Alright, so the other thing I wanted to call attention to is this couple of projects. This is a project that an engineer at Pivotal, I believe, put together, called ytt. It's a templating engine for YAML, and it's a really interesting take on it. In some ways
it kind of feels like a real language, so you're not learning a new language if you're familiar with Python, but then it provides sort of structure-aware templating for this stuff, which I think is super interesting. We're going to talk about the different approaches. And then there's something paired with that, because ytt just does templating:
you know, one of these files in, YAML out, that type of thing. And then there's a better sort of applier, which you can think of as sort of a super version of kubectl apply, called kapp. That gives you a lot of this, and I think, if you see a screen like this, it's reminiscent to me of what you'll get out of using Terraform, or perhaps out of the Pulumi stuff; we did an episode on Pulumi, as an example.
I think that's super interesting, bringing some of that experience to sort of a native YAML Kubernetes flow. I haven't played with this stuff deeply yet; this is the type of thing that I think I'd love to do an episode on at some point as well. Yeah, so it's Starlark in YAML comments.
So it's a really interesting combination there. And then, okay, oh, these are things that I had left open as I was getting ready, okay, cool. So that is sort of my... oh, and then, let's see, Martin posted a link here in the comments; let me pull it up. It's looking at Helm versus Kustomize, which is very topical to what we're going to be talking about today. So I'm going to grab this link and throw it into our notes here, because I actually have... let's see, Helm vs. Kustomize. I can't type, yeah.
Did you write this? Are you, like, digging into this? Oh, this is Matt's. I didn't know this is Matt's, okay, cool. So this is Matt looking into it, Matt Farina, okay, cool. I know Matt. Let's see, yeah, there's a picture of Matt; he's at Samsung. I can probably see their office from here, and I think they're still there, so cool. Alright, let's see. He's asking: what's the history behind apiVersion? Was it originally envisioned as the way to extend Kubernetes? That's a good question. I think
the internal API infrastructure at the time when we were coming up with Kubernetes had its own ideas about how to version schemas for APIs and how to version APIs. We borrowed a lot of ideas from that, and that really turned into the apiVersion and kind stuff. We've learned lessons along the way; I'm not necessarily the person to go through every blow-by-blow of how that stuff happened.
So let's go ahead and get started, and we'll start digging into Kustomize and what Kustomize is. We'll start out looking at kustomize.io. This is the landing page. Now, the status of Kustomize is really interesting. It started as a sort of side project out of some ideas, as we talked about templating: what is the right thing to do? What should we do? Templating, not templating?
It came out of looking at the larger issues around, you know, how kubectl apply evolves. It started as a subproject of SIG CLI, it went through a couple of name changes, and with version 1.14 it's been merged into kubectl in a pretty interesting way. So it's gone from being sort of a standalone tool to something that's built into kubectl. Now, this is not without controversy, and I have some links in the notes; I'm not going to dig into all of it.
There is some drama over it, yeah, so I can talk about sort of why I think there was drama. The first thing to recognize is that this is a place where there are, like, a thousand ways to skin this cat, and I have to admit that the way Kustomize solves some of these problems is not my favorite way of going about it. I'll try to call attention, as we go through it,
to some of the pros and cons of the approach that Kustomize is taking. I look at the problem that Kustomize is solving, and my take on it is that there is no one-size-fits-all solution. I think that we can probably come up with solutions that will work for a lot of users, and maybe Kustomize is that solution. But it brings strong opinions; Kustomize is an opinionated solution, and it's being built into the default tool, which is kubectl. And I think some people view kubectl as being non-opinionated,
just a "hey, apply some YAML" type of thing. It feels like a mismatch. It feels like we're injecting a bunch of opinionated workflow into something that was a relatively raw, unopinionated tool. And it really comes down to the definition of what the purpose of kubectl is and how it moves forward.
Do we want to bless one solution for this? Is Kustomize at the same level as other solutions like Helm or ksonnet or jsonnet or the ytt stuff? There aren't clear answers to a lot of this stuff, so I think a lot of the drama was having to face the question of what exactly the role of kubectl is moving forward. So that was one reason why there's drama there.
The second reason is that I think some folks felt that this was snuck in without actually going through a lot of vetting across the larger community, and there's an article that I put in the notes that really goes through a bunch of the play-by-play here, with some links and stuff.
One of the things that we're introducing is this KEP process, Kubernetes Enhancement Proposals. To make matters worse, the KEP process itself is a work in progress, and so it's been a bumpy road to get to the point where it's smooth and useful and doesn't feel like a lot of friction for the sake of friction for folks. So some of this is that, like, some of the early Kustomize stuff maybe didn't use a KEP, and then they started using the KEP. KEPs are kind of a pain in the ass.
We need to make them better; we're still rolling out new processes. We don't want to change the goalposts on folks when they're in the middle of something: like, hey, I was doing something, now this KEP thing came along, and now I have to do a bunch of extra work when I thought I had approval to make stuff happen. So a lot of this is just, you know, a fast-moving project with a lot of stuff going on. The other thing
is that there are a lot of people with opinions here, but they don't necessarily have skin in the game; they're not actually in the SIG doing the work and making it happen. I'm one of those, right? Like, I've shown up to SIG CLI meetings, but at the end of the day I'm busy; I haven't actually been putting my work in. So at a certain point I have to be like: if I were going to do it,
maybe this isn't the way it would have ended up, but I'm not the one doing it, so I can't, you know, say, hey, nobody else should do it until I actually find time to do this, right? And so I think some of this is testing that: hey, you know, if a SIG is driving something, that's part of that SIG. Well, it's up to the SIG to figure that stuff out. Another example is SIG Cluster Lifecycle; that's a place where we, as a company,
VMware, are putting a lot of time in, and we're showing leadership there. That means that as we go through and as we marshal that community to do things like kubeadm and Cluster API, there are going to be opinions, and there are going to be decisions that may be controversial. But at the end of the day, it's the folks doing the work who get to make those decisions, and I think Kustomize is a great, great example of that. So that's the drama. That's my sort of, like, hey,
it may not be the way I would have done it, but I'm not the one doing it, so, you know, it is what it is. So yeah, I think we're moving past the drama. It's never going to be over completely, but, as we say in the open source world, PRs accepted, right? If you want
to change the direction of this stuff, roll up your sleeves and get involved, and, you know, folks would love to have you. Alright, so that's my deep, deep analysis of that. Anyway, with Kubernetes 1.14 it's built into kubectl. There's a little bit of, like, marketing going on here; in terms of documentation, there are two places where you're going to see documentation.
I don't know if he wrote this, but I definitely see sort of his fingerprints on it, which I think is awesome, because the new documentation for kubectl is really, really good, in terms of just the description. No, it's really good, but it also makes the assumption that Kustomize is the preferred way to use kubectl, which again I think is controversial, because you'll see things like, you know, as we look at apply here, it'll say things like "it is recommended to run apply against a kustomization.yaml."
It goes through a lot of the features of Kustomize and how it works, and, yeah, so let me sort of, like... I've been reading up on it, yeah. So I can tell Phil wrote the docs, and it looks like the docs were probably also used to generate this; I'm not sure, but it definitely has that look to it. And Phil...
Phil was at the center of that drama, dealing with the backlash there, but I think we're getting to the other side of it, which is really good. Alright, so the idea here is that we have kubectl apply with -f, where you can point it at a directory of YAML; it then goes through and pushes that out to the cluster. That is sort of the basis of Kustomize.
Is that a full templating solution? Or is 90% of the mucking with the YAML that users want to do something where we can actually make a really easy experience, and then have an escape valve for mucking with that YAML in a structured way without sort of parameterizing everything? And what I'm going to do here is... let's look at... ah, I'm shocked, because I actually said, hey, let's convert something to Kustomize, and I was going to take the cert-manager chart from Jetstack.
I was really asking: what does it take to actually convert that to Kustomize? And I actually think it's probably too much to do for this particular episode, because I think it's really in-depth. As I'm looking through, like, the cert-manager chart here, it's actually a couple of sub-charts. There are some issues around sort of, like, setting up webhooks with authentication, which is a total pain in the ass, because they have a validating admission controller here; in past TGIK episodes we've done that manually. cert-manager
has this template parameter stuff, and this is painful. I mean, like, using Helm charts like this, consuming these templates, is actually a decent experience. Authoring these things can be really, really, really painful, and you can see that, like, there's all sorts of... here we have some templates and sub-templates, we're taking stuff, and there's this "hey, I need to indent stuff," and I don't actually, off the top of my head, understand the difference between indent and nindent.
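As an aside, my understanding of the Sprig function library that Helm templates use (worth double-checking against the Sprig docs): `indent N` indents every line of its input by N spaces, while `nindent N` does the same but also prepends a newline, so the template call can sit flush against the key. A hypothetical fragment, assuming a helper template named `app.labels`:

```yaml
# Hypothetical Helm template fragment; "app.labels" is an assumed helper.
metadata:
  labels:
{{ include "app.labels" . | indent 4 }}
  annotations: {{- include "app.labels" . | nindent 4 }}
```

With `indent`, the author has to leave the call on its own line at column zero; `nindent` folds the newline in, which is why charts tend to prefer it.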
So this is using Go templates, a text-based templating language, against YAML, and it's painful. But doing templating in general, as the only way to take something and parameterize it, leads to the point, especially when you're doing it for a wide audience, where you create values for pretty much everything that anybody might ever want to customize.
Okay, so Saad is saying that there are some regressions in cert-manager 0.7, which is something I wasn't aware of, so, you know, I'm not using cert-manager today, but I'm using it as an example. Let's see, so Fernan says: you think that's painful? Try the main template function; your eyes will start bleeding. Yeah, yeah. And then, yeah, the idea is that once you want to do label requirements, RBAC, that type of stuff, it gets like...
that's where you end up: like, you know, you start with a simple Helm chart, and the more people use it, the more they're like, hey, I love your Helm chart, but I want to tweak it this way; I love your Helm chart, but I want to tweak it that way. Every time somebody says something like that, you add another parameter, you add more sort of spaghetti to the templating, and you end up with something that eventually looks like this.
The nginx Helm chart is another great example of that. So Kustomize is really a reaction to this, and I think what we'll find, as we go through here, is: if we were to ask which parts of this template people change the most, can we make those easy, and then can we provide a way for people to change the parts of the template that weren't anticipated? So as I go through this, things like the name: well, yeah, I'm going to change that. The namespace,
all the metadata here, like labels. You know, the number of replicas is something that Kustomize actually doesn't make super easy, but I think it's probably pretty common; it's going to be interesting to see how Kustomize evolves for that. There are the labels that people are using. Something like the rollout strategy for the deployment, that's probably going to be a rarely modified thing; I think most folks will be like, whatever, I'm going to take the default. Again, more labels, annotations, the service account name: probably most folks are going to be happy with the default
there. Then there are things like the priority class default and the security context, all this stuff, and then you get down to the image. The image is going to be something that a lot of folks are going to want to customize, especially if you're using it for your own stuff versus a public release. And so, yeah, I think what's interesting: the pull policy, most folks are actually going to be happy with the common thing there. But this is the type of thing where, like, every little parameter then gets updated.
So this is the first thing that I think is really interesting here. The first observation is that templating with values turns into spaghetti, like what you see with Helm charts, because everything becomes a variable that gets input. But the usage of those things probably follows what you'd call a Zipf distribution, right, where there are a few values that get used by most people, and then there's just a long tail of stuff that's very rarely customized.
So that's where Kustomize is coming from. The idea here is that you take a directory of YAML and then you add a manifest file to it that provides a little bit more context. The first thing that does is explicitly list out all the YAML files that are there, because I think when you're doing this directory-of-YAML thing, it's easy to accidentally get something included or not included. And then you can do some really interesting stuff. You can be like, hey, override the namespace across all these things.
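As a concrete sketch (the file names and values here are hypothetical; the fields are the ones described in the Kustomize docs), that manifest is a kustomization.yaml sitting next to the YAML it describes:

```yaml
# kustomization.yaml, placed in the same directory as the manifests it lists.
# Applied with: kubectl apply -k .   (or previewed with: kubectl kustomize .)
namespace: staging     # override the namespace across all listed resources
commonLabels:
  app: demo            # added to every resource
resources:             # every YAML file is listed explicitly
- deployment.yaml
- service.yaml
```

The explicit `resources` list is what guards against accidentally including or omitting a file when pointing at a whole directory.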
Now, one of the things I don't know, because I haven't actually played with it, and this isn't clear in the docs, Phil: if I already have the namespace in this YAML, will this modify and overwrite it, or does this only apply the namespace if there's not already a namespace there? Then there are things like labels; this says, hey, add a label to every resource. And so if you do something like this, then you're going to go through, and, let's see, then you can go through.
So Mike here, I just saw this, says that templates were really complex for their workflow. So yeah, here's something Mike wrote; it looks like another templating tool. I don't have time to dig into it right now. I think this is what we find: folks will do something special-purpose for themselves, which I think is great; that's part of the power of Kubernetes. But I think a lot of folks
you know, don't want to have to write their own tool, right? And I think that's part of the drama, part of the conflict: should there be one built in, or do we want to make people pick a solution? Okay, so config maps. The idea is that most of the time, for your config map, you're going to have a file next to your stuff, and you want to edit that file.
Then, if you reference those things, it actually updates those references. What that does is logically make the config map, say, a child of your deployment. So when you change your config map, it actually causes a re-roll of the deployment. That doesn't happen naturally unless you're using something like this.
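A minimal sketch of that generator (names and file contents hypothetical; behavior as described in the Kustomize docs): the generated ConfigMap gets a content-hash suffix on its name, and references to it in the workloads are rewritten to match, so a content change produces a new name and therefore a rollout.

```yaml
# kustomization.yaml
configMapGenerator:
- name: app-config        # generated as something like app-config-<hash>
  files:
  - config.properties     # the file you edit, next to your manifests
resources:
- deployment.yaml         # any reference to "app-config" (envFrom, volume)
                          # is rewritten to the hashed name
```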
So that's super useful: being able to use a config map as a child of a deployment, versus something that's a peer of the deployment, is something that's super, super useful. It's super easy to update the container images, namespaces, and names; we talked about the labels and annotations; and then there's some other support for... oh, this is advanced stuff, sort of about how kubectl apply works.
You can have files that say: okay, I'm not using those common sorts of transformations that change the YAML; I want to do something a little bit more complicated. To do that, what you can do is specify essentially a diff on top of these things, and there are different types of strategies for doing the diffs. That's where that sort of patching semantics comes into play. So here's, you know, some of the documentation there.
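A sketch of one of those patch strategies, the strategic merge patch (resource names here are hypothetical): the patch file repeats just enough of the object to identify it, plus only the fields to change.

```yaml
# kustomization.yaml
resources:
- deployment.yaml
patchesStrategicMerge:
- bump-replicas.yaml

# bump-replicas.yaml would be a separate file: a partial Deployment
# stating only the delta, for example:
#   apiVersion: apps/v1
#   kind: Deployment
#   metadata:
#     name: demo
#   spec:
#     replicas: 5
```

This is the "deltas on top of YAML" idea from the start of the episode: the base stays untouched and the patch carries only what differs.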
And then, finally, there is this thing called config reflection.
It's essentially being able to, you know, create references from one place to another so that you're not repeating yourself, and so you end up with stuff like this here, where you have this dollar sign. This replacement in the new YAML is actually done by the Kustomize tool, and so this really is starting to go down the path toward templating. Now, the intent is not to make this a general sort of variable type of thing, but I think it's going to be interesting.
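That dollar-sign mechanism is the `vars` feature; a hedged sketch (object and variable names hypothetical): a var names a field on one object, and `$(VAR)` occurrences elsewhere are substituted by Kustomize at build time.

```yaml
# kustomization.yaml
vars:
- name: BACKEND_SERVICE      # referenced elsewhere as $(BACKEND_SERVICE)
  objref:
    apiVersion: v1
    kind: Service
    name: backend            # the field reference defaults to metadata.name

# Then, in a container spec in one of the resources:
#   args: ["--backend=$(BACKEND_SERVICE)"]
```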
Let's see. Lisa notes that config map and secret rolling updates via names is a common, valuable use case, but you actually don't want that when your app dynamically updates. Yes, that's a really good point. So one of the things that you can do here, and I'm trying to think... okay, the other really interesting piece of documentation here, if you go into the docs folder of the Kustomize GitHub, is that there's a kustomization.yaml which is fully annotated
with everything that you ever might want to set in a kustomization.yaml. One of the things that you see here is the config map generator, and there are generator options where you can actually set specific values on the resources generated from the generators. But then also you can do this disableNameSuffixHash, which actually says: okay, I don't want to treat config maps and secrets as a child of my deployments.
I want to treat them as a peer, and so this disables that. So now, when you update a config map with this, it won't actually cause a re-roll along the way. One of my criticisms, and I don't know how serious it is... let me make this a little bit bigger. So here are the generator options, disableNameSuffixHash. This is a global option for your entire Kustomize deployment.
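For reference, that option looks roughly like this in a kustomization.yaml (a sketch; the point being that it sits at the top level and applies to all generators at once, rather than per config map):

```yaml
# kustomization.yaml
generatorOptions:
  disableNameSuffixHash: true   # no content-hash suffix on generated names,
                                # so updating a ConfigMap no longer re-rolls
                                # the deployments that reference it
configMapGenerator:
- name: app-config
  files:
  - config.properties
```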
Similarly, as we look at... let me find... as we look at when we update images. When we update images, essentially what we're saying is that if I have an image named postgres, I want to update it to this new thing with this new tag. But I might actually use the same image in different places. Maybe I have a canary and I have a mainline, and that's part of my same thing.
I have canary built into my particular YAML setup, and sometimes those are going to be the same, sometimes they're different. So there's actually no way, using this mechanism, to be targeted; you'd have to fall back to the patching type of mechanism to be able to do this stuff.
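The image transform he's describing looks roughly like this (the new name and tag are hypothetical); note that it matches by image name across every resource, which is exactly why it can't distinguish a canary use of an image from a mainline use of the same image:

```yaml
# kustomization.yaml
images:
- name: postgres                          # match any container using "postgres"
  newName: registry.example.com/postgres  # optional rename
  newTag: "11.2"                          # applied everywhere the name matches
```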
Let's see, so Jax has posted a link here.
Yes, so the observation that the reflection stuff starts to become templating is something that other folks have actually started to look at also, yeah. So I think this is some of the, like... the approach that Kustomize is taking is very much one of: let's find the common things and make those super easy. I think it does a really good job with that, and it's obviously resonating with a lot of folks. But we do end up, I think, as we start managing more advanced use
cases, like, hey, I have the same engine but I want to use two different versions of the same app in the same directory of YAML: that's not something that it can support easily out of the gate. Similarly with, I want some config maps to have the suffix and some config maps to not: again, not something that it supports natively and easily. Being able to say, hey, I want to define,
like, you know, a rollout strategy across five deployments is not something that it can support easily without resorting to something that starts to look more like templating. Now, the one thing, and I haven't had a chance to dig into this, is that there is a lot of ongoing work to create a plugin model for Kustomize, and I'm going to be interested to see how that works: essentially taking the ideas around sort of, like, the secret and the config map generators
A
How do we make that something that is extensible? And then, as part of that, maybe a templating engine becomes a plug-in that this thing can actually call. So these things aren't always apples to apples as we look at them. Okay, so enough talking. What I'm going to do is take some YAML and convert it to Kustomize. I haven't done this before; we'll see what that looks like. There are two things that I want to actually be able to get to.
A
The first is: I want to deploy Contour — I'm going to take the Contour YAML and convert that to Kustomize. And then I want to take a simple application — specifically the demo app I created for Kubernetes: Up and Running, kuard — and convert that. That's a good place for us to start playing with things like ConfigMaps, because kuard lets you introspect and actually see some of that stuff. So, does that sound good? Should we dig into some code? I've been talking too much here.
A
So, what we have here: I have a kubectl version — kubectl 1.14, with Kustomize built in — going against a 1.13.2 cluster, and this is running the Heptio/VMware quickstart on AWS. So if I do kubectl get nodes, I've got a couple of nodes going on here; nothing super complex. Alright, so if I bring up my editor, what you'll see is I have Contour. When we do deployment stuff, the way that we do it is:
A
so, the one that I want to do is a DaemonSet with gRPC. Because there are different types of variants that we've written here, these end up being some symlinks, and then we actually have a simple script that essentially cats these things together into single YAML files. So this is totally janky.
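That cat-things-together script might look something like this minimal sketch — the file names and contents here are invented for illustration:

```shell
# Sketch of the "janky" concatenation workflow described: glue per-resource
# YAML fragments into one rendered manifest, separated by document markers.
set -eu
mkdir -p parts
printf 'kind: Namespace\n' > parts/00-ns.yaml
printf 'kind: Service\n'  > parts/10-svc.yaml
out=rendered.yaml
: > "$out"                         # truncate the output file
for f in parts/*.yaml; do
  printf -- '---\n' >> "$out"      # YAML document separator
  cat "$f" >> "$out"
done
```

The numeric prefixes on the fragment files give you apply ordering for free, which comes up again below.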
A
So, the one that I want to do is this daemonset-RBAC one, and you can actually see here all the resources that get generated out of this. Deck says the docs got updated — so Phil's working on this already — to clarify the namespace-override value here. So what's the answer? It's: setting the namespace will override the namespace even if it's already set. Okay, cool — that's something I wanted to test out and figure out anyway.
A
So yes, this is the janky workflow that we did for Contour. It actually works really well for the type of stuff that we're doing. We did want to create one YAML file because we wanted to provide a one-shot kubectl apply -f against a URL. But as we look over time, people have created Helm charts for Contour, and I
A
think it's going to be interesting to see if Kustomize is something that makes sense here also. So what I'm going to do is take all the stuff in this file and copy it into a new directory that we can work on. So I'm going to do mkdir contour-kustomize — I have a hard time typing "kustomize" — and then do a copy of the Contour deploy ds-grpc-v2 into here. Oh, what did I do? It's a directory — there we go. Alright.
A
So now, if we go through and pull up VS Code — let me close this — what we have here is a bunch of boilerplate: CRDs, a service account... and I think, yeah, because we have to have CRDs, we're defining those plus service accounts. Here's the DaemonSet for Contour, here are the RBAC definitions, including role bindings and such, and then here's the Service for Contour. And so we're going to go ahead and set this up.
A
A
If you're from the UK you'd probably expect an "s" there, so there's a little cultural imperialism going on with the "kustomization" spelling. The common things we're going to start with here: we need an apiVersion and a kind. Now, this is common across everything in the Kubernetes ecosystem. We've got a lot of YAML files flying around; the idea here is that we want some standard way to say what kind of YAML file this is.
A
What is the version, and what type of object is it supposed to be? So this is sort of a schema identifier that we've got going on here. It feels a little bit like boilerplate, but I think, as we accumulate all these things over time, this is the type of thing where you thank yourself later for doing it. And then we just have our resources going on here, and what I'm going to do is mv 01-common.yaml over to common.yaml.
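The numbered-prefix scheme being renamed away here, and the ordering rationale explained next, might look roughly like this in a kustomization.yaml (file names illustrative):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - 01-common.yaml     # namespace + service account must be created first
  - 02-rbac.yaml
  - 03-daemonset.yaml
  - 04-service.yaml
```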
A
Now, the reason why we did the 01, 02 prefixes here was to actually make sure that we had a specific order when you sort these things. The common one does things like the namespace and the service account — you've got to do the namespace before you actually do stuff that lands in the namespace; otherwise, you know, hilarity ensues. So, allegedly, this is going to be enough for us to be able to actually set this stuff up. So I can do kubectl
A
apply -k . — let's see what happens. Boom, we did it; it worked. Now, the namespace that we're talking about here is heptio-contour — get pods — yeah, so there's our DaemonSet initializing, up and running, one on each node. So that's how easy it is to get started, and this moves you from that sort of directory-of-YAML to something where we can start doing some more interesting stuff.
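The flow just shown, as commands — these need a live cluster, so they're a transcript sketch rather than something runnable here:

```shell
kubectl apply -k .                      # build the kustomization and apply it
kubectl -n heptio-contour get pods      # watch the DaemonSet pods come up
```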
A
where it tries to take the common things that you do with templating and actually cover those with explicit support — with things in the kustomization.yaml file, plus this ability to do patching that we'll talk about later. So I would say it's a little bit of a peer to Helm templating, but there's actually room, moving forward, to use templating instead of patching in addition to what Kustomize is doing. So at some point they may be more complementary.
A
B
A
"delete -k" works — I didn't know that worked. So yeah, Ross is saying it would be nice if jsonnet was embedded into kubectl instead, since hand-coding YAML shows its limitations pretty quickly. Yeah, I think I agree — it would be nice if we had that. I think part of the concern here, Ross, is that jsonnet specifically is very polarizing for folks: some people love it, some people hate it. And I think that's one of the things we're seeing about anything in this space — it's very polarizing. All right.
A
So here's the problem: I went to go ahead and do this, but I had actually already changed the namespace. So I'm going to go ahead and do the delete... nope. kubectl get namespaces — oh, it deleted the namespace. So how did it know? Did I not hit save? I think maybe I didn't hit save — because what did it do? It went through and deleted the namespace first, and then tried to delete the other stuff. Alright.
A
So it turns out the delete flow for Kustomize is something that is still very much a work in progress. Kustomize is a good way to actually bring up new resources; if you find that you have to delete a resource, Kustomize is actually not good at that, and I'm sure there are active discussions. Previously, in kubectl, there was this feature called prune, where you could say: hey, across a set of resources with a specific label, if they're not specified in this directory, then I want you to delete them.
A
It's kind of a dangerous thing, so I think people were really scared of it. So prune never really saw wide adoption, because it was a really scary thing to do. A lot of that healthy skepticism — being scared of accidentally deleting stuff — is being imported into Kustomize. It's stuff that folks are looking at.
A
I think an easy thing to do here would be some sort of tombstoning, where you could delete something and then actually say: hey, I want to have a delete directive for this resource in your kustomization.yaml. I'm sure there are a lot of folks already talking about this. Okay, let's see — I think we're cleaned up now. So let's go through, and now we can deploy this thing to the vmware-contour namespace. Oh — okay, so this is the next bug.
A
It's that if the thing that you're actually creating is a Namespace, the namespace override doesn't actually know how to fix that. And I think this is an interesting thing — you'll see this sometimes when people expect different things from Helm: do you expect your directory to actually create its own namespace, or should it deploy to a namespace that already exists? So we did not update this one.
A
Yeah — wait, but now it's a ClusterRole. Okay, yeah, because these were cluster-wide, so they were unchanged. The service got created, everything got created, so now I can do kubectl -n vmware-contour get pods, and I see these things up and running. Cool, all right. Now, the next thing we can do is go through — let's do some labels.
A
A
There are some common label conventions. These are really interesting, and I think sig-apps actually helped to do this. So there's this idea that we want to find some common labels that folks can use, with the idea that if you see these on resources, they can actually start lighting up in UIs and stuff. So this is something interesting that you might want to play with and look at.
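Those conventions are the `app.kubernetes.io/*` recommended labels; stamping them onto everything via Kustomize might look like this sketch (values invented):

```yaml
commonLabels:
  app.kubernetes.io/name: contour
  app.kubernetes.io/instance: contour-prod
  app.kubernetes.io/managed-by: kustomize
# Caution: commonLabels is also applied to label *selectors*, which has
# consequences explored a little later in the episode.
```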
A
A
If we go through and look at the labels that we have on Contour already — so we have app: contour going on here. So Miguel says — and Andrew is saying: okay, you could add another base resource and a kustomization.yaml with the new namespace to pull that in. So you can actually compose some of the directories that you're merging. And Miguel: yeah, so namespace creation is tricky.
A
Does it catch the case where a label change invalidates the selectors of a Service or NetworkPolicy from the same apply? I don't know exactly, but one of the things I think is important to recognize about Kustomize is that it is schema-aware. So it knows that, hey, if I change the name of this particular ConfigMap — it knows that, hey, this thing over here is actually referring to a ConfigMap.
A
B
A
I think folks in general — including, you know, the next version of Helm, Helm v3 — are moving away from specialized server components here, and when we do need special server-side work, we're actually trying to do it in a way that is very much friendly with upstream. Like a lot of the apply stuff that's happening here: there's this thing that API machinery has been working on really hard called server-side apply. That's going into alpha, I believe, in the next release, in 1.15 — I think that's where it is.
A
A
What's going on? Oh, fascinating — all right, so this is interesting. What happened is that we changed the selector along with the labels. And this is actually one of those things where you may want to add a label — a bunch of labels — to a particular resource, but you may not want to change your selectors; you may want to just additively add stuff. Because what happened is: since I changed the selector that was used, it actually started a new copy of all of those pods when we applied it.
A
It essentially reconfigured the DaemonSet to say: hey, the DaemonSet is now associated with pods in a different way. And as it did so, it sort of orphaned those other pods, because now there's no controller in charge of those two pods, and so we now have to go clean up. So this is one of those things: as you change selectors on things like DaemonSets and Deployments, you've got to be really, really careful. Was there a warning in the docs about that, I wonder? I think I remember seeing that, Phil. Let's see.
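Newer Kustomize versions later grew a `labels` field precisely for this add-labels-without-touching-selectors case — a hedged sketch, since this episode predates that field:

```yaml
labels:
  - pairs:
      team: platform
    includeSelectors: false   # label the objects, leave selectors alone
    includeTemplates: true    # also label the pod templates
```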
A
B
A
Let's see — one of the things that we could do... actually, let's play with this with the application, because I don't want to go through everything. One of the things I wanted to say is that there's a great example of using Kustomize with something complex like cert-manager, and Jetstack has a post here talking about this with some examples. I wanted to go through it, but I didn't have time to really get everything set up.
A
So that's where we'll go here. Okay — I think this is a good example of how you can take a core application and then customize it in different ways using overlays. So let's go through; let me close some stuff up here. That's the blog post — it's in the notes if you want to find it. A couple of other things that I want to call out here that I think we might have time to play with:
A
we did the common-label stuff; there's similar stuff with annotations — that's not going to modify the selector, so no big deal. You specify your resources. There are the configMapGenerator and secretGenerator — we talked about that stuff. And there are bases here. So this is the idea that, hey, it turns out that this particular directory is a delta
A
on top of this other particular Kustomize base — Kustomize can layer on top of other Kustomize setups. And you can do that either within one Kustomize tree, where you can have overlays built into it for different environments, or you can have the bases done this way. So oftentimes people will set up a directory structure where the base is referenced as ".." or something like that — I believe that's how it works.
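A typical base/overlay layout of the kind being described — directory names invented for illustration (later Kustomize versions fold `bases` into `resources`):

```yaml
# Layout:
#   base/                kustomization.yaml + the core manifests
#   overlays/staging/    the kustomization.yaml below
#   overlays/prod/       same idea, different patches
#
# overlays/staging/kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
  - ../../base           # this directory is a delta on top of the base
namespace: staging
```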
A
One of the things that confuses me is that you can have these things be GitHub links, or git links. The problem is: what if I have a directory called github.com? How does this thing know that I'm actually talking about a network resource versus a directory? I personally would really, really, really like to see this be explicit. You know, you could do something like CSS's url(...) syntax, and I think that's unique enough that you're not going to have a file with parens in its name like that.
A
Maybe you would, but I would love to see something more explicit than just saying: hey, that looks like a domain, let's go. Sonique says: I think it supports that too. Okay — because this kind of gives me the heebie-jeebies, since we're mixing two types here. Okay, so you can do file:// or https:// and the like. There's also danger here, because Docker created problems for themselves initially: similarly, they supported registries without specifying TLS or no TLS.
A
They would actually do either, and it led to a lot of insecurity. Early on in Docker it took for-freaking-ever to get to the point where everything was over HTTPS for that stuff. But yeah — an explicit scheme, a sort of URL, would help there. And then there are these merge things, where it's essentially: I'm taking my resources and merging on top of them. There are two different strategies; we can go into some details on that.
A
You can specify CRD schemas. Like I said, Kustomize is schema-aware, and that means that when you're using CRDs you actually need to be able to import the schema. It ends up being OpenAPI with some annotations, sort of like: hey, if this is an object ref, what is the type of the ref? So then it knows: hey, this is a Secret; I need to go ahead and rewrite that name in a certain way. And then there are the variables and the image stuff that we had talked about.
A
And so, if we look here, our YOLO way to run this stuff is like this: that runs a single pod, and it's a great just-getting-started type of thing. But what we have here is the image we use, and so I'm going to do kubectl create deployment... okay, so here's something else I'm going to file away, about kubectl run. We want to do --image=<this>. No, I don't want that — what did I do wrong here?
A
So we can do a dry run — oh yeah, this is a way to sort of generate YAML. This actually gives you a message saying, hey, kubectl run is deprecated. But it turns out there's a lot of stuff you can do with kubectl run where the preferred way is kubectl create — yet you can't do everything with kubectl create that you can do with kubectl
A
run, especially around exposing ports and stuff. But we'll deal with that later. I can do kubectl create deployment — we're going to call it kuard — --image=blah... no, how come I copied that? That's not what I want. kubectl create deployment kuard --image=... I keep mixing up my paste buffer.
A
--dry-run -o yaml, that's what we're going to do, and we're going to redirect that to a file called deployment.yaml. Okay. We're going to touch kustomization.yaml — hopefully I spelled that right — and go into our editor. Here we have kuard-kustomize, and yeah, I'm going to remove some stuff that's just kind of bogus that I don't need, and this should be good enough to go. We have app... blah blah blah... creationTimestamp, okay. And then kustomization.yaml — oh, that's the previous one, but I'm going to copy this stuff here, and we're going to edit
A
deployment.yaml. Okay, so I just think that should work. So I do kubectl apply -k ., and we now have it running — kubectl get pods, okay. So now we've got that running. That was essentially, you know, table stakes — getting this stuff set up.
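The deployment.yaml that the dry-run emits is roughly this skeleton (trimmed; the empty `creationTimestamp` and `status` stanzas are the "bogus" fields being deleted, and the tag is illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kuard
  labels:
    app: kuard
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kuard
  template:
    metadata:
      labels:
        app: kuard
    spec:
      containers:
        - name: kuard
          image: gcr.io/kuar-demo/kuard-amd64:1   # the book's demo image; tag illustrative
```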
The next thing we're going to do is create another resource here; we'll call this service.yaml — kubectl create service... oh wait, okay — let's do just a ClusterIP type.
A
A
Let's see — so, related to deprecating kubectl run: they're deprecating "kubectl get --export". Okay, yeah, export did a bunch of stuff to strip fields that you probably don't care about, but it was implemented server-side in a weird way, so I actually think that one's probably good to deprecate. I don't know — when you deprecate these things, it's hard to get every single use case actually nailed down, though.
A
If I look at service.yaml, let's see what we have here. So, the selector — I think it assumes that the app is kuard based on the name. I wonder: or does it actually do a get to figure that out? Because this is create service... if I name it foo — oh yeah, that's it: it assumes the app is kuard based on the name. Okay, so this is actually fascinating: create service
A
just assumes there's a convention that the service name is the same as the app that you're dealing with. So if we wanted to call this kuard-service instead, it would assume that the app is called kuard-service. But it turns out that if you just name these things the same, this will all just work out fine. So we'll name this port "http".
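The generated service.yaml shows the convention being discovered here — the selector is derived from the service name, not looked up (sketch, trimmed; port numbers illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kuard
spec:
  type: ClusterIP
  selector:
    app: kuard           # assumed purely from the service name "kuard"
  ports:
    - name: http
      port: 80
      targetPort: 8080   # illustrative
```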
B
A
So that should be up and running, and then we're going to add service.yaml and we can go ahead and apply that. Now, one of the things I might want to do — oh, and then let's do an Ingress. Okay, so do we have kubectl create ingress? Do we have that? There is no create ingress. Ingress
A
is a second-class citizen in the Kubernetes world, so now I have to go through and actually do the copy/paste thing like everybody else in the world does with Ingress in Kubernetes. And we're not going to go through — I don't have a domain mapped right now, so we'll do a single-service Ingress like this, for Contour.
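A single-service Ingress of the sort being pasted in — a hedged sketch, written in the shape of the API group of this era:

```yaml
apiVersion: extensions/v1beta1   # the Ingress API group at the time of this episode
kind: Ingress
metadata:
  name: kuard
spec:
  backend:                # no host rule: every request goes to one service
    serviceName: kuard
    servicePort: 80
```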
A
Alright. So — and I kind of don't like that you can't just drop files in; you have to actually list them in the resources. That feels like it's not quite DRY — you know, don't repeat yourself. But it's also very explicit, which I do like. So I kind of like it and I kind of don't like it. All right, so now we created the service and the ingress: kubectl -n vmware-contour get service.
A
A
kustomize edit add resource... oh, okay — can we do that, "kustomize edit"? Okay, so this is the other thing. I installed the kustomize tool with "brew install kustomize" earlier today. But if I do "kubectl kustomize .", that says: do a render of it — it actually outputs all the stuff, runs the thing to emit the YAML. So kubectl kustomize is not the same as the kustomize command, I don't think. So if I do edit, does that work?
A
So if I do kubectl kustomize edit, it thinks I'm trying to do a render. So there is, I think, functionality in the kustomize command that's not built into kubectl — or at least that's what it looks like. I'm still figuring that out, because I don't think "kustomize edit" actually made its way into kubectl. Yeah, alright. So then we've got that up and running, we're going through the Ingress, we've got the whole thing set up. That is awesome.
A
A
What we're going to go through now is the configMapGenerator, which is going to be interesting. So we're going to do a configMapGenerator — it's kuard-cm — and we'll do the literal thing... actually, let's do it from a file, like kuard-config, something like that. I'll now create a new file, kuard-config — oh, VS Code thinks it's XML — and then we'll put something clever in here.
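The configMapGenerator stanza being typed here is roughly:

```yaml
configMapGenerator:
  - name: kuard-cm
    files:
      - kuard-config        # file contents become a key in the ConfigMap
    # alternatively, inline values:
    # literals:
    #   - greeting=hello
```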
A
"This is something clever." And so now, if I do kubectl kustomize ., I think what we'll find is that we end up with a ConfigMap that has the contents of that file imported into it, and that it gets this unique name. I haven't referenced it anywhere, but this still works. So this is the configMapGenerator working: loading stuff from a file, making it into a ConfigMap.
A
A
Okay — so these things, I'm just pulling from a subdirectory, because a lot of times you'll have, like: oh, here's my CA, here's my client cert, here's my own server cert. You'll name those different things on disk, but as you import them into a Secret you'll actually rename them, so the names in the Secret differ. This actually implies a certain layout on disk that may not be the right thing for you, since you can't do that remapping here. That's probably okay, but it's just a little awkward.
A
Okay, so it says "configured" here, which means something changed. And now, if I go to my browser, I can go to kuard's file-system browser, and here we have our kuard-cm — we actually have the config: "This is something clever." So this actually pushed that config file out in a way that I can get at it. Now, here's some of the magic. Let's look at what happened: remember it gave it that unique name, and so we see here that we have —
A
this is actually the name of the ConfigMap that got created. It's also schema-aware: it knows that, hey, this particular field buried inside of the Deployment is a reference to a ConfigMap. It's able to say: well, that ConfigMap is called kuard-cm; I want to append that hash to it. So it's smart enough — context-aware enough — to append that hash, and that's what we see going on here. And so, if I do kubectl
A
get configmaps, you'll see that there's the particular ConfigMap. And I'm getting spammed — it's my mom; I'll write back later. And then, let's see — the next thing we can do is go through and edit the kuard config. "No, I mean really clever." I don't have anything clever to say. Let's see... I know: "factorio is a great game."
A
Miguel, are you lost here? All right, let me explain a little bit more of what's going on. I think — let's see what's happening, and then I'll make sure that I really drive this point home. But now I actually go in here and I see my config — my config got updated. Okay, so here's the scenario, and I think this is something that's a little bit magic, so let me know if you're lost here. A lot of times,
A
what will happen is — so I can create a ConfigMap here. Let me do this: I'm going to create a new file here — oh, not a new folder, a new file — called static-config-map.yaml, okay. And then what I'm going to do is kubectl create configmap foo... oh, I didn't want that — I actually did that on the cluster.
A
What we'll find here is that the dynamic ConfigMap — the one that's generated from the generator — gets the gobbledygook actually written into its name, and Kustomize modifies the template of the Deployment to include that gobbledygook; whereas here, with the static ConfigMap, it didn't modify anything — there's no gobbledygook, nothing like that. So now, if I go through and do kubectl get pods, what I see is that I have a pod here; it's been running for 47 seconds now.
A
What I'm going to do is go through and modify the static one. This won't cause a redeploy. Really — because now I've modified this, and if I do an apply here, what happens is that we see it configured the static ConfigMap, but it actually didn't change the Deployment at all. So that means it didn't cause a re-roll of the Deployment — it didn't cause a redeployment — so this pod didn't get restarted; it actually hasn't been restarted.
A
Nothing changed in that template for the Deployment to say: hey, I need to push a new version out. Now, if I do the same thing with the dynamic one — so let's go through, I change this kuard config, I'm going to make another change — what's going to happen here? That gobbledygook at the end is a hash based on the contents of the ConfigMap.
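The idea can be sketched in a few lines of shell. This is not Kustomize's exact algorithm — it hashes the serialized ConfigMap object, not just a file — but it shows why changed content yields a changed name, which then ripples into the pod template:

```shell
# Content-addressed naming sketch (NOT Kustomize's real hash function).
set -eu
printf 'greeting=hello\n' > kuard-config
suffix() { md5sum "$1" | cut -c1-10; }      # first 10 hex chars as a suffix
name1="kuard-cm-$(suffix kuard-config)"
printf 'greeting=goodbye\n' > kuard-config  # edit the config file
name2="kuard-cm-$(suffix kuard-config)"
echo "$name1 -> $name2"                     # different suffix => new object name
```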
A
So, when I change the contents of the ConfigMap, it's going to change the hash, and then Kustomize is smart enough to actually change that name deep inside of the Deployment template. That change will trigger a redeployment and cause a rolling restart of all of your application containers. So now, when I do this, what we see is that the Deployment is configured, and now when I go through and get pods —
A
what I see is that this thing just restarted; you can see it got restarted with a new one. And so the nice thing here is that I can have a ConfigMap, based on a file, being used from a Deployment; when I change that ConfigMap, it automatically causes that Deployment to treat it as an upgrade and do a rolling update. Now, it turns out that some programs are able to deal with this by hitting Kubernetes directly to read the ConfigMap —
A
— also, it takes a little while, but Kubernetes does know how to update the value of a ConfigMap on disk out from under the application. That's the type of thing where smarter applications know how to reread their configurations dynamically — a common pattern at Google, not a common pattern outside of Google. So this is a place where Google-isms leaked into Kubernetes a little bit.
A
There are some applications — I think somebody brought up Traefik, the ingress controller — that are smart enough to know how to reload their config dynamically from disk. They wouldn't need this capability. But I think most applications, once they've read a config file, really need a restart, or at least a re-exec, to be able to reread it.
A
A
This is something where Deployment does something similar to manage its child ReplicaSets, and there was a whole bunch of work in Deployment about being able to garbage-collect old ReplicaSets. So there is a problem here — the old generated ConfigMaps stick around — and I'm kind of wondering whether the Kustomize folks actually have an approach to deal with that. All right, let me catch up on the comments here
A
a little bit. So Ross is asking: is there a way to opt out of the hashed ConfigMap names?
A
Yeah, there's a thing, but it does it globally. Miguel asks: if you have a horizontal pod autoscaler on the deployment and it scaled up, does it scale back down to the original deployment replica count on apply? It should not. This is actually a subtlety of kubectl apply, and I don't know if the Kustomize folks took that on. Yeah — if you're using HPA, then what you have is that replica count... let's go back to the documentation, because this is actually —
A
this is one of those things that makes some of this stuff so freakin' complex. If you look at the documentation here for field merge semantics, it goes into this a little bit. The motivation section actually talks about exactly that case: it says other fields, such as replicas, may be owned either by human users — which is where you specified it specifically in your deployment —
A
— or may be owned by the API server or controllers. For example, replicas may be explicitly set by the user, implicitly set by a default value of the API server, or continuously adjusted by the HPA. And so part of apply is that you have to figure out who actually last touched which field, and when you do an apply, do you want to stomp over the stuff that other things are doing? A bunch of these apply semantics exist to say: hey, you know, the horizontal pod autoscaler is mucking with that field —
A
I'm going to leave it alone; I'm not going to go ahead and touch it. So that's the real hard part about apply — exactly that case. All right. Sonique asks: could disableNameSuffixHash be done for something specific — certain ConfigMaps or Secrets? Doesn't look like it. I think the right thing to do in that case is to use a static resource, like I did — not use the generator for your static things. It's a little bit of a bummer that you can't be more targeted with that stuff, so yeah.
A
I do know that the parallels for that particular use case inside of Google were with MapReduce, because what we would do with MapReduce at Google is launch essentially a seed job into the Borg cluster, and then that would go through and launch all of the mappers and reducers it would need. As those did their work, it would go through and delete those particular jobs — and so that's very much the operator pattern that we're seeing in Kubernetes.
A
We didn't have CRDs in Borg, but a lot of echoes of operators or controllers were used in MapReduce. And as you did that, what you found is that the Borg config language was both usable as a command-line tool, similar to Kustomize, but could also be embedded as a C++ library inside Google, to run that stuff server-side for things like MapReduce — along with a lot of the problems of, like, what do you actually do with variables?
A
A
Let's see — so Phil: disabling name generation for specific Secrets or ConfigMaps can be done by putting them in a separate base, then disabling generation for that base. Okay, so this is something where you have to sort of create a different Kustomize base — or what do you call it, a Kustomize directory? I think this was one of those things with ksonnet too: we had a hard time with naming. Do you call it an app? Do you call it an app config?
A
So that's one way to work with that. Deck says: OPA is promising for patching controller-generated deployments — mutating webhook controllers. I am dubious of server-side mucking with YAML like that. I think the problems that we just talked about, with the HPA mucking with the replica count, get amplified dramatically as we have more and more things mucking with the YAML that you're actually posting. To a certain degree, people build an expectation that when I create a resource with YAML and I read it back —
A
the amount of mucking with that is actually going to be kept to a minimum. And when we see things like sidecar injection for things like Istio, oftentimes users don't deal with the pods directly. So, like, I can write a deployment, I can read the deployment, everything works fine. It's only when the ReplicaSet finally creates the pod that the injection actually happens. And so that means a lot of times there's sort of a cutout between the deployment and the pod, where it doesn't impact users.
A
The more that we have things like mutating admission controllers mucking with the YAML, the more I think we're gonna break some of the configuration utilities, like Kustomize, like apply. So I think it's a dangerous, dangerous thing to actually look at. So, let's see, somebody posted a link to Gus's, let's see, Kustomize lib. I haven't seen this. So this is a jsonnet library that reimplements the core transformation operations in Kustomize. Yeah, so this is... I love Gus, he's a jsonnet believer.
A
This is a ConfigMap; I want to do dynamic naming, you know, substitution here. To me this doesn't seem onerous. It's not quite as magical as the stuff that Kustomize does, but I don't think that's too bad. Just, yeah, the settings are global. They're global, as in... oh, so McHale is asking: aren't those settings global? Do they bubble up to the base through bases and inheritance? I don't know. I don't know. Okay, so I'm having fun. This is really interesting.
A
Stuff. I want to do one more thing here, just to play around with it, and then I'm gonna have to sign off because we're running out of time. I want to play with changing a container image here. So what I can do here is I can do images, and the image that we're going to be changing, I want to see.
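The images field he's typing into kustomization.yaml looks roughly like this (the image name and tag are stand-ins for the ones in the demo):

```yaml
# kustomization.yaml (fragment; names are illustrative)
resources:
  - deployment.yaml

# Rewrite every reference to the matching image in the rendered output.
images:
  - name: example/color-server   # image name to match
    newTag: blue                 # tag to substitute everywhere it matches
```

There's also a CLI shortcut, kustomize edit set image example/color-server=example/color-server:blue, that writes an equivalent stanza into kustomization.yaml.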
A
Did this actually go through? All right, so this configured deployment.apps. I can go here, all right, and now I'm running the blue version, right? Oh wait, no, I'm not running the red. Did it get rolled out? No, okay. So this is interesting, I have to say. I want to go from blue to red here, so you have to specify the entire image, fully, right now.
A
Oh, I don't have a red. So what do I have? What did I name them? I have different versions here, and I named them based off of colors, but I just did this. Well, a blue, green, and purple. I need to create a red one. Okay, so we'll do purple, and I wanted to do something quickly where it actually changes some colors and stuff. Apply that, got configured. So now I go through, and now we're running the purple version. Let's change it to the green version.
A
So yeah, so this is how you can go through and actually say: hey, I want to modify the image. But again, this is global, so any version of this image is what I want to modify. What you could do, though, and this is where, you know, it starts looking kind of template-ish, is I could create this, like, you know, invalid image here, right? And then I want to say I want to rewrite, you know...
A
It'll go ahead and do it. So, like, if you want to actually sort of have logical names for your image, and then actually use your kustomization YAML to say, here's the true name for the image. At some point it doesn't matter what I put there; it's just a unique image name and tag that I can then override in my kustomization. So I think that's probably a way to let Kustomize help a little bit here.
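The placeholder trick he's describing, where the manifest carries a logical image name and the kustomization maps it to the real one, might look like this (all of the names here are invented for illustration):

```yaml
# deployment.yaml refers to a logical, never-pullable name:
#
#   containers:
#     - name: app
#       image: LOGICAL_APP_IMAGE
#
# kustomization.yaml then rewrites it to the true image:
images:
  - name: LOGICAL_APP_IMAGE
    newName: registry.example.com/myteam/color-server
    newTag: green
```

Each overlay can supply its own newName and newTag, so the base manifests never need to hard-code a registry or version.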
A
Alright, so hopefully that was some useful stuff. I definitely learned a lot. Thank you all for sticking with me on this one. This is such a deep topic. It seems so freakin' easy: we're just managing YAML, how hard could it be? It turns out it's really, really, really hard. And so it's easy to think that this is an easy problem, it's just YAML, but it turns out that there's some real depth here, and this is something that, you know, a lot of
A
smart folks have worked on for a long time. So thank you very much. I think I'm off next week. I think Chris is going to return next week for TGIK. I'm not sure what the topic is gonna be yet, but I'm sure it'll be fun. Thank you for joining me, and we will see you all next week. Oh wait, Phil just posted something here. I want to make sure that we get this before everything disappears.
A
What did you post there, Phil? We got Kustomize meta-issues. Okay, so we've got: setting replicas as first class, like images; fine-grained control of ConfigMaps, they came up a lot; order of resources means nothing. Yeah, I did see that, that you guys create a graph there and then delete in reverse order. So those are the things that we saw, anyway. So.
A
Oh, I'm sorry, I'd switched around my face there. There's the link that Phil put in there, with a meta-issue covering some of this stuff, some of the ideas and stuff that we discovered here. If you guys have other, you know, gripes or ideas, definitely put them into the Kustomize repo there. Oh yeah, we didn't get a chance to actually really do the patching in the overlay. I have a lot of ideas there; that's going to have to be something else in the future.
A
We have: changing labels duplicates pods, that was something; and then the namespace stuff, the fact that the namespace override doesn't actually handle namespaced resources. Those were the two other issues that I think we hit. All right, thank you everybody, and I will see you all next week.