From YouTube: TGI Kubernetes 084: Kubernetes API removal and you
Description
Come hang out with Duffie Cooley as he does a bit of hands on hacking of Kubernetes and related topics. Some of this will be Duffie talking about the things he knows. Some of this will be Duffie exploring something new with the audience. Come join the fun, ask questions, comment, and participate in the live chat!
This week I'll be demonstrating a cluster with the soon-to-be-removed APIs already removed. We will explore how that affects your use of Kubernetes and what you can do to mitigate the problem should you hit it!
Good afternoon everybody, and welcome to TGIK number 84. In this episode we're going to be exploring the changes that are coming up in the 1.15 release of Kubernetes, sorry, the 1.16 release of Kubernetes. So it's still a release away; we're still kind of playing around with 1.15, but 1.16 is baking right now, so it's coming soon. We don't have a lot of time on that, but yeah, let's talk about it a little bit more.
So what this episode is going to be about is the fact that, in 1.16, a bunch of the deprecated APIs that have been being served over time are going to be removed from the Kubernetes API, and in this episode we're going to explore some ways to see how that will affect you, what to do, how to fix it, and those sorts of things, and we're going to kind of walk through some of those examples. So I'm really looking forward to this one.
This is another episode in which I will shout out to kind, because I'll be using kind clusters to do validation of those things and that sort of stuff. And yeah, that's what we're up to in this episode.
Let's take a look at our chat setup here. All right, hello, Martin from the Netherlands, hello. Hello wherever you are, Guilty. Hello Maddy, good to see you. Rory from Scotland, it's always good to catch up with you on this. George is back from vacation.
He just had a lovely time in Crete for a bunch of time, and he's going to join us here and kind of help moderate the chat, so if you have questions that I miss, he'll let us know. Maroof from Bangladesh, and Andre from Brazil, Bogdan from Romania, and Jo from Kyoto. You should be sleeping, Jo, but I understand, the jetlag is a thing. Wellfall from Poland, Koufang from South Korea, and Herman from Berlin, Germany, hello.
It's great to see you. I love that it's such a worldwide audience. It's so incredible that here I am, sitting at my laptop in beautiful San Francisco, California, and talking to all of y'all live all around the world. It's such an incredible thing. Hello, Joy from Richmond. Is that Virginia or California? I think Richmond is one of those city names that is everywhere. I'm kind of curious which.
Virginia, okay, cool, yeah. So welcome, welcome, welcome. What we do next usually is to kind of go through some of the weekly news here, so let's just take the camera over to that screen and start to get through that stuff. So this is our HackMD, and George has already put up a link to our HackMD.
If you want to contribute, or if you have some ideas about things that I could talk about this week and you think I missed something, go ahead and throw it in there, and then I can actually talk through those things in this episode. The first thing I want to start off with is something that really impresses me every time it happens.
What's coming in that new version: this is from somebody I work with here at VMware, and he does an incredible job. I can't say enough about how great this is, really just giving a really good, deep, and yet somehow concise overview of everything that is coming in 1.15. It's really incredible. I have an echo problem... no, I don't see an echo problem.
The next thing up: one of the merges from four days ago, which actually comes from Last Week in Kubernetes Development, the lwkd.info page, which basically gives you a kind of a view into what's happening with the code base within Kubernetes. This one is exciting to me. I think this is something that I've been kind of waiting to happen for a little while, and I know there's been a lot of people working on it.
A
Can
you
bring
up
another
container
that
has
access
to
that
running
pod?
That
I
can
use
to
kind
of
debug
this
state
of
that
pod,
so
that
we
can
dig
into
what
the
actual
problem
isn't?
Maybe
I
can
fix
it?
Well,
this
proposal
and
the
work
associated
with
this
proposal
is
actually
trying
to
actually
provide
that
set.
That
capability,
which
I
think
we
would
we'll
be
great
I,
think
opinion
is
very.
And then we have Dims from Boston. I didn't know that Dims lived in Boston. Are you living there, or are you visiting? I'm curious. And then we've got Marco from Milan, Italy. Italy is such an incredible place. I was actually just talking to George about my trip to Italy. I got to visit Venice while I was there, and kind of tripped around a couple of different areas, kind of in that upper part of Italy, and really enjoyed it.
He actually does live in Boston. Well, that's awesome. Well, thank you for logging in, good to see you. We have Oh from Copenhagen, Denmark. We have Alexis from France, hello, hello again. Such a worldwide community, it's such an awesome thing. All right, so we talked about ephemeral containers and how that's coming in right now. Obviously, this was merged four days ago, so this is still some releases away from you actually being able to put your hands on it and play with it.
Let's see what's happening here. So a 30-minute outage was what happened to the hosted Prometheus service, and some customers were affected. This is them being very transparent about what happened, and I think that's great. I love it when companies actually put up things that help us all learn from the experience, because if we can learn together, we can really go very far together. If you're learning in isolation, it's very hard to really level up. And so I love it.
I love it when people put up things like this and describe what's actually happened. So: the co-founder's hosted Prometheus service is based on Cortex. To achieve zero-downtime upgrades of the Cortex ingester service, it requires an extra ingester replica during the upgrade process. This allows the ingesters to hand over, but ingesters are big: they require four cores and 15 gigabytes of RAM per pod, 25% of the CPU and memory of a single machine in their Kubernetes cluster.
In aggregate, we typically have more than four cores and 15 GB of RAM of unused resources available on the cluster to run these extra ingesters for upgrades. So basically, what they're saying is that when they spin up a new deployment of this set, they have to kind of spin up an extra one to kind of hold them over during the migration to the new version.
However, it is often the case that we don't have 25% of any single machine empty during normal operation, and you can see how this would slip through the cracks here. This is the thing where you may not actually have enough room on a single machine, although in aggregate you do have that much room. So it's a little misleading.
Let's see what happens here. So: on Thursday, we deployed four new priority classes to our clusters, critical, high, medium, and low, and we had been running these priorities on an internal cluster with no customer traffic for one week. The medium priority class was set to be the default for pods that didn't specify an explicit priority, and ingesters were set to the high priority class, which would mean that the ingesters would have a higher priority and thus be able to kick other things out when the time came. On Friday, we spun it up.
So a replica set preempted an ingester on our production Cortex cluster. Notice: when the preempted pod went away, the replica set created a new ingester pod to maintain the correct number of replicas, and this new pod was given the default medium priority, and as such preempted another production ingester. This caused a cascading failure that eventually caused the preemption of all the ingester pods for the production Cortex clusters.
A
Yeah,
so
now
that
you
have
the
power
of
actually
ejecting
other
pods,
you
still
have
to
kind
of
think
about
how
that's
gonna
balance
out.
You
don't
want
it
necessarily
this
cascading
failure
models.
That's
an
interesting
one,
that's
about
as
much
details
as
I
can
spend
on
this
article
right
now,
but
it
is
a
very
good
read
and
they
go
into
some
detail
about
what's
actually
happening
there.
They
talk
about
their
takeaways
and
what
to
do
and
what
they
learned
in
that
again.
I
can't
say
enough
about
that.
Wait. Yeah, okay, cool, the refresh problem was gone, right? You can actually see the right page now. Okay, good, all right, good. So definitely check that out if you're interested in Python, or if you're just interested in exploring the patterns that might put your application in a better position to kind of, you know, thrive in a Kubernetes environment. That's a really great article that kind of talks through some of those things. Next up on our list, we have a business executive's guide to Kubernetes.
A
Now
this
is
I,
think
a
spicy
one,
it's
interesting,
so
this
is
actually
from
just
Frisell,
and
many
of
the
things
that
she
highlights
here
are
still
very
much
top
of
mind
for
a
lot
of
people
who
are
adopting
kubernetes
and-
and
some
of
them
are,
you
know,
have
aged
better
than
I.
Think
the
article
lets
on
like
there
are
some
references
to
a
default
dashboard
that
isn't
a
default
dashboard.
It's
not
improving
it
is.
A
You
have
to
we've
spent
some
time
in
CGA,
Kate,
I'm
kind
of
talking
about
how
to
configure
that
in
a
more
secure
way,
but
it
does
highlight
the
fact
that,
like
even
though
in
our
case,
the
dashboard
is
not
default
inside
of
kubernetes,
it
doesn't
mean
that
the
problem
itself
doesn't
exist
right
like
if
you
deploy
an
application
and
you're
not
actually
spending
the
time
it
needs
to
secure
that
application.
That's
still
a
problem
that
you're
gonna
have
to
deal
with,
whether
it's
on
whether
it's
on
kubernetes
or
on
any
platform.
A
Still
very
it
still
calls
attention
to
the
problem
that
is
related.
Upgrading
kubernetes
is
still
a
tricky
thing.
There
are
some
good
examples
of
patterns
for
how
to
do
it
and
pattern
is
for
to
watch
out
for
and
yeah
I
mean
I
just
spent
this
week,
working
on
a
presentation
that
I'm
going
to
be
giving
with
Ian
cold
water
in
a
couple
weeks,
at
blackhat
and
in
the
presentation
we're
talking
about,
you
know
the
what
we
mean
by
kubernetes
defaults
right,
and
so
this
is
really.
This
is
really
highlighted
again
to
me
this
week.
A
That
Cooper
need
is
this
complex.
You
know
like
if
you're
thinking
about
it
from
a
security
surface,
there's
a
lot
to
think
about
and
if
you're
thinking
about
it
as
a
development
surface,
there's
a
lot
to
think
about
I.
Think
it's
worth
the
price
of
admission,
though,
because
there's
a
lot
of
reliability
in
the
system
and
I
do
think
that
having
a
single
API
with
which
to
reason
about
all
these
things
is,
cannot
be
understated
in
its
value.
I.
Think
it's
I
think
it's
like
it's
a
super
valuable
thing.
A
So
it's
an
interesting
article
check
that
one
out
I
would
like
to
give
a
shout
out
to
all
of
the
people
at
gopher
con,
including
my
friend
Eric
Chang,
who
I
worked
with
at
core
OS
he's
a
great
guy
and
he
actually
wrote
a
a
presentation
for
overcome
this
year.
That
is
called
PKI
for
Gophers
I
highly
recommend
it.
Eric
is
probably
one
of
the
sharpest
pkn
folks
I
know.
There
are
and
I
think
that
was
true
of
a
number
of
people
that
I
work
for
the
core
OS,
but
definitely
check
this
out.
If you have not seen it: it was actually authored by Vallery Lancey, who is another incredible engineer in our community. She's taking on the world, just doing so much great stuff for the community, including this blog post, spending a lot of time in SIG Network and a lot of other places. But this blog post, as we get into it... I'm not going to read the whole thing to you, but we are going to spend some time on the finer points before we start exploring how this might affect you and how to solve the problem. Before we do that, though, let's go back to chat and see how we're doing on the chat stuff.
A
So
I
like
this
from
France
hello
and
Sam
from
North
Carolina
Anna
Alexander
from
st.
Paul
good
to
see
you
Murph
all.
Can
you
show
the
page
sorry
about
that
show
the
page
I
did
fix
it.
I
hope
I'm,
sorry
that
it
was
the
I
was
I,
was
freezing
up.
There
looks
like
it's
all
good
now,
Paulo
from
a
Montevideo
or
a
wood
way,
I
hope
I'm,
not
slaughtering.
That
too
much
Abhinav
is
asking.
Maybe
a
question
dennis
is
replying
to
a
question.
A
Oh
you're,
talking
about
the
timing.
Yeah,
the
timing
is
hard
because
the
probably
there's
no
perfect
time,
I
there's.
No,
if
there's
no
way
for
us
to
pick
a
time
that
is
like
going
to
be
perfect
for
everybody,
and
so
it's
like.
We
just
have
that's
why
we
record
them,
though
you
know
like,
if
you
can't
make
it
in
person,
you
can
always
just
jump
into
the
chat.
A
If
you
have
other
feedback
for
us
a
court
about
the
session,
the
comments
usually
remain
open
for
a
little
while
and
we
try
to
actually
reply
to
them,
and
then
we
also
have
to
get
have
repository
where
you
can.
Actually
you
file
issues
for
things
that
are
coming
that
you
want
to.
They
want
us
to
talk
about
and
actually
just
provide
us
feedback
that
way
as
well.
But at the same time, from a security surface, I'm still kind of wracking my brain around it, but it is pretty awesome. I'm really looking forward to it. Fabio from Brazil, hello, and Christopher from Germany, great to see you all. And Joe, my buddy Joe, my other buddy Joe, so many buddies named Joe. Anyway, my buddy Joe from Atlanta, I hope you're looking forward to the weekend. I know I am. So let's jump into this blog post, and let's talk about what's happening here.
A
Deprecated
means
that
for
quite
a
while,
like
some
time
ago,
a
particular
API
path
within
the
kubernetes
api
has
been
moved
to
another
path
and
the
old
path
has
been
a
marked
as
deprecated,
meaning
that,
ideally,
you
would
toward
the
new
path.
An
example
of
that
is
the
deployment
object.
You
know
the
humble
deployment
object,
the
workhorse
imperialists,
so
the
deployment
object
is
now
inside
of
apps
v1
and
has
in
previous
versions
been
instead
of
extension,
this
v1
beta
1.
It's
been
an
absolute
one,
beta
1
and
absolutely
1
beta
2.
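As a concrete sketch of what that move looks like in a manifest (the name here is just illustrative), the visible change for a Deployment is the apiVersion field:

```yaml
# Before: deprecated group/version, removed in Kubernetes 1.16
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: example
---
# After: the group/version served going forward
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
```

Note that apps/v1 is stricter than the old paths in places, for example a Deployment's spec.selector is required there, so swapping the apiVersion string alone isn't always sufficient.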
A
You
know
during
those
previous
versions
of
kubernetes,
probably
still
have
the
API
version
set
to
them,
like
the
older
deprecated
API
version
and
in
1/16
we're
going
to
remove
those
from
the
API
server,
which
means
that
if
you
try
to
create
a
manifest
that
has
that
old
API
version
in
it
you're
going
to
get
rejected
and
I
have
plenty
of
examples
of
that
that
we're
going
to
walk
through.
So
you
can
see
how
that
works,
but
I
do
want
to
make
sure
that
we
understand
the
terms
right.
A
Deprecated
and
removal
are
two
different
things
that
happened
in
two
different
time
periods
right.
So,
even
though
these
things
have
been
deprecated
for
some
time,
they've
still
been
in
the
API
server
being
served,
and
so
the
user
experience
has
been
well.
I
can
deploy
any
deployment
manifests
and
it
will
work
inside
of
kubernetes
and
then
user
experience
is
about
to
be
actually
that
manifest
has
to
use
the
correct
API
version
to
be
deployed
into
communities.
A
It
can't
be
any
version
anymore,
and
you
can
see
that
economy
there
and
I'm
expecting
that
to
be
to
provide
some
friction
for
people
as
they
move
into
116,
but
yeah.
So
that's
what
I
wanted
to
highlight.
It's
a
it's
a
difference
between
those
things
right,
like
it's
pretty
darn
important
that
we
understand
the
that
you
know.
Deprecated
does
not
mean
removed,
but
now
we
are
actually
talking
about
removed.
A
So,
as
we
move
down
the
blog
post,
we
also
see
ways
to
actually
bring
up
a
cluster
to
validate
this,
and
so
in
my
set
up
that
we're
about
to
go
through
I'm
gonna,
walk
you
through
how
I
set
up
the
environment
so
that
I
could
use
to
validate
these
things.
And
basically
I
did
exactly
this
thing
right.
I don't want this to just be signal boosting, but, you know, feel free to tweet about it and get out there and tell people about it. I think it's an important one, and, from an empathetic perspective, I can feel the pain coming. I know that people are going to be really surprised by this. Even though there have been Kubernetes blog posts and lots of notice about it, I know that people are still going to be caught on their heels by it, so help me spread the word. Please help me spread the word on this.
Conftest is a relatively recent project, I think, but what it does is it allows you to use Open Policy Agent's Rego to write rules to validate a set of manifests, maybe as they lie on disk, rather than as they are submitted. And I haven't... I don't know if there's an example of this for the expired APIs. It'd be kind of interesting if there were.
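Short of a full conftest policy, even a plain grep over a manifest directory will surface most references to the removed groups. This is a rough sketch, not a conftest example; the directory and file below are made up purely for illustration:

```shell
# create a throwaway manifest that still uses a removed API group (illustrative)
mkdir -p /tmp/manifests
cat > /tmp/manifests/ds.yaml <<'EOF'
apiVersion: extensions/v1beta1
kind: DaemonSet
EOF

# flag any manifests that reference workload groups removed in Kubernetes 1.16
grep -rnE 'apiVersion: *(extensions/v1beta1|apps/v1beta1|apps/v1beta2)' /tmp/manifests
```

A real conftest rule would do the same match against the parsed `apiVersion` field and could also key on `kind`, since some resources (like Ingress) stayed in extensions/v1beta1 past 1.16.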
Right, Gareth Rushgrove did do a demo of conftest, in the community meeting, that was really great. What else do we have? Maddy: thanks. Dims was thinking it might be useful to check for that. Yeah, that's true, that would be good. All right. I think there's an approval thing that has to happen if there is a URL in chat, and I'm not sure how that got turned on or how to turn it off.
Yeah, so that's the blog post, that's the info. Help me get it out. Take this link and spam it everywhere you can, and make sure that people that you know that are using Kubernetes talk about it at meetups. Get the word out. I think it'd be really important to chat about it, and since we have such an incredible audience kind of all over the place, help us get the word out in every direction. This should also be a distributed system. It's great stuff. Alrighty.
A
So
anything
else,
I
wanted
to
talk
about
Oh,
actually
yeah
could
reduce
deprecation
policy.
This
actually
talks
about
the
policy
behind
deprecating
and
removal
of
things,
and
it's
definitely
worth
a
read,
but
it
talks
about
both
the
promotion
of
api's
and
also
the
removal
and
deprecation
of
api's,
and
it's
it's
a
it's
a
good
one
to
read
for
sure.
A
So,
if
you're
curious
like
what
the
lifecycle
of
an
API
or
not
or
API
versions
and
stuff
within
the
core
kubernetes
project,
when
I
find
that
generally
things
even
that
we're
not
in
kora,
follow
the
same
pattern.
So
if
you
like,
look
at
cluster
API
or
any
of
those
other
things
you
may
DM
well,
if
you
medium
is
in
core,
but
you
get.
The
idea,
like
other
other
projects,
are
following
the
same
pattern,
because
it's
a
pretty
well
established
pattern.
A
So
it's
a
it's
a
good,
a
good
article
to
read,
if
you're
in,
if
you're
curious
about
that
sort
of
stuff.
Moving
on
Ducker
1903
was
released.
That's
worth
talking
about.
There's
a
lot
of
stuff
in
1903
I'm,
not
sure
that
it's
actually
trusted
by
kubernetes
yet
but
I'm
looking
forward
to
it.
There's
a
bunch
of
stuff-
that's
actually
happening
here.
That
is
actually
pretty
neat,
including
some
of
the
changes
from
that
hero
going
into
doctor
in
1903.
A
That
allows
you
to
do
this
to
run
it
as
not
the
daemon
itself,
as
non-root
I'm,
not
sure
if
that
was
completely
if
that
work
is
complete
yet,
but
I
know
that
it's
that
it's
being
merged
in
so
yeah
good
stuff
there
we
go
yeah,
allow
docker
D
as
a
non
root
user
in
rootless
mode.
So
that's
some
of
the
stuff.
A
You
know
really
good
stuff
lots
of
exciting
things
happening
still
in
the
project,
and
it's
actually,
you
know
remember
that
this
is
a
lot
of
the
stuff
is
not
just
happening
to
docker.
It's
also
happening
to
container
D,
so
it's
like
exciting
and
kind
of
both
of
those
spaces,
so
yeah,
definitely
definitely
good
stuff.
Coming
I
think
that
kind
of
helps
someone
solve
some
of
the
interesting
stuff
coming
with
containers.
I
think
that
covers
our
last
week
in
review.
Let's
talk
about
the
set
up,
oh.
A
All
right,
pod
man,
you
know
I-
have
an
explorer
pod
mint
too
much
pot.
Man
is
interesting
because
I
think
it's
one
of
the
very
few
projects
that
can
make
use
of
sea
herbs
v2,
which
provides
kind
of
a
better
delegation
model
offer
for
things.
I
explored
a
question
on
Twitter.
Recently,
it's
like
the
question
was
like
when
you
think
of
docker
and
docker.
A
Both
of
these
two
patterns
do
create
containers
on
the
underlying
post,
but
but
they
are
intrinsically
different
and
in
the
conversation,
a
lot
of
things
really
came
up
and
I
thought
it
was
really
interesting.
One
of
the
things
that
came
up
was
around
understanding:
how
better
to
delegate
resources.
A
So
pot
man
is
one
of
the
projects.
I
think
that
actually
makes
use
of
cgroups
v2,
which
is
actually
a
step
in
that
direction
and
I
think
that
that's
a
pretty
interesting
project
because
of
it
and
I
think
that
containers
in
general
ashley
and
as
they
walk
down
that
path,
will
will
continue
to
get
better
and
more
more
I
know
it's
we're
not
resilient
per
se,
but
like
that,
we
can
make
better
decisions
about
how
to
provide
delegation
of
resources
within
kind
of
the
container
model.
Maybe
that's
kind
of
what
I'm
trying
to
say
all.
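If you're curious which cgroups version your own Linux host is running, one quick check (a minimal sketch, assuming /sys/fs/cgroup is mounted as usual) is the filesystem type at the cgroup mount point:

```shell
# cgroup2fs => unified cgroups v2 hierarchy; tmpfs => legacy v1 hierarchy
stat -fc %T /sys/fs/cgroup/
```

On a v2-only host, tools like Podman can take advantage of the unified hierarchy's delegation model; on a v1 host they fall back to the legacy controllers.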
You know, I'm doing that because my laptop is just a Lenovo X1 Carbon, and I kind of need the processing power of the laptop to make sure that this stream stays reasonable, and so instead I'm kind of farming out the creation of containers and my clusters to my NUC, which has a much more powerful CPU, much more RAM, and that kind of stuff. So I'm using kind to create clusters, and if I do kind get clusters, I can see that I have two clusters that are created, and to create them...
I've created these two directories, apis and noapis, which are the names of the two clusters. If I go into apis, I can look at the configuration for it, and the configuration in this case is basically just the standard configuration that I use for a kind cluster, in which I'm turning off the default CNI, because usually I want to explore one of the other CNIs, like Flannel or Calico or one of the other things, and this enables me to disable the CNI directly and then just apply whatever CNI is interesting to me.
A
This
configuration
describes
an
override
setting
for
the
API
server
and
which
I'm
providing
an
extra
args
argument,
setting
runtime
to
that
same
string
that
we
saw
in
the
in
the
example,
and
so
what
this
lets
me
do
is
create
a
kind
cluster
which
is
just
running
locally
and
docker.
That
allows
me
to
simulate
the
pattern
around
turning
off
those
api's
in
a
115
cluster.
So
in.
What's interesting here is that we can actually see that the cluster where I haven't turned off these APIs exposes more of the versions than the one where I've turned them off. What I'm trying to highlight here is that in kubectl api-versions, which is a way of seeing the versions that are available to you inside of your cluster, we can see that some things have been pulled out and some things remain, like extensions/v1beta1.
One thing of interest to me was that, sometimes, just for quick and easy Kubernetes networking, I still use Flannel, right, because I'm not necessarily looking at network policy. And so what I do is go to the Flannel documentation, or the Flannel repository, and in here, and I only just noticed this as part of testing, and we're going to kind of walk through how this works, there is a...
There's a kube-flannel manifest that they host, last updated five months ago, and the kube-flannel manifest still refers, unfortunately, to the expired APIs. And so, if we look at the configuration here, we can see that there is a DaemonSet being created, and it's being referred to as extensions/v1beta1, and if we go back to those deprecated APIs, we can see that DaemonSet will no longer be served from extensions/v1beta1, so that's actually our problem. But what I wanted to show...
We can see that Flannel has come up and is running, CoreDNS is now running, everything works as you kind of would expect. But to kind of give you a view into what the UX is about to be, let's go ahead and deploy that same thing in our other cluster. So why don't we do kubectl get pods...
There we go, thank you. I should get some surrounding text; it kind of gives you a little more view into what's happening there. And so we can see that two groups have been removed. What I was struggling with is that it doesn't tell you that, even though extensions/v1beta1 still exists, there is stuff that has been removed from that API group, and I can't figure out a way...
Show
you
that,
without
like
dumping,
the
entire
open,
API,
spec
and
dipping
that-
and
you
know,
in
the
words
of
many
amazing
people-
ain't
nobody
got
time
for
that.
So
what
would
have
happened
if
you
upgraded
the
existing
cluster
to
the
new
version?
Existing
resources
had
removed
versions,
great
question
and
we're
going
to
talk
about
it.
A
Okay, so here we are. We have deployed kube-flannel on the old cluster, but we were unable to deploy kube-flannel on the new one. What can we do to fix it so that we can deploy kube-flannel? And then we'll talk about that migration piece, which I think is important, because it also talks about how the storage part works, but we'll talk about that in just a second. So I've got my kube-flannel manifest here on disk, I've just downloaded it, and so, rather than trying to apply it directly, I do kubectl convert.
Now, kubectl convert is also deprecated. It will eventually go away, and the argument is that you'd be able to apply compatible APIs within the deprecation allowance and you would be able to download the new version that way, right. So if you had a version that was not compatible: as we've seen in the last, I don't know, gosh, nine releases of Kubernetes, maybe eight releases, those deprecated APIs all stay around for many releases, and so...
...they're deprecating kubectl convert because it is actually a client-side thing, and they want to make it so that we don't have any client-side things. And so the way that you'd be able to work around the deprecation of kubectl convert is to basically apply that object to the cluster and then do a kubectl get at the desired version. We'll walk through that as well, and I think that actually relates to the question Dmitry asked, but first, let me call that out.
Oh, I see, the view of the screen isn't right. Okay, sorry about that, all right. So what we're seeing here is that some things are changing in order, because kubectl convert basically is going to recreate the manifest, and so it's a little dirty in some ways, kind of understanding how these things are related.
A
So,
in
our
examples
we're
seeing
that
on
the
left
side,
the
old
one
has
extensions
v1,
beta
1
daemon
set
and
on
the
on
the
right
side,
we're
seeing
the
change
to
API
version.
Apps
v1,
still
daemon
set
same
thing
over
here
for
the
server's
for
some
of
these
other
manifest
parts,
but
basically
it's
modified.
The
group
that
is
associated
with
this
kind
to
the
new
version.
A
So
if
you
kiddo
convert,
is
one
of
the
tools
in
our
disposal
to
go
ahead
and
make
this
change
such
that
it
will
apply
to
the
new
version
so
like
our
back
another
one
of
the
API
is
that
V
1
beta
1,
for
example,
is
deprecated
for
our
back
and
in
our
example,
which
is
kind
of
like
hidden
from
us.
It's
been
moved
to
use
V
1.
If
we
look
at
the
other
changes,
we
can
see
the
daemon
said
obviously
moving
from
extensive
v1
to
apps
v1.
I'm not sure that's actually going to play out; we'll see how that works out. It doesn't seem like it's quite right. I don't know why that change would have happened, but we'll see if it deploys, and see what we're looking at here. So that's actually kind of interesting: it's changing the roleRef; that might be a bug in kubectl convert, but let's try it. So now we've got our new manifest. Let's go ahead and do apply -f kube-flannel-new against this cluster.
Okay, so that got us to the point where that's configured. But one of the interesting points about this, and we're going to have to figure out how to fix this problem or how to address it, is that kubectl convert did not convert the PodSecurityPolicy. So even though there are tools at our disposal, it doesn't quite work the way it could, and that's an interesting one. It might be, in my opinion, a bug, but I'll be filing a bug against a deprecated tool.
And so that was just the networking component. We haven't actually even talked about some of the applications or some of the other things that could hit us here, but already I had to do more work to get Flannel deployed into a cluster that has those APIs turned off than I did to get it deployed with those APIs turned on.
Obviously, what I should do here, and what I'm going to do, is open a bug against the Flannel project and update that manifest so that it doesn't bite other people. But I thought it was interesting that these are the things you're going to run into, not just in places you expect them, but also maybe in places that you don't. When I was discussing this with Joe Beda, I think it was a week ago or something like that...
Well, it's interesting that many of those examples make use of the extensions/v1beta1 or apps/v1beta1 or v1beta2 API groups, and it's not necessarily on the Kubernetes documentation team. I was actually just evaluating that, and a lot, if not all, of the examples there have already been updated, which is awesome, but...
No, well, what do you mean by that? When you say "I wonder if it's a matter of having multiple objects within one manifest", what were you referring to? So the Sock Shop demo has a bunch of things that will not deploy against this new version, and so what I want to do is go ahead and grab that whole demo.
A
So
this
is
our
old
cluster
I'm
just
going
to
deploy
to
that
cluster
first
and
see
kind
of
whether
we
can
get
the
thing
up
and
running
and
then
we'll
put
it
on
the
new
cluster
and
and
and
we're
kind
of
put
off
the
fire
and
see
if
we
could
get
that
help
that
nothing
working
there
as
well.
So,
let's
do
tube
kettle
apply.
"Is it smart enough to convert the API version for all outdated objects defined within one manifest?" Well, it did convert all of the deployments. I think that convert just doesn't understand how to convert PodSecurityPolicy objects, and so I suspect it's a bug in convert: it's not serializing all of the objects; it's just ignoring stuff it doesn't understand, rather than erroring out about the fact that it doesn't know how to solve that problem.
So you can see that if I do a get, even on the old cluster, it still shows me the version that I persisted when I created the object. So even though there's a new version of the group, because I created it using the old version of the group, it still stores the object as extensions/v1beta1, but it is compatible. So let me show you what I mean by that: if I do get daemonsets.apps...
A
And this is actually what they're referring to in the deprecation notice, right, where they're saying deprecated: DaemonSet templateGeneration. This manifest that I've already downloaded from the cluster would be compatible with the forward version of kubernetes, and so kubernetes does have the capability of exposing a manifest that's forward compatible, but you have to kind of know how to get it, and, as you can imagine, that's still a bit of work, right?
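The "bit of work" is asking the API server for the object through the new group explicitly, using kubectl's resource.version.group form; the DaemonSet name below is illustrative:

```shell
# Same stored object, two views: the deprecated group and the GA group.
kubectl get daemonsets.extensions example-ds -o yaml   # extensions/v1beta1 view
kubectl get daemonsets.v1.apps example-ds -o yaml      # forward-compatible apps/v1 view
```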
A
So even if I wanted to, and this is actually one of the challenges, I think, around upgrades: even if I wanted to go ahead and make that change, or download the new version based on the changed version group, I'd still have to understand everything I had deployed, and download all of those deployed things at the new version, and I'd have to explicitly say what that new version is, right?
A
Because now it says policy/v1beta1, but at the same time I would have to know all the things that I had deployed as part of the manifest and download them individually, and so that's, I think, certainly challenging. So I think the user experience is going to be a little tough for sure; I think it's going to be difficult for people to come around to that.
A
So let's go ahead and explore this other example a bit more. So: kubectl get pods in that namespace. Sorry, okay, I can see that we're mostly up. We still have a couple of things that are coming up slowly; shipping is not quite ready yet and carts is not quite ready yet, probably waiting on dependencies or something, but we have been able to deploy it and we're in a running state; pods are actually started.
A
So we're in the no-apis directory now. The reason I want to be in here is because I want to make sure that the APIs that are registered with my kubectl client are there, so that I understand what APIs are available to me when I'm doing the convert. But that's an interesting thing.
A
Am I totally using direnv? Yes, I should have talked about that, yeah; direnv is awesome. What I'm doing, basically, is when I move into one directory or another, I'm actually configuring my kubeconfig based on an environment variable that is loaded when I move into that directory, which is what Olav is pointing out. So when I move into the apis directory, we can see direnv telling us that it's loading the .envrc, and it's changed our kubeconfig; and if I move back to the old directory.
A
Again, it changes, and it's basically making sure that kubectl is configured appropriately for the cluster that I'm looking at. I'm also using a kind of Go status-line thing that lets me know what cluster I'm looking at in my prompt, right; that's where this kubernetes-logo "apis" or "no-apis" in the prompt is coming from, which just makes my life a little easier when I'm reasoning about moving back and forth between clusters. Thank you for calling that out; I should have talked about that as part of the setup.
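A minimal sketch of the direnv setup being described; the file contents and path are my assumptions about this particular setup, not something shown on screen:

```shell
# .envrc, loaded by direnv when you cd into the directory.
# Points kubectl at the kubeconfig for this demo's cluster.
export KUBECONFIG="$PWD/kubeconfig"
```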
A
Got to do it twice, because what's interesting is, when you apply against a directory, it takes the files, I think, lexicographically; it's ordered the same way that ls orders them, right? So it's going to be ordered the same way that we see here, and you can see that the namespace is being created.
A
It's one of the last things that's being created, and so if I apply twice, then when the namespace gets created on the first round, all the things that are dependent on the fact that that namespace exists do get applied on the second round. And so that's actually the magic here. If I actually renamed sock-shop-ns to 00-sock-shop-ns, now.
A
Handy trick: if I do that, then what would happen there? "I think only for i in $(ls path)". Oh, you're right, you're right, Bogdan, because I was actually assuming bash would iterate; it would give me a list of all the files in it. Because I just did the path with a star, it was giving me the entire file path, but if I had said for i in $(ls path), that's where I would have messed things up. Good catch, that's awesome. Anyway!
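The pitfall Bogdan caught can be shown in a couple of lines; the filenames are illustrative:

```shell
# The glob "$dir"/* expands to full paths; $(ls "$dir") yields bare filenames.
# Iterating over `ls` output loses the directory prefix (and breaks on spaces).
dir=$(mktemp -d)
touch "$dir/a.yaml" "$dir/b.yaml"
for f in "$dir"/*; do echo "glob: $f"; done   # full paths
for f in $(ls "$dir"); do echo "ls: $f"; done # bare names
```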
A
So if I had actually renamed the namespace manifest to 00-sock-shop-ns, then it would actually have applied first and I wouldn't have had to do that twice. But it's one of those things, right? And really, applying a complex set of manifests like this a couple of times doesn't really hurt you anyway, and I think it's interesting.
A
It's an interesting pattern for continuous deployment of manifests: you kind of have to work out what the dependencies are, because there are, obviously in this case, a bunch of manifests that are dependent on the fact that a namespace exists, and so sometimes the ordering matters. And so sometimes you might see a deploy fail because you had not actually defined the dependency before the actual thing that expected it to exist, in this case a namespace.
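The ordering behaviour described above can be sketched in plain shell; kubectl apply -f against a directory walks files in name order, so renaming the namespace manifest to sort first (filenames below are illustrative) avoids the apply-twice dance:

```shell
# kubectl apply -f <dir> processes files in lexicographic (ls) order.
# A namespace manifest renamed to sort first is created before its dependents.
dir=$(mktemp -d)
touch "$dir/manifests-deployment.yaml" "$dir/00-sock-shop-ns.yaml" "$dir/zz-service.yaml"
for f in "$dir"/*.yaml; do basename "$f"; done
```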
A
But what we learned from our previous example was that kubectl convert is not going to catch everything. We may still see errors, and you may still have to do some hand-tuning of manifests to get those things to create, depending on how many of those APIs you're actually making use of. So I thought that was a really interesting lesson learned, and I hope that you found that useful as well. I know that it's always kind of interesting to see what you're going to run into in these.
A
So they have revisionHistoryLimit and progressDeadlineSeconds, but those are updated basically according to the defaults defined in the spec, so we'll leave them in there. That's what kubectl convert is adding. So it's adding the creationTimestamp field; even though, if it's not there, it'll still get picked up. That's fine!
A
It's adding imagePullPolicy: Always. Wow, that's fascinating! I mean, it's good, but it's funny that it's doing that. It's adding terminationMessagePath: /dev/termination-log and terminationMessagePolicy: File. This is actually really handy, because it basically means that you can do kubectl logs on a terminated pod and see what happened, which is supposed to come in handy.
A
It added protocol: TCP; it added sessionAffinity: None, because it wasn't defined, as does clusterIP; it went ahead and added the empty status field. What we're looking at here is what kubectl did to modify these objects when I did the kubectl convert, right? So we're seeing those things that kubectl convert added to the objects.
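Gathering up the defaults just walked through, the converted objects pick up fields like these (values are the Kubernetes defaults being described; names are illustrative):

```yaml
# Fields kubectl convert materialized on the container spec (defaults, not changes):
spec:
  containers:
  - name: example
    image: example:latest
    imagePullPolicy: Always             # default when the tag is :latest
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
---
# And on the Service spec:
spec:
  sessionAffinity: None
  ports:
  - port: 80
    protocol: TCP
```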
A
What is the difference here? So this is: oh, it added a dash before the name. Is it a new object, maybe? Interesting, it's just changing the order. Oh, it's because, yeah, this is interesting: it's in a different order, like the list item starts with a different field now instead of name, and so it's just an ordering thing; it doesn't actually change any of the values.
A
So this was now successfully deployed. It'd be interesting to go ahead and evaluate, as a consumer, whether this was actually enough to fix the problem for this particular application as it relates to version 1.16. So remember, this is a great way to actually validate these things and then go ahead and push your changes before 1.16 comes, and so, in my case.
A
That's why I am probably going to make a pull request for this, but I figure, since there are all these other manifests that probably should also be changed, I'm going to do that all at once rather than doing only part of it now. In this directory there's one more thing I want to touch on, which I think is also interesting, and that is helm charts; and helm charts are really a trip.
A
A
Like
whether
it's
JSON
or
yeah
that
sort
of
stuff,
it
doesn't
really
change,
does
it
work,
you
can't
do
it
in
place,
and
so
there's
that
all
right,
helm,
helm
is
really
interesting.
Let's
go
look
at
how
more
quick,
so
here's
a
helmet
art
that
deploys
all
the
same
stuff
that
I
was
actually
just
deploying.
A
No, okay. Well, in this case it actually did the right thing, so converting this one actually would have done the job, but what I was looking for is mustache templates. So let's see if we can find one that actually has that. See, helm. Let's go ahead and move back a few directories here.
A
You're going to find that any number of public charts you might depend on are in the same state. So I want to call that out; I think it's important that we understand that case and then we fix it up, right? So both the DaemonSet manifest and the Deployment manifest are outside of their configuration at the moment, and the pod security policy is also not going to apply, right? Those changes that we had to make to the other manifests, we would have to make to this.
A
That's all up in the manifest here, and all of that is taking it from the configuration that is making use of the templating capability of helm, and I suspect that if I try to convert this file, it will complain loudly at me. But let's take a look and see what we see here. So I did kubectl convert -f on the controller deployment.
A
It says "error parsing", and then we can see it pointing at the mustache template. Because we're actually using the templating capability of helm, kubectl convert does not understand how to parse the file; there's all this extra stuff in there that helm understands but kubectl doesn't. We can't fix that problem as it sits, but here's something that we could do, and this is not going to work for all helm charts, but it may be a way forward for some.
A
So what helm template does is it renders chart templates locally and displays the output, right? So in my example, what I'm trying to do is make sure that I can render the template so that I can see what the resulting manifests would look like, and then I can take those resulting manifests and convert them so that they are deployable.
A
So in this example, what I'm trying to show is: I've got a helm chart that is using all the templating capabilities of helm, and I want to go ahead and fire up the templating part of this locally, and then I will take the resulting manifests from that helm template output, modify them to make them compatible with my new version of the cluster, and then validate that I can deploy that new version.
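Strung together, the workflow being described looks roughly like this; the chart path and release name are illustrative, and the --name flag reflects Helm 2-era syntax:

```shell
# 1. Render the chart locally instead of installing it.
helm template --name demo ./nginx-ingress > rendered.yaml
# 2. Convert the rendered manifests to the new API group/version.
kubectl convert -f rendered.yaml --output-version apps/v1 > rendered-apps-v1.yaml
# 3. Validate against the 1.16-style cluster without persisting anything.
kubectl apply -f rendered-apps-v1.yaml --dry-run
```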
A
It will make use of the templating capability based on a values file, and it will produce the actual manifests that I would use to deploy this application. Now, once I have those manifests, I have something that kubectl convert can actually modify, and so I can leverage kubectl convert directly to modify those things before I deploy them. That's what I wanted to point out. I hope that made sense, but that's what I'm going for here. So if I do helm template.
A
Then we can see what the resulting manifest looks like, and so we can see a source line, we can see the versions of things. So here's a service account, the cluster role, all the stuff that goes into defining the RBAC for the nginx ingress controller; here's the role for it, and then down below we get the role binding. We've got our service defined, another service defined, and there's our deployment; and so here's the thing that we need to change.
A
Got told "nope" for all kinds of interesting things. Apparently the names are too long; also I'm probably going to have to fix the name, because it doesn't like "RELEASE-NAME". It created the cluster role, it created the role, and then it complained about the config map, because the name, oh, needs to be lowercase. Okay, so we'll do kubectl delete -f.
A
So what that's telling us is that I have to provide a little bit more than just trying to template it against the defaults: I need to actually set the release name to something that is not totally jacked. So let's go back into the template; I will template that again to fix that problem.
A
Trying to apply that again. Let's just do a dry run, though, so we don't have to delete later. It looks like it actually almost worked, right? I was able to apply most of the things, but the only thing holding us back right now is that deployment object. So let's try to convert it: kubectl convert.
A
And then we're good. So what that just showed me was that I could use convert to change this, and that would at least allow me to modify the resultant manifests, but it would not, of course, change the fact that my helm chart has a bug. To fix that, I would have to actually go back and change those fields in the manifests, probably by hand, for all of those things that it could apply. And so this is what I'm trying to say.
A
All right, well, I mean, that's what I wanted to share with you, and I think it's super important that we catch it. Thank you all again for spending your afternoon with me. We've been at this for about an hour and a half, and I think that covers the problem pretty well, and some of the things that you can do and what you can expect.
A
Obviously, there are myriad ways to fix this problem. If you have manifests that you need to fix, I was showing you what I suspect it's going to be for most people, which is that we're going to have to go through and modify all of our manifests to support this new version: all of the code that you have in a code repository, anywhere that you used to store those deployments that you've been using over, you know, the last bunch of versions of kubernetes.
A
You need to go through and evaluate whether they're still viable to deploy in 1.16 before you deploy 1.16, and you can do that manually, going through and just changing the API objects, or you can codify it; there are tons of ways to do it. Some of the interesting ones that we talked about are conftest, which is a good way of actually evaluating manifests and things in place.
A
Rego is actually a really powerful thing to let you do that kind of testing, because, unlike kubectl convert, it's not specific to the way that Kubernetes itself works; it's actually just looking at it as a policy-language problem. Yeah, there are lots of ways to fix this problem, but I'm confident that we're going to find even more ways than I can think of right now, because I know that we're going to run into it.
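As a sketch of the conftest approach mentioned above (the policy itself is my assumption of what such a rule looks like, not something shown in the episode): you write a Rego rule that rejects the removed group, then run conftest over your manifests.

```shell
# A Rego rule that denies manifests still on the removed group, run via conftest.
mkdir -p policy
cat > policy/deprecated.rego <<'EOF'
package main

deny[msg] {
  input.apiVersion == "extensions/v1beta1"
  msg := sprintf("%s still uses deprecated apiVersion extensions/v1beta1", [input.kind])
}
EOF
conftest test manifests/ --policy policy
```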