Description
In this episode of the Cloud Native Social Hour, the team goes through all the new and interesting features that went into Kubernetes 1.15.
Looking at:
- CRD Pruning/Defaulting
- Third party metrics endpoint in Kubelet now beta
- ExecutionHook API in alpha
- kubeadm now includes the ability to specify certificate encryption and decryption keys for the upload and download certificate phases as part of the new v1beta2 kubeadm config format
- kubelet now allows the use of XFS quotas
And much more!
A: Enhancements, it's been super fun. And I guess, like every other lead position and every other listed role, CI signal is another one of those that puts you in an amazing place to be able to work on and contribute to all of the projects. You really get to know people, you get to know the issues, and you have a really nice opportunity to be able to learn how things work.
A: It was really good, I think. So once again, triage, right: really our goal in life was to track the things that were coming in and make sure we were getting traction, especially around code freeze, like making sure that things weren't falling out. And really that just involves, I think as was said a little bit earlier, a lot of hunting people down, like, hey, is this gonna happen?
A: Is it not gonna happen, how are we doing, right? So the process went really well. We divvied up all the issues and open PRs amongst the team. We had a good four or five people who were all participating in that, and we each probably had about 15 or so things that we were responsible for making sure we checked in on. I think the one thing that we should probably improve on that front is just some better automation around all this stuff.
A: You know, a good chunk of the work that we did, at least in the bug triage area, is really somewhat automatable. I feel like there could be a bot that's actually doing a lot of the work that we were doing, specifically timed around code freeze. So that's something that could be better, but all in all, a great experience. I will say this for folks looking at what roles they should shadow, maybe with just an interest in something and not really knowing what to do: bug triage is great, because you're going to get open issues and PRs from every kind of area, and it gives you the ability to get an idea of what every area of the Kubernetes platform is doing, and you also get time to interface with those people. So I did like that for figuring out the areas that I think I want to contribute to in the future, yeah.
B: ...expertise required, but there are time commitments associated with it. I would say that bug triage is the least time-intensive role of all the roles, because we really only become vital in the last part of the release. So right before code freeze, during code freeze, and during code thaw, that's when bug triage kind of comes into play. During the first two months of the release you just, like...
A: ...you know, okay. I mean, I'm thinking maybe there are other roles where the work comes in a little more uniformly, right, so you can kind of figure out how much of your time it's gonna suck up. Like documentation: I'm assuming maybe things come in as those features actually hit the branch and get merged, and you're like, okay, I can fully document that, that is fully documented, yeah.
A: I think, from what I've seen, of the roles we have on the release team: we have a release lead, enhancements, CI signal, bug triage, branch manager, docs, release notes, and comms. Of all those roles, the ones that are the most steady are lead, CI signal, and branch manager, but I would not say they're steady in the sense that they have tons of work; I think they're just the ones doing things consistently from week to week.
B: ...on to what you all are interested in. So one of the things I see is the CRD pruning and defaulting functionality. So what this does: right now, in 1.14 or 1.13, any of the releases that you're probably using, if you submit a custom resource with an extra field that isn't defined in the schema, it will be accepted by the API. In 1.15, and I think it's gone to beta at this point, that isn't the case.
B: Every field in the custom resource needs to be defined by an OpenAPI schema, and I think that's very valuable: you're not just hiding new fields and managing things willy-nilly. It feels like that methodology breaks how Kubernetes works, and I'm kind of glad these are coming into a more defined structure.
B: Defaulting is also pretty cool. It's like, hey, you need all these fields, but some of them are more pressing than others; for these ones you can fill in defaults, so the user doesn't have to fill them in, and you can just fill in the information so something is there and it doesn't break every time. Like, if one of your CRD values is a time-to-live, well, you can set the default to be five minutes or whatever, right? It doesn't need to be set every single time.
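A rough sketch of what that looks like in a CRD manifest (the `widgets.example.com` group and the `ttl` field are made up for illustration; in 1.15, pruning requires `preserveUnknownFields: false`, and defaulting on v1beta1 CRDs sits behind the alpha `CustomResourceDefaulting` feature gate):

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  names:
    kind: Widget
    plural: widgets
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
  # Pruning: fields not declared in the schema are dropped on
  # write instead of being persisted as-is.
  preserveUnknownFields: false
  validation:
    openAPIV3Schema:
      type: object
      properties:
        spec:
          type: object
          properties:
            ttl:
              type: string
              # Defaulting: filled in by the API server when the
              # client omits the field.
              default: "5m"
```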
B: Something else pretty cool is third-party metrics: there's now an endpoint for third-party metrics in the kubelet, so you can get device metrics from the kubelet in a way that's not defined in Kubernetes core. Third-party device vendors can define the metrics that they want to expose, and more metrics make things better.
B: And then this one's kind of interesting. I saw this and it kind of scares me a little bit, but I kind of understand why it's there. In alpha, there's something called an ExecutionHook API that can be added to manifests. The idea here is that you can provide a list of commands that can be run arbitrarily in a container at any time.
B: Cool. So some changes that I think are notable to talk about as well are some stable and beta features. The kube-apiserver watch cache can now be tuned with the watch cache size flags; that's the only stable feature that landed in 1.15. You know, tuning the watch cache size, not really super exciting, but there it is.
B: And also, kubeadm being able to encrypt the certificates it passes around makes standing up a control plane easier. That's so great, I love this, so I am totally on board with this. This one was interesting to me because I didn't know how it existed already, but I guess it works now: Ingress objects are now persisted in etcd using the networking.k8s.io/v1beta1 versioning. I didn't know how the Ingress objects were persisted before, so that was fun to learn.
C: But the more interesting bit, in my opinion, is what this represents: the move to make Ingress actually a stable API. It's finally getting out of the place of being an alpha or beta API and onto the track toward a stable API, which is very interesting, because Ingress has been one of those hot topics probably as long as all of us have been working on Kubernetes, but there's no good way to generalize such a wide surface.
C: How do you generalize an API for something that is so specific? I think that's kind of what it boils down to: there are so many different ways that people need to use it, or to configure the thing that would satisfy it, or to modify particular behaviors of it, that it's hard to rationalize a consistent, you know, generic API across that whole set of functionality. So I feel like it's been kind of stuck.
B: This was interesting to me: node-local DNS caching is graduating to beta. So now you can cache your DNS information on the local host, so that if a pod dies and comes back, the information that may be relevant is still on the node, so it kind of speeds up DNS a little bit. With this many DNS lookups, I wonder how this is going to affect DNS, because everyone knows it's not... your problem is never DNS. Never, never. It's always DNS.
B: So that will be interesting to see how that pans out, but I think I'm all for it, for faster DNS lookups, so totally cool. And this one was pretty cool; I almost looked past it and was like, I'm not gonna include it: online volume expansion is now a beta feature, which means that you can grow a PVC as you need to, while you're using it. And that's pretty killer, because growing PVCs before was kind of a pain in the ass.
B: Correct me if I'm wrong, it's been a while since I've done much PVC management, but if you wanted to extend a persistent volume in Kubernetes, you had to kill the claim, extend the volume, and then rejoin the claim. And now you can expand it as you need it, while using it, because why wouldn't you? Sometimes you just need more space and don't want to go through that hassle, right? So I love this. Of course it's a scary change, because it could cause any number of problems, but I don't think that's gonna be too bad. And now that it is beta, this is default functionality that you can use today if you're using 1.15.
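As a rough sketch, in-use expansion is just a change to the claim's requested size (the claim name `my-claim` is made up, and this assumes the PVC's StorageClass has `allowVolumeExpansion: true` and a volume plugin that supports resize):

```shell
# Bump the requested size on a live, in-use claim; no need to
# delete the claim or detach the volume first.
kubectl patch pvc my-claim \
  -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'

# Watch the resize progress in the claim's status and events.
kubectl describe pvc my-claim
```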
B: As you can tell, a lot of the features coming around are pretty cool. Some alpha features got introduced this release, too: you can now turn on an alpha feature to enable non-preempting pod priority. Pod priority and preemption was already kind of confusing and weird, so let's make it even more confusing and weird, right?
B: So if you set a non-preempting high priority on a class, the pod will continue to be prioritized above queued pods of a lesser class, but it will not preempt running pods. So this pod will essentially get scheduled first, above lower-class pods, but it will not kick off pods that are already running.
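A sketch of what that looks like (the class name and value here are made up; in 1.15 the `preemptionPolicy` field is alpha, behind the `NonPreemptingPriority` feature gate):

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority-nonpreempting
value: 1000000
# Pods using this class jump ahead of lower-priority pods in the
# scheduling queue, but never evict pods that are already running.
preemptionPolicy: Never
globalDefault: false
description: "High priority without preemption."
```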
B: I don't quite get the use case, but we'll see how it works. It certainly sounds confusing to me, but all right, cool, so that's a new thing. And then this is one that I thought was kind of cool: there's a feature to allow XFS quotas to monitor storage consumption for the ephemeral storage of a pod. For now this will only be used for emptyDir consumption.
B: This is apparently a more accurate and faster mechanism for determining how much ephemeral storage you're consuming than the previous mechanism. I'm saying "apparently" because, gonna be honest, I'm not a file system engineer; I have no clue whether it actually works the way it's intended or anything, but I thought that sounded pretty cool. We need more metrics around basically everything, and if this makes it faster and easier to use, awesome, I'm all for it. So, any thoughts on those alpha features, beta features, anything like that?
C: Right now, if you make a deployment manifest from even some of the really early stuff, it'll just work, and so there's a ton of stuff, like Helm charts and things like that, that is just working, and a lot of that's gonna change here pretty soon. So what was just pointed out is super, super important. If you look at the blog post on this, it'll also show you how to set specific flags on the API server to serve an earlier version of the Kubernetes APIs.
B: Also something kind of cool, in addition: the v1beta2 config format has been added to kubeadm, so we're getting closer to a stable kubeadm config, which would be amazing, because damn, that sure does change a lot. So I'm happy we're marching toward v1 on that; that's awesome. I haven't seen what changed in v1beta2, sorry; if anyone knows any of the major changes in that schema, please let us know. I also didn't record all the metrics changes; there's just a lot.
B
There's
a
lot
of
changes
so
be
aware
that
every
beware
every
release,
the
metrics
that
kubernetes
provides
changes.
We
add
some
and
we
deprecate
some
so
I
highly
recommend
going
into
the
changelog
and
check
that
out.
If
your
company
does
a
lot
of
metrics
gathering
and
that
sort
of
thing
so
check
that
out
something
that
took
me
by
surprise
a
little
bit
I
had
no
idea.
This
was
coming.
If
you
are
going
to
update
the
115
here's
some
things
that
you
need
to
be
aware
of.
B: Kubernetes now uses Go modules for dependency management, instead of dep or glide like it might have been using in the past. So if your client-go or anything like that doesn't use Go modules for dependency management, yeah, you've gotta change. Also, something that's kind of interesting: if you were using an older version of Rancher with Kubernetes, heads up, the Rancher credentials provider has been removed from core Kubernetes.
B: The Rancher authentication mechanism that existed from, like, 1.6 on has been removed, because basically, at some point, I guess Rancher forked Kubernetes and started implementing it, so it was in core at that point, and no one has been using it since then, so it's been removed. Just a heads up: if you're using a Rancher environment that's kind of old, you may need to change.
B: This is a big one to me: the AWS cloud provider cluster role will no longer be created for you automatically when you spin up a cluster. As we're moving towards the AWS cloud provider getting moved out of tree, certain things need to happen to facilitate that, and this is one of those. If you want to use the AWS cloud provider in Kubernetes 1.15, you need to give the proper permissions to the AWS cloud provider in the kube-system namespace. So heads up.
B: So this is something that, you know, I'm a little bit sad about, but I guess I understand why: the kubectl scale job feature has been removed from the CLI. You can no longer scale jobs, which I guess makes sense; I guess we're targeting deployments as the scaling mechanism, so you can only scale deployments, or you can, I guess, but I've used scaling jobs in the past.
B
Also,
here's
something
kind
of
crazy
to
me.
There
are
a
number
of
security
controls
that
have
been
removed,
allow
privileged
host
network
sources,
post
PID
sources
and
hosts
IPC
sources.
Security
controls
have
been
removed
from
the
API
server
and
from
cubelets.
So
if
you
want
better
security
over
what
people
can
do
in
your
cluster,
you
need
to
set
PSPs
security
policies,
give
Java
7,
because
some
of
these
people
controls
that
you
might
have
been
using
are
gone
now.
So
we've
done.
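A minimal pod security policy covering the same ground as those removed flags might look something like this (the policy name and the permissive fallback rules are illustrative, not a drop-in replacement for any particular setup):

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false      # replaces the old allow-privileged control
  hostNetwork: false     # replaces host network sources
  hostPID: false         # replaces host PID sources
  hostIPC: false         # replaces host IPC sources
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
    - "*"
```

Note that a PSP only takes effect once the admission plugin is enabled and the policy is bound to users or service accounts via RBAC.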
C: Those security facts that you're talking about are definitely, like, a big deal, and I'll reiterate that that's a big deal, but also realize what we're talking about here: deprecated and no longer available doesn't mean it's gradually being phased out. It means that if you start a kubelet at version 1.15 and you have that flag, it won't be parsed. Yeah, so allow-privileged on the kubelet won't work anymore. Let's make sure that we're super explicit there, right.
B: It did seem like they came and went pretty quickly, but there are a lot of features, and the API machinery was a big focus this release, so the functionality of Kubernetes is getting more and more stable. I feel like a lot of good work was put into this release, so, like, kudos to everyone who put in PRs and issues for Kubernetes for this release. Thank you so much for all your work.
C: The first thing I was gonna show was the new command in kubectl called kubectl rollout restart. This is a newer command that allows you to restart a deployment in place. What I mean by "in place", and what this allows you to do: the use case for this is really things like configuration-only changes, right?
C: What this new command will allow you to do is basically patch the existing deployment and allow for the pods themselves to actually be restarted. So what I want to show off is how that actually works and what some of the semantics are there. So what I'm gonna do is a kubectl get rs. Before I actually start: what I've got here is a deployment of kuard, which is just a simple little application.
C: So if I do kubectl get pods, I can see I have kuard running. If I do kubectl get deployments, I have my deployment. If I do kubectl get pods -o wide, I can see that these two pods exist and that they are on kind-worker and kind-worker2. What I wanted to prove was that it's not going to move the deployment around to different workers.
C: It's not going to reschedule this stuff; it's just going to restart the pods where they sit. And we're also going to dig in a little bit into how that actually works, which is pretty interesting stuff. So let's do kubectl rollout restart deploy test, and what this will do is actually fire that restart of the pods, and we'll see what the resulting state is.
C: So we can see that it actually created a new replica set rather than keeping the old replica set populated, which is interesting. We can tell that it created a new replica set because this jumble of characters here in the middle, the pod-template-hash, has changed, and now we're completely on the new replica set. And if we do a kubectl get rs, we can see that the new replica set is operating, 2 desired, 2 current, 2 ready, and the old one now has zero running and zero desired, which is interesting.
C: If we do our kubectl get pods -o wide again, we can see that we're still on kind-worker and kind-worker2. You can also see that we've got new IP addresses, right: the old ones ended in 132 and 132 just by chance, and the new ones end in 133 and 133, but they're still on the same nodes. So it didn't have to go back through scheduling; it basically just restarted them on the nodes they were already on.
C: So I think what's happening here is, you know, an actual pod template change. The pod template has changed, but the only thing that's changed in this spec is the generation number, which is interesting stuff. So it's kind of neat. The benefit of not having to reschedule means that we don't have to pull that image down to a new set of nodes, right? The existing image is the same; all we're doing is kind of restarting the process in place. So that's a new feature.
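The demo above boils down to a short sequence (the deployment name `test` follows the demo; any deployment works):

```shell
# Note which nodes and IPs the pods currently have.
kubectl get pods -o wide

# Restart the deployment in place: a new ReplicaSet is created
# and the pods are recreated without being rescheduled.
kubectl rollout restart deploy test

# The new ReplicaSet runs the pods; the old one is scaled to 0.
kubectl get rs

# Same nodes as before, new pod IPs.
kubectl get pods -o wide
```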
C: This is kind of like the same thing: SSH again to the node where the pod is running. Now if we do a crictl pods, I can see that this pod is the one that's running here on this node, right. So this is the pod ID, this is the name of the pod, all that stuff. And if I do a crictl ps, I can see my container and the image it's related to, and it's been running for three minutes.
C
There
we
go
and
then
we'll
run
that
same
command
again
and
we'll
see
that
the
pot
ID
has
changed
right.
You
can
see
that
here
the
actual
pot
ID
has
changed
in
the
container
images.
The
container
image
itself
has
changed
all
right,
so
we
can
see
the
container
ID
da
six
eight
was
the
old
one.
24
Iggy
is
the
new
one.
The
image
has
not
changed
where
the
pot
ID
hasn't
changed.
C: Actually, the pod ID has also changed: the old pod ID started with 3355 and the new pod ID starts with 65. So really, this container and its associated pod have been removed from this kubelet and a new one has been created, right. It's not like it just went in and HUP'd the process.
A: That feature, I mean, it seems like it's useful in the case where you want to restart a pod, but, like you said, you don't want to go through scheduling; you don't want it to get kicked off and then restarted somewhere else. Right now you might just delete those pods or whatever, and they might get rescheduled to another node.
C
There's
a
ton
of
behaviors
here
that,
like
you
know
this,
isn't
this
is
not
a
new
idea.
People
have
been
using
this
idea
for
a
mint
for
managing
software
for
a
long
time,
and
it
really
comes
down
to
like
that
kind
of
like
one
of
those
semantics
that
we
kind
of
sort
of
take
for
granted
with
software
right.
C: Generally speaking, when we load up or start executing a piece of software, we take a lot of the configuration, maybe the certificates that are on disk, all of those things, and we load them into memory, and those things are in use. Without doing something like a file system watch, we're not going to be able to actually determine that anything on the underlying file system has changed, right?
C
So,
for
example,
if
you
start
up
a
web
server
and
that
web
server
isn't
doing
the
file
system
watch
to
see
that
certificates
have
changed,
it's
not
going
to
be
able
to
change
the
certificates
that
is
in
use
dynamically
without
actually
having
some
semantics
written
into
the
code.
That
allows
you
to
watch
that
certificate
on
disk
and
and
and
behave
differently
when
it
changes
right.
What
this
allows
us
to
do
is
say:
look
we
want
the
ability
to
restart
a
particular
process
when
we
know
the
file
system
has
changed
like
those
underlying
primitives
have
changed.
C: Yeah, so the other one I wanted to show off was a kind of newer feature in kubeadm, which I think really makes it easy to understand a particular behavior that has come up many, many times in the kubeadm chat, and also just among people who've been running TLS-secured Kubernetes clusters, which you should be doing by default, everywhere, all the time.
C: It's a relatively new thing within kubeadm that allows you to see the expiration of the certs associated with the node you're on. I want to say that again: this command, like all kubeadm commands generally, will only apply to things that are running on the node where you run kubeadm, generally speaking, right? So a kubeadm init, for example, acts on the local node.
C: Now, I actually did a TGIK on this, I guess it was the week before last; it was about what happens when all your certs expire, and I showed this command off there, but I was showing it off kind of before it had been released. It's in 1.15 now, so now you can actually play with it; it's pretty neat. The other neat thing about this is the way kubeadm works.
C: You could actually just use this newer version of kubeadm to evaluate this even on old clusters, like if you used kubeadm to stand up a 1.12 cluster or a 1.13 cluster. If what you're trying to understand is whether the certificates themselves have expired, and you don't want to go through each individual piece and figure that out, this is a really good command for that. The last thing I'll point out, which I think is actually probably the big one for me...
C: ...previously I would, like, you know, decrypt or base64-decode it, and then go through the whole process to figure out what the expiry of the certificate is, right? So this is pretty cool. There's also a lot of really cool new functionality inside of kubeadm that allows you to rotate those certificates now. So if we do a kubeadm...
C: ...so with renew, you can renew particular components just by giving them a particular name, or you can do "all", and this will actually leverage the existing CA that it knows about and just issue new certs associated with all of the things, in the places where you expect to find them. So that's what I wanted to show; those are the two things I wanted to demo.
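Again assuming the 1.15-era alpha subcommands, that renewal flow looks roughly like:

```shell
# Renew a single component's certificate by name...
kubeadm alpha certs renew apiserver

# ...or renew everything signed by the cluster CA in one shot.
kubeadm alpha certs renew all
```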
B: Because when TLS stops working, everything is essentially gonna stop working. Your pods continue to run, but everything else falls apart, and it falls apart in such a way that you wouldn't even realize it was happening until you kind of dug into it a little bit. So it's kind of a secret failure, and it's these tools now that will help you diagnose this problem, or facilitate fixing it for you.
A: I don't know what happened, so it very well might have been me; I have no idea, yeah. But just a sheepish shout-out to anybody who may want to help with some of the Cluster API stuff out there. I know I'm looking for folks to assist on the Azure side, on the provider side; there are a lot of calls for assistance there. So if you're familiar with Azure and want to help build out Kubernetes on Azure, come join the party.
B: Cluster API is essentially a mechanism for you to create a Kubernetes cluster using a Kubernetes cluster. It's like, you create a cluster using the Cluster API, pointed at whichever cloud provider you want to use; there are different Cluster API providers for that. You go say "create a cluster" and it stands one up.
B: So you just do this and it brings up a new cluster. It is a way that the cluster lifecycle team has been putting together to programmatically create Kubernetes clusters, I think it's pretty cool, or declaratively create and manage these clusters, and I think it's very cool, and I repeated myself, I'd say. But we do need a lot of help. There's a lot of work, and there are a lot of cloud providers to cover, and...
A: Can I just jump in and say one thing right there: you do not have to be, you don't even have to be a programmer to help. Like, seriously, half of what we need on the Azure side is for folks to actually just run the process: does this work? Are there bugs in the code? What doesn't work for you? What's your user experience like? That kind of contribution is really valuable too. So, please, anyone...
B: ...if you're interested in joining any working group, there are meetings, and if you do have limited time, just attending the meetings is enough, just to get familiar with what's going on. And if you see something interesting that you feel like you have the time to work on, that's a good way to jump in. So even just attending meetings, just doing stuff like this, is contributing in some capacity.