From YouTube: Webinar: What's New in Kubernetes 1.15
Description
Join members of the 1.15 release team to learn about the new features in this release.
Kim McMahon: Thank you, everybody, for joining us today. Welcome to today's CNCF webinar, What's New in Kubernetes 1.15. I'm Kim McMahon on the marketing team at CNCF, and I'll be moderating today's webinar. I'd like to welcome our presenter today, Kenny Coleman, the Kubernetes 1.15 enhancements lead at VMware. We also have on the panelists Jorge Castro, who will be available to help with some Q&A as well. A few housekeeping items before we get started during the webinar.
Kenny Coleman: Thank you, Kim, and thank you to all 188 of you that are dialed in and joining at this point, and to everybody that'll be watching the recording later. It's a pleasure to be here once again presenting on what's new in the latest stable release of Kubernetes. I was your enhancements lead for 1.15. Big shout out to Claire and the entire release team for having an awesome session and a release that actually got this out the door. So today we're going to be focusing, of course, on really what's new.
You may have an idea of what's important to you, or what's important to the SIG that you're involved with as well, and you can kind of see exactly what's happening there.
Okay, so on to the 1.15 enhancements. As I mentioned, from a high level we didn't have a whole lot happening. This is kind of what happens with every single release cycle: there's a big influx of new requests and new features. However, there is one thing that is required for an enhancement to be graduated, or to be included in a Kubernetes release.
B
It
must
have
a
cap
or
a
committee's
enhancement
proposal
that
is
tied
to
it.
This
cap
has
a
few
different
things
involved.
It
has
to
have
graduation
criteria.
It
has
to
also
be
sort
of
consensus
by
the
cig
in
the
overarching
community
that
it
will
be
integrated
into
kubernetes
and
will
be
owned
by
that
particular
cig.
So if you're new to this ecosystem, or if you think you have an idea for an enhancement or a feature that you would like to see inside of Kubernetes, I encourage you not to just go ahead and start coding right away. Instead, get involved with your special interest group, or SIG, and bring it up there for consensus, making sure that everybody is on board. From there it can go through the process of figuring out who is going to be the owner, who is going to be the code reviewer, and who is going to be taking care of everything else from that point. So that's the process you want to go through if you're looking to bring a new enhancement into Kubernetes. As I said right here, we had 25 enhancements that were tracked for 1.15.
It usually goes in this sort of cycle where we start with about double that; around 50 to 60 were being tracked at one point. However, because of missing the enhancement freeze, missing code freeze, or not having a proper KEP or documentation, some were punted from this particular release and were no longer being tracked. So we're going to touch on all of these, as I said, pretty lightly, as a high-level overview.
As you can see on here, we had 10 new alpha enhancements that got introduced into Kubernetes 1.15 and 13 that have now graduated to beta. To give you an idea of exactly what alpha and beta really mean in this regard: anything that is an alpha feature is usually behind what's called a feature gate, which has to be manually unlocked inside of your configuration to be able to use that particular feature.
We're going to see some of these numbers change ever so slightly with 1.16; I'm also your 1.16 enhancements lead as well, and we'll save that for the very last slide as we go through this. So, some highlights. Let's look at the first one here. First is the ability to create dynamic, highly available clusters with kubeadm. Most of us are probably aware of what kubeadm is: it's a tool that allows Kubernetes administrators to quickly and easily bootstrap a minimum viable cluster that's fully compliant with the Certified Kubernetes guidelines.
It's been under active development by SIG Cluster Lifecycle since 2016, and it graduated from beta to GA at the end of 2018. This tool is really supposed to be a composable building block for making higher-level tools on top of it. The core of kubeadm is pretty simple: new control plane nodes are created by running kubeadm init, and worker nodes are joined to the control plane by running kubeadm join on those particular nodes. These are now common utilities used for bootstrapping clusters; you can do control plane upgrades, and you can do token and certificate renewal as well. Now, one thing to also understand about what kubeadm brings is that it is not an infrastructure provisioning tool. There's no third-party networking, there are no add-ons, and there's nothing that it does in regard to monitoring, logging, visualization, traffic, or anything that's really specific to a particular cloud provider.
Really, what this is supposed to be is that common denominator for every Kubernetes cluster utilizing the control plane. So, graduating to beta in 1.15 is the ability to create dynamic, multi-master, highly available clusters with kubeadm, and there are a lot of additional bonuses that come with this, such as automatic certificate copying and rotation during upgrades. This is going to make it easier for highly available clusters built with kubeadm to use the same init and join commands that you're already familiar with.
The only difference now, moving forward, is that you pass the --control-plane flag to kubeadm join when you're adding more control plane nodes. There was a lot of effort done inside kubeadm to achieve this particular goal; among it was a redesign of the kubeadm config file, and making sure that we had graduation criteria built in to actually have that control plane workflow flag added to it. We'll touch on the config file a bit later.
But to set this up, if you want to be aware of how this works: it still requires a load balancer to be pre-provisioned, so you can use anything like HAProxy or Envoy, or anything that's provided by your particular cloud provider. Inside of the kubeadm configuration file, you set the controlPlaneEndpoint field to where your load balancer can be reached. Then you run init with the --upload-certs flag, so you would do something like what's shown here.
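To make that concrete, here's a minimal sketch of what that setup could look like. The endpoint hostname, the file name, and the token, hash, and key placeholders are hypothetical; in 1.15 --upload-certs and --control-plane are the regular (non-experimental) flag names.

```yaml
# kubeadm-config.yaml (sketch): point the API endpoint at the
# pre-provisioned load balancer rather than a single node
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.0
controlPlaneEndpoint: "k8s-lb.example.com:6443"
```

```sh
# On the first control plane node: init and upload the shared certificates
sudo kubeadm init --config kubeadm-config.yaml --upload-certs

# On each additional control plane node: join with --control-plane
# (the token, CA hash, and certificate key are printed by init)
sudo kubeadm join k8s-lb.example.com:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --control-plane --certificate-key <key>
```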
It's something like sudo kubeadm init with your --config flag, and then you have your --upload-certs flag. Both node types can now be joined in any order: whether you want to add more control plane nodes or add worker nodes, they can all be done concurrently. In the background, there's really a lot happening here. kubeadm implements automatic certificate copy features, so it automatically distributes all of the certificate authorities and keys that have to be shared amongst all of the control plane nodes.
That's what gets your cluster working, and that's what utilizes the --upload-certs flag. And when you're not providing an external etcd cluster, kubeadm is automatically going to create a new etcd member for every new master that's added, running as a static pod on that particular control plane node. This is what's called a stacked configuration. So you have the concurrent joining that's available to you, and it's also building more workflows to actually make it upgradable.
So if you want to properly handle this highly available scenario, starting with the upgrade: you run kubeadm upgrade apply as per usual, and users can then complete that process by upgrading the remaining control plane nodes and then moving to the worker nodes after that. All right.
So, moving on to the next one: cloning PVC data sources. Most of us that have messed with storage for a long time talk about volume cloning; features like volume cloning are pretty prevalent in most storage devices. And not only is it a capability of most storage devices, it's pretty frequently used in various use cases, whether you want to duplicate data or you want to use it as a particular DR, disaster recovery, method. Clones are different from snapshots in that regard.
A clone results in a new, duplicate volume that is provisioned from an existing volume. It also counts against the user's volume quota at that point, and it follows the same creation workflow and validation checks that you'd see if you were going to be utilizing some other kind of provisioning request.
Snapshots, on the other hand, result in a point-in-time copy. It's a full copy of itself, but it's still a usable volume, and at that point you can either provision a new volume from it or you can restore the existing volume to a previous state. So the Kubernetes Storage SIG identified clone operations as one of those critical functionalities for a lot of the stateful workloads that we want to run inside of Kubernetes today.
So if you're a database administrator, you may want to duplicate a database and actually create another instance of it utilizing one of the existing databases that you already have. Providing a way to trigger a clone operation utilizing the Kubernetes API means users can now handle this without having to go around the API, and you don't have to worry about adding cloning support and all those other kinds of pieces into it.
So the cloning feature is enabled utilizing the volume claim dataSource field, and adding support for that now allows you to clone a volume. With this, there are no new objects introduced to enable cloning. Instead, it's utilizing that existing dataSource field in the persistent volume claim object, which is now able to accept the name of another persistent volume claim in the same exact namespace.
Note that when you're wanting to use this, from a user perspective, a clone is just another persistent volume and another persistent volume claim; the only difference is that the persistent volume is populated with the contents of another persistent volume during creation time. After creation, it behaves just like any other Kubernetes persistent volume, and it has to follow the same behaviors and rule sets that you would expect. At this time, cloning is only supported for CSI drivers, not for in-tree or FlexVolume drivers.
So if you want to use this Kubernetes cloning feature, ensure that you are utilizing a CSI driver that also implements the cloning feature inside of it. As you can see in this example right here: this assumes that we have a persistent volume claim with the name pvc-1, it exists in the namespace called my-ns, and it has a size less than or equal to ten gigs. The result would be a new and independent persistent volume and persistent volume claim, and on the backend the device is going to duplicate that data and push it into the same exact namespace. All right.
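As a sketch of what that manifest could look like (the names pvc-1, pvc-2, my-ns, and the storage class are the hypothetical ones from the example; in 1.15, cloning is alpha behind the VolumePVCDataSource feature gate):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-2                 # the new, independent clone
  namespace: my-ns            # must be the same namespace as the source
spec:
  storageClassName: csi-sc    # a CSI driver that implements cloning
  dataSource:
    kind: PersistentVolumeClaim
    name: pvc-1               # the existing PVC being cloned
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi           # must be at least the source PVC's size
```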
So, moving on now. I didn't put anything specific right here for the many CRD changes, because there's a lot of stuff that happened in regards to custom resource definitions, and that's going to follow along when we talk about each one of the individual SIGs and their components together.
The admission webhook is a way that you can extend Kubernetes: you can put a hook on an object during the creation, modification, or deletion of that particular object, and these webhooks can mutate or validate those objects. Right now it supports namespace selectors, and this is great, but it's all-or-nothing within that namespace, and you may not want to get all the activity that's happening. So this has now been extended to include a single-object selector within this new beta enhancement.
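A sketch of how the new selector might be used on a validating webhook; the webhook name, backing service, and label here are hypothetical, and the objectSelector field is gated in 1.15, so check the feature gate state on your cluster:

```yaml
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: example-policy
webhooks:
  - name: pods.example.com
    objectSelector:              # only objects carrying this label are
      matchLabels:               # sent, not everything in the namespace
        webhook: enabled
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["pods"]
    clientConfig:
      service:
        namespace: default
        name: example-webhook    # hypothetical backing service
        path: /validate
```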
So now on to CRDs. If you're not familiar with what a CRD is, let's make sure that we set a baseline. A resource is an endpoint in the Kubernetes API that stores a collection of API objects of a certain kind. So, for example, you have the built-in pod resource, and that contains a collection of pod objects. Now, a custom resource is an extension of the Kubernetes API.
It's not necessarily available on every Kubernetes cluster, but it represents a customization of a particular Kubernetes installation, and today there are many distributions out there utilizing CRDs as their own special sauce. So when we start looking into what this particular enhancement is doing: it's adding defaulting and pruning for these custom resources. Defaulting is implemented for most of the native Kubernetes API types.
It's going to play a crucial role, because what it does is make sure that there's API compatibility when you're adding new fields, and custom resources don't do this natively. So this was all about being able to specify default values for fields, following along with the OpenAPI v3 validation schema, and this all happens inside the CRD manifest. Once this has native support, you'll have a default field for arbitrary JSON values, and those default values are applied during deserialization.
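For illustration, a sketch of a CRD schema carrying a default; the group, kind, and field names are made up, and in 1.15 defaulting is alpha behind the CustomResourceDefaulting feature gate and requires a structural schema:

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  names:
    kind: Widget
    plural: widgets
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
  validation:
    openAPIV3Schema:
      type: object
      properties:
        spec:
          type: object
          properties:
            replicas:
              type: integer
              default: 1    # filled in server-side when the field is omitted
```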
Defaulting can't suddenly render data inaccessible or lost, breaking any sort of decoding aspect or contract. And as for pruning: if there's unexpected data inside of a CRD that is of the right type and doesn't break decoding, but it hasn't gone through validation against the schema, perhaps something an admission webhook added or something along those lines, it doesn't exist in the schema and it will get pruned out of there.
So, the existing problem is that when a webhook needs to make a request to another service, but those APIs have progressed or changed, a CRD user wants to be certain that they can evolve their API before they get down the path of developing the CRD plus the controller function. The webhook conversion allows developers to now evolve their API and still maintain backwards compatibility utilizing versioned API resources, and this is going to allow objects and services to hold multiple versions.
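Wired up, that looks roughly like this on the CRD; the service name and path are hypothetical, and the conversion server itself is something you write and deploy:

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  names:
    kind: Widget
    plural: widgets
  scope: Namespaced
  versions:
    - name: v1beta1
      served: true
      storage: false
    - name: v1
      served: true
      storage: true          # objects are persisted at this version
  conversion:
    strategy: Webhook        # convert between served versions on the fly
    webhookClientConfig:
      service:
        namespace: default
        name: widget-conversion   # hypothetical conversion service
        path: /convert
```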
At the same exact time, you can now convert via a webhook from one version to another based on its need. Now, the CRD OpenAPI schema, which I kind of mentioned already: it's utilizing OpenAPI v3 to enable server-side validation for custom resources, and this validation format is compatible with creating OpenAPI documentation for the custom resources. It can also be used by clients like kubectl to inform client-side validation if you're using kubectl create or kubectl apply.
Now, the watch API is one of the fundamentals of the Kubernetes API, and right now there's a recommended pattern for utilizing it to retrieve a collection of resources. It goes like this: do a consistent list, and then initiate a watch starting from the resource version that the list operation returned. If the client's watch is disconnected, a new one can be restarted from the last returned resource version. This new proposal to add bookmark support is going to create a cheaper resource consumption model from it as well.
Looking at the performance of the kube-apiserver in different scalability tests, it has been shown that restarting these watches can cause significant load on the kube-apiserver, especially if you're watching for a small set of resources and changes filtered by a field or label selector, or anything like that. In extreme cases, re-establishing the watch can lead to falling outside of the history window and getting an error back.
That error says the resource version is too old, and the reason for that is that the last item received by the watcher has a resource version, say RV1, next to it, and we may already know that there aren't any further changes to be given to that watcher. What it's really interested in is saying: I want to level up here and bump my RV, my resource version. So the goal of this is to reduce the load on the API server, as I already mentioned, and it's going to do this by minimizing the amount of unnecessary watch events that need to be processed after restarting a watch. So the proposal introduces a new type of watch event called Bookmark.
A Bookmark event represents the information that all objects up to a given resource version have already been processed for a given watcher. So even if the last event of the other types contained an object with resource version RV1, receiving a bookmark with resource version RV2 means that there aren't any interesting objects for that watcher in between, so it can just note the new version and set it aside.
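If you want to see bookmark events yourself, a quick sketch: the feature is alpha in 1.15 behind the WatchBookmark feature gate, and clients opt in with the allowWatchBookmarks query parameter:

```sh
# Watch pods and ask the API server to interleave BOOKMARK events
kubectl get --raw \
  "/api/v1/namespaces/default/pods?watch=1&allowWatchBookmarks=true"
```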
So that's it for that SIG. Now, moving on to SIG Apps, which was one of the larger ones (I think Storage is the larger one, but moving on to SIG Apps): the pod disruption budget. This is, again, sort of like a custom resource situation; it's been graduated to beta here, and it's an important tool to control the number of voluntary disruptions for workloads inside of your Kubernetes cluster. The pod disruption budget, or PDB (I'm going to try to say it without stumbling over myself), allows a user to specify the allowed amount of disruption through either a minimum available or a maximum unavailable number of pods.
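For reference, a minimal PDB manifest looks like this; the name and label selector are hypothetical:

```yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: web-pdb
spec:
  maxUnavailable: 1      # or use minAvailable, but not both
  selector:
    matchLabels:
      app: web           # the pods this budget protects
```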
In order to support the case where a maximum number of unavailable pods is set, the controller needs to be able to look up the desired number of replicas, and it does this by looking at the owning controller. There are four basic workload types supported by the pod disruption budget; the types of controllers are Deployments, StatefulSets, ReplicaSets, and ReplicationControllers. There's also a scale subresource that's a part of this as well.
It allows any resource to specify its desired number of replicas in a generic way that the controller can look up. So this will now support using the scale subresource to allow setting pod disruption budgets on any resource that implements it. Fabien asks: is strategic merge patching already supported with custom resource definitions? I'm not sure I can answer that one for you.
That would be one to take to API Machinery, and hopefully somebody here can answer that one for you as well. So, on to SIG Architecture: Go module support. This one was kind of weird, in that it didn't go through alpha or beta and just went straight to stable, and this is because Go modules have been very well tested inside of the Go ecosystem in general. To give you a bit of background and history here:
This is mostly going to be utilized by anybody that's actually contributing to Kubernetes and utilizing a lot of these pieces and vendored modules inside of it, but understand that it's about trying to keep things simple within the Go ecosystem, because Go modules provide a whole lot of benefits. You can rebuild the vendor directory utilizing Go modules, and it provides something like a 10x increase in speed over godep in some of the preliminary tests that we ran.
The recording will be shared; I see somebody's asking that in chat. It will be available after this, and it will also be available on YouTube, so thank you for asking. All right.
So, SIG CLI: kubectl get and describe should work well for extensions, and this is now graduating to stable in 1.15. The way to look at this is that it's server-side printing for get, with partial objects, and this is now being brought to GA.
This is also coming in sort of feature-complete, as we're looking at removing some of the legacy printers in subsequent versions. This also updates the controllers that would benefit from the use of things like PartialObjectMetadata, without the fear of deprecation; PartialObjectMetadata allows these controllers to perform protobuf list operations, and that's just one of the things that you'll be able to see inside of here.
But again, this moves generating the display columns to the server rather than the client, and it's going to allow extensions to work a lot more cleanly. So, moving on to SIG Cluster Lifecycle; as I mentioned, we're going to start rolling through this pretty quickly here. We already talked about kubeadm and its HA features and why that graduated to beta, but one big part of this was the kubeadm configuration file.
This has now graduated in 1.15, and so if you're familiar with utilizing kubeadm today, you should refamiliarize yourself, because a lot of things that you might be utilizing inside of the configuration file might have changed, and you just need to make sure your config is validated against it. This is really one of those first touch points for a lot of Kubernetes users, or for any higher-level tooling they use to actually build these clusters.
It follows the usual conventions, like apiVersion and such. Now, the file was originally created, as I mentioned, as an alternative to the command-line flags for the init and join commands, but over time the number of options supported by the kubeadm config had grown so much that it had to be kept under control, and the flags were limited to the most simple use cases. So today the config file is really the only viable way of implementing many use cases,
like the use of an external etcd cluster, customizing the Kubernetes control plane components, or tuning kube-proxy and kubelet parameters. The config file today sort of acts as a persistent representation of the cluster specification, so it can be used at any time after kubeadm initialization, and it can actually be utilized for the kubeadm upgrade actions as well. And there are new config options that have been added now for new and existing kubeadm features.
Over time you're going to see kubeadm gaining new features, which are going to require the addition of newer config file formats. One of these was, of course, the v1beta2 API version that was added for the certificate copy feature, as we saw with the HA control plane actually having that new join --control-plane flag.
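Two handy commands for working with the config file; the file name in the second one is hypothetical:

```sh
# Print the defaulted ClusterConfiguration to see the current file format
kubeadm config print init-defaults

# Migrate an older config file to the newest supported API version
# (prints the converted config to stdout)
kubeadm config migrate --old-config kubeadm-config.yaml
```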
So, moving on to SIG Network: node-local DNS cache. This is now graduating to beta, and it's an add-on that runs a DNS cache pod as a DaemonSet to improve your cluster DNS performance and reliability. This add-on runs as a node-local DNS pod on every cluster node, and it runs CoreDNS as its DNS cache. It runs with the hostNetwork parameter set to true and creates a dedicated dummy interface with a link-local IP to listen for DNS queries.
There's also finalizer protection for service load balancers: the actual deletion of the service resource will be blocked until the finalizer is removed, and the finalizer will not be removed until the cleanup of the load balancer resources is actually considered finished by the service controller. So hopefully, at the end of the day, this all saves us resources and money within all the particular clouds as well.
So, moving on to SIG Node: quotas for ephemeral storage. Now, you might think this would be part of SIG Storage, but this gives you limits like those that you would see for memory and CPU consumption as well. The current mechanism relies on periodically querying each of these volumes: it looks at each one, queries it, and then sums up the space consumption at the end of it. Today this method is pretty slow, and it has high latency, and there's some love needed there.
The mechanism proposed here is going to utilize a filesystem project quota, and this is going to provide monitoring of resource consumption and, optionally, actually enforce the limits themselves. Project quotas are a form of filesystem quota that apply to particular sets of files; they offer a kernel-based means of monitoring and restricting filesystem consumption, and they can be applied to one or more directories as well.
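The limits being monitored and enforced are the ordinary ephemeral-storage requests and limits on a pod; a small sketch, with hypothetical names and values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scratch-user
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      resources:
        requests:
          ephemeral-storage: 512Mi
        limits:
          ephemeral-storage: 1Gi   # what the project quota can enforce
```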
So, support for third-party device monitoring plugins. This is now graduating to beta, and really this falls under extensibility, because there's a whole ecosystem built around performance monitoring and management of your clusters, and device monitoring and device management today typically require external agents to be able to determine if those sets of devices are actually in use by particular containers.
With this, device vendors can provide tools that live out of tree and aren't gated by the Kubernetes release cycle. And then PID limiting: PIDs are a fundamental resource on Linux hosts, and it's trivial to hit the task limit without hitting any other resource limit, causing instability of the host machine at that point. So administrators require mechanisms to ensure that users and their pods can't induce PID exhaustion that prevents host daemons, such as the runtime and the kubelet, from running.
In addition, it's also important to note here that PIDs should be limited amongst pods, to ensure that they have limited impact on other workloads on that particular node. To enable PID isolation between pods, you can utilize the SupportPodPidsLimit feature, which is no longer gated here. At the same exact time, PIDs can be reserved through node allocatable, a well-established concept inside the kubelet, and this allows the isolation of user pod resources from host daemons at the kubepods cgroup level that parents all end-user pods.
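As a sketch, the per-pod PID cap is set through the kubelet configuration; the value here is arbitrary:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
podPidsLimit: 1024    # maximum number of PIDs any one pod may use
```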
All right, moving on to SIG Scalability: adding more structure to the event API, and this is also going to change the deduplication logic so events aren't overloading the cluster. This is an alpha improvement, really on the performance side of things, that you're going to see inside the event API now, and there's relatively wide agreement that the current implementation of events in Kubernetes is problematic.
Events are supposed to give an app developer insight into what's happening with their particular application, and an important requirement for the event library is that it shouldn't cause performance problems in the cluster at the same exact time. The problem is that neither of these requirements has actually ever been met; currently, events are extremely spammy.
On to SIG Scheduling. There are a lot of features being added into the Kubernetes scheduler, and there's a new scheduling framework that's been added as alpha. As new features are added to the scheduler, the code base just becomes very large and the logic becomes more and more complex, and the more complex the scheduler becomes, the harder it is to maintain. That means it's harder for bugs to be found and fixed, and users that are running some sort of custom scheduler have a hard time catching up and integrating these new changes. The current Kubernetes scheduler provides webhooks, which, as we talked about earlier, allow extending some of its functionality.
However, webhooks can also be limiting, as they hinder building high-performance and versatile scheduler features. So now the scheduling framework defines new extension points and Go APIs inside the Kubernetes scheduler for use by plugins; plugins add scheduling behaviors to the scheduler, and these are now included at compile time.
The scheduler's component config will allow plugins to be enabled, disabled, and reordered. Custom scheduler authors can write their own plugins, which can now be out of tree, and compile a scheduler binary with their own plugins included, while keeping the scheduling core simple and maintainable. If you go and check out the link on this particular issue, there's actually somebody that's already written their first custom scheduler that utilizes the scheduling framework as well.
Next, non-preempting priority classes. If you're unfamiliar with this: pods are scheduled according to descending priority, and if a pod can't be scheduled due to insufficient resources, lower-priority pods will be preempted to make room. This enhancement makes preemption behavior optional for a priority class. By doing so, it adds a new field from the priority class, which also gets populated inside the pod spec, and if a pod is waiting to be scheduled and it does not have preemption enabled, it will not trigger preemption of other pods.
So, batch workloads typically have a backlog of work with unscheduled pods. Higher-priority workloads can be assigned a higher priority via the priority class, and this may result in pods with partially completed work being preempted. Adding non-preemption allows users to prioritize the scheduling queue while not worrying about discarding incomplete work at the same exact time. So this is, again, adding a preempting field into the pod spec and the priority class.
If preempting is true for a pod, then the scheduler will preempt lower-priority pods to schedule this particular pod, which is the current behavior; and if preempting is false, a pod of that priority will not preempt other pods. Setting the preempting field in the priority class provides a straightforward interface and allows resource quotas to now start restricting preemption.
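A sketch of a non-preempting priority class; the name and value are hypothetical, and in 1.15 the new field is alpha behind the NonPreemptingPriority feature gate:

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority-nonpreempting
value: 1000000
preemptionPolicy: Never   # pods get queue priority but never evict others
globalDefault: false
description: "High priority without preemption (sketch)"
```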
All right, SIG Storage; we're coming down here to the end, folks. So, the online resizing of persistent volumes.
This is something where, if you're a database owner, or if your particular volume is simply just running out of room, there needs to be a capability to resize the volume on demand while it's still in use. This is critical for applications that support many concurrent users but perhaps haven't taken advantage of cloud-native database types. Say you're a MySQL user and you're running out of space, and you want to dynamically increase the size without losing data and while staying online.
If you use a ReadWriteMany filesystem like GlusterFS, you can resize a lot of stuff without taking it offline all the time. But this feature enables users to increase the size of a persistent volume claim that is already in use and currently mounted. The user updates the persistent volume claim to request a new size, and underneath, we expect that the kubelet is going to resize the filesystem for the persistent volume claim accordingly.
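In practice the resize is just an edit to the claim; a sketch with a hypothetical PVC name and size, assuming the storage class has allowVolumeExpansion enabled:

```sh
# Grow a live, mounted PVC; the volume plugin expands the backing
# device and the kubelet resizes the filesystem online
kubectl patch pvc mysql-data -p \
  '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'
```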
So, providing environment variable expansion in subPath mounts is graduating to beta, and this feature allows a user, or I should say dynamically allows you, to generate host paths for particular mounted volumes. The subPath feature creates directories on demand, but the names assigned to these directories are static, so supporting downward API variables provides a way to share storage with distinct per-pod directories as well.
So in this example here that you see on the screen, the field path and the subPath are combined, and over time the host storage ends up looking like what you see underneath it, where the containers wouldn't need to change any of their logging logic, of actually how they are tying themselves to these particular persistent volumes.
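A sketch along the lines of the on-screen example; the image, paths, and names are hypothetical, and subPathExpr is the beta field doing the expansion:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: logger
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "while true; do date >> /var/log/app.log; sleep 5; done"]
      env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name   # downward API value
      volumeMounts:
        - name: logs
          mountPath: /var/log
          subPathExpr: $(POD_NAME)       # per-pod directory, created on demand
  volumes:
    - name: logs
      hostPath:
        path: /var/log/pods
```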
At the same time now, the execution hook, or, I'm sorry, I believe this slide is incorrect up here, because this should actually be talking about the volume snapshot features. I apologize; it was a copy-paste error on my end right here. But I'll tell you a little bit about what's happening inside of the volume snapshot feature. It allows creating and deleting volume snapshots, and the ability to create new volumes from a snapshot, natively utilizing the Kubernetes API. However, application consistency is not guaranteed, and a user has to figure out how to quiesce the application before taking a snapshot and unquiesce it afterwards. I'm sure many of us are familiar with applications that need to quiesce and flush their data down to disk.
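For completeness, a sketch of what the snapshot object looks like in the alpha API; the names and snapshot class are hypothetical:

```yaml
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: mysql-snapshot
  namespace: my-ns
spec:
  snapshotClassName: csi-snapclass   # a CSI driver with snapshot support
  source:
    kind: PersistentVolumeClaim
    name: mysql-data                 # the PVC to snapshot
```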
So, what's coming up next in version 1.16? We are currently already four weeks into the 1.16 release process. The enhancement freeze is set for July 30th. If you're curious about checking out what is going to be new, what's going to be graduating to stable, what's going to be moving to beta, what's on track to be kicked out, and so on and so forth, you can use the bit.ly link that's located here and go check out that particular spreadsheet. The GA target for Kubernetes 1.16 is set for September 16th.
As usual, this is a moving target; it's all based on a lot of factors that go into the release process, from measuring CI signal to making sure that there are no bugs or anything like that we want squashed as we go into this as well. All right, I know we have a few more minutes left, so I'm going to try to read through some of these questions here and try to answer them. Let me scroll back here.
Does PVC cloning lock up the underlying disk? Robert Johansen asked this. I cannot answer that for you completely, because I'm not part of the SIG Storage people that created this; however, I would dig into the documentation and check it out. If not, anybody that ever has a question can always go to the CNCF Slack, or go to the Kubernetes Slack, find the SIG that you're interested in, and ask those questions there.
A lot of the maintainers are all located inside there. If you want to get any more information, or you do want to get involved, I'd encourage you to just google Kubernetes SIGs; you can get yourself involved in those individual SIGs, be a part of the community, and understand exactly what's happening with those particular aspects of Kubernetes that you find interesting.
So today, when you are creating a Kubernetes cluster, you have your controller node and you have your worker nodes. Most of us that want to run in a sort of higher-level production scenario want the ability to fail over, because if that controller node goes down, so do the API server and a lot of the components. It's not that you lose your workloads; they'll still be running, because they're on the worker nodes that are actually running your workloads. You just can't make any changes.
B
However,
the
controller
is
needed
to
make
any
changes
to
the
cluster
or
run
any
new
any
new
pods,
or
to
do
anything
like
that.
So
what
you
want
to
be
able
to
do
is
you
want
to
try
to
figure
out?
How
can
I
make
my
my
controllers
highly
available
and
so
that's
been
available
for
a
while?
You
you
there's
there's
plenty
of
ways
today
that
you
can
go
and
you
can
read
of
how
to
do
certificate
sharing
and
how
to
do.
B
You
know.
Tls,
sharing
and
all
MPLS
set
up
and
load
balancer
set
up
amongst
multiple
controller
nodes.
The
documentation
is
already
on
the
community's
website.
However,
what
is
being
introduced
with
inside
of
cube
ADM
for
h
a
is
allowing
you
to
kind
of
do
it
in
a
one-liner,
doing
it
very,
very
quickly
and
very
very
easily,
without
having
to
perform
all
of
those
manual
steps.
So
that
is
the
the
beta
feature
that's
now
being
introduced
with
inside
of
that
particular
tool
set.
John Owings asks: does resize require a pod restart? I have not tried this myself, so I cannot answer that. Once again, I would encourage you to check out the documentation, or try it yourself and see what you can find out about it. Of course, everybody is always happy to accept documentation help coming into any of this as well. Let's see, the next question asks: did NetApp contribute Trident, its PVC management software, to the open source community? I have no idea; I can't answer that one for you.
Meier asks: do dynamic HA clusters scale the etcd cluster, and if yes, what will the impact be on its performance? So yes, as I mentioned in here, the dynamic HA cluster will also scale the etcd cluster. For every new controller node you add, if you do not specify that you want an external etcd cluster to be managed, it will create another etcd pod on that controller node, and it will be added into the existing etcd cluster. This is running in a stacked configuration, or a stacked architecture.
Those are just the ones that I've had a little bit of touch on, but again, I would encourage you to reach out to the owners and ask them about the idea as well. I can tell you for a fact that the pod disruption budget has gone through a KEP, so you can go to the kubernetes/enhancements repo, search for KEPs, and you can see everything inside there: documentation, architecture, code, everything like that that's going into that particular feature.
I cannot give you the operations manual for how to upgrade from 1.14 to 1.15 utilizing the new kubeadm tooling. I would always encourage you to test with a test cluster or something like that first. As I mentioned, the kubeadm file has gone through some configuration changes; it is supposed to maintain backwards compatibility, but I would just encourage you to check that out and try it before you move on to a production cluster.