From YouTube: Webinar: What’s New in Kubernetes 1.16
Description
Join the Kubernetes release team to learn about the new features in Kubernetes 1.16.
A
All right, I'd like to thank everybody who's joining us today. Welcome to today's CNCF webinar: What's New in Kubernetes 1.16. My name is Taylor Dolezal. I lead site reliability engineering at Walt Disney Studios, and I was the 1.16 communications lead, so I'll be monitoring today's webinar. We would like to welcome our presenters today: Kenny Coleman, technical product manager at VMware and Kubernetes 1.16 enhancements lead, and Lachie Evenson, principal program manager at Microsoft and Kubernetes 1.16 release lead.

Some housekeeping items I have for you today: during the webinar you are not able to talk as an attendee. There is a Q&A box at the bottom of your screen; please feel free to drop your questions in there and we'll get to as many as we can at the end. This is an official webinar of the CNCF and, as such, is subject to the CNCF code of conduct, which basically translates to: please don't be a jerk.
B
Thanks, Taylor, appreciate it, and thank you everybody for joining here today. It's going to be a fun-filled afternoon: we're looking at all of the new features as well as some of the graduated features that happened with the 1.16 release.

Now, just some quick housekeeping as we start going through here. Please note that we will do our best to answer as many questions as possible. As Taylor said, please use the Q&A panel if you can; Lachie is going to be there to try to answer them all. Please know that we are part of the release team. That means that our goal and our job is to take everything that is going into the release, funnel it, and push out something that is stable and that everybody is happy with. That does not mean that we know every little nuance and technical detail of every single thing that we're going to go over. So as we go through here, you're going to see that there is a link at the very bottom for every issue. If you're inclined to read more about it, you can go to that issue and read the KEP, the Kubernetes Enhancement Proposal, and that will give you a better idea of exactly what's going into this. Also, Lachie and I are not the issue owners, nor are we in every single SIG, so we will not be able to answer all the details down to that level; we'll do it to the best of our ability, but that's what we can do.

So, just to quickly hit the agenda here: we're going to go through the three really major features, and then we're going to touch on all of the enhancements. But before we get to that, I want Lachie to talk about the release theme. As you can see, there's this badge and the logo that we embraced for 1.16, and he can give you an update on where this all stemmed from as well.
C
Thanks, Kenny. So, the release mascot: there were two aspects to the creativity here. One aspect was that going into the 1.16 release cycle, it was actually the 50th anniversary of the lunar landing, which got me thinking about all the Apollo missions, so this badge is actually inspired by the Apollo 16 mission. If you go and refer to that mission patch, you'll take a look and you will see a very similar thing, with the eagle replaced by Captain Kube and a few other things there. The rest of it represents the meme around my love of Olive Garden, and this represents the fun that the release team had together during this release. I think it embodies everything that we went through for the last three months getting the 1.16 release out. So that is the history behind the release mascot. Thanks, Kenny.
B
Absolutely. And so, if you ever see Lachie in person, you can all go out to Olive Garden, have unlimited breadsticks together, and talk about all the great things that happen inside of Kubernetes.

All right, so let's go ahead and dive into the 1.16 enhancements. Just to give an overview of what this looked like: it was a little bit more than we had in 1.15. I was the enhancements lead back in 1.15 too, so I can give a little bit of context. Back in 1.15 we tracked a total of 25 enhancements, and now we're bumping that up to 31 here in 1.16. We're going to touch on all of these pretty lightly and look at them at a high level. In this release we had 15 introduced as new alpha features. These are typically the ones that people are most interested to know about, because these are the things that are going to be extending Kubernetes in new ways that you can gain more value out of. Then we're looking at some of the features that have graduated from previous versions into a beta state, which means they have gone through another release cycle of testing and bug fixing and so on. And then we also have eight that have graduated to stable. This, in my opinion, is a great thing, because we only had a couple graduate to stable in 1.15. It gives you a better indication of the trajectory of where everything is going inside of Kubernetes as we progress more towards a stable platform, instead of always trying to introduce a bunch of new features all the time; this time we have a bunch of enhancements that have graduated to a stable state.

So let's look at three of the highlights: in our opinion, some of the biggest things that came out of this release. First is a bunch of work that happened with custom resource definitions. That's going to come up as we go through and dive down into each individual SIG, and you'll see a bunch of CRD mechanisms all moved to stable inside the 1.16 release.
Next is looking at IPv4/IPv6 dual-stack support. Previously, Kubernetes was only aware of a single pod IP, so dual-stack configurations had a bunch of limitations: the CNI networking plugins, the system pods managed by the API server and controller manager, and types like Service IPs had to be either all IPv4 or all IPv6 within the cluster. It really didn't encapsulate a lot of things that people were looking for, and so this enhancement adds dual-stack (or single-family) service support to these clusters.

What it's doing is providing IPv4-to-IPv4 and IPv6-to-IPv6 communication to and from within a Kubernetes cluster, and you can do this by providing dual-stack addresses for the pods and the nodes, while you can still restrict Service IPs to be a single family if you will. As we start going through here: the Kubernetes Endpoints API supports only a single IP address per endpoint, and then you have the addition of dual-stack features, so pods that are serving as backends for Kubernetes services can now have both IPv4 and IPv6 addresses. kube-proxy is going to be modified to drive iptables and ip6tables in parallel, and that's going to require the implementation of a proxier interface in the kube-proxy server that modifies and tracks changes in both sets of tables as well; this is also required to expose services on both IPv4 and IPv6. CoreDNS is also making changes to support multiple-address endpoints, and there are a lot more considerations about ingress and load balancers that aren't mentioned here.
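As a rough, hedged sketch of what dual stack looks like from a client's point of view: with the feature gate enabled, a pod's status carries a list of addresses (podIPs) in addition to the single podIP field. The snippet below assumes a 1.16-level client-go (where List still takes only ListOptions) and uses an illustrative kubeconfig and namespace; it simply prints every address for each pod, and none of it is taken verbatim from the webinar.

    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a client from the local kubeconfig (illustrative setup).
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)

        // With dual stack enabled, a pod can report both an IPv4 and an
        // IPv6 address in status.podIPs.
        pods, err := client.CoreV1().Pods("default").List(metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, pod := range pods.Items {
            for _, ip := range pod.Status.PodIPs {
                fmt.Printf("%s -> %s\n", pod.Name, ip.IP)
            }
        }
    }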
B
The third highlight is CSI volume cloning: provisioning a new volume from an existing volume that's already inside of Kubernetes. As we start going through here, keep in mind that snapshots, on the other hand, really result in a point-in-time copy of a volume that is itself not a usable volume; a snapshot can be used to provision a new volume or to restore an existing volume to a previous state. The storage SIG identified clone operations as one of the critical functionalities for many stateful workloads that we all want. Maybe you are a database administrator and you want to duplicate a database volume, and you want a way to trigger these clone operations from inside the Kubernetes API. Kubernetes users can now handle this without having to go around the Kubernetes API, and it's enabled through the persistent volume claim dataSource field: it adds support for specifying an existing persistent volume claim in that field, and that is the volume that you would want to clone.

With this, there are no new objects being introduced to enable cloning. Instead, this particular field in the persistent volume claim object is expanded to be able to accept the name of an existing persistent volume claim in the same namespace. It's important to note, though, that from your perspective as a user, a clone is just another persistent volume and persistent volume claim. The only difference is that the persistent volume is populated with the contents of another persistent volume at creation time; after that, it behaves exactly the same way that any other Kubernetes persistent volume does and adheres to the same behaviors and rules. Just note that cloning is only supported right now for CSI drivers; it's not for in-tree or for flex volumes. So to use this cloning feature, you have to ensure that your CSI driver implements it on the cluster where it is deployed.

If we look at the example that we have here, it shows that we have a persistent volume claim, pvc-1, that exists in the namespace myns (my namespace) and has a size of, let's say, 10 gigabytes. This is going to result in a new, independent persistent volume and persistent volume claim called pvc-2 on the backend, which is going to be a duplicate of the data that existed on pvc-1. That just gives you an idea of what this looks like going through.
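To make the pvc-1 to pvc-2 example a bit more concrete, here is a minimal sketch of the same clone request built with the core/v1 Go types; the storage class name is a placeholder, and it assumes the cluster's CSI driver actually implements cloning.

    package example

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // clonedPVC builds a claim that asks the CSI driver to populate a new
    // volume (pvc-2) from the existing claim pvc-1 in the same namespace.
    func clonedPVC() *corev1.PersistentVolumeClaim {
        storageClass := "csi-storageclass" // placeholder class backed by a CSI driver
        return &corev1.PersistentVolumeClaim{
            ObjectMeta: metav1.ObjectMeta{Name: "pvc-2", Namespace: "myns"},
            Spec: corev1.PersistentVolumeClaimSpec{
                StorageClassName: &storageClass,
                AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
                Resources: corev1.ResourceRequirements{
                    Requests: corev1.ResourceList{
                        // Should be at least as large as the source claim.
                        corev1.ResourceStorage: resource.MustParse("10Gi"),
                    },
                },
                // dataSource names the existing claim whose contents are copied
                // into the new volume at creation time.
                DataSource: &corev1.TypedLocalObjectReference{
                    Kind: "PersistentVolumeClaim",
                    Name: "pvc-1",
                },
            },
        }
    }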
B
So now let's go ahead and dive into "CRD mania," if you will, and we're going to look at every individual SIG that had new enhancements coming into this release. It's going to go in alphabetical order, so if you're waiting for Windows, you'll have to wait a little bit. So here we go, with SIG API Machinery.

For people that aren't familiar with CRDs, or custom resource definitions, let's set a baseline here. A resource is an endpoint in the Kubernetes API that stores a collection of API objects of a certain kind; for example, the built-in pods resource contains a collection of pod objects. Now, a custom resource is an extension of the Kubernetes API that is not necessarily available on every Kubernetes cluster, but represents a customization of your particular Kubernetes installation, and today there are all kinds of distributions out there that are using CRDs to put their own sort of special sauce on it.
B
The first one is CRD validation based on the OpenAPI v3 schema, and this is going to enable server-side validation for custom resources. This validation format is also suitable for creating documentation for custom resources, which can be used by clients like kubectl: you can perform client-side validation (on create and apply, for example), schema explanation (kubectl explain), and client generation. This enhancement will use the OpenAPI v3 schema to create and publish the OpenAPI documentation for these custom resources as well.
Subresources for custom resources are also graduating to stable. The objects defined by CRDs are called custom resources, and this has been one of the most requested features: it adds a /status and a /scale subresource for custom resources. If the status subresource is enabled, the main endpoint will ignore all changes in the status subpath, the spec does not change, and the metadata generation is not updated. For the scale behavior, the number of custom resources can easily scale up or down depending on the replicas field that is set inside of the spec subpath.
Moving on with even more CRD stuff: we have defaulting and pruning for custom resources. This one is a little bit different, because pruning is moving to stable while defaulting is moving to beta in 1.16, yet they are both enveloped inside of a single enhancement tracking issue.

Defaulting is a fundamental step in processing API objects in the request pipeline of the kube-apiserver. Defaulting happens during deserialization, it's implemented for most of the native Kubernetes API types, and it plays a crucial role for API compatibility when adding new fields. However, custom resources do not support this natively, and this enhancement is all about adding support for specifying default values in that OpenAPI v3 schema we talked about inside of the CRD manifest. With this, the schema has support for a default field with any sort of arbitrary JSON value that we can put into it, and now we can apply these default values during deserialization the same way that native resources do.

Custom resources also store their JSON data without following the typical Kubernetes API behavior of pruning unknown fields, and this makes CRDs a little bit different, because it could also lead to potential security and general data-consistency concerns: it is unclear what is being stored inside of etcd. This enhancement adds pruning of all fields which are not specified in that OpenAPI validation schema, and pruning is now moving forward to stable in 1.16. This is going to enforce consistency of the data stored inside of etcd, and it means that objects cannot suddenly render themselves inaccessible because of unexpected data breaking decoding, or anything of that nature. Even if unexpected data inside of etcd is of the right type and does not break decoding, it has not gone through validation, and an admission webhook probably either does not exist for these CRDs or would not have implemented the pruning behavior itself. So pruning at the decoding step enforces this, and really it's a countermeasure against security attacks that could make use of knowledge of future versions of APIs with new security-relevant fields: those fields could suddenly come alive and lead to unknown or unallowed behavior.
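As a hedged illustration of what such a structural schema with a default could look like, here is a small sketch using the apiextensions.k8s.io/v1 Go types that went GA in 1.16; the field names and values are made up for the example.

    package example

    import (
        apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
    )

    // widgetSchema is a structural OpenAPI v3 schema: fields not listed here
    // are pruned on write, and spec.replicas is defaulted to 1 during
    // deserialization, just like a native resource would be.
    func widgetSchema() *apiextensionsv1.CustomResourceValidation {
        return &apiextensionsv1.CustomResourceValidation{
            OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
                Type: "object",
                Properties: map[string]apiextensionsv1.JSONSchemaProps{
                    "spec": {
                        Type: "object",
                        Properties: map[string]apiextensionsv1.JSONSchemaProps{
                            "replicas": {
                                Type:    "integer",
                                Minimum: float64Ptr(1),
                                Default: &apiextensionsv1.JSON{Raw: []byte("1")},
                            },
                        },
                    },
                },
            },
        }
    }

    func float64Ptr(f float64) *float64 { return &f }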
Moving on to the webhook conversion for custom resources, which is also graduating to stable. This plays well with the previous slide on defaulting and pruning, because you can default and prune with a webhook conversion, but it's not native-style and it requires additional work to make that happen. The existing problem is when a webhook needs to make a request to another service but the API hasn't progressed or changed, so this helps CRD users. This webhook can mutate or validate the object if you want it to as well, and it supports namespace selectors, which is great, because otherwise it's almost an all-or-nothing in that namespace. You may not want to get all the activity that's happening, so you're able to extend that to include a single object selector if you want to as well.

Moving on, not to CRDs this time but to another feature, called bookmark support.
B
We had talked about this in 1.15, but it is graduating to beta in 1.16, and this is about the watch API. The watch API is one of the fundamentals of the Kubernetes API: it's there to retrieve a collection of resources using a list and then initiate a watch starting from the particular resource version returned by that list operation. If a client's watch is disconnected, a new one can be restarted from the resource version the last one returned. What bookmark support is going to do is try to make API server performance a little bit better.

One of the problems that existed was that different scalability tests saw that restarting these watches caused a significant load on the API server, even when a watcher was only interested in a small percentage of changes; in extreme cases you could even fall out of the history window, and "resource version too old" errors could potentially occur. The reason is that the last item received by the watcher may have a resource version of, say, RV1, and we may know that there aren't going to be any changes relevant to this particular watcher up to, say, RV2, but we don't have any way of communicating that to the watcher. So, as a result, when restarting a watch, the client again sends RV1 as a starting point and we have to process all the events between RV1 and RV2 again.

The goal here is to reduce the load on the API server by minimizing the amount of unnecessary watch events that need to be processed after restarting a watch. This introduces a new event type called a bookmark, and a bookmark represents the information that all objects up to a given resource version have been processed for a given watcher. So even if the last event of the other types contained an object with resource version one, receiving a bookmark with resource version two means that there aren't any interesting objects for that watcher in between those states.
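A hedged sketch of how a client could opt in to bookmarks, assuming a 1.16-era client-go where Watch still takes only ListOptions and bookmarks have to be requested explicitly; the pod resource and namespace are just for illustration.

    package example

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/watch"
        "k8s.io/client-go/kubernetes"
    )

    // watchWithBookmarks runs one watch and returns the latest resourceVersion
    // seen. Bookmark events only advance the resourceVersion, so a restarted
    // watch can begin from a much newer point instead of replaying old events.
    func watchWithBookmarks(client kubernetes.Interface, rv string) (string, error) {
        w, err := client.CoreV1().Pods("default").Watch(metav1.ListOptions{
            ResourceVersion:     rv,
            AllowWatchBookmarks: true, // opt in to BOOKMARK events
        })
        if err != nil {
            return rv, err
        }
        defer w.Stop()

        for ev := range w.ResultChan() {
            pod, ok := ev.Object.(*corev1.Pod)
            if !ok {
                continue
            }
            if ev.Type == watch.Bookmark {
                // Only metadata.resourceVersion is meaningful on a bookmark.
                rv = pod.ResourceVersion
                continue
            }
            rv = pod.ResourceVersion // ordinary Added/Modified/Deleted handling
        }
        return rv, nil
    }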
Then, looking at server-side apply, which is graduating to beta here: everybody knows that kubectl apply is a core part of the Kubernetes config workflow, but it can be buggy, it can be hard to fix from time to time, and you can end up with some potential conflicts. It's been around for a while, so it's always good to have a few more eyes on it if this is something that's interesting to you as well.

The last one in this section is deprecating and removing the selfLink field. There's been no use of selfLink for a while, so there's no compelling reason for having the selfLink field in there. When modifying or reading an object from the API server, selfLink is set to exactly the URL that was used to perform that operation. As part of making sure that we follow the Kubernetes process for demoting and deprecating features, this field is deprecated in 1.16 before it is removed later.

All right, moving on to SIG Cloud Provider.
B
As everybody probably knows, the in-tree cloud provider implementations are being removed in the future, and this involves a large amount of code that is used in many places inside of Kubernetes and in the in-tree providers themselves. So, in order to prepare for this to eventually be completely vendor agnostic, it's going to be helpful to see what that removal actually entails and to verify that Kubernetes will continue to function without it. Doing so is a bit tricky without ensuring that this in-tree provider code is not being used in some unexpected fashion, such as through a side channel during init methods or anything like that. So what this is going to do is build Kubernetes binaries without the in-tree cloud provider packages, and this allows verification and provides additional experimentation with smaller and cheaper binaries for anybody that's interested in just using out-of-tree providers, or perhaps running clusters with no cloud provider at all. The goal is to enable Kubernetes without in-tree cloud providers and without forking, so this helps test out out-of-tree providers, simulates the future removal of the in-tree code, and enables experimentation with cloud-provider-less clusters. If you're running cloud-provider-less clusters, it would be cool to see some of those use cases out there, so if you have something, feel free to share it with the Kubernetes community; I think that'd be really cool to see.
So, SIG Cluster Lifecycle: kubeadm for Windows is net new in alpha. On Linux, kubeadm is quickly able to join nodes to the cluster, and the intent here is to propose a design that implements some of that same functionality for Windows. You're going to see a PowerShell script to install and run the kubeadm prerequisites on Windows nodes, and it should also be noted at this time that this document and this enhancement propose enabling support for Windows worker nodes utilizing kubeadm; you're going to see this as it continues to progress.
B
Also new for kubeadm is kustomize support: you can patch the static pod manifests that are stored inside /etc/kubernetes/manifests after kubeadm init and join, so you can utilize kustomize to make that happen for you now.

Moving on to SIG Instrumentation: there's a huge metrics overhaul that's now taking place in 1.16. Of the metrics inside of Kubernetes today, there are a lot that just do not follow the official Kubernetes instrumentation guidelines, and this is for a number of reasons. Some of the metrics were created before the guidelines were implemented, which happened around two years ago; some of it is just things missed in code reviews. But beyond the instrumentation guidelines, there are several violations of the Prometheus instrumentation best practices. So, in order to have consistently named and high-quality metrics, this is going to start overhauling the metrics exposed by Kubernetes and making them consistent with the rest of the ecosystem. Moving even further beyond this, you can start thinking about how to join these metrics together, which shouldn't be as difficult anymore. This also moves the workqueue metrics to follow the Prometheus best practices and naming conventions. So if you are on the outside looking in from a metrics standpoint, and you are pulling metrics from Kubernetes, this is something that you should probably take a look at to see whether your product or feature can utilize it as well.
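As a loose illustration of the Prometheus conventions being adopted (this is generic instrumentation code, not code from Kubernetes itself): snake_case names, base units such as seconds, and a _total suffix on counters.

    package example

    import (
        "time"

        "github.com/prometheus/client_golang/prometheus"
    )

    var (
        // Histogram in base units (seconds), with the unit suffix in the name.
        requestDuration = prometheus.NewHistogramVec(
            prometheus.HistogramOpts{
                Namespace: "myapp",
                Name:      "request_duration_seconds",
                Help:      "Time spent serving requests, in seconds.",
                Buckets:   prometheus.DefBuckets,
            },
            []string{"verb", "code"},
        )
        // Counter with the conventional _total suffix.
        requestsTotal = prometheus.NewCounterVec(
            prometheus.CounterOpts{
                Namespace: "myapp",
                Name:      "requests_total",
                Help:      "Total number of requests handled.",
            },
            []string{"verb", "code"},
        )
    )

    func init() {
        prometheus.MustRegister(requestDuration, requestsTotal)
    }

    // observe records one handled request.
    func observe(verb, code string, d time.Duration) {
        requestDuration.WithLabelValues(verb, code).Observe(d.Seconds())
        requestsTotal.WithLabelValues(verb, code).Inc()
    }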
Moving on to SIG Network: since we already hit the IPv4/IPv6 dual stack, we'll move on here to looking at finalizer protection for service load balancers. If the cluster has the cloud provider integration enabled, then upon the deletion of a particular service, the actual deletion of the resource will be blocked until this finalizer is removed, and the finalizer will not be removed until the cleanup of the load balancer resources is considered finished by the service controller itself.

The EndpointSlice API is net new alpha in 1.16 as well. In the current Endpoints API, one object instance contains all of the individual endpoints of a particular service. So whenever a single pod in a service is added, updated, or deleted, the whole Endpoints object, whether the other endpoints changed or not, is recomputed, written to storage inside of etcd, and then sent to all the watchers like kube-proxy, and this leads to two major problems. First, storing multiple megabytes of endpoints puts strain on multiple parts of the system, because there is no paging and it's a monolithic watch and storage design; the maximum number of endpoints is bounded by the Kubernetes storage layer, which is etcd, and that has a hard limit on the size of a single object, which is 1.5 megabytes by default. This means an attempt to write a larger object when that limit is hit will be rejected. Additionally, there's a similar limitation in the watch path in the Kubernetes API server. So for a Kubernetes service, if its Endpoints object is too large, that endpoint update will not be propagated to the kube-proxies, and thus iptables and IPVS won't be reprogrammed, and you're going to have performance degradation in large Kubernetes deployments if this happens. Not being able to efficiently read and update these individual endpoint changes can lead to endpoint operations essentially not being able to happen.
B
Moving on to SIG Node: ephemeral containers. Now, if you look at the issue number down here, it says 277; we've passed the 1,000-issue mark inside of the enhancements repo, so seeing something numbered 277 means that this has been around for quite a long time, and it's just now being introduced as alpha in 1.16. So what does this do? Well, for many developers of cloud-native applications out there, you want to treat Kubernetes as an execution platform for binaries produced by any type of build system. You can forgo the scripted OS installation of traditional Dockerfiles, and instead you might copy the output of the build system into a container image such as scratch or a distroless image, which gives the advantages of minimal, immutable, and smaller images. The disadvantage of using containers built from scratch is that the binaries they provide make it difficult to troubleshoot these types of containers, and for ops folks it often becomes the case that the person troubleshooting the application is not the person who built it. So people who want the ability to attach a known-good or automated debugging environment to a pod can now do that as well.
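As a sketch, the new API type looks roughly like this in the 1.16 core/v1 Go package. In 1.16 the ephemeral container has to be submitted through the pod's ephemeralcontainers subresource rather than by editing the pod spec directly, and the container name, image, and target below are placeholders.

    package example

    import (
        corev1 "k8s.io/api/core/v1"
    )

    // debugContainer describes a throwaway debugging container that can be
    // attached to a running pod whose own image is scratch or distroless.
    func debugContainer() corev1.EphemeralContainer {
        return corev1.EphemeralContainer{
            EphemeralContainerCommon: corev1.EphemeralContainerCommon{
                Name:    "debugger",     // placeholder name
                Image:   "busybox:1.31", // placeholder debug image
                Command: []string{"sh"},
                Stdin:   true,
                TTY:     true,
            },
            // Optionally share the namespaces of an existing container.
            TargetContainerName: "app",
        }
    }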
B
Looking at pod overhead, something else that's new in 1.16: we all know that pods have some resource overhead, and in the traditional Linux container or Docker approach, the accounted overhead is limited to the pause container. But there's also overhead that accrues to various system components, including the kubelet for control loops, Docker, the kernel for various resources, and fluentd for logs, and the current approach is to reserve a chunk of resources for the system components and ignore the overhead from the pause container. This doesn't scale well with sandbox pods, where the pod overhead potentially becomes much larger, maybe a hundred megabytes: for example, Kata may run a guest kernel, and it's got its kata agent, an init system, and so on and so forth. Since this overhead is pretty big, it's hard to ignore, so we need to account for it, starting with quota enforcement and scheduling. This new feature is going to be able to take care of that and actually account for that overhead in there as well.
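A hedged sketch of how that overhead could be declared, assuming the 1.16 node.k8s.io/v1beta1 types with the PodOverhead feature gate enabled; the handler name and the resource amounts are purely illustrative.

    package example

    import (
        corev1 "k8s.io/api/core/v1"
        nodev1beta1 "k8s.io/api/node/v1beta1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // kataRuntimeClass charges every pod that uses this runtime an extra
    // 120Mi of memory and 250m of CPU for its sandbox (guest kernel, agent,
    // init system) during quota enforcement and scheduling.
    func kataRuntimeClass() *nodev1beta1.RuntimeClass {
        return &nodev1beta1.RuntimeClass{
            ObjectMeta: metav1.ObjectMeta{Name: "kata"},
            Handler:    "kata", // illustrative handler name
            Overhead: &nodev1beta1.Overhead{
                PodFixed: corev1.ResourceList{
                    corev1.ResourceMemory: resource.MustParse("120Mi"),
                    corev1.ResourceCPU:    resource.MustParse("250m"),
                },
            },
        }
    }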
B
The node topology manager is also net new in 1.16. Today, multiple components inside the kubelet make decisions about topology-related assignments. The CPU manager makes decisions about the set of CPUs a container is allowed to run on; it's been implemented since Kubernetes 1.8 with a static policy, which doesn't change the assignments for the lifetime of a container. Then you have the device manager, which makes device assignments to satisfy container resource requirements; generally, devices are attached to one peripheral interconnect. So if the device manager and the CPU manager are misaligned, communication between the CPU and the device can incur an additional hop over the processor interconnect, and you're going to experience even more latency. You've also got things like CNI, and NICs that support single-root I/O virtualization; these kinds of virtualization functions have affinity to a particular socket, and there are measurable performance ramifications if this isn't done properly.

So the goal here is to create a preferred socket affinity for containers, based on the input from the CPU manager and the device manager, and to provide an internal interface that they can integrate with to be more topology aware; the kubelet is going to be able to coordinate some of these components too. For example, if a user asks for a fast network in a virtualized environment, it will automatically get all the various pieces coordinated and co-located on a socket, things like huge pages included. It's all going to do socket alignment, with the CPU manager and the device manager working in lockstep here.
Next, "add a startup probe (liveness probe hold-off) for slow-starting pods." I know it's a long one, but this is also new in 1.16, and this is because we have slow-starting containers, meaning ones that just require a significant amount of time to start. The problem is that you need to give them enough time to start before the liveness probe fails past its failure threshold, which would trigger a kill by the kubelet before the container has a chance to be up. There are ways to handle this situation with the current API, but none really provide an answer for a container that is actually stuck in a deadlock; so what we can do now is have this startup probe hold off the liveness probe.
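A minimal sketch of a slow-starting container protected by the new startupProbe (alpha in 1.16, behind a feature gate), assuming the 1.16-level core/v1 types where the probe handler field is still named Handler; the endpoint, port, image, and timings are made up for illustration.

    package example

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // slowStarter gets up to 30 * 10 = 300 seconds to come up before the
    // kubelet starts enforcing the ordinary liveness probe.
    func slowStarter() corev1.Container {
        probeHandler := corev1.Handler{
            HTTPGet: &corev1.HTTPGetAction{
                Path: "/healthz",           // illustrative endpoint
                Port: intstr.FromInt(8080), // illustrative port
            },
        }
        return corev1.Container{
            Name:  "legacy-app",
            Image: "registry.example.com/legacy-app:1.0", // placeholder image
            StartupProbe: &corev1.Probe{
                Handler:          probeHandler,
                FailureThreshold: 30,
                PeriodSeconds:    10,
            },
            LivenessProbe: &corev1.Probe{
                Handler:          probeHandler,
                FailureThreshold: 3,
                PeriodSeconds:    10,
            },
        }
    }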
B
Next up: it's become clear that heterogeneous clusters will not be uncommon, and we need to support a better user experience there. The introduction of Windows nodes presents an immediate use case for heterogeneous clusters, because some nodes are going to be running and supporting Windows while others are supporting Linux, and there are inherent differences in the operating systems, so it's natural that you want to support different runtimes. For example, Windows nodes may support Hyper-V sandboxing while Linux nodes support Kata Containers, and native container support is going to vary on each, with runc for Linux and runhcs for Windows. Perhaps some users just wish to keep sandboxed workloads and native workloads separate. This is just another example of being able to drive that kind of scheduling inside of a single cluster.
B
Next, it can be hard to get an ideal result today when you want to spread pods evenly across different topology domains. You might want to do this because you want high availability or perhaps cost savings, and maybe you want to do regular rolling updates or scale out replicas, and previously that could become problematic. So the goal is to have even spreading, and this is calculated among the pods themselves instead of at the apps API level, such as a Deployment or ReplicaSet, and it can now be either a hard or a soft requirement. As an application developer, perhaps you want your application pods to be scheduled onto topology domains as evenly as possible, whereas the current status is that the pods may end up stacked onto one specific topology domain. Or maybe you want your pods to not coexist with specific other pods, which you can do with something like anti-affinity. In some cases it might be favorable to tolerate pods that violate the spreading, and you're able to say whether that should or should not be allowed.
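A rough sketch of what that even-spreading request looks like on a pod spec with the 1.16 alpha field (it needs the EvenPodsSpread feature gate); the label and topology key are only examples.

    package example

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // spreadAcrossZones asks the scheduler to keep pods labelled app=web
    // balanced across zones with a skew of at most one pod. DoNotSchedule
    // makes it a hard requirement; ScheduleAnyway would make it a soft one
    // that tolerates violations.
    func spreadAcrossZones() []corev1.TopologySpreadConstraint {
        return []corev1.TopologySpreadConstraint{
            {
                MaxSkew:           1,
                TopologyKey:       "topology.kubernetes.io/zone", // example key
                WhenUnsatisfiable: corev1.DoNotSchedule,
                LabelSelector: &metav1.LabelSelector{
                    MatchLabels: map[string]string{"app": "web"},
                },
            },
        }
    }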
B
So here's another long one: "extending the requested-to-capacity-ratio priority function to support resource bin packing of extended resources," again net new alpha in 1.16. What we want to be able to do here is run more workloads on Kubernetes that use some sort of accelerator devices, and today the default scheduler spreads pods across nodes, which fragments these extended resources, and workloads can potentially remain stuck in a pending state. So what this does is let the scheduler place pods using a best-fit policy, by using the requested-to-capacity-ratio priority function for extended resources. As the graphic shows, the default scheduler in most cases will schedule the pods as follows.
B
Moving on, we can see more things happening with Windows, and of course this next one is also net new in alpha; we want to see Windows support continue to progress as well. Previously, for persistent storage requirements on Windows, you had to depend on PowerShell-based flex volume plugins, maybe maintained by Microsoft or somebody else, and they were only usable over the SMB or iSCSI protocols; or you had the in-tree plugins inside the Kubernetes core. But as we all know, those are starting to become deprecated and we're trying to move things out of tree. Support for CSI in Kubernetes reached GA status in 1.13, and Windows support went GA in 1.14, so the goal is to make sure that Windows nodes also support CSI plugins, so they can get all the benefits that this ecosystem has to offer for persistent storage requirements, and to make sure that Windows nodes are seen as first-class citizens.
B
Also net new in 1.16 is RunAsUserName for Windows. Today the username a Windows container runs as is not surfaced as a field in the pod or container spec that an operator can specify, so this is going to give you the ability to specify the desired username in the pod and container spec fields and pass it on to the configured Windows runtime. This API captures Windows-OS-specific security options from the perspective of Windows workload identity and containers. We already talked about the ability to use GMSA; this complements that by controlling which user the container processes run as.
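A hedged sketch of how that username could be set, assuming the 1.16 core/v1 types where the Windows security context options gained a runAsUserName field (alpha, behind a feature gate); the username value is illustrative.

    package example

    import (
        corev1 "k8s.io/api/core/v1"
    )

    // windowsPodSecurity asks the Windows runtime to start the container
    // processes as a specific local user instead of the image default.
    func windowsPodSecurity() *corev1.PodSecurityContext {
        user := "ContainerUser" // illustrative Windows username
        return &corev1.PodSecurityContext{
            WindowsOptions: &corev1.WindowsSecurityContextOptions{
                RunAsUserName: &user,
            },
        }
    }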
B
So what's coming next? The 1.17 release of Kubernetes is already in progress; we're already a few weeks into it. The enhancement freeze has already passed; that was back on October 15th. If you're interested to see exactly what features will be going into Kubernetes 1.17, there's the link to the tracking spreadsheet right there. The targeted release date for Kubernetes 1.17 is December 9th, 2019, just in time, or should I say right after, KubeCon.
B
Andres is asking: are there any actions being taken by the Kubernetes team regarding the incompatibility with Helm, as reported on Helm's GitHub? The SIG Release team doesn't do anything directly with Helm; that is outside of SIG Release, so there's nothing that we will be able to answer with regards to that.
C
So those APIs were marked for deprecation and they went through the deprecation policy, which was three releases. Back in 1.12 or 1.13 we said those APIs were deprecated, and then they were subsequently deleted. Helm had the opportunity to update their baseline installers to use the updated APIs, which have been around since 1.9 or 1.10, I believe, so I can dig into that further.
B
Okay, with that, I don't see any other questions out there, so thank you everybody for joining in. Again, this recorded session will be available later on the CNCF webinar page, and it will also be available on YouTube. Thank you again everybody, and enjoy the rest of your week. Thanks.