From YouTube: Webinar: Kubernetes 1.19
Description: The webinar will cover the latest release, Kubernetes 1.19.
Presented by: Kubernetes Release Team
A
Okay, let's get started. Welcome, everyone, and thank you very much for joining us today. Welcome to today's CNCF webinar, "What's New in Kubernetes 1.19." I'm Jerry Fallon and I'll be moderating today's webinar. We'd like to welcome our presenters today: Nabarun Pal, infrastructure engineer at Clarisights; Taylor Dolezal, senior developer advocate at HashiCorp; and Max Körbächer, manager of cloud native engineering at Storm Reply. Just a few quick housekeeping items before we get started: during the webinar you are not able to talk as an attendee, but there is a Q&A box at the bottom of your screen.
B
And we have Nabarun and Taylor, who were both key figures in moving the release: Taylor is the release lead and Nabarun is the enhancements lead. So, for today's agenda: we're first looking at the 1.20 release and what's coming up there; then we're moving on to the 1.19 stats; then going over the 1.19 highlights, like how we came to this great name for the Kubernetes 1.19 release, but also some of the really interesting changes and what is new in Kubernetes. Finally, we move on to all the different updates from the different SIGs, and in the end we will discuss a few of your questions and hopefully also find some answers for them. Please remember that you can ask us questions at any time. I will try to answer some of them during the session, and if we have some which we would really like to discuss, or which need some broader discussion, we will move them to the end to give them some further insight. With that said, please go ahead.
C
I'll go ahead and cover some of the 1.20 release dates that we have coming up. Jeremy Rickard is leading the 1.20 release; I just spoke with him yesterday and we talked a little bit. 1.20 will likely be the last release of 2020, and I don't anticipate anything jumping in there and surprising anyone. They're working on defining a test freeze; I believe last week there was a PR going in for that. The release is targeted for Tuesday, December 8th. The enhancements freeze is Tuesday, October 6th, and all of the shadows for that release have been onboarded, so that kicked off on September 14th. The original target date was the eighth of December, and that is still looking like it's going to be the release date. Happy to have you all here today, and really excited to cover what came out in 1.19 and answer all your questions on that. With that, I'm going to turn it over to Nabarun.
D
Hi everyone. I'll give you a brief overview of what enhancements we shipped in the last release. We shipped a total of 34 enhancements in 1.19. By enhancements we mean features in Kubernetes: they can be API changes, usability fixes, internal test changes, or conformance changes. We usually categorize changes, or enhancements, or features, into three broad categories: alpha, beta, and stable. We had 10 stable enhancements. What this means is there is nearly 100% confidence that these features are here to stay; barring some changes, they may be improved, but not substantially in ways that would affect users. We have 15 enhancements graduating to beta in 1.19. Features are usually tagged as beta when their owners feel confident that they may go to stable, or generally available, in the next few releases, so the API remains more or less constant between beta and stable. And then we have nine alpha features. These are mostly brand-new features in the Kubernetes project. There may be new feature additions, which we will go through later on when we do the SIG updates. With that, I would hand over to Taylor again for some highlights of the 1.19 release.
C
Well, thank you so much, Nabarun. With 1.19, the release name that was chosen is "Accentuate the Positive." I have to give a huge shout-out to Hannabeth Lagerlof for designing this logo and truly capturing what it felt like to be on the release team. During this time we faced a lot of uncertainty, and I'm sure you've heard that word numerous times over the course of this year, but it was really true. We started the 1.19 release in one world and ended it in a completely different one. It was also quite a marathon release, and I believe it was the longest release that we've had to date, because we really just wanted to focus on the community. That's why it got stretched out so long: we wanted to give time so that people were able to really work through their enhancement proposals and features, and really focus on the community. Because if we don't have the community working behind us, and we're not working well with each other and communicating, we don't really have an open source project. The community makes the project, not the other way around. So I was really happy to see everyone in good spirits while working on this project, and while being sensitive to all that was going on around the world. Within this logo you can see there are some fun little Easter eggs. If you look close enough you can see the Kubernetes logo, a hat, and hints at things that came out during the 1.19 release that a lot of people enjoyed and found fun. And I'm pretty sure, though I haven't asked these characters here, that they're using a green screen, because I don't think I've ever seen that part of the beach just yet. But I'll ask them later.
C
In terms of new things, we are going to cover each of the features individually, but just at a glance, some new things are: structured logging, which I'm quite excited about. So looking at JSON logging, or just standard out and standard error, that's going to make it a lot easier for ingest, and as of 1.19 you don't have to write any wild regex rules or anything like that on that front anymore. Very excited about that. Storage pools for capacity management: storage got a big uplift in 1.19, and there are a lot of new rules around how to deal with storage. It's not treated as just nebulous, infinite storage; there are ways to atomically control how it is used and utilized within Kubernetes 1.19. Again, we'll talk more about that, but it's also something I'm quite excited about. Allowing users to set a pod's hostname to its FQDN: this will help with transitioning legacy systems, or other things that need that fully qualified domain name. So if you have a service that was called foo, you can get a fully qualified domain name when you set that and call it within the pod itself, rather than just getting the service name. Allowing CSI drivers to opt in to volume ownership change: that again is a storage interface improvement, in that same vein of having a little bit more control over storage and how that's defined; and the same with generic inline volumes. But again, we'll cover more of that as we proceed.
C
1.19 also marks a brand-new support model for us. Previous releases were only supported for nine months, which worked really well with the three-release cycle: it was one release or two releases back, so at any given point in time we were working on a release, had one out, and had one behind. This is the first time that we are moving to one-year support, and this is in reaction to a lot of what the community has expressed: at some organizations it's much more difficult, even though you have a year, to uplift a lot of these workloads and get them ready for the next version, or n-plus-something versions, of Kubernetes. We heard that and wanted to react on that front, so we're very excited to announce that with 1.19 we're going to have that year of support, to allow people a little bit more time to shift their workloads over and deal with things like the deprecations in 1.16 and other things of that note. Again, we hope that this makes things easier for most teams. With that, let's jump into the SIG updates, and to kick us off I'm going to turn it back over to Nabarun.
D
Thank you, Taylor, for all the highlights from 1.19; that was really awesome. I'll go through all the SIG updates. Basically, we have categorized all the enhancements into SIGs. When any Kubernetes feature is added to a release, it has to be driven by a group of people, so the Kubernetes community is structured into logical groups called Special Interest Groups, which actually own code in particular areas, and that's why we're grouping the updates like this. The first SIG to come up is API Machinery.
D
The schema of conditions has historically varied a lot depending on the resource. With this release, there is a feature shipped which specifies some guidelines, or a default, that API designers can use: there is a standard Condition type for conditions in status objects, and designers can also derive more attributes out of it. This is graduating to stable, so it is available in 1.19 by default. The next feature in API Machinery is a warning mechanism for deprecated APIs. Kubernetes follows a graduation mechanism for APIs, going from alpha, then beta, to stable releases, even in the case of REST APIs.
D
What you see here is the Ingress API. The Ingress API resides in two different API groups: one version is v1beta1, and the other is the stable v1. If you try to access the v1beta1 Ingress resource, you will get a deprecation warning saying, hey, this resource will be phased out in 1.22; can you please use the new URL. As you see here, when you use the networking.k8s.io/v1beta1 Ingress, it actually prompts you to use the stable one, networking.k8s.io/v1 Ingress. We have a beautiful feature blog on this: when we post the slides, you can just click on the slide where we wrote "feature blog" and see the blog on the Kubernetes website.
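In practice, acting on the warning usually just means bumping the manifest's group/version; a minimal sketch (the resource name is illustrative, and the v1 API also reshapes the backend fields, as covered in the SIG Network updates):

```yaml
# Old group/version, which triggers the deprecation warning on apply:
#   apiVersion: networking.k8s.io/v1beta1
apiVersion: networking.k8s.io/v1   # the stable group/version
kind: Ingress
metadata:
  name: example   # illustrative
```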
D
Moving on, the next SIG in focus is SIG Architecture. The first thing there is clarifying the use of node-role labels within Kubernetes, and it also attempts to migrate the components which actually use those labels. Traditionally there has been a label of the form node-role.kubernetes.io/something, and it was seen that several components, even inside the Kubernetes architecture, used that to change behavior. But the purpose of the label, and I should not call it an API but rather an attribute for consumers of Kubernetes, was to let people modify behaviors or build things around it. In this Kubernetes release, it's a beta rollout.
D
This one does not exactly conform to our alpha/beta/stable scheme, because it's about internal policy, so it's more of a staged thing. What happens is, projects inside or outside the Kubernetes ecosystem who actually use Kubernetes as a product need a stable and reliable foundation. In order to achieve that, when you run the Kubernetes conformance tests, which verify whether a distribution conforms to the spec or not, the tests are now not run using beta features; you no longer need beta features enabled to pass conformance.
D
The
next
feature
is
very
important.
In
a
sense,
it
is
kind
of
related
to
the
deprecation
warning.
So
there
have
been
instances
in
the
past
where
certain
kubernetes
resources,
like
let's
say
cron
jobs
or
investors
or
disruption
budgets,
policies
basically
or
disruption
budget
policies
they
have
stuck
in
beta
for
a
long
time.
Now
it's
like
a
double
sword.
So,
in
one
sense,
when
people
ship
it
to
beta,
they
get
enabled
by
default
in
the
kubernetes
distributions.
D
But
if
you
see
it
like
that,
there
is
little
incentive
to
actually
make
it
to
ga,
but
then
this
may
lead
to
instabilities
or
user
friction.
D
So with this release it has been mandated as a policy that, along with any new beta features coming in as of 1.19, old beta APIs have to reach GA and deprecate the beta, or ship a new beta version: for example, going from v1beta1 to v1beta2, or going to v1. Some resources have already been transitioned in 1.19, like Ingress, which went to beta many releases back; I'll come to that later on when I give the SIG Network updates.
C
Next up is SIG Auth, with kubelet client certificate rotation. With the kubelet client certificate, there was an out-of-cluster system set up to handle that rotation, and while that was done automatically, it was not as efficient as it could be, and it didn't operate within the cluster. So even though it was secure, we get even more security by having this happen within the cluster. This feature has now moved to stable, and as the expiration date comes up, the certificate automatically gets rotated.
Next, the CertificateSigningRequest API. The certificates API handles the root certificate authority used to encrypt traffic between a lot of the core components within Kubernetes, and this now adds a registration authority such that the signing process is a little bit more secure, and you now have that endpoint able to be called if you want to include this in any of your operators or machinery within core Kubernetes. With that I'm going to... oh, Cluster Lifecycle is mine as well.
C
So the first new feature that we have here is the new kubeadm ComponentConfig scheme; kubeadm's configuration management is getting a huge uplift. Some of those changes include stopping the defaulting of component configs and delegating config validation. This is a new feature, and kubeadm is getting a lot of work done to it in the releases to come, which I'm quite excited about: a little bit more granular configuration, and hitting on some of the things that we've seen problems with, or rough edges, in the community. The next one is customization with patches, which I also found quite interesting.
C
A new flag, --experimental-patches, has been added, very similar to the kubectl style of declaration. So if you want to set different values for dev, test, prod, or other environments, you can do so. Once this moves out of alpha and into beta, that flag then becomes --patches. With that, I would like to hand it back over to Nabarun to talk about Instrumentation.
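As a sketch of how the patch mechanism works (the directory path, file name, and resource values here are illustrative assumptions): kubeadm reads patch files from a directory, with each file named after the component it targets and the patch type:

```yaml
# /etc/kubernetes/patches/kube-controller-manager+strategic.yaml
# A strategic-merge patch layered over kubeadm's generated static pod manifest.
apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
    - name: kube-controller-manager
      resources:
        requests:
          cpu: 200m   # e.g. raise the CPU request for a busier prod cluster
```

Applied with something like `kubeadm init --experimental-patches /etc/kubernetes/patches` (or `--patches` once the flag graduates).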
D
Thank you, Taylor. Next, with the Instrumentation special interest group, we have a beautiful thing called Events.
D
If you look at Kubernetes, there are a lot of resources or components which generate events, but one thing workloads need to ensure is that the rate at which they churn out events does not impact other parts of the cluster, and users should also be able to track what changes are happening.
D
It can be related to the verbosity of the events, or, say, I want to find out which component of the cluster, or which controller, actually generated a given event. With this release, a redesign of the Event API happened.
D
You can go ahead and read the enhancement proposal; it has a lot of details on how the structure has changed. And there is one more really good announcement coming up next which deals with similar lines, which is structured logging, as Taylor spoke about earlier when giving the highlights.
D
If you look at Kubernetes logs in controllers or any other component, you will see that they were traditionally plain strings. Now there is a backward-compatible change in 1.19; it's also opt-in, so if you want to use structured logging, you have to enable the feature flag which enables structured logging.
D
What this does is give you an additional function called InfoS in klog, where you can basically specify object key-value pairs, which will be parsed out with the references later on when you see the logs. As you see, we have added an example: if you do klog.InfoS("Pod status updated", ...), you say that the key is "pod" and the object is klog.KObj(pod), and then again "status" is equal to the status, and it comes out beautifully into the logs.
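To make the key-value idea concrete, here is a stdlib-only toy that mimics the output shape of klog's InfoS call (the real function lives in k8s.io/klog/v2; this sketch only illustrates the format, and the pod name in it is made up):

```go
package main

import (
	"fmt"
	"strings"
)

// infoS is a toy re-implementation of the klog.InfoS call shape:
// a message followed by alternating key/value pairs, rendered as
// msg followed by key="value" tokens, so log pipelines can parse
// fields without regexes.
func infoS(msg string, keysAndValues ...interface{}) string {
	var b strings.Builder
	fmt.Fprintf(&b, "%q", msg)
	for i := 0; i+1 < len(keysAndValues); i += 2 {
		fmt.Fprintf(&b, " %v=%q", keysAndValues[i], fmt.Sprint(keysAndValues[i+1]))
	}
	return b.String()
}

func main() {
	// Mirrors the slide's example:
	//   klog.InfoS("Pod status updated", "pod", klog.KObj(pod), "status", status)
	fmt.Println(infoS("Pod status updated", "pod", "kube-system/coredns", "status", "ready"))
}
```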
D
Next up is SIG Network. SCTP support went in as alpha a few releases back, and it has graduated to beta. This feature is now available with the feature gate enabled by default, so you don't need to make any changes when bootstrapping the cluster in order to use SCTP protocol ports.
D
If you see, I put in a screenshot of a Service resource where you say, hey, talk to my app, but the protocol is actually SCTP. This is very useful in the telecommunications world, where they use SCTP a lot for switching. One interesting thing is that this feature is also slated to go GA in the current 1.20 release, which is a great win.
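As a sketch of what that Service looks like (the names and port number are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: sctp-switch   # illustrative
spec:
  selector:
    app: signaling
  ports:
    - protocol: SCTP   # alongside the usual TCP and UDP
      port: 3868
      targetPort: 3868
```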
Moving ahead, we have the EndpointSlice API; this is a very substantial change.
D
What traditionally happens is that every Service resource has a way to track the pods to which it has to direct the traffic coming into the service. It used to do that using something called Endpoints objects; you can think of Endpoints objects as basically arrays of references to pods.
D
Say you have a thousand pods pointed to by the same Service. When you want to update a reference to one of those pods, it becomes a bulky data transfer across the network: you have to send around a megabyte of Endpoints blob that you need to modify and then patch back to the resource.
D
Now, instead of Endpoints, you can use something called EndpointSlice: it basically chunks the endpoints out into slices, as simple as that. It is enabled in kube-proxy by default with this release, and there's a beautiful blog on the website again which walks you through a real-life scenario explaining why this was needed. I urge everyone to actually go ahead and read it. Obviously, you can also go to the enhancement proposal to see the intricate details of the same.
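As a sketch, an EndpointSlice groups a subset of a Service's endpoints into a separate, individually updatable object (the names and address are illustrative; the group/version follows the 1.19-era API):

```yaml
apiVersion: discovery.k8s.io/v1beta1
kind: EndpointSlice
metadata:
  name: example-abc   # illustrative
  labels:
    kubernetes.io/service-name: example   # ties the slice back to its Service
addressType: IPv4
ports:
  - name: http
    protocol: TCP
    port: 80
endpoints:
  - addresses: ["10.1.2.3"]
    conditions:
      ready: true
```

Updating one pod's entry now means patching one small slice instead of the Service's entire Endpoints blob.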
Going ahead, we have graduated Ingress to v1. As I was saying earlier, Ingress had been in beta since early Kubernetes; I think it went alpha around 1.1, fall 2015-ish, and since then it was in beta, but with this release it has reached GA. A very important change here is that earlier, in backends, you needed to put serviceName and servicePort as attributes of the structure. Now they have been shelled out into a service structure, and inside that you have name and port attributes.
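Sketching that field change (host and service names are illustrative): the v1 backend nests the service name and port where v1beta1 had flat serviceName/servicePort keys:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example   # illustrative
spec:
  rules:
    - host: foo.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:        # v1beta1 had serviceName/servicePort flat here
                name: foo
                port:
                  number: 80
```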
D
Going on: adding appProtocol to Services and Endpoints. This is a feature which I think went alpha a couple of releases back; then, in 1.17, it was added to ServicePort and EndpointPort as beta, and now it has also been added to Services and Endpoints. So now you don't need those arbitrary resource annotations; say you had some controller which watched those annotations and then acted upon them.
D
Now you have something called appProtocol, which makes things much easier for you. There have been instances where users have reported that the annotation approach actually creates incoherences; there is an issue linked, where I hyperlinked "user frustration." You can go ahead and see it; it's an issue in the main kubernetes/kubernetes code base repo.
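A sketch of where the field sits (names illustrative): it is declared per Service port, replacing the ad-hoc annotation a controller would otherwise parse:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web   # illustrative
spec:
  selector:
    app: web
  ports:
    - name: https
      port: 443
      targetPort: 8443
      appProtocol: https   # the application-layer protocol, as a first-class field
```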
Going forward, we have the SIG Node updates. With that, I'll hand back over to Taylor for some time.
C
Thank you, Nabarun. So, quite a few SIG Node enhancements, the first one of which is seccomp. A lot of people... I've seen a lot of demos on this, actually, and worked with David Rocco on one of these on a different stream as well. What this does is provide you the ability to set a seccomp profile for a pod via its security context.
C
It also allows for control of the privilege given to pods: you can put a seccomp profile onto that pod and include that, and you can either use one defined by the local runtime or set and configure your own. I wish you all lots of luck in configuring seccomp profiles; that's something I usually do, but typically in my nightmares. It is very, very critical, and something that you should do, but it is quite an effort.
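A minimal sketch of the 1.19-style field (pod and image names illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: audited   # illustrative
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault   # or Localhost, with a localhostProfile path
  containers:
    - name: app
      image: nginx
```

RuntimeDefault delegates to the container runtime's profile; a Localhost profile is where the real configuration effort lives.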
Moving on to Node Topology Manager.
C
For this feature, which has moved into beta, the use case is teams that have to spin up a lot of compute and need a low-latency response time. They need that low level of latency, and they prefer just one core on the CPU; they don't want to be broken up across multiple cores and risk the overhead in that time.
The next one is building kubelet without Docker, and that's exactly what it does: it's really just about removing the dependency on the docker/docker golang package, and allowing kubelet to compile and work without that Docker dependency. However, this feature is not focused on removing that code entirely at the current point in time.
C
The next one is allowing users to set a pod's hostname to its FQDN. We talked about this a little bit in the highlights, and it really is very much to help out with interoperability with legacy systems. It's very easy to set for your pods: just setHostnameAsFQDN: true. Those are my favorite types of features, the ones that are a little bit easier to enable than not.
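A minimal sketch (names illustrative); with this set, hostname inside the container returns the pod's fully qualified name rather than the short one:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: legacy-app          # illustrative
spec:
  hostname: foo
  subdomain: foo-svc        # a headless Service of this name provides the domain
  setHostnameAsFQDN: true   # alpha in 1.19, behind a feature gate
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
```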
Moving on to the kubelet feature: disabling accelerator usage metrics.
C
What this really is: with third-party device monitoring plugins handling this separately, and the PodResources API about to enter GA, the kubelet is not expected to gather accelerator metrics anymore. So really, this enhancement is just about that deprecation.
D
Thanks, Taylor. I assure you I won't switch back to you again for the rest of the updates. With Scheduling we have, I think, five features that have shipped. The first of them is graduating the kube-scheduler ComponentConfig to v1beta1.
D
To give you some small context on what ComponentConfig is, as you also saw in the kubeadm updates: the idea of component configs is that you can configure the Kubernetes components themselves with Kubernetes-resource-manifest kinds of things. This effort has been going on since the past year, I guess, under the Cluster Lifecycle working group on component standards, and here, in this specific enhancement, they were focusing on kube-scheduler.
D
I have put in a small, very basic snippet of how a ComponentConfig looks for kube-scheduler. This went beta in 1.19, and the KEP owner wants to soak it for at least two releases, so hopefully it will go GA in 1.21, or eventually.
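For reference, a minimal KubeSchedulerConfiguration sketch of the kind shown on the slide (the kubeconfig path is illustrative):

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: /etc/kubernetes/scheduler.conf   # illustrative path
leaderElection:
  leaderElect: true
```

The scheduler is then started with this file instead of a pile of command-line flags.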
So next up is running multiple scheduling profiles.
D
This is a very interesting enhancement, I would say from a personal point of view: we face a lot of problems when scheduling workloads in Kubernetes clusters. The problem comes up when you have heterogeneous workloads. Say you have long-running jobs; say you have batch workloads, which you can't really interrupt if they are not interruptible; and then you have very ephemeral jobs, like web servers, which don't store state and which you can basically kill.
You could also solve this problem using multiple schedulers, but there's a big issue with that: race conditions and scalability concerns. What multiple scheduling profiles does is actually introduce profiles in a single scheduler, so you can have different algorithms for different kinds of workloads.
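A sketch of two profiles in one scheduler (the second profile's name and plugin tuning are illustrative; the plugin names follow the 1.19-era v1beta1 config):

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: default-scheduler   # the spreading default
  - schedulerName: bin-packing         # illustrative: pack batch jobs tightly
    plugins:
      score:
        disabled:
          - name: NodeResourcesLeastAllocated
        enabled:
          - name: NodeResourcesMostAllocated
```

A pod then opts in to a profile via `spec.schedulerName: bin-packing`, with no second scheduler process racing the first.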
D
So there's a feature here where you can set up affinities or anti-affinities, and to a certain extent model how your workloads spread across a cluster. Say you have web servers running and you don't want them on the same node: you can say that no two pods should run on the same node, and they will get spread over your cluster as much as possible, subject to other restrictions; if you have more replicas than there are nodes, obviously this can't be satisfied.
satisfied
coming
back,
so
there
is
a
feature
which
added
like
more
controls
at
the
end
of
the
end
user,
to
give
scheduling,
heuristics
like
this
and
also
achieve
high
level
availability
and
resource
utilization
with
that
said,
there
is
a
very
important
thing
we
should
see
here
is
that
you
actually
have
an
option
to
say
that
hey
this
heuristic
is
a
hard
requirement
or
a
soft
requirement.
D
Basically, you can differentiate between a predicate and a priority based on that. Whatever you configure your scheduler, or your workloads, to do, your workloads will get scheduled in that manner, and it went stable this release cycle, so it is here to stay. Going on: adding a configurable default constraint to pod topology spread.
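A sketch of such a constraint (labels and names illustrative); whenUnsatisfiable is the hard-versus-soft switch described above:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-1   # illustrative
  labels:
    app: web
spec:
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: kubernetes.io/hostname
      whenUnsatisfiable: DoNotSchedule      # hard requirement (a predicate)
      # whenUnsatisfiable: ScheduleAnyway   # soft requirement (a priority)
      labelSelector:
        matchLabels:
          app: web
  containers:
    - name: web
      image: nginx
```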
D
The configurable default constraint is alpha in this release, and hopefully it will go to beta in the next. I'm really excited about those scheduling features; they're really great.
Coming to the next feature: adding a non-preempting option to priority classes. Priority classes have been a GA feature since, I think, 1.14, and they impact the scheduling and eviction of pods. Pods are actually scheduled in descending priority, and lower-priority pods are preempted, or killed, if a higher-priority pod comes in and there is resource exhaustion in your cluster. Now, this enhancement adds a non-preempting option.
You can say that this priority class may or may not trigger preemption; that is, you can disable the preemption behavior. It adds an attribute for this, whose exact name is written in the enhancement proposal.
D
You can go ahead and see the KEP where it is written, but the default value right now keeps the previous behavior of preemption. If you want, you can go ahead and enable non-preemption, saying: hey, don't preempt pods.
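The attribute being alluded to is preemptionPolicy on the PriorityClass; a sketch (the class name and value are illustrative):

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-no-preempt   # illustrative
value: 100000
preemptionPolicy: Never   # default is PreemptLowerPriority
globalDefault: false
description: High priority for queue placement, but never evicts running pods.
```

Pods in this class still sit ahead of lower-priority pods in the scheduling queue; they just won't evict anything to make room.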
Going on to the next thing: Storage.
D
The first one here is immutable Secrets and ConfigMaps. It just graduated to beta, which implies that you can use this feature without switching on a feature flag. To give some small context on what this does: the default behavior is that if any Secret or ConfigMap gets changed, it is watched by the kubelet and then updated in the pod. Say you have thousands of those objects; watching all of them becomes a tedious job. This actually tries to solve that problem.
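A sketch of the opt-out (the name and data are illustrative): marking the object immutable tells the kubelet it will never change, so the watch can be dropped:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-settings   # illustrative
data:
  LOG_LEVEL: info
immutable: true   # updates now require creating a new object instead
```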
So going on, we have a few CSI driver migrations in this release, two to be exact. First is the Azure Disk in-tree to CSI driver migration.
D
If you are using Azure Disk for storage in your cluster and you have the CSI driver installed, you can just turn on the feature gate CSIMigrationAzureDisk to basically use the out-of-tree code. It's the same case for vSphere: you have the feature flag CSIMigrationvSphere, which enables out-of-tree code usage. Going ahead, we have two again kind-of-related enhancements; one is storage capacity tracking.
D
Traditionally, the scheduler does not know whether a CSI driver can create a volume on a specific node. What this feature does is expose a storage capacity attribute, which is seen by the scheduler, which then determines whether the pod can be scheduled there or not. There's a feature blog again on it, which describes this very feature. This is alpha right now, so you have to enable the feature flag in order to use this feature.
D
Next up is generic ephemeral inline volumes. It is also kind of related to the previous one, and it gives you a beautiful way to extend Kubernetes with CSI drivers that provide lightweight, local volumes. There's a new volume source called ephemeral (EphemeralVolumeSource), which contains the fields that are needed to create the volume claim. Another thing, if you see here: the pod which creates that resource gets tagged as the owner of the resource, so if you delete the pod, the resource gets deleted automatically.
D
There's a default garbage-collector scheme used in Kubernetes. Again, reiterating: all the alpha features need to be enabled explicitly with the feature flag.
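A sketch of the shape (pod, driver, and storage-class names illustrative); the claim is templated inline and owned by the pod, so it is garbage-collected with it:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scratch-user   # illustrative
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: scratch
          mountPath: /scratch
  volumes:
    - name: scratch
      ephemeral:                 # alpha in 1.19, behind a feature gate
        volumeClaimTemplate:
          spec:
            accessModes: ["ReadWriteOnce"]
            storageClassName: fast-local   # illustrative
            resources:
              requests:
                storage: 1Gi
```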
D
The last enhancement in Storage is allowing CSI drivers to opt in to volume ownership change. To give you a short background: what happens is, if you specify fsGroup in your pod security context, any volume that you mount gets its ownership masked with that fsGroup. Now, it is not necessarily the case that the backend, the CSI type that you're using, supports ownership modification using fsGroup; NFS, for example, does not support it.
So you now get a field where the CSI driver can say, hey, I support fsGroup-based ownership change or not, and based on that, the fsGroup that you set in the pod is honored or not. This is again an alpha feature, and has to be enabled in order to use it.
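A sketch of the declaration; per the KEP the field is fsGroupPolicy on the CSIDriver object (the driver name here is illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: nfs.example.com   # illustrative
spec:
  fsGroupPolicy: None   # never change volume ownership to match fsGroup
  # other values: File (always change), ReadWriteOnceWithFSType (legacy behavior)
```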
Going ahead, the last thing on the roster for us is Windows.
D
Now, this change, going to stable, gives a path for implementers to actually implement Kubernetes-specific features that are not available in the Docker API but are available on containerd. With that, that's all on the enhancements, and I will hand it back to Taylor to talk a bit about the release team shadow program.
C
The release team is broken down into seven different roles. For those interested, I definitely recommend checking out the Kubernetes sig-release repository; inside of there are role handbooks that talk about each of those different roles, which you can see here, depending on what you might have an interest in.
C
So personally, I started in similar shoes to Max and worked through the communications roles; in the Kubernetes 1.16 release I was the communications lead, took 1.17 off, came back in 1.18 as a release lead shadow, and then led the 1.19 release. The program is really fantastic in that you just show up, you learn, and you gain a lot of information from being part of the shadow program.
C
You aren't expected to know it all. For those of you nervous about having the world see your contributions, whether it be code, documentation, or what have you: no worries. The Kubernetes community is really fun, friendly, and engaging, and if you have questions, that's the best place to ask them.
C
Again, the team is fantastic and very easy to work with. As you read through the role handbooks, if you're interested, you can kind of see what might be a good fit for you, or something you might want to learn; it might be something you don't do in your nine-to-five job, or just something you have an innate curiosity about. Then we walk you through that shadow program and try to get everyone set up to potentially be a lead in some capacity.
C
So again, the goal is to train new leads, so that when leads aren't able to make it (they have a conflict, a life event happens, they're stuck at the motor vehicle office), we typically call on shadows to help out with that. I leaned very heavily on Jeremy and Bob; those were my release lead shadows.
C
I went through a job transition, which is just yet another example of a life event coming up, and Bob and Jeremy were always really keen to help out; I really appreciate that from both of them. For each role, there's one lead and typically three to four shadows that are selected via the shadow application process. The application typically gets advertised near the end of a release, or just before a release starts, and you'll usually see it shared out on LinkedIn, Twitter, and several other venues and avenues.
C
The release cycles generally last around three months, but with 1.19 being a prime example, we went quite a bit beyond that. Weekly workloads ebb and flow, as some teams are a little busier than others at different points, like enhancements, which Nabarun and Max can speak to, I'm sure.
C
You know, after this I'll turn it back to you, but I'm sure you can attest to this: enhancements was very busy during the beginning of the release cycle, and communications was very busy near the end of the cycle, as an example. That is also covered in the role handbooks; the weekly breakdowns give you a really good understanding of what you're in for and what time commitment, if you're willing to make it, you can put in to the shadow process. For more information:
C
Please click on the release team shadows GitHub repo link here, once those slides are all shared out.
C
So with that, I'd like to open it up for questions and turn it back to Max and Nabarun, if you wanted to say anything about the shadow program or your experiences on that front too, while we're waiting for questions to come in.
B
D
Yeah, I'd like to say a few words about the program. It's a great program; I would recommend it to anyone. Many people ask me how to get started contributing to Kubernetes, and there is this beautiful program right there, the release team shadow program. You can just come in and say, hey, I am interested in working in this vertical, this really interests me, can I work on it? And then you get to meet so many awesome people who actually help.
C
D
D
I shadowed that role, and then I led it in 1.19.
D
Then again, I am shadowing the release lead this cycle, so this is really fun, and you get to feel the responsibility that is bestowed upon you: hey, you have some important role in the community, you are looking after a release. And the team is filled with diverse folks from the community; there is huge time zone diversity, as Taylor and Max can also say.
D
Right now on this call, we span twelve and a half hours of time zones: me on IST, Max on Central European Time, and then Taylor on US Pacific Time, of course. So, as you can see, there are literally no barriers other than just going ahead and applying to the program. And even if you are not applying to the program, just go ahead and come to the Slack channel and say, hi, I want to work on this. Nobody would say no; literally nobody would say no.
D
B
Absolutely, and it's really great to see how all the massive contributions from this really huge, fantastic community get brought together, and how step by step you see the version and its quality grow more and more, and then it's getting ready for release, and then everyone's sitting there getting nervous. And then you see the countdown, and then in the last moments, I'm writing: now we are going to release it. And then you see what's all going on behind the scenes; it's really great to see this. Jerry?
B
How much time do we have left, to potentially go into some of the questions? We have no open ones right now, but we can at least highlight one or two of the questions which we answered already.
B
Okay, that sounds like we can at least have a look at one or two things which we got so far. So one question which I really liked is about the EndpointSlices, with the question that there are some issues with DaemonSets in a really huge cluster and the update cycle of the DaemonSets.
B
B
There's a really great blog post about it from Rob Scott, who drafted this and worked mainly on it. And why is it so? Every bit of communication within Kubernetes goes through the Kubernetes API server, not just something which comes from the outer world, but literally every communication. And that's why, the further your cluster grows, you will find, well, not slower, but sometimes a little bit inconsistent performance through the API server, and this is exactly what the EndpointSlices will solve in the future.
B
So if it's growing, then you have smaller chunks, and the update cycles shouldn't be that much affected by it.
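To make the "smaller chunks" point concrete, here is a hedged Python sketch of the idea (my illustration, not Kubernetes code): instead of one Endpoints object that is rewritten in full whenever any single endpoint changes, the endpoints are partitioned into bounded slices, so an update only rewrites the one slice that changed. The slice size of 100 mirrors what I believe is the default of the kube-controller-manager's `--max-endpoints-per-slice` flag.

```python
def slice_endpoints(endpoints, max_per_slice=100):
    """Partition a flat endpoint list into fixed-size slices,
    mimicking how EndpointSlices break up one big Endpoints object."""
    return [endpoints[i:i + max_per_slice]
            for i in range(0, len(endpoints), max_per_slice)]

# Toy comparison: with a single Endpoints object, changing one endpoint
# means rewriting all N entries; with slices, only one slice is rewritten.
endpoints = [f"10.0.{i // 256}.{i % 256}" for i in range(1000)]  # 1000 fake pod IPs
slices = slice_endpoints(endpoints)

print(len(slices))  # 10 slices of 100 endpoints each
single_object_write = len(endpoints)  # entries rewritten per update before
sliced_write = len(slices[0])         # entries rewritten per update after
print(single_object_write, sliced_write)  # 1000 vs 100
```

This is why, as mentioned above, a huge cluster's update cycles are far less affected once the data is split into slices.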
B
So officially we cannot give the best answer to it, because we do not write the curriculum or decide what should or shouldn't be included. But that actually answered it perfectly: Ingress is a really specific baby, because it's so old. I mean, it's like it should already go in the retirement section, but no, it's a stable thing; even though it was in beta for such a long time, now it has become officially stable.
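Since Ingress graduating to stable in 1.19 means manifests move to the `networking.k8s.io/v1` API, here is a hedged minimal sketch of a v1 Ingress, written as a Python dict for illustration (the resource name and host are made up, not from the talk):

```python
# Hypothetical Ingress using the stable (v1) API that 1.19 introduces.
ingress = {
    "apiVersion": "networking.k8s.io/v1",  # was networking.k8s.io/v1beta1
    "kind": "Ingress",
    "metadata": {"name": "example"},       # made-up name
    "spec": {
        "rules": [{
            "host": "example.local",       # made-up host
            "http": {"paths": [{
                "path": "/",
                "pathType": "Prefix",      # pathType is required in v1
                "backend": {
                    # v1 backend shape: service.name / service.port,
                    # replacing the beta serviceName / servicePort fields.
                    "service": {"name": "web", "port": {"number": 80}},
                },
            }]},
        }]
    },
}

backend = ingress["spec"]["rules"][0]["http"]["paths"][0]["backend"]
print(backend["service"]["name"], backend["service"]["port"]["number"])  # web 80
```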
B
So that's why ending up with some other endless beta features normally shouldn't happen, except if you find something else which is super-duper old and stuck in a beta or alpha version. Also really interesting is the question about kubectl and the client certificate rotation: with certificate client rotation, if implemented, or if configured, it will do this on its own. Yes, you can also force it if needed, but commonly it just takes care of it. And then maybe the last comment: this release is cool.
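As a hedged sketch of the rotation idea mentioned here (my illustration; the real client-go/kubelet rotation logic is more involved and adds jitter), a client can simply request a new certificate once most of the current certificate's lifetime has elapsed, instead of waiting for it to expire. The 0.8 threshold is an assumption for the example, not a documented constant.

```python
from datetime import datetime, timedelta

def should_rotate(not_before, not_after, now, threshold=0.8):
    """Request a new client certificate once `threshold` of the current
    certificate's lifetime has elapsed (sketch of the idea only; real
    implementations add jitter and error handling)."""
    lifetime = not_after - not_before
    elapsed = now - not_before
    return elapsed >= lifetime * threshold

issued = datetime(2020, 9, 1)
expires = issued + timedelta(days=365)

print(should_rotate(issued, expires, issued + timedelta(days=100)))  # False
print(should_rotate(issued, expires, issued + timedelta(days=300)))  # True
```

A forced rotation, as mentioned above, would simply bypass this check and request a new certificate immediately.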
B
A
Perfect. Thank you all very much for a wonderful presentation. That's just about all the time we have for today. The presentation and slides will be available later today. I would like to thank everyone again for joining us today. Have a wonderful and safe weekend, and we'll see you next time.