From YouTube: Kubernetes v1.26 Release
Description
Don't miss out! Join us at our upcoming event: KubeCon + CloudNativeCon Europe in Amsterdam, The Netherlands from 18 - 21 April, 2023. Learn more at https://kubecon.io The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy, and all of the other CNCF-hosted projects.
A
Welcome everyone, and thanks for joining us for a special edition of today's CNCF live webinar: the Kubernetes v1.26 release. Some exciting stuff is coming your way. I'm Libby Schultz and I'll be moderating today's webinar. I'm going to read our code of conduct and then hand over to Leonard Pahlke, Fred Muñoz and Mark Rossetti from the Kubernetes 1.26 release team. A few housekeeping items before we get started: during the webinar you are not able to talk as an attendee, but there is this lovely chat box that everybody is adding their hellos and locations to, so thank you.
A
That is where you can leave your questions for the team; we'll get to as many as we can at the end, or, if it so pertains, we can interject in the middle. But we're going to have a full webinar today, so be sure to leave your questions there for us. This is an official webinar of the CNCF and as such is subject to the CNCF code of conduct.
A
Please do not add anything to the chat or questions that would be in violation of that code of conduct, and please be respectful of all of your fellow participants and presenters. Please also know that the recording and slides will be posted later today to the CNCF online programs page at community.cncf.io, under Online Programs. They are also available via the registration link you used to get here today, and will be on our Online Programs YouTube playlist. With that, I will hand it over to our release team, Mark, Leo and Fred, to take it away.
B
So obviously we have our 1.26 release team leads: Leonard Pahlke from Liquid Reply, who led the entire team in terms of the 1.26 release effort; Mark Rossetti from Microsoft, who was part of the 1.26 enhancements team and currently is the enhancements lead for Kubernetes 1.27; and I, Frederico Muñoz from SAS, was the communications lead for Kubernetes 1.26. We have an interesting, although not completely original, agenda to share with you today, because we tend to follow the same major approaches in each Kubernetes release.
B
We will then go on to focus on some of the major highlights, removals, deprecations and aspects that we really want to talk about in terms of Kubernetes 1.26, including the major themes, etc. Then we will go through a per-SIG list with a quick description, obviously given the time constraints that we have, of the enhancements that were not only tracked but implemented during this release.
B
So, excellent, glad it's not on my end, because sometimes this happens. I'll proceed, starting with some updates on the Kubernetes 1.27 release cycle. As you are likely aware, Kubernetes has several releases a year, and just when we end one, another one starts and the cycle continues. Currently we are on the Kubernetes 1.27 release, which has the following timeline: it started some weeks ago, on the 9th of January.
B
It will have the enhancements freeze on the 10th of February, code freeze on the 15th of March, and is scheduled for release on the 11th of April. The reference to KubeCon here is because it will follow very shortly after the release: KubeCon + CloudNativeCon Europe will take place between April 17 and 21, so this KubeCon will be made in the footsteps of the 1.27 release. Currently there are no changes in terms of the planned timeline.
B
Sometimes changes do happen: in the 1.26 release cycle, for example, we did have a slight adjustment to accommodate the security updates in Go, but the process itself is made with this in mind, so there is the necessary leeway in place in order to avoid making any potential rescheduling an issue. With that, we will now go into the Kubernetes 1.26 highlights, major themes, changes, removals and deprecations, and there's nobody better to provide this to us than Leonard, our release team lead.
D
For this cycle we had a little bit more in mind in terms of the environment, so we want to raise awareness that Kubernetes clusters are deployed everywhere, powering huge systems, and therefore also requiring a lot of energy, and at the moment this is not being reflected as much in the KEPs. So this might be a big theme or major theme in the future; for this cycle, unfortunately, not, but yeah.
D
So if you break it down, we have 11 stable enhancements, 10 beta ones, 16 alpha KEPs, nine removals and three deprecations. We will talk about the stable, beta and alpha features in the SIG updates in a bit; the removals and deprecations, I believe, are on the next slide, or the one after it. Okay, first the major themes: we will discuss every one of them in a little bit more detail later, each on its own slide.
D
I would just quickly run through them, so you have an overview, a teaser of what we will discuss for the rest of the session. So first, we now exclusively use registry.k8s.io. This was broadly discussed in the past: we used the GCR registry before, and now we only publish new artifacts to registry.k8s.io, so if you pull the artifacts manually, you now have to use the new endpoint.
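As an illustration (a minimal sketch, not shown in the webinar), a pod manifest pulling one of the project's own images from the new endpoint might look like this; the image and tag are just an example:

```yaml
# Minimal example pod pulling an image from the new community registry.
# Any image previously hosted on the old GCR endpoint (k8s.gcr.io) is now
# published to registry.k8s.io instead.
apiVersion: v1
kind: Pod
metadata:
  name: registry-demo
spec:
  containers:
    - name: pause
      image: registry.k8s.io/pause:3.9
```

Applied with `kubectl apply -f`, the kubelet pulls the image from the new registry endpoint.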
D
We graduated signing of release artifacts to beta. We graduated privileged containers for Windows to stable. We have some changes in the CSI migration, which is a larger effort: it now also includes the Azure File and vSphere external plugins. We delegated the fsGroup handling to the CSI drivers, which is now stable.
D
There are some changes in the metrics framework: we added component SLIs and some feature metrics. We have a very big change with dynamic resource allocation, which introduces a new API, which is very exciting; it's alpha and will graduate to beta and so on later. We also have a KEP focusing on admission control.
D
We have one alpha feature for pod scheduling readiness; we made some changes for the node inclusion policy for pod topology spread, which graduated to beta; and non-graceful node shutdown graduates to beta as well. So, as I said, we will discuss these separately; I just want to go over them quickly so we have more time. Right, on to the removals and deprecations.
D
So we deprecated the userspace proxy mode over a year ago, and now it's no longer supported on either Linux or Windows, so we removed it after it was deprecated. The next one: deprecations for kube-apiserver command line arguments. The --master-service-namespace command line argument does not have any effect anymore and was basically already deprecated, but not officially; so this cycle we now officially deprecated it, so in the future we can remove it, which is the policy in Kubernetes.
D
Next one: removal of the v1beta1 flow control API. We removed flowcontrol.apiserver.k8s.io/v1beta1, and you now need to migrate over to v1beta2 if you want to use it. This has also been deprecated before, and v1beta2 has been available since 1.23, so there's a good chance that you're already using that one. For the next one, under SIG Auth: removal of the in-tree credential management code.
B
Yeah, give me a couple of seconds. Unless you didn't notice, Leonard dropped; we will only catch him in 20 minutes, but I think he will rejoin. Otherwise I can pick it up, or Mark, do you want to? I don't know. Oh.
B
Okay, so we were on the in-tree credential management code. Essentially, this is not completely unlike the CSI migrations, in that there is internal logic contained within the Kubernetes code that dealt with how specific cloud providers authenticate and manage credentials, and this has been externalized into plugins, so it's being removed from the in-tree source.
B
The
removal
of
dynamic
cubelet
configuration
group
is
a
slightly
different
recently.
It's
the
removal
following
the
the
the
policy
that,
when
there's
not
enough
uptake
and
and
essentially
enhancements,
don't
progress
into
stable
after
a
while.
They
they
get
dropped
because
they
they
they.
They
aren't
used
as
much
as
initially
thoughts
in
terms
of
figure
out
Auto
scaling.
We
have
the
removal
of
the
V2
better
2,
horizontal
part,
autoscal
API,
which
not
unlike
the
flow
control.
D
Okay, at the top again? All right, sorry.
D
Yeah, all right. So thanks for catching that, Fred or Mark. Okay, so the next one: removal of the legacy command line arguments related to logging. There's a couple of flags which we removed that I have not listed in my notes, so I need to get them later. I don't know, Mark or Fred, maybe you can jump in on this; otherwise I can move to the next one. Yeah.
B
I can get it in the meantime, behind the scenes. The next one: the deprecation of non-inclusive kubectl flags.
D
So right. We have an initiative in Kubernetes to remove flags which are not inclusive, so we have, for example, now removed the --prune-whitelist flag; we don't want to have whitelisting, blacklisting and stuff like this in flags or in the Kubernetes project. So we now replaced it with --prune-allowlist, which has
D
the same meaning; it's just a different name. The next one: the deprecations of kubectl run command line arguments. There are several options and arguments in kubectl run which are marked as deprecated: the flags --cascade, --filename, --force, --grace-period, --kustomize, --recursive, --timeout and --wait. These arguments are already ignored, so we don't expect any problems. For the next one, the CRI v1alpha2 API was removed; there might be some problems in your clusters here, so some context for this.
D
So after we removed dockershim in 1.24, we added the CRI v1 API spec and deprecated the v1alpha2 spec, and we have now removed this v1alpha2 spec. There might be, as I said, some implications with your CRI. So, for example, if you use containerd and you run version 1.5 or an older one, this will not work.
D
If you run something else, you need to consult the documentation or the vendor. Likely this is also just handled by your platform provider or public cloud provider, so they upgrade it for you; if you do everything on your own, then you need to watch out for that. On to the next one.
D
The GlusterFS plugin was removed from the available in-tree drivers; there's not much to say about that, we deprecated it last cycle.
D
The in-tree OpenStack cloud provider is removed as well; this is the Cinder volume type. There's also not too much to say about that. So right, if you have any questions, drop them in the chat; otherwise we can move on. Yeah.
B
To announce that, in terms of the logging flags: it's essentially the realization of an overhaul of the klog component, and just about every single flag was deprecated, leaving only -v and a few others. The rationale for this is in the KEP, but it's aligned with the restructuring of the logging infrastructure in Kubernetes. The functionality is there; it's just that klog itself will not have the flags, and the information will be accessible in other ways.
B
And so now we will do a per-SIG update, in which we will pick some of the ones that we briefly discussed before, and a lot of new ones, and give a one-page overview of each one, with links to the enhancement, to the KEP, and in some circumstances also a link to the feature blog. We have 15 feature blogs in this release, which is quite a high number, and these feature blogs are a very good way to have a more direct knowledge of those specific features.
C
Yeah, hi everybody. There are a lot of enhancements and we only have an hour here, so a lot of these will be pretty quick; I'll just give that warning up front. First up are the enhancements from SIG API Machinery. SIG API Machinery is the special interest group responsible for pretty much everything related to the APIs and the API server. Next slide. Yeah, I'm good, okay. The first one is validating admission policies.
C
This enhancement is going into alpha status. It introduces the Common Expression Language (CEL) for doing some basic validations as an admission controller. This is an alternative to setting up an admission webhook, which can be very burdensome to maintain and deploy, so this is a much lighter-weight option to do simple validations.
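As a hedged illustration (not shown in the webinar; the shape follows the v1alpha1 API behind the ValidatingAdmissionPolicy feature gate), a small policy rejecting Deployments with too many replicas could look roughly like this:

```yaml
# Sketch: a CEL-based admission policy (alpha in 1.26) that rejects
# Deployments whose replica count exceeds 5.
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: ValidatingAdmissionPolicy
metadata:
  name: demo-replica-limit
spec:
  matchConstraints:
    resourceRules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
  validations:
    - expression: "object.spec.replicas <= 5"
      message: "replicas must be 5 or fewer"
```

A policy only takes effect once it is bound to resources with a separate ValidatingAdmissionPolicyBinding object.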
Next is the aggregated API discovery feature, which is also going into alpha.
C
This centralizes the discovery of all the supported API endpoints that the Kubernetes API server knows about. There are two new endpoints: one that gives more information about each endpoint, and one that is the aggregation of them. Using this will help reduce load against your API server, because clients no longer need to spam the API server to see what all the available endpoints are.
C
And last is the kube-apiserver identity, which is graduating to beta. This just gives a unique identity to each API server instance, so you can identify if there are problems or things like that in the cluster.
C
Okay, next is SIG Apps. SIG Apps is responsible for basically how applications and workloads are defined and managed in the clusters.
C
So first is job tracking without lingering pods, which is graduating to stable. Previously, before this enhancement, if you were scheduling batch jobs, the job controller needed to keep completed pods around in order to maintain state, which caused a lot of extra stress in your cluster. Now the job controller has been re-architected to no longer need that, and with this the job controller can now scale up to 100,000 concurrent pods, which is much more parallel and scalable than previously.
C
Yeah, and this one, StatefulSet start ordinals, graduated into alpha in this release. For your StatefulSets you can now specify where you want to start the numbering for the replicas from, and this is useful if you need to restart your workload, or you want to migrate your workload across namespaces, across clusters, and all of that.
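A minimal sketch of what this could look like (assuming the alpha StatefulSetStartOrdinal feature gate is enabled; field names follow the 1.26 alpha API):

```yaml
# Sketch: a StatefulSet whose replicas are numbered starting at 5
# (web-5, web-6, web-7) instead of the default 0. Useful when migrating
# a slice of an existing StatefulSet to another namespace or cluster.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web
  replicas: 3
  ordinals:
    start: 5          # alpha field in 1.26
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: main
          image: registry.k8s.io/pause:3.9
```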
Next is the healthy pod policy for pod disruption budgets; this has graduated to alpha. It enables specifying pod disruption budgets for pods that are not ready, and this can help prevent data loss by preventing not-ready pods from being evicted until somebody has a chance to either automatically or manually go and recover that data. It can also prevent some deadlocks in the system: where you have a lot of pods that aren't ready, you can request that they do get evicted.
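As a hedged sketch (the field follows the 1.26 alpha API, behind the PDBUnhealthyPodEvictionPolicy feature gate):

```yaml
# Sketch: a PodDisruptionBudget that allows eviction of pods which are
# Running but not yet Ready, helping avoid the deadlocks described above.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: demo-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: demo
  unhealthyPodEvictionPolicy: AlwaysAllow   # alpha field in 1.26
```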
And the last one for SIG Apps is retriable and non-retriable pod failures for jobs, which is graduating to beta. This provides a mechanism to enable workloads to differentiate between retriable and non-retriable failures, so your cluster can retry on things like transient or infrastructure failures, and not retry if it's a workload failure.
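A rough sketch of what such a Job could look like (using the podFailurePolicy field, beta in 1.26; the exit code and image are just examples):

```yaml
# Sketch: fail the whole Job immediately on a "non-retriable" exit code (42),
# but ignore failures caused by node disruption so those pods are retried.
apiVersion: batch/v1
kind: Job
metadata:
  name: demo-job
spec:
  completions: 4
  backoffLimit: 6
  podFailurePolicy:
    rules:
      - action: FailJob
        onExitCodes:
          containerName: main
          operator: In
          values: [42]
      - action: Ignore
        onPodConditions:
          - type: DisruptionTarget
  template:
    spec:
      restartPolicy: Never    # required when using podFailurePolicy
      containers:
        - name: main
          image: busybox
          command: ["sh", "-c", "exit 0"]
```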
B
And for SIG Auth and SIG Instrumentation, exactly, I'll pass it back again to Leonard.
D
All right. So on my side, I don't know, I have the next KEP; maybe I still see the validating admission policy slide, so it's kind of stuck, but I can just pull up the slides on my side.
D
I don't know, there's some lag on my side, okay, but it doesn't matter, because I have the slides as well, so I can just pull them up, one second. Okay, right. So for this one we are in SIG Auth: reduction of secret-based service account tokens.
D
Awesome. So this KEP graduates to the beta stage, and it introduces actions to reduce the surface area of secret-based service account tokens.
D
Okay, the auth API to get self-user attributes. This graduates to alpha. As you can see on the right side, it adds a new API endpoint which you can use to basically run "who am I" from kubectl, which is a nice addition, I think. So this is a new API endpoint which gets added, and also a new kubectl command; a pretty neat feature.
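For illustration (hedged; the request body below follows the v1alpha1 API that backs this feature, behind the APISelfSubjectReview feature gate), the new endpoint accepts a SelfSubjectReview object and returns the user attributes the API server sees for you in its status:

```yaml
# Sketch: the object POSTed to the new SelfSubjectReview endpoint.
# The status (username, groups, extra attributes) is filled in by the
# API server on return.
apiVersion: authentication.k8s.io/v1alpha1
kind: SelfSubjectReview
```

From the command line this is surfaced as `kubectl auth whoami` (alpha in this release).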
D
I mean, it's a SIG about the CLI, basically, so it should be sort of self-explanatory, I guess. SIG CLI has one KEP this cycle, at stage beta, about kubectl events. In the past, and we are still using it, there was kubectl get events; now there's a new top-level command, which is kubectl events. There are some internal reasons why this is needed, some limitations. As far as I know, at the moment there are no new features, but this enables us to add new features, which the community requested, to it.
D
Okay, so SIG Instrumentation, I can just read it out: covers best practices for cluster observability through metrics, logging, events and traces across all Kubernetes components, and development of relevant components such as klog and kube-state-metrics; coordinates metrics requirements of different SIGs for other components, including finding common APIs.
D
So this just enriches the data which you get if you run this command; if there's no API v3 spec available, it will fall back to the v2 spec, so no issues are expected on this side. Right, for the next one: Kubernetes component health SLIs.
D
It's a new KEP at stage alpha. It exposes new health check endpoints, which should allow creating new SLIs, and then related services and agents can create new SLOs based on that. All right, for the next one: extended metrics stability, which also graduates to alpha.
D
So this is an addition to the metrics stability framework, which we introduced a couple of cycles ago; there's also a very nice blog post if you're interested in that.
D
So this cycle we add a little bit more: two new stability classes, internal and beta. For the most part this is internal; I don't know, actually, how much this affects the end user, but I think the metrics stability framework was very well received, so adding to it is a nice addition.
B
Now we're entering SIG Network, which covers networking in Kubernetes and, as is expected of networking, there are quite a few enhancements.
B
I will actually start by saying something that I think is important. We've been talking about alpha, beta and stable; just a very quick overview of what this means. Enhancements start as alpha, proceed to beta, and then reach stable; if they do not reach stable after a while, they can be deprecated, as we showed before. The major difference is that alpha features, in general, are off by default but can be enabled by using a feature gate.
B
So with that, we have the enhancement of service internal traffic policy, which graduates to stable and which essentially allows defining a policy on the service that limits routing to endpoints running on the same node, so as to limit the forwarding of the service to that specific target. We have, and these are all relatively related, connected enhancements:
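A minimal sketch of a Service using the now-stable field:

```yaml
# Sketch: in-cluster traffic to this Service is only routed to endpoints
# on the same node as the client pod (internalTrafficPolicy: Local).
apiVersion: v1
kind: Service
metadata:
  name: local-only
spec:
  selector:
    app: demo
  internalTrafficPolicy: Local   # default is Cluster
  ports:
    - port: 80
      targetPort: 8080
```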
B
Kube-proxy terminating endpoints enables zero-downtime deployments for services with externalTrafficPolicy: Local. In some use cases, depending obviously on many different configuration choices, it was possible that some downtime would happen. This enhancement reduces the potential traffic loss from kube-proxy on rolling updates, exactly because traffic was being sent to pods that were terminating and thus unable to address the request.
B
Minimizing iptables-restore is a new alpha feature which is actually quite important in terms of performance for very large clusters, because the iptables-restore command can take quite a long time to run, due to the sheer amount of network rules that end up being created by the several Kubernetes networking objects and policies. This enhancement drastically improves the performance of kube-proxy in iptables mode. Next: support of mixed protocols in Services of type LoadBalancer.
B
So this is an enhancement that graduates to stable, and it enables the creation of a LoadBalancer Service that has different port definitions with different protocols, allowing users to expose their applications through a single IP address but different layer 4 protocols with the cloud provider load balancer. There were some cases where this was already possible, for TCP or UDP, but this one generalizes it and reduces the corner cases, which were not really well documented, that could result in this not working.
B
This
adds
that
support
explicitly
and
thus
makes
it
both
possible
and
also
consistent
and
resilient,
and
and
defines
that
expected
Behavior
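A hedged sketch of such a Service (mixed TCP and UDP behind one load balancer IP; whether this is honored still depends on the cloud provider):

```yaml
# Sketch: one LoadBalancer Service exposing both TCP and UDP on port 53.
apiVersion: v1
kind: Service
metadata:
  name: mixed-protocol-lb
spec:
  type: LoadBalancer
  selector:
    app: dns-demo
  ports:
    - name: dns-tcp
      protocol: TCP
      port: 53
    - name: dns-udp
      protocol: UDP
      port: 53
```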
Reserve service IP ranges for dynamic and static IP allocation graduates to stable. This is essentially about how cluster IPs can be assigned dynamically or statically; with this enhancement, the range used by static and dynamic allocation is split, so that a specific situation can be avoided: someone specifying a static IP because it was free at the time, but finding it already dynamically assigned by the time it gets used.
B
That adds more configuration options and removes some of the limitations that existed at a certain point in time but do not make sense to keep right now. And with that we end the list of enhancements from SIG Network, and we have the enhancements for SIG Node which, if I'm not mistaken, are picked up by Mark, right? Yep.
C
Yep, thank you, Fred. So SIG Node: they are responsible for all of the components that run on the nodes and that control the interactions between the pods and the hosts, most notably the kubelet.
C
Okay, so this enhancement, dynamic resource allocation, has graduated to alpha. This adds a new set of APIs that allows workloads to specify resources other than memory or CPU, and it also allows for sharing of resources between multiple containers or pods.
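As a rough, hedged sketch of the overall shape of the v1alpha1 API introduced here (the class name and driver are hypothetical; real resource drivers are provided by vendors):

```yaml
# Sketch: a pod requesting a resource through dynamic resource allocation.
# "example-gpu-class" is a made-up ResourceClass name for illustration.
apiVersion: resource.k8s.io/v1alpha1
kind: ResourceClaimTemplate
metadata:
  name: gpu-claim-template
spec:
  spec:
    resourceClassName: example-gpu-class
---
apiVersion: v1
kind: Pod
metadata:
  name: dra-demo
spec:
  resourceClaims:
    - name: gpu
      source:
        resourceClaimTemplateName: gpu-claim-template
  containers:
    - name: main
      image: registry.k8s.io/pause:3.9
      resources:
        claims:
          - name: gpu
```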
C
This is quite a big enhancement over the previous way of allocating resources, so anybody who's interested, I'd recommend going to read the blog post and reading up on the KEP. The device manager is graduating to stable; this enhancement has actually been in beta since 1.10, and it's just finally time to graduate it. The device manager is a plug-in API: it's a way of advertising and allocating different external devices which you can use and assign into your containers, such as GPU devices and FPGA devices.
C
Next is the CPU manager, which has also been in beta since 1.10 and is graduating to stable. The CPU manager is part of the kubelet, and it's responsible for assigning CPUs to containers. Prior to this, there were potentially a lot of issues: if you specified more than one CPU for your resource limits or requests, your workload could get split across multiple CPUs and you would spend a lot of time just cycling with that.
C
Next is the kubelet credential provider, which has graduated to stable. This provides a plug-in model for registering and basically authenticating to different container registries. Previously, every time the kubelet wanted to authenticate to a new container registry, code needed to be added to the kubelet to understand how to talk with it. The credential provider provides a plug-in model to eliminate that, which simplifies maintenance and makes it easier to support more registries.
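A hedged sketch of the kubelet-side configuration (the plugin binary name and registry pattern are hypothetical; the kind and fields follow the kubelet credential provider config API):

```yaml
# Sketch: kubelet CredentialProviderConfig pointing image pulls for a
# private registry at an external credential plugin binary.
apiVersion: kubelet.config.k8s.io/v1
kind: CredentialProviderConfig
providers:
  - name: example-registry-credential-plugin   # hypothetical plugin binary
    matchImages:
      - "*.registry.example.com"
    defaultCacheDuration: "12h"
    apiVersion: credentialprovider.kubelet.k8s.io/v1
```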
Next is the improved multi-NUMA alignment in the topology manager; this enhancement has graduated to alpha.
C
Previously, with the topology manager you could assign pods to specific NUMA nodes, or ensure that all of your containers land on a specific NUMA node, but it didn't have any awareness of the distance between the NUMA nodes, which is becoming more of an issue with multi-socket processors. So this enhancement allows you to basically specify a minimum distance between your NUMA nodes, to get these workloads to land in the same proximity of each other.
C
Next is the kubelet evented PLEG, for better performance of the pod lifecycle event generator. This enhancement is graduating to alpha, and it is tracking and outlining changes to the kubelet and the container runtime interface to move to a list-watch model instead of a continuous polling model; the goal of this is to reduce the steady-state CPU usage of the kubelet and the container runtime. And last, I think, is the cAdvisor-less CRI full container and pod stats enhancement, which has graduated to alpha.
C
This allows the kubelet to get all of the container and pod stats over the CRI, the container runtime interface, from the container runtime, instead of also having the kubelet query cAdvisor to get some missing stats. So this is just removing some duplicated work and putting more responsibility onto the container runtime, which is where it belongs.
D
Thank you. All right, back to my slides. So, SIG Release, which handles all the releases; for example, the release team is part of SIG Release. We have one KEP, signing release artifacts, which graduates to beta. With this KEP we now sign all the release artifacts, so users can verify the integrity of the downloaded resources. This includes all the binary artifacts.
D
So, what we listed here: the tarballs, the binary artifacts, the software bill of materials, the SBOMs, and we use cosign. If you're interested in this entire SBOM theme and topic, the Kubernetes release team is quite a forerunner, or a pioneer, I would say, a little bit; there are also a lot of good resources and blog posts about these topics. Right, so the next one: SIG Scheduling, which is responsible for the components that make pod placement decisions.
D
We have two KEPs. The first one is pod scheduling readiness. Currently, pods are considered ready for scheduling as soon as they are created, and in general this is fine; but in a few cases it is not, and pods are not ready as soon as they are created, and until now there have not been any options to control that behavior, to mark that these pods are not ready.
D
So with this KEP we introduce a new parameter in the API spec which allows you to control this behavior: you can set scheduling gates, which define that a pod is, for example, unschedulable, or you can define a PodScheduled condition.
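A minimal sketch (the gate name is made up; the field is alpha in 1.26 behind the PodSchedulingReadiness feature gate):

```yaml
# Sketch: this pod stays unschedulable until every entry is removed from
# spec.schedulingGates by a controller or operator.
apiVersion: v1
kind: Pod
metadata:
  name: gated-pod
spec:
  schedulingGates:
    - name: example.com/wait-for-quota   # hypothetical gate name
  containers:
    - name: main
      image: registry.k8s.io/pause:3.9
```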
For the next one: taking taints and tolerations into consideration when calculating pod topology spread skew, which graduates to beta.
D
So this defines a node inclusion policy in the topology spread constraints, which you can set to control the behavior of where the pods are scheduled. Yeah.
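A sketch of the beta fields on a pod's topology spread constraints (field names follow the 1.26 API):

```yaml
# Sketch: nodeAffinityPolicy / nodeTaintsPolicy (beta in 1.26) control
# whether nodes failing the pod's node affinity, or carrying untolerated
# taints, are counted when computing topology spread skew.
apiVersion: v1
kind: Pod
metadata:
  name: spread-demo
  labels:
    app: demo
spec:
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: kubernetes.io/hostname
      whenUnsatisfiable: DoNotSchedule
      nodeAffinityPolicy: Honor   # default
      nodeTaintsPolicy: Honor     # default is Ignore
      labelSelector:
        matchLabels:
          app: demo
  containers:
    - name: main
      image: registry.k8s.io/pause:3.9
```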
B
So now I'll describe the several enhancements of SIG Storage, which is responsible for ensuring that different types of file and block storage are available wherever a container is scheduled; it tackles everything storage-related in Kubernetes. Trying to split things up a bit, I'll go through this relatively briefly. Non-graceful node shutdown:
B
This is a beta feature that allows workloads to fail over to different nodes after the original node is shut down or is in a non-recoverable state, such as a hardware failure or a broken OS. Allow Kubernetes to supply pods' fsGroup to the CSI driver on mount graduates to stable: this essentially gives the CSI driver the option to apply the fsGroup setting during volume mount, in contrast with what was done before.
B
With this new alpha feature, you can specify a cross-namespace data source: you specify the dataSourceRef field, and once Kubernetes checks that access is okay, the new persistent volume can be populated with data from the storage specified in that separate namespace.
Retroactive default storage class assignment graduates to beta. With this enhancement, there's no need to create a default storage class first and only after that the PVCs to assign to that class; it also allows any PVC without a storage class to be retroactively updated to specify that storage class, even if the PVCs were created before that storage class was defined.
B
The vSphere in-tree to CSI driver migration is part of the overall movement of migrating things from the in-tree Kubernetes source to external drivers. This one graduates to stable and migrates the internals of the vSphere plugin to the vSphere CSI driver while maintaining the original API. Exactly the same is done for the Azure File in-tree to CSI migration, also stable, with the same motivation and the overall same approach as the one before. And with this, SIG Windows, which Mark will cover. Yep.
C
The enhancement for support for Windows privileged containers has graduated to stable. Privileged containers on Windows allow the containers to access host resources, and are very useful for many operational-type workloads. So now they have access to all of the host resources, and this enables running things like your CNI solutions, node exporter, node problem detector and all that, and managing them as DaemonSets.
C
Next is host network support for Windows pods, which has graduated to alpha. This adds support in the kubelet to request that pods getting scheduled on Windows nodes get added to the host's network namespace.
B
And
this
ends
our
seek,
updates
and
I
think
we
are
actually
on
time
just
some
final
words
around
the
release,
steam
Shadow
program
and
kubecon.
B
There are several different release teams: enhancements, CI signal, comms, docs, release notes, etc. These teams take on shadows in order for them to have hands-on experience with developing a release; they participate in the release cycle, and hopefully with this they get involved not only in other Kubernetes community efforts but can also potentially lead the teams in the future. And this is the program: there are several different teams, as I mentioned, release team lead, enhancements, CI signal, bug triage, docs, release notes, communications.
B
Each team has one lead that selects between three and five shadows, and each release takes four months. What I can say right now is that 1.27 is already ongoing, so the shadow program is not accepting people for the 1.27 release right now, but for anyone listening:
B
Please pay extra attention after the 1.27 release is published, because in the following weeks an email is sent to the Kubernetes mailing lists that shares a form in which anyone can volunteer and hopefully be integrated into one team and experience being part of a Kubernetes release. A final word about KubeCon Europe: as I mentioned, KubeCon Europe will happen in Amsterdam; well, it starts on the 17th of April with the contributor summit, and then on the 18th, KubeCon Europe.
B
Several of the release team members will be there, so we will be more than happy to share some additional thoughts and ideas with anyone attending. We share here some transparency reports for both KubeCons, in Europe and North America, but suffice it to say that it is one of the biggest Kubernetes and cloud native related meetings. So do attend, either physically or virtually.
B
We would like to be able to see any of you there and, as I mentioned, share some thoughts, discuss Kubernetes present and future, or just any other topic. And with that we're done; we still have five minutes, and we are more than open for any comments or questions.
D
In the community we have a Slack channel for each SIG: SIG Release and all the other SIGs. So if you want to reach the SIG in general, this would be the best place to go. If you have something very specific, maybe about internal processes, I don't think there will be questions about this, but there are also other channels which are more about those.
B
I just shared the Kubernetes Slack address in the chat, and obviously I think all of our names are easily findable on Google, so feel free to actually reach us individually in Slack there, if needed.
A
Perfect, and yes, Fred, if somebody will send me the slide deck that was used, I will upload that to the website as well, so you'll be able to get both the recording and the slides.
A
All right, I don't see any more questions coming through, so we will wrap this up. Thank you so much, release team, y'all are wonderful, and thank you everyone for joining us today. We will have this online by this afternoon, so if anyone missed it, it will be ready to go. Everyone have a great day or evening, depending on where you're calling from. Thank you, thank you.