From YouTube: Kubernetes 1.24 Release Webinar
A: And we'll kick things off. Welcome, everyone, and thank you for joining us. Welcome back, if you were there, after KubeCon + CloudNativeCon; we're excited to get back into things, and welcome to today's CNCF live webinar, the Kubernetes 1.24 release. I'm Libby Schultz and I'll be moderating.
A: In today's webinar I'm going to read our code of conduct and then hand things over to James Laverack, Staff Solutions Engineer at Jetstack; Mickey Boxell, Product Manager at Oracle; and Grace Nguyen, an engineering student at the University of Waterloo, all members of the Kubernetes 1.24 release team.

A few housekeeping items before we get started. During the webinar you are not able to talk as an attendee. There is a Q&A box on the right-hand side of your screen, so please feel free to drop your questions there, say hello to us, and let us know where you're calling from; we'll get to as many of your questions as we can at the end. This is an official webinar of the CNCF and, as such, is subject to the CNCF code of conduct. Please do not add anything to the chat or questions that would be in violation of that code of conduct. Basically, please be respectful of all of your fellow participants and presenters.

Please also note that the recording and slides will be posted later today to the online programs page at community.cncf.io. They are also available via your registration link, and the recording will be available on our Online Programs YouTube playlist. With that, I will hand it over to our release team to kick things off. Take it away.
B: Hey folks, we are excited to share with you a lot about what's new in Kubernetes 1.24. Libby just did a great introduction of all of us, but just to reiterate: my name is Mickey, I was the comms lead for Kubernetes 1.24, and we also have with us today James, our release lead, and Grace, our enhancements lead.
B: So, on to 1.25 release updates and the 1.25 release timeline, and of course this is all subject to change. It started just yesterday, May 23rd; the enhancements freeze is coming up on the 16th of June; code freeze is after that, on August 2nd; and our target release date is August 23rd, 2022, so everyone make sure that's in your calendars. It's going to be an exciting day.
B
Now
quickly,
segwaying
into
1.24
highlights.
I
will
turn
it
over
to
james
to
talk
about
our
release
theme.
D: So this is a theme that I picked really to encapsulate the idea that everyone in the community can work together and look forwards to try to find new ways of solving problems, and really interesting solutions are out there. The logo is made by my wonderful wife, Brittany, and I think it looks beautiful. So yeah, I think that's really all I have to say about it.
B: Alrighty, thanks James. Next up we have Grace to talk about our enhancements.
C: Right, so for 1.24 we have 46 total enhancements after code freeze. Within that we have 14 stable, 15 going to beta, 13 alpha, two deprecations, and two removals. Interestingly, since 1.17 we've always delivered more than 10 stable features.
C
Also,
the
alpha
features
are
obviously
new,
so
if
anyone's
interested
in
using
it
make
sure
to
turn
on
the
the
future
flag,
as
well
as
we're
always
open
for
feedback
on
features
as
well.
B: Awesome, thanks Grace. So next up we have some major themes. I might turn it over to James to talk specifically about our first theme, and then we can all dive in and chat about the ones afterwards.
D: Oh yeah, so Dockershim, and the Dockershim removal, has been a topic that we've been talking about and communicating about within the release team for some time now. Just to give a quick overview for those that might not be aware: Dockershim is a compatibility shim for Docker Engine that was deprecated back in Kubernetes 1.20 and has now been removed.
D
Now,
as
of
cube9124,
we
have
posted
a
large
amount
of
documentation,
primarily
aimed
at
platform
teams
and
questions
administrators
about
how
to
check
whether
they
are
affected,
with
both
lots
of
migration
instructions
and
things
to
do
next.
I
think
the
short
version
of
this
really
is
that
most
people
who
are
running
as
application
developers
on
kubernetes
or
deploying
day-to-day
will
not
need
to
change
their
workflows.
D
So
if
you
use
docker
as
like
a
local
cli
on
your
computer
when
you're,
developing
and
packaging
things
or
in
your
ci
pipelines
to
create
create
containers,
then
that
workflow
will
almost
certainly
not
change.
This
really
only
affects
a
subset
of
people
within
kubernetes.
There's
a
lot
more
information
about
this
out
there.
D
We
highlighted
the
exact
technical
nature
of
this
change
in
our
release
blog
and
there
have
been
a
number
of
blog
posts
over
the
past
few
months
and
years
on
the
kubernetes
blog
around
this
change
and
about
what
you
can
do
to
find
out.
So
if
you
have
any
concerns
about
this,
as
I
know
that
some
members
of
the
community
have
been
rather
concerned,
then
do
read
our
release
blog
and
that
will
give
you
all
the
information
you
need.
B: Cool, thank you James. Also, one other thing to call out: Grace brought up the need to turn on alpha flags to check out alpha features. One change happening in 1.24 is that new beta APIs are now not enabled in clusters by default. This doesn't impact existing beta APIs, and basically it just means that, moving forward, beta APIs are something that you'll also have to turn on via a flag.
C: Release artifacts are signed. This is a big one for the release team. I'm not sure to what extent this reaches, but essentially we're rethinking how to guard the Kubernetes release process in relation to supply chain attacks. Do you want to talk a little bit about sigstore, or should we just dive into that later, James?
D: Yeah, those two are really a combination of a lot of work from SIG Storage, and we're going to cover that in a little bit more detail later. But, you know, these are features that have been in beta for a while, and seeing them go stable is really exciting for the stability of the platform as a whole.
B: Cool. And then finally, one other CSI-related feature: we're moving away from having in-tree storage plugins to having everything be out of tree. This helps a lot with releases and it also reduces the support boundary. So in this release, the Azure Disk and OpenStack Cinder plugins have both been migrated.
D: Oh, I love the gRPC probes one. So of course, liveness and readiness probes have been a feature in Kubernetes for a long time: the ability to probe a container and ask it for its current status, either liveness or readiness, with different behavior within Kubernetes for what that means. But up until now, they had to be HTTP probes.
D
So,
but
if
your
application
did
not
otherwise
ship
a
http
server
and
primarily
communicated
over
grpc,
that
meant
you
have
to
bundle
the
entire
http
server
just
for
this
probe,
whereas
now
we
also
support
grpc
probes,
which
means,
if
you're
working
in
a
microservices
application,
which
is
using
a
lot
of
grpc
and
doesn't
use
http
internally
at
all.
This
can
really
streamline
your
deployment
of
of
your
internal
services
that
still
need
to
have
these
probes,
but
don't
overwise.
You
just
speak
http.
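As a rough sketch of what this looks like in a manifest (the port number and image name here are illustrative, and the container is assumed to serve the standard gRPC health-checking protocol, grpc.health.v1.Health):

```yaml
# Illustrative Pod whose liveness is checked over gRPC, no HTTP handler needed.
apiVersion: v1
kind: Pod
metadata:
  name: grpc-probe-demo
spec:
  containers:
    - name: server
      image: registry.example.com/my-grpc-server:latest  # hypothetical image
      ports:
        - containerPort: 2379
      livenessProbe:
        grpc:
          port: 2379          # port where the gRPC health service listens
        initialDelaySeconds: 5
        periodSeconds: 10
```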
B: Another one that's come up, originally released as alpha in Kubernetes 1.20, is the kubelet's support for external image credential providers, which is now beta. What's cool about this is that the kubelet can now dynamically retrieve credentials for a container image registry using exec plugins, rather than having to store the credentials on the node's file system.
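The kubelet is pointed at these exec plugins through a credential provider configuration file, roughly like the following (the plugin binary name and image pattern are illustrative assumptions):

```yaml
# Illustrative kubelet credential provider configuration, passed via
# --image-credential-provider-config; the plugin binaries live in the
# directory given by --image-credential-provider-bin-dir.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: CredentialProviderConfig
providers:
  - name: ecr-credential-provider     # hypothetical plugin binary name
    matchImages:
      - "*.dkr.ecr.*.amazonaws.com"   # images this plugin can authenticate for
    defaultCacheDuration: "12h"       # how long the kubelet caches credentials
    apiVersion: credentialprovider.kubelet.k8s.io/v1beta1
```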
C: That's pretty neat. Next we have contextual logging, in alpha. Essentially this enables the caller of a function to control all aspects of logging, so that allows you to format it better: you know, add key-value pairs, add more verbosity, more values and names.
C: Do you want to take the IP collision one, James?
D: Yeah, sorry, I was muted. So the IP collision one is another really interesting one for me. This is the idea that you can specify that a ClusterIP service should receive an IP address, and you can also kind of hard-code an IP address in your service definition. But before this feature, there was no way to guarantee that the hard-coded one you chose was not already dynamically allocated to another service.
D: So this allows you a little bit greater control: a subset of the IP allocation range for service IPs will actually be dynamically allocated, which means that you can use some of them for static allocations. So you can avoid this problem and have a mix of dynamically and statically allocated service IPs without having to worry about contention.
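For reference, a statically allocated service IP is just a hand-picked spec.clusterIP; the address and labels below are illustrative, and the address must fall inside the cluster's service CIDR:

```yaml
# Illustrative Service with a statically chosen cluster IP.
# 10.96.0.0/16 is assumed to be the cluster's service CIDR here.
apiVersion: v1
kind: Service
metadata:
  name: static-ip-demo
spec:
  clusterIP: 10.96.0.10   # hand-picked address; dynamic allocation must avoid it
  selector:
    app: demo             # hypothetical label
  ports:
    - port: 80
      targetPort: 8080
```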
C: Next we have dynamic kubelet configuration removal. What this feature did was allow live rollout of kubelet configuration in a live cluster, but I think there was a lack of motivation or interest within the community to promote this to stable, so it is removed from the kubelet in 1.24 and will be removed from the API server in 1.26.
B: I can start talking about some of these specific changes that have come out. One of those changes is the deprecation and removal of the selfLink field. selfLink is a URL that represents a given object. It's part of ObjectMeta and also ListMeta, which means that it's part of every single Kubernetes object.
B: With this, we are deprecating the field and removing it at a later release, according to the deprecation policy. And just because we haven't talked about the deprecation policy much yet: the way that Kubernetes does things is, if something is not looking like it's going to graduate to stable, or if it looks like it's been replaced by a more capable feature, it enters the deprecation process.
B: We will enter the deprecation process, which means, rather than completely removing an API feature immediately, there will be a process where it'll be removed in a later version. So in this case, we've just begun the deprecation process and it will be fully removed from the API server at a later date.
B: So if I were a client sending a create, update, or patch request to the server, I want to be able to instruct the server to fail when the object I send has fields that are not valid in the Kubernetes resource. What this does is allow us to remove client-side validation from kubectl, while maintaining the same core functionality of erroring out on requests that contain unknown or invalid fields.
B: Oh dear, yeah, we'll save that for offline, but thank you Grace. Next up we have OpenAPI enum types. This detects enum types in resource types and generates a definition in the OpenAPI spec.
B: What else can I say about this? Currently, types in the API have fields that are actually enums but are represented as plain strings, and what this does is propose an enum marker for type aliases that represent enums.
C: I just want to call out that this feature doesn't include fields outside of the schema object. They might be nice to add on later, but that's not part of this enhancement.
B
So
now
we
have
max
unavailable
for
stateful
sets
entering
alpha.
This
implements
max
unavailable
for
stateful
sets
during
a
rolling
upgrade
when
a
staple
sets
spec
type
is
set
to
rolling
update
the
staple
sets
controller
will
delete
and
recreate
each
pod
in
a
staple
set.
Now,
if
any
of
you
are
familiar
with
doing
application
updates,
being
able
to
do
rolling
updates
is
incredibly
critical,
and
so
the
fact
that
this
is
supported
now
is
it's
really
helpful
for
the
process
of
updating
your
applications.
B: Also worth calling out: updating each pod currently happens one at a time, and now what you have is support for a maxUnavailable variable as well. What that means is that you have x number of pods that you are comfortable with being unavailable simultaneously, which facilitates a speedier rollout than doing something like waiting for all of those pods to be updated in serial.
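A minimal sketch of what this looks like in a manifest (the workload image and labels are illustrative; as an alpha feature in 1.24 it sits behind the MaxUnavailableStatefulSet feature gate):

```yaml
# Illustrative StatefulSet rolling update allowing two pods down at once.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  replicas: 5
  serviceName: web            # headless Service, assumed to exist
  selector:
    matchLabels:
      app: web
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 2       # up to two pods may be unavailable during rollout
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.21   # hypothetical workload image
```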
B: Jobs can be parallel, where they have no dependencies between each other, or tightly coupled, where the pods can communicate amongst themselves to make progress. This essentially adds a fixed completion count, which supports running parallel programs with a focus on the ease of workload partitioning.
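A minimal sketch of an Indexed Job with a fixed completion count (the image and command are illustrative); each of the five completions gets its own index through the JOB_COMPLETION_INDEX environment variable:

```yaml
# Illustrative Indexed Job: five completions, up to three pods in parallel.
apiVersion: batch/v1
kind: Job
metadata:
  name: indexed-demo
spec:
  completions: 5
  parallelism: 3
  completionMode: Indexed
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: worker
          image: busybox:1.35
          # each pod works on the partition matching its completion index
          command: ["sh", "-c", "echo processing partition $JOB_COMPLETION_INDEX"]
```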
B: Next up, we have tracking ready pods in the Job status. We are hitting this Jobs API really hard with updates. What this does is add a ready field to the Job status that tracks the number of pods with the Ready condition. So, along with all the other Job-related updates, this gives users a lot of control over the number of Job pods that are running or in pending phases.
B: Next up we have SIG Architecture, and I will dive right into this. This is again one that we called out in major themes, which is simply that new beta APIs are not enabled in clusters by default.
B: Existing beta APIs and new versions of existing beta APIs will continue to be enabled by default, so there won't be any changes required there. But this is definitely worth noting, because previously, when you used alpha or beta flags for clusters, there were a lot of beta APIs that were available by default, and so you may expect that when a new feature graduates from alpha to beta you might actually see it already enabled. In this case, you're going to want to check that list to make sure that the API is in fact enabled.
D
I
don't
think
there
are
any
in
124
of
actually
affected
by
this.
I
think
all
of
the
better
changes
are
either
things
that
have
already
been
in
beta
or
they
don't
introduce
apis.
So
if
it's
just
a
beta
change,
there's
something
else
and
not
an
api
change.
I
don't
think
it's
so
correct
me.
If
I'm
wrong,
I
think
that's
the
case.
I.
B: It's a little bit self-explanatory, but it's worth calling out that today, certificates issued through the CertificateSigningRequest API are not revocable, and you also do not have the ability to control the duration of an issued certificate, and there might be a reason to have trust distinction for different clients.
B
The
great
thing
here
is
this,
and
also,
of
course,
all
of
the
work
with
signing
release
artifacts
for
six
store,
help
with
the
process
of
increasing
the
overall
security
of
the
kubernetes
ecosystem.
B
And
then
last
one
for
sig
off
is
the
reduction
of
secrets
based
service
account
tokens,
so
this
very
simply
reduces
the
surface
area
for
secret
space
service
account
tokens
yeah.
That's
all
I
got
anything
to
add.
B
Okay,
okay
and
I
realized
I've
spoken
a
lot.
I
promise
I
will
turn
it
over
to
my
colleagues
momentarily
and
they
will
handle
the
next
few,
but
I
think
this
is
the
last
sig
that
I'll
be
covering
and
it's
sig
cli
six
cli
covers
cube
control.
Grace
is
that
right.
B
We,
the
sig
cli,
focuses
on
the
development
and
standardization
of
the
cli
framework
and
also
its
dependencies,
and
just
simply
improving
the
command
line.
Experience
for
developers
and
devops
personas.
B: Yes, and we are also adding a new alpha feature, which is adding subresource support to kubectl. It adds a new --subresource flag for the get, patch, edit, and replace commands, to allow fetching and updating subresources like status, scale, and so on.
B
Today,
when
you're
testing
or
debugging
or
fetching
sub-resources
like
status
of
api
objects
via
cute
cuddle,
it
involves
using
cube
cuddle
with
the
raw
flag
and
patching
sub.
Resources
is
not
possible
at
all
and
requires
using
curl
directly
and
it's
you
know
it
kind
of
violates
the
cli
sig
cli
principle
of
making
things
user
friendly,
because
this
method
is
very
cumbersome
and
with
this
we're
adding
subresources
as
a
first
class
option
for
qccuddle
that
allows
you
to
work
with
the
api
in
a
very
generic
fashion.
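As a rough sketch of the new workflow (the resource name is hypothetical, and since the flag is alpha in 1.24 its shape may change):

```shell
# Fetch the status subresource of a Deployment directly.
kubectl get deployment nginx --subresource=status -o yaml

# Patch the scale subresource instead of shelling out to curl.
kubectl patch deployment nginx --subresource=scale \
  --type=merge -p '{"spec":{"replicas":3}}'
```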
C: Yes, I will take it from here. So first we have SIG Cloud Provider. The first enhancement they have is to add a new field to Services of type LoadBalancer: load balancer class. What this allows you to do is have multiple load balancers. So for the use case in which you have multiple workloads, and each of them requires a different type of load balancer, you can now use this field, which is service.spec.loadBalancerClass, and it's also now stable.
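A minimal sketch of the field in use (the class name and labels are hypothetical; the class must match one that some load balancer controller in the cluster watches for):

```yaml
# Illustrative Service selecting a specific load balancer implementation.
apiVersion: v1
kind: Service
metadata:
  name: internal-lb-demo
spec:
  type: LoadBalancer
  loadBalancerClass: example.com/internal-lb   # hypothetical class name
  selector:
    app: demo
  ports:
    - port: 443
      targetPort: 8443
```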
C
Next
up
we
have
leader
migration
for
controller
managers,
and
the
name
doesn't
tell
me
anything
about
the
enhancement,
but
what
it
does
is
migrate.
My
trading
cloud
specific
code
inside
the
cube
controller
manager
to
their
out
of
tree
equivalent-
and
this
is
also
a
stable
enhancement.
C
Next,
we
have
sick
instrumentation,
they
have
three
enhancement.
The
first
one
is
deprecation
of
kubernetes
system
components,
lock,
sanitization,
and
so
this
feature
came
out
of
a
security
audit,
but
essentially
we
are
deprecating
it.
So
this
this
allows
dynamic,
lock,
sanitization,
which
essentially
is
a
filter.
C
Okay
and
then
next
up,
we
have
deprecating
specific
k-lock
flags
in
kubernetes
component,
so
this
came
out
due
to
due
to
complexity
and
as
well
as
performance
issue,
so
we
used
to
use
k-log
as
a
result
of
when
the
go
ecosystem
was
not
as
developed
as
it
is
now,
and
so
this
deprecation,
which
is
in
beta,
will
remove
some
not
all
of
the
k,
loft
flags
in
that
component.
C
Relating
to
this
next
up,
we
have
contextual
logging,
which
is
in
alpha,
and
this
was
one
of
the
major
themes
and
is
relating
to
the
the
k
log
one
I
just
mentioned
previously.
Is
it
allows
you
to
better
log
such
as
have
the
key
value
attached
pairs
at
name
no,
more
verbosities
and
such
so?
This
is
in
alpha.
C
Currently,
C: All righty, if no one has anything to add, I'll move on to SIG Network, and they have four announcements. The first one is support for mixed protocols in Services with type LoadBalancer. So essentially, this allows you to use different protocols via the same IP address, so different Layer 4 protocols.
C
So
the
goal
of
this
is,
you
know,
analyze
the
impact
of
the
feature
it's
currently
in
beta
and
see
how
cloud
provider
load
balancer
implementation
can
use
this
in
the
future.
C: Oh, okay. Okay, maybe I missed this one, yeah: service internal traffic policy. So this is a new field that allows routing to the local node instead of randomizing it. The field is spec.internalTrafficPolicy, pretty straightforward.
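A minimal sketch of the field (the selector labels are hypothetical):

```yaml
# Illustrative Service that keeps traffic on the node it originated from.
apiVersion: v1
kind: Service
metadata:
  name: node-local-demo
spec:
  selector:
    app: demo
  internalTrafficPolicy: Local   # route only to endpoints on the same node
  ports:
    - port: 80
      targetPort: 8080
```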
C: Okay, where are we now. Network policy status, okay, so network policy status provides feedback to the user when they use a network policy. The feature is in beta: if you use a network policy and something goes wrong, there is no immediate feedback, and this is a feature that is going to tell the user if something has been properly parsed or not. Okay, and then next up, I think I covered this one already.
C
Sig
note
dynamic,
cube
lid
configuration
and
it
is
removal.
So
this
was
one
of
the
major
themes
in
which
the
feature
essentially
allow
live
rollout
of
cubelet
configuration,
but
to
due
to
lack
of
motivation
and
interest
in
promoting
to
stable.
This
feature
is
being
deprecated.
C: Oh yeah, okay, yeah, removed, gone. Next up we have pod overhead. So this feature is going into stable, and it's a mechanism to account for the pod overhead required for scheduling: accounting for what the pod overhead is in the runtime solution.
C
Next
up
we
have
cubelet
credential
provider,
so
this
is
a
plug-in
to
allow
the
cubelet
to
dynam
dynamically
fetch
the
image
registry
credential
for
any
cloud
provider
on
top
of
the
three
cloud
providers
that
we
currently
have,
which
is
azure
elastic
container
registry
and
google
container
registry.
So
this
plugin
allows
it
to
to
dynamically
fetch
image
from
any
cloud
provider,
and
it
is
currently
in
beta.
C
Form
and
then
this
is
a
big
one,
as
james
mentioned
before,
darker
shrimp
removal,
due
to
incompatibility
as
well
as
maintenance
burden,
the
container
run
time
for
docker
has
been
removed
from
the
kubelet
code
base
and
there
is
loads
and
loads
of
documentation.
I
think
there's
faq
on
on
the
community's
website
as
well.
If
anyone's
asked
questions,
yeah.
B
And
just
add
on
like
truly
what
grace
said
is
is
correct.
We
are
there's
loads
and
loads
of
documentation
out
there
about
this
process.
On
the
day
of
launch,
we
have
a
post
that
was
published
about
the
history
of
the
docker
shim
removal
process.
B
There
was
one
launched
around
the
same
time
as
our
removals
and
deprecations
blog
for
1.24.
That
included
the
exact
steps.
You
need
to
ensure
that
your
cluster
is
ready
for
the
removal
of
docker
shim.
Prior
to
that
there
was
other
blog
posts
about.
Why
not
to
worry
about
docker
going
away-
and
I
think
there's
just
been
a
lot
done
by
the
community
to
ensure
people
that
we're
all
set
for
this
deprecation
and
there's
really
truly
nothing
to
worry
about.
C: Awesome, awesome. Next up we have pod-priority-based graceful node shutdown. Once again, the name is quite a mouthful, but essentially what this allows the node to do is detect pods with priorities when the node is being shut down, and shut those pods down gracefully. The feature is in beta.
C
And
I
think
this
might
be
the
last
one
for
sick
node,
but
one
of
our
major
themes-
grpc
probes,
has
have
been
added
native
configurations
to
allow
users
to
not
not
require
user
to
package
a
health
check
binary,
so
you
can
do
it.
Natively
now.
C
Only
have
one
features
which
is
rare:
we
do
even
have
any
enhancement
at
all,
but
this
is
a
big
one.
Signing
release
artifacts
currently
in
alpha,
so
the
sick
release
is
trying
to
rethink
or
kind
of
create
a
framework
around
toolings
and
how
we
want
the
release
artifacts
to
be
signed,
and
the
tooling
is
going
to
be
the
linux
foundation,
a
six-door
project
and
I
think,
there's
loads
of
documentation
on
that
list
to
talk
about
that
at
kubecon
as
well.
C
But
the
goal
of
this
is
to
support
a
secure
software
supply
chain.
B
Yeah
then
just
add
on
I
mean
this:
this
provides
end
users
with
a
chance
to
verify
the
integrity
of
everything
that
you're
downloading
all
of
the
kubernetes
artifacts.
So
this
is
a
pretty
big
change.
It's
also
worth
calling
out
that
this
helps
the
kubernetes
project
achieve
greater
compliance
specifically
with
the
the
salsa
standard.
D: SIG Scheduling. So SIG Scheduling looks after anything to do with, well, the act of scheduling, which in Kubernetes terms is anything that chooses where to put a pod, more or less. So we have a few enhancements here.
D: The first one is non-preempting options for priority classes. So priority class is a feature that has existed for a while that allows you to say that some pods are more important, or of a higher priority, than others. And then, when you hit resource contention, you can kick off some other pods to make room for more important, higher-priority ones.
D
This
has
now
been
changed
to
add
a
non-preempting
option,
and
if
you
use
a
non-preempting
priority
class,
then
your
priority
class
won't
evict
things
but
will
instead
be
used
to
make
scheduling
decisions
and
the
real
driver
behind
this
is
for
batch
workloads
where
you
want
to
use
priority
classes
to
prioritize
upcoming
scheduled
work
on
a
full
cluster,
but
you
don't
want
to
evict
pods
are
part
way
through
computation.
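A minimal sketch of a non-preempting priority class (the name, value, and description are illustrative):

```yaml
# Illustrative non-preempting PriorityClass: pods using it are scheduled
# ahead of lower-priority work, but never evict already-running pods.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: batch-high
value: 100000
preemptionPolicy: Never        # the non-preempting option discussed above
globalDefault: false
description: "High scheduling priority for batch jobs, without preemption."
```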
D: Okay, cool. That person might want to check their audio settings, I'm afraid.
D: Should we move on? Namespace-specific pod affinity. So again, pod affinity. Oh, the last one was stable, by the way, as is this one. Pod affinity is a feature, again, that's been around in Kubernetes for a while. It allows you to say that you want pods to exist alongside other pods, which is affinity, or not with other pods, which is anti-affinity, but that was always computed against pods in the same namespace. So now you can give a namespace selector.
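A minimal sketch of cross-namespace anti-affinity (the labels and image are hypothetical):

```yaml
# Illustrative pod anti-affinity evaluated across namespaces via namespaceSelector.
apiVersion: v1
kind: Pod
metadata:
  name: affinity-demo
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - topologyKey: kubernetes.io/hostname
          labelSelector:
            matchLabels:
              app: noisy-neighbor        # avoid nodes running these pods...
          namespaceSelector:
            matchLabels:
              team: analytics            # ...in any namespace with this label
  containers:
    - name: app
      image: registry.example.com/app:latest  # hypothetical image
```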
D
So
if
you
can
do
this
with
pods
across
namespaces,
which
is
a
nice
little
enhancement,
there
min
domains
in
pod
topology
spread.
So
this
is
around
tuning
how
the
pod
topology
feature
works,
which
is,
I
believe
this
is
a
feature
that
allows
you
to
give
kubernetes
some
awareness
of
which
nodes
are
next
to
other
nodes
in,
for
example,
in
a
data
center.
So
this
will
allow
you
to
make
intelligent
routines,
intelligent
scheduling
decisions
for
things
like
dr
purpose,
all
that
sort
of
thing.
D
So
this
is
another
enhancement
fact
coming
through
in
alpha
next
move
on
to
sig
storage.
Stick
storage
has
a
lot
of
caps,
so
we
will.
We
will
try
to
go
through
these
relatively
quickly,
but
there's
a
lot
of
good
features
coming
out
here,
so
six
storage
is
responsible
for
anything
to
do
with
with
storage.
So
predominantly,
of
course,
this
will
be
your
persistent
volumes
and
percent
volume
claims,
but
this
includes
all
sorts
of
features:
around
block,
storage
file,
storage,
object,
storage,
even
in
some
cases,
so
there's
a
lot
going
on
right.
D
What's
the
first
one,
csi
volume
expansion
is
now
stable.
We
talked
about
this.
One
spoke
about
this
one
in
our
major
themes.
This
is
something
that's
been
around
for
a
while
again,
it
depends
on
support
from
your
csi
drivers,
whether
it
can
actually
expand,
expand
volumes
at
all,
but
this
means
you
can
use
the
kubernetes
kubernetes
api
to
manage
that
expansion
and
the
storage
and
the
size
of
your
storage
position
for
persistent
volumes,
which
is
really
exciting
next
volume,
health
monitoring.
D: Next we have, I don't think the slide has updated for me, Mickey. Oh no, it has, I just got ahead: storage capacity tracking. Like the first one, this is another one that's coming through into stable.
D: Next we have a handful of enhancements, three of them, that are all about in-tree storage plug-in migration. When we say in-tree, we mean that the code is in github.com/kubernetes/kubernetes; that's what's considered to be in tree. So we had a whole bunch of storage drivers, and now that we have a Container Storage Interface, all of those are being pushed out into CSI plug-ins.
D
For
now,
the
apis
are
remaining
the
same,
but
the
logic's
been
pushed
out
into
a
driver
and
there
may
be
future
changes
around
that
coming
from
six
storage,
but
that's
not
now.
D
We've
seen
this
been
done
for
a
handful,
but
not
all
of
them
so
far,
so
we're
doing
it
for
openstack
cinder
and
I
believe,
if
you
look
at
the
next
one
is
azure
disk
and
the
one
after
that,
as
your
file,
so
there
are
in
various
levels
of
stable,
stable
beta.
So
this
is
part
of
an
ongoing
effort
within
six
storage
in
order
to
move
everything
to
be
csi
drivers
and
reduce
the
support
burden
of
things
in
tree.
D
Next,
we
have
volume
populators.
So
this
is
the
idea
that
when
you
create
a
volume
in
kubernetes,
you
can
populate
it
from
somewhere
this
at
the
moment
or
before
this
feature
was
primarily
targeted
at
restoring
snapshots,
so
you
could
tell
a
volume
snapshot
and
to
restore
it
as
a
populated
option
or
from
another
potentially,
if
your
driver
supported
it,
another
volume
is
already
around.
This
is
expanding
the
idea
there
to
make
a
generic
volume
populator
so
that
you
can
provide
other
ways
of
populating
volumes
into
into
your
kubernetes
cluster.
D: Next we have non-graceful node shutdown. This is really a reliability improvement around if your node just dies. I think "non-graceful node shutdown" is a euphemism for your node crashed or someone unplugged it. So again, this is really just a reliability improvement, with some behavioral options, to ensure that some things don't get stuck unscheduled, unable to re-run, and requiring someone to manually poke at them to fix them again.
D
This
is
coming
in
alpha,
so
you
will
need
a
feature
tank
to
enable
this,
but
it
is
an
interesting
one.
Coming
up
next,
we
have
honor
persistent
volume,
reclaim
policy.
This
is
really
just
a
standardization
on
behavior.
There
are
certain
circumstances
in
which
volume
reclaimed
policy
won't
be
honored,
and
this
just
makes
it
so
that
it
will.
D
This
requires
a
behavioral
change,
though,
so
it
came
for
this
enhancement
rather
than
a
bug
fix,
but
yeah.
This
is
again
a
pretty
interesting
one
going
through
and
then.
Finally,
finally,
for
six
storage,
we
have
control
volume,
mode
conversion
between
source
and
target
pvc.
So
this
is
actually,
you
could
argue,
a
security
fix.
D: You could use this to trick the kernel into doing something incorrect and cause a kernel crash, and if you can cause a kernel crash and you're creative, you can do all kinds of fun things. So this just really closes off something that no user would ever actually want to do, which is, for example, trying to mount a volume-mode-block PVC snapshot and then mount it as volume-mode filesystem. You're never really going to want to do that.
D
So
this
is
just
an
alpha
feature
to
stop
you
trying
to
do
that
in
the
anticipation
that
eventually
there
would
be
a
bug
that
this
could
be
part
of
next
one
chain,
four,
so
yeah.
This
is
pretty
interesting
stuff.
Most
users
probably
won't
have
to
worry
about
it,
but
it's
nice.
It's
there.
D
Sick
windows
stick
windows,
as
the
name
implies.
Oh,
this
is
the
last
sig
by
the
way,
only
two
more
to
go.
Sig
windows,
the
name-
implies
deals
with
kubernetes
or
windows,
so
microsoft
windows,
they
have
a
couple
of
enhancements
in
with
his
operational
readiness
specification.
D
This
is
really
a
certification
thing
to
do
with
our
enter
and
test
suite.
So
this
allows
you
to
get
greater.
This
well,
ultimately
will
lead
to
having
greater
confidence
in
running
production,
workloads
on
windows,
clusters
and
windows
notes.
So
that's
a
big
improvement.
We
have
to
see
and
then
finally,
the
last
enhancement
for
kubernetes
124
is
identify
pod
os
during
api
server
admission,
which
is
just
about
expressing
which
operating
system
our
pod
intends
to
run
so
that
you
can
make
scheduling
decisions
much
more
effectively.
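A minimal sketch of the field (the pod name and image are illustrative):

```yaml
# Illustrative Pod declaring its target operating system via spec.os,
# so the API server and scheduler can reason about it at admission time.
apiVersion: v1
kind: Pod
metadata:
  name: os-demo
spec:
  os:
    name: linux        # or "windows" for Windows workloads
  containers:
    - name: app
      image: registry.example.com/app:latest  # hypothetical image
```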
B: ...into the release team shadow program. So, I mean, essentially the three of us met and became great friends on the Kubernetes release team. The Kubernetes release team is a group of folks, many of whom have had experience from past releases; others of whom are our shadows, and they're joining for the first time. Maybe they're existing Kubernetes contributors; maybe they're folks that are interested in collaborating on a big open source project. But whoever they are, they're a group of dedicated individuals who come to weekly meetings and take roles on a number of different teams.
B
You
know
enhancements
where
grace
was
in
in
the
church
and
in
in
lead
in
lead
up
lead,
lead
enhancements
there
we
go
or
the
team
lead,
which
is
people
who
have
come
from
previous
releases
and
want
to
be
leaders
of
an
upcoming
release,
just
like
james.
You
can
also
join
me
on
the
communications
team.
There's
also
a
release,
notes,
team,
a
docs
team
bugs
triage
and
ci
signal.
B: We ask folks to, you know, usually stick around a little bit after, just in case they're called on for a couple of additional weeks' worth of work, and we would encourage everyone watching, or everyone who works with Kubernetes, to consider joining the release team.
B
You
know
try
to
throw
your
hat
into
the
mix
if
you
want
to
be
part
of
the
1.26
release,
because
1.25
has
already
kicked
off
when
the
shadows
have
been
selected,
but
we're
always
looking
for
new
collaborators
that
can
help
out
with
the
project
and
we'd
love,
to
see
your
name
on
the
next
application.
B
D
D
Mid-August
give
or
take
that
may
change,
but
if
you,
if
you
pay
attention
to
this
sig
dev
site
to
the
kdev
mailing
list
like
devak
cuban
asset
io,
if
you
subscribe
to
that,
then
you
will
get
information
updates
posted
to
you
that
way
or
if
you're
interested
in
gemalin.
What
secretly
studs
you
can
come
along
to
our
channel
on
the
slack
and
we
have
bi-weekly
main
release
team
meetings
on
wednesdays.
D
Sorry
on
tuesdays,
my
mistake
so
yeah,
I'm
always
happy
to
to
see
people
and
talk
to
people
and.
C
Yeah,
I
have
two
things
that
I'll
be
really
quick.
The
first
one
is,
you
don't
need
experience
to
join.
I
joined
when
I
was
in
first
year
skipping
school
to
be
in
this
webinar
right
now,
so
no
experience
required
and
also
on
top
of
the
teams
listed
here,
our
release,
engineering
team
or
the
branch
managers.
B: Cool, thank you James and Grace. And last up, we have time for questions, so I think we can look over in the Q&A to see if there are any questions asked by folks in the audience.
D: We have a question: is there any place to validate and test features like volume health monitoring and storage capacity? It depends what you mean by validate and test. Do you mean by yourself, in your own clusters, or do you mean does the community project as a whole perform testing on these features? Your own cluster, yeah. So, I mean, those features are enabled in any Kubernetes 1.24. I think for health monitoring you might need an alpha flag, but storage capacity is stable.
D
So
if
you
spin
up
any
one
kubernetes
one
two
four
cluster,
then
you
can
do
it,
I'm
not
sure
if
any
of
the
cloud
providers
have
124
yet,
but
you
can
use
kind
in
order
to
spin
up
a
local
cluster
we've
hostpassed
nfcs.
D: I see what you mean: do any of the locally supported CSI drivers support those two features? I don't know, actually, sorry; I'm missing some of your question.
C: Network policy status. So the goal of this is for the network policy provider to add feedback, or status, for the user to see whether the network policy was properly parsed with this feature.
B: Yeah, it's worth calling out that for additional details about any of the things we covered today, for example the network policy status, we will be sending out the deck, and the deck will have links to all of the Kubernetes enhancement proposals and issues that are tracked and that we discussed today.
D: Just that you're only really seeing three of us here, but the release team is about 30 people. The release team itself, of course, only handles, I say only, only handles kind of the release mechanics, and we're 30 people, and all of the enhancements you've seen us talk about today were implemented by other teams of people in other SIGs. So there's hundreds, if not thousands, of individuals who have contributed to Kubernetes 1.24.
A: Thank you, three of 30, for coming and giving us this great webinar and teaching us about what the updates are and what's going on. I think everybody knows where to reach you, and if anyone has any other questions, definitely reach out; you can always hit us up on the Slack channels. Again, this recording will be online later today. Thanks, everybody, for joining us. It's been a great chat and we will see you next time.