From YouTube: CNCF Live Webinar: Kubernetes 1.23 Release
A: Today's webinar: I'm going to read our code of conduct and then hand over to Karen Chu, Rey Lejano, and Xander Grzywinski of the 1.23 release team. A few housekeeping items before we get started: during the webinar you're not able to speak, but there is a Q&A box on the right-hand side of your screen. Please feel free to drop your questions there and we'll get to as many as we can at the end.
A: Please also note that this recording and the slides will be posted later today to the CNCF online programs page, community.cncf.io, under Online Programs. They're also available via your registration, and the recording will also be available on our CNCF YouTube channel under the Online Programs playlist. With that, I will hand it over to the team.
B: Hey everyone, as mentioned, I'm Karen. I'm the communications lead for the 1.23 release team. Should we go to the next slide? Cool. Rey, do you want to introduce yourself?
B: Cool. So on today's agenda, we are going to go over the 1.24 release timeline and updates, then we'll go back to the 1.23 highlights and SIG updates, and then Rey and Xander will do the Q&A at the end.
D: We've got just a brief overview here of the projected timeline for the 1.24 release. We'll be kicking the release off on Monday, January 10th, next Monday. All of the following dates after that are subject to change, particularly the enhancements freeze date, but this is what we have laid out so far: an initial enhancements freeze on Thursday, January 27th, followed by code freeze on March 29th, and then targeting the final release for 1.24 on Tuesday, April 19th.
C: All right, next we're going to go through the 1.23 highlights. First off is the theme of the release: for Kubernetes 1.23 the theme is The Next Frontier, and this is the logo. The Next Frontier represents three things; one is the new and graduated enhancements in 1.23.
D: A little overview of the enhancements that we had on track for 1.23: we ended up with a total of 47 tracked, 11 of those being stable, 16 beta, 19 new alpha features, and one deprecation. Since 1.17 there's consistently been north of 10 stable features per release, which is pretty fun. And I guess for those unfamiliar with the terminology here, the alpha features...
C: There's a few slides here; we'll go into more detail when we go through the SIG updates. First is dual-stack: IPv4/IPv6 networking went to stable. Dual-stack was first introduced as alpha in 1.15 and refactored in 1.20, because before 1.20 you had to have a service per IP family; since 1.20 a single service can serve both.
C
Secondly,
the
pod
security
admission
is
now
beta,
so
for
those
who
are
familiar
with
the
pod
security
policies,
which
were
dedicated
in
1.21
and
pod
security
policies,
also
known
as
psps,
is
targeted
to
be
removed
in
1.25.
So
pod
security
mission
is
the
replacement
for
cop
for
pod
security
policy.
What
it
is
it's
a
mission
controller
that
evaluates
pods
against
a
predefined
set
of
pod
security
standards
to
either
admit
or
deny
the
pod
from
running,
so
we'll
go
into
a
little
more
detail
when
we
go
through
the
stick
updates.
C: Thirdly, the horizontal pod autoscaler v2 API is now stable, or GA. This v2 API allows multiple and custom metrics to be used. The v1 API is not being deprecated.
C: Also, the kubelet container runtime interface (CRI) is now beta, and the CRI v1 API is the default since it is beta. We'll go into that in a little more detail with the SIG updates as well. Some more major themes here: the TTL controller is now stable. It's a little like a garbage collector that cleans up Jobs and Pods after they finish; you do have to set a specific field in the Job, ttlSecondsAfterFinished, for it to apply.
C: There's a controller that will watch all the Jobs and compare that field against the current time to see if the job is done, and it will delete the corresponding pods.
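For reference, a minimal Job sketch showing how that field is set; the 100-second value and the names here are just illustrations:

  apiVersion: batch/v1
  kind: Job
  metadata:
    name: cleanup-demo
  spec:
    ttlSecondsAfterFinished: 100   # delete this Job and its pods 100s after it finishes
    template:
      spec:
        restartPolicy: Never
        containers:
        - name: demo
          image: busybox:1.28
          command: ["sh", "-c", "echo done"]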
Another one: simplified multi-point plugin configuration for the scheduler. This is for the kube-scheduler; it adds a new, simplified config field for plugins that allows multiple extension points to be enabled in one place. Another one is generic inline volumes,
C: which are now GA. This allows any existing storage driver that supports dynamic provisioning to be used as an ephemeral volume that's bound to the pod.
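As a rough sketch of what that looks like in a pod spec; the storage class name is a placeholder for whatever dynamic provisioner a cluster has:

  apiVersion: v1
  kind: Pod
  metadata:
    name: ephemeral-demo
  spec:
    containers:
    - name: app
      image: busybox:1.28
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
      - name: scratch
        mountPath: /scratch
    volumes:
    - name: scratch
      ephemeral:
        volumeClaimTemplate:
          spec:
            accessModes: ["ReadWriteOnce"]
            storageClassName: standard   # placeholder storage class
            resources:
              requests:
                storage: 1Gi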
Another one is software supply chain SLSA level 1 compliance. Kubernetes releases now generate provenance attestation files describing the staging and release phases of the release process, so the artifacts are verified as they're handed over from one phase to the next. More major themes: the skip volume ownership change.
C
So
this
feature
allows
you
to
to
to
choose.
If
you
want
to
change
your
ownership,
when
a
volume
is
by
not
inside
a
container,
otherwise
it
would
go
back
recursively
to
change
the
ownership
for
each
for
each
volume
I'll
go
into
this,
like
I
mentioned
more
in
the
sig
updates
as
well,
and
the
problem
is
that
it
could
kind
of
take
too
long
for
very
large
volumes.
C
So
this
allows
you
as
an
option
to
eat,
to
skip
that
also
allows
csi
drivers
to
opt
in
volume
and
permission
changes,
so
this
allows
csi
drivers
to
declare
support
for
fs
group
based
permissions.
Structured
logging
is
now
data,
so
most
log
messages
from
cubelets
and
cubescheduler
has
been
converted
and
there's
more
csi
migration
updates.
So
there's
this
is
a
continuation.
C
This
is
a
continued
effort
to
move
from
entry
plug-ins
to
csi,
so
it's
beta
for
gcpd,
aws,
ebs,
azure
disk
and
alpha
first
f,
rbd
and
port
works.
A
few
more
major
things.
Expression,
validation,
crd
is
now
alpha.
C
So
if
the
feature
gate
is
enabled,
a
custom
resource
can
be
valid
using
the
common
expression
language,
no
one's
opened
api
b3,
open
api
v3
is
more
transparent
than
an
open
api
v2.
It's
also
more
expressive
because
we
actually
lost
some
fields
when
we
published
with
open
api,
v2
server
side,
feed
validation.
So
the
this
is
now
alpha,
so
the
speech
gate
is
enabled
users
will
receive
warnings
from
the
server
when
they
send
a
kubernetes
objects
in
their
request
that
contain
unknown
or
duplicate
fields,
deprecation
of
flex
volume.
C
So
this
is
one
of
the
this
is
actually
deprecated
previously
it's
this
is
the
out
of
tree
cs
drive.
The
outer
tree.
Csi
driver
is
now
the
recommended
way
and
deprecation
of
k,
log
specific
flags,
so
this
is
kubernetes
in
the
process
of
simplifying
logging
in
its
components.
C: Now we'll go into the SIG updates. We're going to go through the various SIGs and talk about the enhancements per SIG, starting off with SIG API Machinery, which covers all aspects of the API server. The first one is priority and fairness for API server requests. This extends the existing max-in-flight request handler in the API server, so that we can make more distinctions among requests to provide prioritization and fairness among different categories of requests.
C: The next one for SIG API Machinery is the custom resource definition, or CRD, validation expression language. Currently we can use admission webhooks to validate custom resources, but that can be very complicated and operationally intensive.
C: This feature enables using the Common Expression Language, or CEL, to validate those custom resources, and it also makes those CRDs more self-contained: you can actually write those validations as code, so they'll live in the definition of the CRD object.
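A minimal sketch of what such an in-definition rule can look like; the Widget group, kind, and fields here are hypothetical:

  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: widgets.example.com      # hypothetical CRD
  spec:
    group: example.com
    scope: Namespaced
    names:
      kind: Widget
      plural: widgets
      singular: widget
    versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              x-kubernetes-validations:   # CEL rules evaluated server-side
              - rule: "self.minReplicas <= self.replicas"
                message: "replicas must be greater than or equal to minReplicas"
              properties:
                minReplicas:
                  type: integer
                replicas:
                  type: integer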
Next: server-side unknown field validation, which is now alpha.
C: The link here again goes to the feature's tracking issue, 2885. Before, if you sent an object with an unknown field, the server would let it go through; but now, with server-side validation, if there's a misspelled field, an invalid or extra field, or any field that's duplicated, it will not allow that. This is also somewhat linked to the existing client-side validation.
C: OpenAPI enum types is alpha. The way it is currently, or before 1.23, since this is still alpha: there are API fields that are effectively enums, but they're represented as plain strings. This adds a marker for those enum types, which allows for enum type support in OpenAPI. And OpenAPI v3 goes to alpha: OpenAPI v3, like I mentioned before, is more transparent and expressive, and with OpenAPI v2 there are some fields that were dropped when published.
D: I'm going to touch on the KEPs that were part of SIG Apps. This SIG covers deploying and operating applications in Kubernetes, and the developer experience related to that.
D: The first one that we have is CronJobs. CronJobs have been stable for a little while now; 1.21 was when that change was made, and there was just some cleanup work with the old controller that happened in the 1.23 release.
D: Then this one Rey touched on as a major theme, the TTL-after-finished controller. This adds a field to Jobs, ttlSecondsAfterFinished, to allow this new controller to clean up old pods related to Jobs. This actually went stable this time around, and, like Rey mentioned in the major themes, it does require that field to be set to make use of it.
D: Then this one was auto-removing persistent volume claims created by StatefulSets. Previously those wouldn't be deleted as part of cleanup of the StatefulSet; it was a manual process. So this adds an auto-cleanup of PVCs that are managed by StatefulSets.
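A sketch of what that opt-in looks like on a StatefulSet, assuming the alpha StatefulSetAutoDeletePVC feature gate is enabled; names and sizes are illustrative:

  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: web
  spec:
    serviceName: web
    replicas: 2
    persistentVolumeClaimRetentionPolicy:
      whenDeleted: Delete   # remove PVCs when the StatefulSet is deleted
      whenScaled: Retain    # keep PVCs around on scale-down
    selector:
      matchLabels:
        app: web
    template:
      metadata:
        labels:
          app: web
      spec:
        containers:
        - name: web
          image: nginx:1.21
          volumeMounts:
          - name: data
            mountPath: /data
    volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi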
D: And then job tracking without lingering pods. Currently, Jobs rely on completed pods still existing in order to count job completion status, and this removes that requirement by utilizing a finalizer rather than keeping those pods hanging around.
D: And then minReadySeconds on StatefulSets allows end users to specify a number of seconds that a pod must exist without crash-looping for the StatefulSet to be considered ready and have that status. It's an existing feature on Deployments, DaemonSets, and ReplicaSets, so this adds parity for StatefulSets.
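A short sketch of the field on a StatefulSet; the names and the 10-second value are illustrative:

  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: db
  spec:
    serviceName: db
    replicas: 3
    minReadySeconds: 10   # a pod must be ready this long before it counts as available
    selector:
      matchLabels:
        app: db
    template:
      metadata:
        labels:
          app: db
      spec:
        containers:
        - name: db
          image: redis:6.2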
D: And then, add count of ready pods in Job status. This feature adds a field, ready, that counts the number of job pods that have a Ready condition; so it's a status reflection on the Job spec.
C: There's one enhancement from SIG Auth, but it is one of the major themes: pod security admission, which replaces pod security policies. Like I mentioned before, pod security policies are targeted to be removed in 1.25, and pod security admission went to beta in 1.23.
C
There
is
a
feature
blog
on
this
on
the
kubernetes.io
website
and
with
some
tutorials
as
well.
Support
security
mission
controller
enforces
the
pod
security
standards
on
pods
within
the
namespace
there's
three
pod
security
stand
three
levels
of
pod
security
standards,
privileged
baseline
and
restricted,
and
you
can
set
the
policy
enforcement
in
three
ways
as
well:
enforcing
audit
or
warning,
and
you
use
this.
You
use
policy
enforcements
through
label
through
namespaces
and
through
labels
on
the
namespace.
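For example, a namespace sketch carrying those labels; the level choices here are just an illustration:

  apiVersion: v1
  kind: Namespace
  metadata:
    name: demo
    labels:
      pod-security.kubernetes.io/enforce: baseline      # reject pods below baseline
      pod-security.kubernetes.io/enforce-version: v1.23
      pod-security.kubernetes.io/warn: restricted       # warn if below restricted
      pod-security.kubernetes.io/audit: restricted      # audit-log violations of restricted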
D: We've got one KEP for this SIG, SIG Autoscaling, and that is graduating the horizontal pod autoscaler v2 API to stable. This adds support for multiple and custom metrics for horizontal pod autoscaling. Nice to see this one go to stable, for sure.
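A sketch of the now-stable autoscaling/v2 API combining a resource metric with a custom pods metric; the Deployment name and the requests_per_second metric are hypothetical:

  apiVersion: autoscaling/v2
  kind: HorizontalPodAutoscaler
  metadata:
    name: web-hpa
  spec:
    scaleTargetRef:
      apiVersion: apps/v1
      kind: Deployment
      name: web                      # hypothetical target
    minReplicas: 2
    maxReplicas: 10
    metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Pods
      pods:
        metric:
          name: requests_per_second  # hypothetical custom metric
        target:
          type: AverageValue
          averageValue: "100"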
C: Next is SIG CLI, which covers kubectl and related tools. There's a new command in alpha, kubectl events. It's different from kubectl get events: it adds a new command with more features than kubectl get events. There's default sorting of the events, and you can manipulate events more: you can sort events by other criteria, and you can also list events on a timeline for the last N minutes. It also extends the behavior of --watch, and you can change the output fields with the custom-columns options. So it just extends kubectl get events, and now there's a kubectl events command.
D: Next we've got Cluster Lifecycle, and this SIG deals with everything cluster lifecycle: deployment and upgrades of Kubernetes clusters. The one enhancement that we have here is for kubeadm. When kubeadm does its initial init, it creates a config map in the cluster, and this is just a KEP that changes the naming of that config map to a more simplified form.
C: Next is SIG Instrumentation, which covers best practices for observability through metrics, logging, and events across all the components. Structured logging went to beta. Structured logging defines a standard structure for log messages; before this enhancement there was no structure for log messages. It adds some methods to klog (klog is a fork of glog) to enforce this structure, and with this, in 1.23, most log messages from the kubelet and kube-scheduler have been converted to klog.
C: Like I mentioned before, klog is a fork of glog. In 1.23, as alpha, there's the deprecation of klog-specific flags in Kubernetes components, to make logging more simplified. The flags that are being deprecated are now being left with their defaults, and the plan is to remove all the klog flags besides -v and -vmodule.
D: Next up we've got SIG Network. They're responsible for the components and interfaces that expose networking capabilities to Kubernetes workloads. They also do some of the reference implementations for those APIs, like kube-proxy and things like that.
D: First up was one of the major themes that Rey touched on, which was IPv4/IPv6 dual-stack support, and it's going stable this release. It adds dual-stack support for pods, nodes, and services. It's a super exciting feature; I know a lot of folks worked really hard to deliver this one, and it's really great to see it go to stable.
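A minimal dual-stack Service sketch, assuming the cluster itself is configured with both address families:

  apiVersion: v1
  kind: Service
  metadata:
    name: web
  spec:
    ipFamilyPolicy: PreferDualStack   # use both families when available
    ipFamilies:
    - IPv4
    - IPv6
    selector:
      app: web
    ports:
    - port: 80
      targetPort: 8080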
D: Next up we have namespace-scoped IngressClass parameters. This adds new scope and namespace fields to the IngressClass parameters reference field, to allow referencing namespace-scoped parameter resources. This is one I'm actually not super familiar with, but you've got a description there, and I encourage folks to go take a look at the KEP on the enhancements tracking website.
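Going by that description, a sketch of what the reference can look like; the controller name, parameters kind, and namespace here are all hypothetical:

  apiVersion: networking.k8s.io/v1
  kind: IngressClass
  metadata:
    name: example-lb
  spec:
    controller: example.com/ingress-controller   # hypothetical controller
    parameters:
      apiGroup: example.com
      kind: IngressParameters
      name: external-config
      scope: Namespace            # new field: reference a namespaced resource
      namespace: ingress-config   # new field: the namespace holding it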
D: And then, lastly, topology-aware hints is going to beta. This works to enable topology-aware routing, and it adds that automatic topology hinting mechanism to the EndpointSlice.
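In 1.23 a Service opts in via an annotation; a sketch:

  apiVersion: v1
  kind: Service
  metadata:
    name: web
    annotations:
      service.kubernetes.io/topology-aware-hints: "auto"   # ask for EndpointSlice hints
  spec:
    selector:
      app: web
    ports:
    - port: 80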
D: And then SIG Node. The work under this SIG encompasses a huge amount of things: everything to do with the kubelet and, I guess, the lifecycle of pods that are scheduled to a node. Lots happening here; we'll go right into this one.
D: I'm actually really excited about ephemeral containers, along with the kubectl debug feature. It adds a mechanism to run a short-lived container that executes within the namespaces of an existing pod, and it allows debugging capabilities against running pods without having to do the whole kubectl exec workflow; for example, something like kubectl debug -it mypod --image=busybox --target=mypod attaches a throwaway debugging container next to the existing ones. This one's super cool. And then we've got container runtime interface support going to beta.
D: And next up we have cAdvisor-less, CRI-full container and pod stats. This will enhance the CRI API with additional metrics, to be able to support the pod and container fields in the summary API directly from CRI, without having to utilize cAdvisor. So, some additional metrics information there.
D: And then extending the pod resources API to report allocatable resources. This enhances the metrics information that the kubelet pod resources endpoint provides, which will allow third-party consumers to get more information about compute allocation. Super useful for getting a clear understanding of the state and utilization of resources within a cluster.
D: And then next up we have CPU manager policies. This will provide some additional isolation: it guarantees that no physical core is shared among different containers, improves cache efficiency, and mitigates the interference from other workloads that can consume resources of the same physical core, which should help with a lot of the noisy-neighbor issues that folks operating clusters deal with.
D: And then we've got pod-priority-based graceful node shutdown going to alpha. Graceful node shutdown itself was a feature that moved up in one of the more recent releases, and this ties pod priority into that feature. It takes pod priority values into account to determine the order in which pods are stopped when going through a graceful node shutdown, and it also adds settings to specify the total time for shutdown and the time to reserve for shutting down critical pods.
D: I know this feature has definitely been a hit with cluster operators as they deal with upgrades and things like that.
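A sketch of what that kubelet configuration can look like, assuming the alpha GracefulNodeShutdownBasedOnPodPriority feature gate; the priority cutoffs and durations are illustrative:

  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  featureGates:
    GracefulNodeShutdownBasedOnPodPriority: true
  shutdownGracePeriodByPodPriority:
  - priority: 100000               # e.g. critical workloads
    shutdownGracePeriodSeconds: 60
  - priority: 0                    # everything else
    shutdownGracePeriodSeconds: 30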
D: And then next we've got gRPC probes for pods. This adds the ability to use gRPC to check liveness, readiness, and startup probes, rather than just the typical HTTP. This is alpha, so it's another one of the features that would need to be enabled with a feature gate.
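A pod sketch using the new probe type, assuming the GRPCContainerProbe feature gate is enabled and that the (hypothetical) image implements the gRPC health-checking protocol:

  apiVersion: v1
  kind: Pod
  metadata:
    name: grpc-demo
  spec:
    containers:
    - name: server
      image: example.com/grpc-server:1.0   # hypothetical gRPC server image
      ports:
      - containerPort: 9090
      livenessProbe:
        grpc:
          port: 9090                       # gRPC health check instead of HTTP
        initialDelaySeconds: 5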
D: And then, lastly for Node, a CPU manager policy option to distribute CPUs across NUMA nodes. This adds a CPU manager policy option which, when enabled, triggers the CPU manager to distribute CPUs across NUMA nodes.
C: Next is SIG Scheduling, which is responsible for the components that make pod placement decisions. First, the kube-scheduler configuration API: it was in beta in 1.22 as well, but there have been some changes, and in 1.23 a new beta iteration, v1beta3, was introduced.
C
Next
is
the
simplified
multi-point
plug-in
configuration
for
scheduler
this?
This
went
to
beta,
so
this
feature
defines
a
simplified
field
for
end
users
to
get
to
configure
scheduler
plugins,
which
use
multi-point
extensions.
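A sketch of the v1beta3 config using the new multiPoint field; MyCustomPlugin is a hypothetical out-of-tree plugin enabled at every extension point it implements:

  apiVersion: kubescheduler.config.k8s.io/v1beta3
  kind: KubeSchedulerConfiguration
  profiles:
  - schedulerName: default-scheduler
    plugins:
      multiPoint:
        enabled:
        - name: MyCustomPlugin   # hypothetical plugin
          weight: 2              # weight applies to its score extension point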
C: Next is SIG Security. SIG Security covers the horizontal security initiatives for the project, which include external security audits, the vulnerability management process, and security documentation as well. Defend against logging secrets via static analysis went to stable.
C: The motivation for this enhancement came from the 2019 external security audit, where what was discovered was that secrets were exposed to logs or execution environments in three ways: bearer tokens revealed in logs, environment variables exposing secret data, and iSCSI volume storage writing cleartext secrets to logs.
C
So
with
this
enhancement,
there's
a
type
analysis
called
taint
propagation
analysis
which
provides
inside
how
on
how
the
data
is
spread
from
within
the
program.
So,
with
this
feature
with
their
state,
there's
a
taint
propagation
analysis
tool
called
go
flow
levy,
so
it
runs
as
a
blocking
presement
test
and
which,
which
will
detect
if
the
secret
is,
is
being
exposed
anywhere
with
that
pull
request.
So
during
the
testing
of
that
and
it
will
block
any
pull
requests
that
log
any
secrets.
C
Next
is
sig
storage,
so
six
storage
is
responsible
for
ensuring
the
different
types
of
file
box
storage,
which
are
available
when
a
container
is
created,
scheduled
also
responsible
for
any
storage
capacity
management
and
also
influence
scheduling,
a
container
space
on
storage
and
also
just
storage
operations
as
well
like
snapshots.
C: The first one is skip volume ownership change; this feature went to stable, and it's one of the major themes. The problem before was that when a volume is mounted inside a container, the permissions on that volume are changed recursively to the fsGroup value that is provided.
C: This change in ownership can take a very long time if the volume is very large; we saw a lot of issues with databases with very large volumes. This feature allows the user to specify how they want the permission and ownership change handled for volumes. You can set it to Always, to always change the permissions and ownership to match the fsGroup, or to OnRootMismatch, to only perform the permission and ownership change if the permissions of the top-level directory do not match expectations.
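A pod-spec sketch of the option; the PVC name is a placeholder:

  apiVersion: v1
  kind: Pod
  metadata:
    name: fsgroup-demo
  spec:
    securityContext:
      fsGroup: 2000
      fsGroupChangePolicy: "OnRootMismatch"   # skip the recursive chown when the root already matches
    containers:
    - name: app
      image: busybox:1.28
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
      - name: data
        mountPath: /data
    volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-pvc   # placeholder PVC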
Like I mentioned, one of the major themes is the continued effort to move in-tree plugins to CSI plugins. One of the enhancements here is the AWS EBS in-tree to CSI driver migration, which went to beta; as part of that continued effort to migrate in-tree storage plugins to CSI, this migrates the in-tree AWS EBS plugin to call out to the EBS CSI driver.
C
This
is
another
one
for
gcepd.
Another
continued
effort
to
move
the
migrate
entry
storage
plugins
to
csi,
so
migrated
internals,
avenchy,
gsgcpd
plugin
to
call
out
the
pd
csi
driver.
So
this
went
to
beta
azure
desk
entry
to
csi
driver
migration
went
to
beta.
This
is
another
one
of
the
continued
entry
storage
plugins,
two
csi
next
config
fs
group
policy
and
csi
drive.
C: objects went to stable. This feature allows CSI drivers to opt in to those volume ownership changes: there's a field, CSIDriver.spec.fsGroupPolicy, which lets a driver declare whether it supports volume ownership modifications via fsGroup.
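A short sketch of that opt-in on a CSIDriver object; the driver name is hypothetical:

  apiVersion: storage.k8s.io/v1
  kind: CSIDriver
  metadata:
    name: example.csi.vendor.com   # hypothetical driver
  spec:
    fsGroupPolicy: File   # the volume supports fsGroup-based ownership changes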
C: Generic ephemeral inline volumes went to stable. This is one of the major themes as well: it's similar to emptyDir, but with CSI plugins, and it allows you to use any existing storage driver that supports dynamic provisioning as an ephemeral volume. Recovering from resize failures went to alpha.
C: The issue before was that when a PVC is expanded, let's say you expand a PVC that was 10 gigs to 500 gigs, but the underlying storage provider doesn't support that and only supports up to 100 gigs, you were stuck. This feature allows you to change that request back, resizing from 500 gigs down to 100 gigs, so that you can recover from that volume expansion failure.
C: Delegate fsGroup to the CSI driver instead of the kubelet went to alpha. When the fsGroup is specified, like we mentioned in skip volume ownership, the mounted volume is recursively chowned and chmodded. Normally the kubelet does this, but chown and chmod are Unix primitives, and there are some CSI storage drivers where they don't apply, so this allows the CSI driver to handle applying the fsGroup instead.
C: Portworx in-tree to CSI driver migration went to alpha; this is part of that continued effort to migrate in-tree storage plugins to CSI.
C: Always honor persistent volume reclaim policy went to alpha. If you work with PVs and PVCs, you know the issue: if the deletion of the persistent volume happens before the PVC, then the reclaim policy is ignored. So there was a certain order you had to delete things in:
C: you had to delete the PVC first and then delete the persistent volume. This feature makes sure that the persistent volume reclaim policy is always honored, even if you delete the persistent volume first, before the PVC. Another one: the Ceph RBD in-tree provisioner to CSI driver migration went to alpha; this is part of that continued effort to migrate in-tree storage plugins to CSI. Next is SIG Testing. SIG Testing is interested in effective testing of Kubernetes and in automating away project toil.
C: Part of this proposal was to remove the Bazel build and any associated tooling from Kubernetes, and to just use the make build. So this simplifies the process by moving to a single build system.
D: We've got SIG Windows, which deals with supporting Windows nodes and scheduling Windows containers, and it's just the one enhancement for SIG Windows: allowing Windows privileged containers. This extends the same capability that exists for Linux containers, running as a privileged container with more host-level access, to Windows containers, and that is moving to beta with this release.
C: All right, so that covers all 47 enhancements in 1.23. What's next is to talk about the release team shadow program and the release team itself. With each Kubernetes release, and there are three a year, there is a release team, and that release team is made up of community members from all sorts of different organizations,
C: students as well, and these release team members handle the day-to-day operations of the release. The team is broken down into seven different roles: the release team lead, enhancements, CI signal, bug triage, docs (or documentation), release notes, and communications.
C: There are a few roles that had five shadows for one lead. The goal for the release team is to train new leads, for shadows to eventually become new leads, and for role leads to share the tasks and knowledge that they've accumulated through many release cycles. I myself have been on release teams since 1.18.
C: It's also a good way for new contributors to be introduced to the Kubernetes project. Each release cycle generally takes about four months, give or take, about 15 weeks. The workload for each release team role varies week by week, and it also varies with the role itself; enhancements, for example, is heavy very early in the release.
C
So
they
do
quite
a
bit
of
work
in
within
the
first
three
to
four
weeks
of
the
of
the
release
cycle
and
then
do
quite
a
bit
of
work
around
code
freeze.
But
there
are
some
roles
like
docs
and
communications
and
that
will
that
are
more
the
tail
end
of
the
release
heavy.
C
So
they
do
most
of
their
work
towards
docs
does
quite
a
bit
of
work
around
code,
freeze
and
code
freeze
to
the
release,
and
they
do
quite
a
bit
of
work
on
really
states
off
and
stable
communications
where
they
tend
to
do
a
lot.
A
lot
more
work
towards
the
middle
to
tail
end
of
the
release.
C
So
I
just
want
to
invite
folks
who
are
interested
in
learning
about
joining
the
release
team
to
check
out
the
github
repo
for
the
on
the
release
team
shadows,
like
xander
mentioned
in
pre
in
the
early
in
the
beginning
of
this
webinar,
the
1.24
release
cycle
will
start
on
monday,
so
so
the
release.
C
So
every
before
every
release
cycle,
we
released
a
an
shadow
application,
so
I
do
suggest
to
join
the
the
kubernetes
slack
and
join
the
the
sig
release
channel
or
the
sig
release
mailing
lists
or
the
or
the
kubernetes
developers
mailing
list
to
get
notifications.
When
those
shadow
applications
are
out,
so
I'm
gonna
go
to
any
questions.
D: One question in the chat about CNI plugins that are compatible with Windows-based nodes. I'll just add a general note; I don't know specifically on that, but I think for questions on specific KEPs that you want some more detail on, a good place to go is going to be the SIG channel for that KEP on the Kubernetes Slack.
D
So
if
you
are
a
member
of
the
kubernetes
slack,
the
the
sig
windows
channel
would
be
a
fantastic
place
to
ask
that
question
and
I'm
sure
you
could
get
it
answered
super
quickly.
C
I
think
that
was
it,
so
I
want
to
thank
everyone
for
their
time
and
you
know
I
do
actually
see
one
more
question.
Well,
the
other
container
on
time.
C
So
there's
a
question
about
container
runtime
and
do
you
want
to
make
a
note
in
1.24
dr
shim
will
be
removed
and
it's
not
a
container
runtime,
it's
a
shim
so
that
folks
could
use
the
docker
docker
engine
or
the
docker
container
runtime
with
kubernetes,
and
so
when
starting
1.24
darker
shim
has
already
been
deprecated,
so
it'll
be
removed
in
1.24,
so
in
1.24
you
would
do
have
to
to
to
use
a
container
runtime
that
is
compliant
with
the
container
runtime
interface,
so
container
runtimes,
there's
there's
quite
a
few
that
are
out
there
like
container
d
cryo
and
there's
more.
A: Of course. Thank you all so much for kicking off 2022, and thank you everyone for joining us. Look for the recordings later today, and with that I will say goodbye to everyone. Thanks for joining!