From YouTube: Kubernetes 1.21 Release
A
All right, we're going to go ahead and kick it off so that we can get moving. I know we have a lot to cover. I want to welcome everyone to today's CNCF live webinar, the Kubernetes 1.21 Release. I'm Livi Schultz and I'll be moderating today's webinar. I'm going to read our code of conduct real quick, and then I'll be handing over to Divya Mohan for a few housekeeping items before we get started. During the webinar, you will not be able to speak as an attendee. There's a chat box at the top right of your screen.
B
That's probably... is this better? I'm really sorry about that. Is it perfect right now?
B
Yeah. So, moving through for today: Anna will first be walking us through a sneak peek of what you can expect for the 1.22 release, and after that we will be walking through some highlights of 1.21. This is where Nabarun and Anna will probably be speaking about how the 1.21 logo theme came to be, and we'll also be going through some stats for the 1.21 release.
B
Next up, obviously, we are going through the meat of the presentation, which is going to be the SIG updates from the various SIGs in terms of the feature enhancements, and a little bit of time will be reserved for Q&A towards the end. If you've joined in late, and if we are not able to get through to your questions at the end, we will be answering them on the CNCF Slack channel, which is cncf-online.
B
So please, we request you all to join in there and directly post your questions. We will be taking note of all the questions and answering them there if we cannot get to them towards the end of the webinar. With that being said, I will now be handing it over to Anna. Over to you, Anna.
C
Cool, thanks, Divya. Hi everyone, my name is Anna. I was an enhancements lead for 1.21, and let me give you a sneak peek of the 1.22 release. So, we started the 1.22 release on April 26, and it is targeted to release on August 4th. Last Thursday was actually the enhancements freeze deadline, where all enhancements wishing to be included in the 1.22 release must have their KEPs updated and merged.
C
As of today, we are tracking 67 enhancements for the upcoming release, but that number will change. We have a few exceptions coming in, and with the code freeze I'm not sure if the numbers will go down, but this does look like another big release for us.
C
So expect a lot of things coming on August 4th. Also, if you're following along with the release, you'll actually notice that the 1.22 release cycle is a little longer than usual, because the Kubernetes release cadence has changed to three releases per year. So 1.22 will actually be a 15-week release cycle, compared to 1.21, which was only 13. This was changed to give people more time to develop, etc. So now I will pass it to Nabarun for 1.21.
D
Thanks, Anna and Divya, for starting off the session really well, and thank you, Anna, for the 1.22 updates. So I will go over the 1.21 release highlights. The first thing that I want to emphasize is our release theme. This time we chose the release to be titled "Power to the Community."
D
Now, you may ask what that means. One of the things we have been doing over the past few release cycles has been to make the release team more accessible and more inclusive to every corner of the globe, in the sense of facilitating people to participate in the discussions of the release through alternative meetings, meetings which are more in the Asian or European time zones, so that people don't have to stay awake until late at night to attend. Although we do recognize that having synchronous meetings, even multiple of them, may not serve the purpose of reaching each and every person on earth.
D
So we have also transformed a lot of our processes into asynchronous processes, so that you really don't need to come to a meeting to discuss things. You can just post something on the mailing list or the Slack channels for the release, and it will be discussed there.
D
We also started applying more lazy consensus to decisions, and letting people see and review things. All in all, we have been moving steadily towards our goals of inclusion and more sustainability in the release teams, which will continue further in future releases.
D
Now, what did we even ship in Kubernetes 1.21? With every Kubernetes release, we have been breaking records for the number of features we ship. Kubernetes 1.20, before 1.21, shipped the highest number of features in recent history. In 1.21, we upped the game again, shipping 51 enhancements. "Enhancement" is basically the term in the Kubernetes community for how we track a feature from its inception to its stability, or deprecation in certain cases.
D
So, alpha features are usually disabled by default on each conformant Kubernetes cluster. You can obviously enable them using a feature flag. All of the alpha and beta features are gated; it's just that alpha features are disabled by default, and you have to explicitly enable them. Coming to beta features: when contributors or feature owners think that an alpha feature has been in the project for some time and has gained enough maturity to graduate to beta, they will enable it by default. So the feature flag is set to true by default, although if you feel that it's buggy, or there are some things that are not suited to your use cases, you can disable it when you are bootstrapping the cluster. Alpha enhancements see a lot of changes along their journey.
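As a concrete illustration of the gating mechanism described above, feature gates are plain name-to-boolean pairs passed to each component, either as a flag (`--feature-gates=SomeFeature=true`) or in its configuration file. The gate names below are hypothetical, purely to show the shape:

```yaml
# KubeletConfiguration sketch: feature gates are name→bool pairs.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  MyAlphaFeature: true    # hypothetical alpha gate: off by default, opted in here
  SomeBetaFeature: false  # hypothetical beta gate: on by default, opted out here
```

The same map is accepted by the API server, controller manager, and scheduler via their own flags or config files.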
D
But when you graduate to beta, making changes becomes a bit difficult, because now end users do use them a lot and there are guarantees established. Once beta enhancements stay in beta for some time, they eventually graduate to stable, and then they have strong guarantees that the feature won't change in future releases. It requires a lot of maturity for features to graduate to stable.
D
So that's how we categorize things. Coming to the numbers: of the 51 enhancements, the community graduated 13 enhancements to stable, which means they will be there in the Kubernetes project for some time, and 15 enhancements have graduated from alpha to beta. That means we see a lot of confidence in those features, that they are heading towards stable and are consumable by end users.
D
We have also introduced 21 new features as alpha features in Kubernetes 1.21, which you can check out by enabling the feature flag when you are bootstrapping a Kubernetes cluster.
D
Apart from that, we have deprecated two features, which we will discuss in detail when we go through each of the code ownership updates. Moving on, we have certain major themes for Kubernetes 1.21; we are just emphasizing a few of them in this slide and the next one. Number one: the CronJob resource has graduated to stable. What this means is that CronJobs had been in beta for some time. The old feature gates have also been removed.
The next feature which graduated to stable, and which is one of our major themes, is immutable Secrets and ConfigMaps. What it means is that whenever you create a Secret or a ConfigMap, you can mark it as immutable, so any further update requests to it will be rejected. We will also see this in a bit more detail when we talk about the storage updates, because this is a really cool feature.
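In manifest form, the immutability switch is a single top-level field, as in this sketch (name and data are placeholders):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config      # hypothetical name
data:
  LOG_LEVEL: info
immutable: true         # once set, edits to data are rejected; delete and recreate instead
```

The same `immutable: true` field works on Secrets.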
D
Next up is IPv4/IPv6 dual-stack support, which has graduated to beta. This is a revolution, and a lot of work has gone into making this happen; kudos to all the people involved. In that vein, graceful node shutdown has also graduated to beta; we will see in the SIG Node updates what it means for you, in a bit more detail. And we have more major themes in this release. One of the things that happened concerns what you can check whenever you create a persistent volume.
D
There were no mechanisms by which you, or the Kubernetes API server, could check whether the underlying resource of your infrastructure provider was healthy or not. Now we do have a mechanism for it, although it has only graduated to alpha. You can still check out this feature by enabling the feature flag.
D
With this enhancement, Bazel-based tooling has been removed from the core Kubernetes repository, and each of the build processes has been converted to plain Go tooling. Now, remember I talked about two deprecations. We deprecated PodSecurityPolicy, which was a massive change. It created a bit of an uproar in the end-user community as well, although we will understand in a bit more detail later on what it means for end users, how they can mitigate around it, and what comes next.
D
On
top
of
it,
we
or
the
the
specific
code
owners
who
own
topology
key
have
depleted
it
in
favor
of
like
better
options
going
ahead.
So
anna
and
I
will
go
through
the
special
interest
groups
which
have
shipped
features
in
kubernetes
1.21,
and
we
will
go
through
each
of
those
features
and
give
you
a
short
overview.
Why?
Why
do
I
say
short
in
because
in
the
interest
of
time
we
did
ship
a
lot
of
features,
this
release
cycle
and
it
it
is
a
long
list.
D
But before going through all of that, I want to say two things here. Special interest groups, or SIGs, are units of people inside the Kubernetes community, community groups which own specific areas of code. Each SIG is delegated one specific area in the core Kubernetes repository: for example, you have API Machinery, you have Node, and your CLI.
D
API Machinery handles everything related to the Kubernetes API server, the API types, and API extensions. Node maintains the kubelet and any other code which is relevant to the operations of a node and its surroundings. CLI handles kubectl and anything which you need to do with kubectl. These are just examples; we will go through more SIGs. The other thing is, since we will be brief on things, you still can go ahead and read in more detail.
D
Each of the slides will have links to a tracking issue and an enhancement proposal. The enhancement proposal is the feature proposal where each enhancement or feature owner has written down their motivations, their goals, their non-goals, and some implementation details. So you can just skim through the KEP in order to understand what each feature means.
D
Having said all that, the first thing we will go through is API Machinery, and one of the first enhancements they shipped, graduating it to beta, is efficient watch resumption after a kube-apiserver reboot.
D
What it means is: whenever you restart the API server, it needs to refresh the watch cache from etcd, and many times it may happen that the resource version is out of sync. So if you have a lot of watches on the API server, you may get a ton of relists, which may create unnecessary load on etcd and the API server.
D
So
this
has
been
resolved
and
which
has
resulted
in
like
avoiding
tons
of
realists
during
the
april
rolling
upgrades.
Basically,
at
that
time
you
stop
one
old
api
old
version,
kubernetes
api
server,
and
then
you
start
a
new
one.
This
also
avoids
like
different
instances
of
ap
server
being
stuck
with,
like
the
watch
cache
seeing
to
different
resource
versions
for
a
long
period
of
time.
You
can
obviously
go
through
the
answer
proposal
and
read
about
it
more
in
detail.
D
Next up, you might have heard about this feature called server-side apply. What it does: earlier, whenever you applied a new Kubernetes resource, the diff used to be computed on the client side, and then the diff would be sent. Now the calculation can happen on the server side. But what if you want to do it in a programmatic way in client-go?
D
Earlier, what you needed to do was use a patch type called ApplyPatchType and give the API server bytes of YAML or JSON, so that the API server takes it in and does the server-side-apply operation. With client-go shipping apply configurations, you don't need to do that anymore; you have types that will help you do server-side apply from client-go.
D
With that, server-side apply can go GA, which is, I think, slated to happen in Kubernetes 1.22, the current release cycle, so do track that. The third thing that API Machinery has shipped: oftentimes you want to select namespaces reliably, which the traditional methods of label selectors couldn't do. With a small change (maybe not so small), whenever you create a new namespace, a reserved label called kubernetes.io/metadata.name gets added to the namespace metadata, so that you can efficiently choose that namespace using this label. With that, those are the three enhancements that API Machinery has shipped, and they did a great job, with a lot of those enhancements going into beta and becoming available for users by default.
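For instance, a namespaceSelector (as used in NetworkPolicies or webhook configurations) can now target a namespace by name without anyone labelling it manually; the namespace name below is illustrative:

```yaml
# Fragment: select the namespace "payments" (hypothetical) via the auto-added label.
namespaceSelector:
  matchLabels:
    kubernetes.io/metadata.name: payments
```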
D
Moving over to the next thing, SIG Apps. Apps also shipped a lot of interesting things, primarily, the first thing, CronJobs graduating to stable. As I mentioned, the old controller is now removed and the feature flags are also not present, so the new controller has become the way to go for your CronJobs.
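With the graduation, CronJob is served from the stable `batch/v1` API; a minimal sketch of a manifest (name, schedule, and image are placeholders):

```yaml
apiVersion: batch/v1        # stable API as of 1.21 (previously batch/v1beta1)
kind: CronJob
metadata:
  name: nightly-report      # hypothetical name
spec:
  schedule: "0 2 * * *"     # every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: report
            image: example.com/report:latest   # placeholder image
```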
D
Next up: PodDisruptionBudget has graduated to stable, which also makes PodDisruptionBudgets mutable, so you can change them even after you create them. Along with that, the team has also addressed a lot of performance issues with the PodDisruptionBudget controller.
D
This next one is a bit interesting. Suppose you're a cluster admin and your cluster users are creating a lot of Jobs; if you have a high-ish cardinality and a high number of completion counts, you will end up with a lot of Pod resources sitting in the cluster. They are not cleaned up automatically by default; you would have to run an operation yourself. This feature makes it easy for users to specify a TTL, a time to live, for those resources. The controller will read the TTL you specify and then keep deleting Jobs, and their Pods, which have completed and finished successfully.
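Concretely, the TTL is one field on the Job spec; the value and names below are examples:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: cleanup-demo            # hypothetical name
spec:
  ttlSecondsAfterFinished: 300  # delete the Job (and its Pods) 5 minutes after it finishes
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox          # placeholder image
        command: ["sh", "-c", "echo done"]
```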
Next up is random Pod selection on ReplicaSet downscale.
D
Now, if you as a user have been using ReplicaSets and have been constantly upscaling or downscaling them, you might have noticed that the Pod that is killed on a downscale event is usually the last Pod which was created. Now, it may happen that, because it was created later, it is doing some work.
D
It may be handling some workload which has recently started, and it may be detrimental for your use case to kill a Pod which started very recently. So this introduces a randomized heuristic, so that any of the Pods in the ReplicaSet may be selected and killed, and that behavior where your workload may be hampered does not come into the picture.
D
Next up is indexed Jobs. Oh, also, I have to mention that this feature is in alpha, so you would need to enable the feature flag to have this logic working in your cluster. So, often people run machine-learning workloads, which may be one of the cases, or there may be cases where your workload requires some kind of index for the completion of the job. With this change, you can actually specify that the Job is indexed, and a job completion index environment variable will be present in the containers of the Pods created by the Job. In this example, you can see that a specific process, an image-processing task, is taking the index by reading the environment variable, and it can also use a hostname pattern to effectively talk to another Pod created by that Job. So it becomes a bit deterministic in talking to other processes.
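A sketch of such a manifest (name, counts, and image are illustrative); each Pod receives its index in the `JOB_COMPLETION_INDEX` environment variable:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: image-shards            # hypothetical name
spec:
  completions: 4
  parallelism: 4
  completionMode: Indexed       # alpha in 1.21, behind the IndexedJob feature gate
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox          # placeholder image
        # each Pod processes the shard matching its index
        command: ["sh", "-c", "echo processing shard $JOB_COMPLETION_INDEX"]
```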
D
The next feature is the ability to suspend Jobs. If you have been a user of the Job resource, you might have noticed that, in order to halt a Job, you can easily delete it, but when you delete a Job, the metadata, like how many times the Job has completed or how many times it has failed, is lost.
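With this (alpha, behind the SuspendJob feature gate) the Job gains a `suspend` field, so the controller stops creating Pods while the Job object, and its status history, stays around. A sketch (name and image are hypothetical):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pausable-batch      # hypothetical name
spec:
  suspend: true             # flip to false to (re)start Pod creation; counters are preserved
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox      # placeholder image
        command: ["sh", "-c", "echo work"]
```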
D
So, we have been talking about a lot of changes to ReplicaSets, and this is one other change, where you can influence the order of Pod deletion on downscale events. You might think that, intuitively, it is the opposite of the randomized one, but it's like adding features layer over layer. So, number one, ReplicaSet downscale events will randomly delete a Pod.
D
But then, if you want to control the heuristic a bit, you can specify an annotation called controller.kubernetes.io/pod-deletion-cost and give it a value; the Pods with the lower value will be deleted first. So this is how you can actually control a little bit of the heuristic of how your Pods get deleted when there is a ReplicaSet downscale event.
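As a sketch, the annotation sits on individual Pods; the value is an integer, and lower costs are deleted first (names are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: worker-cheap                                   # hypothetical name
  annotations:
    controller.kubernetes.io/pod-deletion-cost: "-10"  # deleted before higher-cost siblings
spec:
  containers:
  - name: main
    image: busybox                                     # placeholder image
```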
D
This is also alpha, so you need to enable the feature flag. Having said that, I just wanted to give a shout-out to SIG Apps. They have been doing an awesome job enhancing the user experience for Jobs, ReplicaSets, and CronJobs; thank you to them. Also, a lot of these features are in alpha, so please feel free to intentionally enable them, use them, and give feedback to the community. The community would be really indebted for that.
D
Next up, we have the special interest group Auth, who handle the authentication mechanisms in Kubernetes. We now come to a very interesting announcement, which is Pod Security Policy. There have been a lot of discussions on social media and on several channels about it, but one thing I would like to mention here is that PodSecurityPolicy has been deprecated and is slated to be removed in 1.25.
D
It does not mean that you can't use PodSecurityPolicy now; you can still use it, but we would highly recommend you use the replacements that are there, so that your cluster's transition from PSP to the alternative is smooth. You can read the deprecation blog by the Auth and security folks, who are the primary drivers here; it is community work, so a lot of people have been driving it.
D
Just
that
I
want
to
give
a
shout
out
to
them,
so
a
replacement
is
also
being
worked
on.
The
link
is
also
in
the
slide.
So
please
please,
please.
If
you
are
an
user
of
bot
security
policy,
I
would
urge
you
to
go
and
look
at
what
is
the
replacement
and
give
your
feedback?
D
The Kubernetes community and the upstream contributor community thrive on such feedback, so please do that. Moving ahead to the next enhancement from Auth: clients need some way to authenticate requests with credentials from external providers.
D
With this enhancement, client-go provides a mechanism for you to implement out-of-tree providers. What do I mean by out of tree? In the Kubernetes community's context, it means the code does not reside in the kubernetes/kubernetes code base. What you can do is implement a credential provider out of tree, and then, when you use client-go, you can specify that as a provider. Now, you may have already used the GCP and similar providers, which are built into kubernetes/kubernetes.
D
Now
they
will
eventually
be
deprecated
and
in
favor
of
like
out
of
tree
providers,
this
is
still
in
beta,
so
it
would
need
to
go
ga
and
then,
however,
things
will
get
progressed.
D
One thing to note here is that it also essentially means that credentials can be rotated without even restarting the client processes. Since the credentials live out of tree from the client process that you are running to talk to the API server, you don't need to restart that process, and your workloads are not hampered as a result.
D
Next up: bound service account tokens are a cluster of features, if I can phrase it that way, which involve separate enhancements together. One of them is separating the root CA ConfigMap from the bound service account token volume. With this, the audience of issued JSON web tokens is bound, and auto-configured service account token mounts can use those projected tokens, so it becomes more efficient while you are using it.
D
This
remember
I
mentioned,
like
one
service
account
against
a
cluster
of
different
things.
With
that
like
root
c,
a
configmap
also
goes
to
ga,
eventually
like
paving
path
for
the
other
parts
of
bound
service
account
tokens
to
become
or
to
graduate
or
evolve
in
their
functionality.
D
With
this,
a
config
config
map
called
cube,
root,
ca,
dot
crt
will
be
published
to
every
name
space
so
that
it
can
be
used
by
any
workload
to
server
and
verify
those
connections.
D
This will also be helpful when you are designing workloads which run in-cluster. Next up is service account signing key retrieval, which concerns the service account tokens inside a cluster. This is also going to stable. With that, there has been a lot of work on the Auth side of things, specifically around PSP, so do give feedback on anything you feel necessary.
D
Having said that, moving over to CLI: there have been two improvements to kubectl, and both of them are alpha. One of them caters more to cluster admins who want metrics to understand the behavior of users who are calling the Kubernetes API server.
D
In specific cases, kubectl will, along with the request to the API server, also include headers like Kubectl-Command, Kubectl-Flags, and Kubectl-Session, which will help you build more telemetry into operations, for use cases where you want to know what kind of operations your engineers or your cluster users are doing. The Kubectl-Session value is basically a UUID which will be different for each session.
D
Next up, this is also one of the interesting things for user behavior. Say you run kubectl logs, or kubectl exec, specifying the Pod name; if your Pod has multiple containers, you have to specify the -c flag with the name of the container that you want to operate on, be it an exec operation or a logs operation.
D
With this feature, you can actually set an annotation on that Pod saying what its default container is. So if you don't specify the -c flag, kubectl will assume from the annotation which container you mean. -c still takes precedence; it's just that if you don't specify the flag, it will take the default one.
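The annotation looks roughly like this (Pod, container names, and images are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar                          # hypothetical name
  annotations:
    kubectl.kubernetes.io/default-container: app  # `kubectl logs web-with-sidecar` targets "app"
spec:
  containers:
  - name: app
    image: nginx          # placeholder image
  - name: sidecar
    image: busybox        # placeholder image
    command: ["sleep", "infinity"]
```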
D
Those are the two things shipped by SIG CLI. As a cluster user, these two enhancements would be really awesome to see being used to gather more insights. Moving to Cloud Provider: special interest group Cloud Provider shipped a leader migration mechanism for controller managers, in alpha.
D
What this helps with: for all of the out-of-tree cloud providers, if you want to migrate from a Kubernetes control plane which uses an in-tree provider to a version which uses an out-of-tree cloud provider, there is now a mechanism which will help you do it in a highly available way.
D
This enhancement basically defines all the guidelines that you need to follow, like any locking mechanism or resource locks that you want to put on the Kubernetes API, and then do the migration. Kudos to them for shipping such a useful thing. With that, I will hand over the baton to Anna, who will be going through a few more SIGs.
C
Cool, thanks, Nabarun. Let's take updates from SIG Instrumentation, which had five enhancements in 1.21. First one up is the metrics stability enhancement, which graduates to stable. Metrics are categorized as either alpha or stable; alpha metrics can be deleted at any time, and stable metrics are guaranteed not to change. This enhancement gives a better ability to deprecate stable metrics: it will start marking things as deprecated, you'll see a deprecation notice in the description text and in the warning log, and then eventually the metrics will be hidden and then removed.
C
Next we have structured logging, which actually still remains in alpha. Structured logging defines a standard structure for Kubernetes log messages, and starting in 1.21 it is available for the kubelet. Even though it didn't graduate to the next stage, there was a lot of effort put into this during 1.21. Next: expose metrics about resource requests and limits that represent the Pod model, which graduates to beta. This enhancement allows kube-scheduler to expose optional metrics that report the requested resources and the desired limits of all running Pods.
C
Next: defend against logging secrets via static analysis. This is the one where static analysis can now be used during testing to prevent leaking various types of sensitive information.
C
Sorry, this graduates to beta. Next: metric cardinality enforcement is a new enhancement, and this mitigates memory leaks that have been identified with metrics. This enhancement introduces the ability to turn off metrics and to set a list of allowed values for metrics.
C
So, a lot of great metrics-related changes from SIG Instrumentation in this release; shout-out to them for everything, and specifically the structured logging efforts. Now we can take a look at SIG Network. SIG Network had nine enhancements. First one up is IPv4/IPv6 dual-stack support, which graduates to beta. Dual-stack support in Kubernetes means that Pods, Services, and nodes can get both IPv4 and IPv6 addresses. It's not enabled by default.
C
Next we have the EndpointSlice API, which graduates to stable. The EndpointSlice API was introduced to solve existing performance problems with the Endpoints API, and it does that by splitting the endpoints into several EndpointSlice resources. With v1, the topology field has been removed in favor of fields like nodeName and zone, and it adds an annotation to indicate over-capacity for Endpoints resources with more than a thousand endpoints.
C
Next, we have the Service loadBalancerClass. This is one of the new enhancements; it enables the option to specify the class of a load balancer implementation for Services of type LoadBalancer, to allow users to leverage multiple load balancer implementations in a cluster. This is the lightweight approach until the Gateway API becomes mature.
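A sketch of the field (the class name is whatever your chosen implementation registers; the value below is made up):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: edge-svc                          # hypothetical name
spec:
  type: LoadBalancer
  loadBalancerClass: example.com/fast-lb  # hypothetical class; only the matching controller reconciles this
  selector:
    app: edge
  ports:
  - port: 80
```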
C
Next, we have NetworkPolicy port ranges, another new enhancement from SIG Network. I think this one will make a lot of people happy. This enhancement allows you to write one rule for a NetworkPolicy that targets a range of ports, instead of writing one rule for every port; there's a new field now, called endPort, to leverage that range of ports. Cool.
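A fragment of such a rule (CIDR and port numbers are examples); `endPort` closes the range opened by `port`:

```yaml
# NetworkPolicy egress fragment: one rule covering TCP ports 32000–32768.
egress:
- to:
  - ipBlock:
      cidr: 10.0.0.0/24   # example destination range
  ports:
  - protocol: TCP
    port: 32000           # start of the range
    endPort: 32768        # end of the range (NetworkPolicyEndPort, alpha in 1.21)
```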
The next one is service internal traffic policy.
C
This is also a new enhancement; it introduces a new field in Service called internalTrafficPolicy that is used by kube-proxy to filter the endpoints it routes to. When it's set to Cluster, all endpoints are considered, which means it will behave as usual; but when it's set to Local, only node-local endpoints will be considered, which means it will only send traffic to the Service's endpoints on the same node.
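As a sketch (Service name and selector are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: node-local-cache        # hypothetical name
spec:
  internalTrafficPolicy: Local  # kube-proxy only routes to endpoints on the same node
  selector:
    app: cache
  ports:
  - port: 6379
```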
Next: block Service external IPs via admission.
C
This enhancement is new and graduated straight to stable, in response to a vulnerability that was identified which allows an unprivileged user to hijack an IP address via the Service spec. This enhancement blocks the use of external IPs: it allows an admin to disable external IPs and block the deployment of any resource that uses the externalIPs field.
C
Topology-aware hints is a new enhancement that provides hints to cluster components to influence how traffic is routed, so that components like kube-proxy can be more efficient and keep service traffic within the same zone.
C
The next one is a deprecation in topology-aware routing for Services: specifically, the topologyKeys API is now deprecated in favor of the topology-aware hints that were just mentioned on the previous slide.
C
So, to summarize, SIG Network introduced many new alpha enhancements and focused on scalability improvements; shout-out to them for all their hard work and getting a total of nine enhancements into 1.21.
C
Next slide, please. The first one is sysctl support. This one has actually been around since 1.4; it allows interaction with the Linux sysctl interface to tune OS parameters. It's been beta since 1.11, and now with 1.21 it is stable. Next: provide a runAsGroup feature for containers in a Pod. This one graduates to stable; it is another old one that's been around since 1.10.
C
Next, we have the memory manager, which is a new enhancement from SIG Node. It's a new component in the kubelet ecosystem to guarantee memory allocation for Pods in the Guaranteed quality-of-service class, by using a single- or multi-NUMA allocation strategy. This will be useful for any apps that require memory optimization, like packet processing or databases.
C
Next, graceful node shutdown graduates to beta and is now enabled by default. With this enhancement enabled, the kubelet will detect a node system shutdown and try to gracefully terminate the Pods running on the node.
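The grace periods involved are kubelet configuration; a sketch with example values:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
shutdownGracePeriod: 30s              # total time reserved for Pods on node shutdown
shutdownGracePeriodCriticalPods: 10s  # portion of that reserved for critical Pods
```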
Next: add downward API support for huge pages, which graduates to beta.
C
This enhancement allows a Pod to fetch information about huge page requests and limits using the downward API. Next: remove cAdvisor JSON metrics from the kubelet. These have been deprecated since 1.18, and now, by graduating to stable, they have been removed permanently. Next: add a configurable grace period to probes. This enhancement introduces a probe-level terminationGracePeriodSeconds, in addition to the Pod-level terminationGracePeriodSeconds that was already available, as a solution to an edge case where liveness probes are used with a long grace period. Next: extend the Pod resources API to report allocatable resources. This is another new enhancement.
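A sketch of the probe-level field (thresholds, names, and images are illustrative; in 1.21 this sits behind the ProbeTerminationGracePeriod feature gate):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: slow-stopper                     # hypothetical name
spec:
  terminationGracePeriodSeconds: 3600    # long pod-level grace period for normal shutdown
  containers:
  - name: main
    image: nginx                         # placeholder image
    livenessProbe:
      httpGet:
        path: /healthz
        port: 80
      failureThreshold: 3
      periodSeconds: 10
      terminationGracePeriodSeconds: 60  # probe-level override: restart faster after a liveness failure
```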
C
CRI container log rotation is another enhancement that's been around for a long time and finally graduates to stable. This enhancement enables container log rotation for the Container Runtime Interface, and, like I said, it's been around since 1.10, and now it's stable.
C
So it's really nice to see a lot of old features finally graduating to stable from SIG Node, plus new enhancements like the memory manager, which was a big ask. So a huge shout-out to SIG Node. Now, let's look at SIG Scheduling.
C
SIG Scheduling had two enhancements. The first one is honoring the nominated node during the scheduling cycle. This allows users to define a preferred node to speed up scheduling a Pod.
C
Instead of evaluating all the nodes to find the best candidate, the scheduler can now use the preferred node recorded in the nominatedNodeName field inside a pod.
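For illustration (node name is hypothetical): the nominated node is recorded in the pod's status, typically by the scheduler during preemption, and with this enhancement that node is evaluated first.

```yaml
# Pod status fragment: with this enhancement the scheduler tries this node
# before evaluating all other nodes.
status:
  nominatedNodeName: node-a    # hypothetical node name
```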
Next, namespace selector for pod affinity, another new enhancement from SIG Scheduling. This enhancement introduces a namespaceSelector that allows setting namespaces for affinity terms dynamically, specifying namespaces by labels instead of names.
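A minimal sketch, with assumed labels and topology key:

```yaml
# Pod affinity term selecting namespaces by label instead of by name.
affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: web               # hypothetical pod label
      namespaceSelector:         # matches any namespace labeled team=platform
        matchLabels:
          team: platform         # hypothetical namespace label
      topologyKey: kubernetes.io/hostname
```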
C
In addition, it introduces a CrossNamespacePodAffinity quota scope that limits which namespaces are allowed to have pods with affinity terms that cross namespaces.
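A sketch of how a cluster admin could use that scope, with hypothetical names: a ResourceQuota with a hard limit of zero pods matching the scope effectively forbids cross-namespace affinity terms in that namespace.

```yaml
# ResourceQuota that disallows pods using cross-namespace affinity terms.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: disable-cross-namespace-affinity   # hypothetical name
  namespace: sandbox                       # hypothetical namespace
spec:
  hard:
    pods: "0"
  scopeSelector:
    matchExpressions:
    - scopeName: CrossNamespacePodAffinity
      operator: Exists
```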
So shout-out to SIG Scheduling for two awesome new enhancements. Now I'm going to pass it to Nabarun to go over SIG Storage and SIG Testing.
D
Thank you. Thank you for all the updates on instrumentation, network, node, and scheduling; that was really awesome to hear about. So I'll go over the final bits of this webinar: storage, testing, and a bit about our release team shadow program.
D
So, as I mentioned earlier, immutable secrets and config maps have graduated to stable, so you can specify secrets and config maps to be immutable, which protects against unnecessary or accidental updates. Also, the kubelet does not poll for such secrets and config maps, which results in much better performance.
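A minimal sketch, with a hypothetical name and key:

```yaml
# An immutable ConfigMap: once created, its data can no longer be updated;
# to change it, delete and recreate it under a new name.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config      # hypothetical name
data:
  LOG_LEVEL: info       # hypothetical key/value
immutable: true
```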
D
Another thing that I mentioned as a major theme was the PV health monitor. If you enable this feature gate, it will drastically improve the user experience of handling issues with the underlying storage.
D
You will know about problems in a better way, and this also gives you a really early signal of any storage failures that may happen, potentially preventing your workloads from going down in the future. Next up is storage capacity constraints for pod scheduling.
D
With this feature, when the scheduler tries to schedule a pod onto a node, it will now check whether the node can satisfy the requested storage capacity. For example, you say, "hey, I need 10 gigs of a PV along with this pod", but the node doesn't even have the backing capacity for that storage. Now those constraints are taken into account, and pod creation is blocked on those nodes.
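As a sketch (driver name is an example): the CSI driver opts into capacity tracking on its CSIDriver object, and the scheduler then consults the published CSIStorageCapacity objects when placing pods.

```yaml
# CSIDriver fragment: opt the driver into storage-capacity tracking so the
# scheduler can rule out nodes that cannot back the requested volume size.
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: hostpath.csi.k8s.io    # example driver name
spec:
  storageCapacity: true
```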
Next: generic ephemeral inline volumes.
D
So, with this change, you can have really lightweight ephemeral volumes, which are provided by CSI drivers. What happens now is that the pod becomes the owner of the volume claim: those ephemeral volume claims exist because the pod exists, so when the pod is created, the volume claims are created too, and this is done through the owner-reference mechanism.
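A minimal sketch with illustrative names; the storage class is an assumption:

```yaml
# Pod fragment: a generic ephemeral volume. The PVC is created alongside the
# pod, owned by it, and garbage-collected when the pod goes away.
spec:
  containers:
  - name: app
    image: example.com/app:1.0        # hypothetical image
    volumeMounts:
    - name: scratch
      mountPath: /scratch
  volumes:
  - name: scratch
    ephemeral:
      volumeClaimTemplate:
        spec:
          accessModes: ["ReadWriteOnce"]
          storageClassName: fast      # hypothetical storage class
          resources:
            requests:
              storage: 10Gi
```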
Next up is prioritizing nodes based on volume capacity.
D
This will result in pods being scheduled on nodes where the available capacity is actually close to the requested capacity. For example, let's say you have 100 gigs on one node and your volume claim requires 10 GB; it may get scheduled there if you don't have any other option. But let's say you have another node with 20 gigs of storage available.
D
It will try to schedule there instead, so it does a kind of heuristic-based scheduling that eventually tries to optimize your volume resource usage. Again, this is alpha, and it sounds really cool, so you should probably enable it and try it out. One of the things that graduated from alpha to beta is the Azure File CSI driver migration to an out-of-tree CSI driver. Earlier, in one of the previous releases, Azure Disk also moved out of tree, and this has now been done for the Azure File provider as well.
D
Now, one interesting bit to note here is that the feature flag is set to false by default, even though this is beta, because of the in-tree to out-of-tree driver migration. For more details, look at the tracking issue in the KEP, which has more discussion of why this is even necessary.
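A hedged sketch of opting in while the flag defaults to off; the gate names follow the CSI migration feature gates, and this also assumes the out-of-tree Azure File CSI driver is installed:

```yaml
# KubeletConfiguration fragment (the same gates are also needed on
# kube-controller-manager) to opt into the Azure File migration.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  CSIMigration: true
  CSIMigrationAzureFile: true   # beta in 1.21, but off by default
```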
Next: service account tokens for CSI drivers. Now CSI drivers can request audience-bound service account tokens for the specific pods from the kubelet via NodePublishVolume.
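A minimal sketch of how a driver asks for those tokens; the driver name and audience are hypothetical:

```yaml
# CSIDriver fragment: the kubelet passes an audience-bound, short-lived
# service account token of the pod in the NodePublishVolume call.
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: secrets.csi.example.com   # hypothetical driver name
spec:
  tokenRequests:
  - audience: vault               # hypothetical audience
    expirationSeconds: 3600
  requiresRepublish: true         # re-invoke NodePublishVolume so tokens get refreshed
```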
D
That was all for SIG Storage, and as you can see, there have been lots of improvements on the CSI driver side of things, and in how to evolve and improve the user experience of cluster admins and end users when they run workloads that attach storage volumes.
D
Moving over to testing: SIG Testing shipped the Bazel removal KEP, which basically means that the Bazel-based build and related release tooling are now removed, and the CI processes that used to use Bazel now use the native tools like make build, which essentially use the Go toolchain. This also results in reduced stress on the community: they don't need to maintain multiple build systems and can just use the native Go toolchain.
D
That's all on the updates from each SIG. You have seen how 51 features were shipped by the community as a whole, lots of interesting things that you should try out, and lots of interesting ways that the feature set has evolved over time. Having discussed all about the Kubernetes release, I'll talk a little bit about the release team shadow program.
D
The release team shadow program is basically an apprenticeship or internship program through which any new contributor, or any contributor interested in participating in Kubernetes releases, can get started with the Kubernetes release team. They will be mentored by each of the role leads, and each of the role leads signs up three to five shadows, depending on the role and what kind of workload is involved for that team. This program usually runs about four months. Why four months?
D
Because the Kubernetes release cycle is now four months, and all throughout the release cycle the shadows are mentored so that they can take on the lead role next time. So with that, that is the end of the session. You might have already asked questions, so we will try to see if we can answer a few of them; otherwise we will take them to Slack. I'm going to stop sharing, and one thing.
D
I would like to ask Anna and Divya to maybe mention a bit about when they started with the Kubernetes release, which cycle, so that people can also get motivated hearing about your journey.
C
Yeah, so I started with 1.17, I believe. I actually was an enhancements shadow with Nabarun, and I've been part of it since then. I shadowed multiple roles like enhancements, bug triage, and docs, and then I actually led multiple roles as well: I was a docs lead and enhancements lead, and now I'm participating in the 1.22 release as a release lead shadow.
B
Well, I think I relate the most to both of you here. I joined last year as a release shadow on the 1.19 release cycle, as a docs shadow, and I worked alongside Anna for that. After that I shadowed the comms role and led the comms role last cycle, as you'll probably already know from the introduction, and, along with Anna again, I am one of the release lead shadows for this cycle, that's 1.22!
B
So it's been an amazing experience, and it's a highly recommended one. I advise every student, everyone aspiring to get into open source, to join in, because it's a different thing to contribute to something that's larger than yourself. So yeah, that's about it from me!
D
Awesome, thank you both for sharing your experiences. Yeah, I also started in Kubernetes 1.17: I started out as an announcements shadow, then shadowed announcements again, then led announcements in 1.19, then eventually became the release lead shadow and led the release in 1.21. So it's been a fun journey.
D
If you want to learn a lot about how the Kubernetes community works, this is one of the programs you should look at. The program has been really competitive in the past few cycles, so don't worry: even if you are not selected, you can still contribute to the community in a lot of different ways. We do hang out in the community Slack, so I'm just leaving the link to the Slack in the chat. If you want to join, please do so.
D
We are on the channel called sig-release, where most of the release team hangs out. So thank you all for joining today, and thank you, Divya and Anna, for hosting this session with me. It was really great to enumerate through all the features in Kubernetes 1.21. So, handing over to Livi.
A
Thank you all so much; thank you for joining us at CNCF in our live webinar. Thank you, Divya, Anna, and Nabarun, for leading us through this, and check the website later today: the recording and slides will all be up and ready to go. Thank you guys so much, keep the conversation moving on the Slack channels, and we'll see y'all next time.