From YouTube: CNCF Live Webinar: Kubernetes 1.20
A: Thank you for joining us today, everyone, and welcome to today's CNCF live webinar: Kubernetes 1.20. Welcome to our very first live webinar of 2021; thanks for kicking it off with us. I'm Libby Schultz and I'll be moderating today's webinar. We'd like to welcome our presenters, Jeremy Rickard, software engineer at VMware, and Kirsten Garrison, software engineer at Red Hat. A few housekeeping items before we get started: during the webinar you are not able to talk as an attendee. There is a Q&A box that I will activate right now, so you should be able to see that next to the chat. Leave your questions there; feel free to pop them in now or toward the end, whenever you think of them, and we will get to as many as we can at the end. This is an official webinar of the CNCF and, as such, is subject to the CNCF Code of Conduct. Please do not add anything to the chat or questions that would be in violation of that Code of Conduct, and please be respectful of fellow participants and presenters. Please also note that the recording and slides will be posted later today to the CNCF website, as well as back through this registration link at community.cncf.io under Online Programs. With that, I will hand it over to Jeremy and Kirsten to kick it off.
C: Hey, thanks Libby. Just before we get started, are the slides coming through okay? Do they look a good size? Pretty, really.
C: All right. So, like Libby mentioned, my name is Jeremy and I was the release lead for 1.20, and joining me today is Kirsten; she was the enhancements lead for 1.20. I work at VMware on an internal platform, so my team runs Kubernetes and builds things on top of Kubernetes, a lot of things like that. So it was really interesting to participate in the release, and I think there are a couple of really awesome things that I'm looking forward to deploying in 1.20.
B: Sure. I'm Kirsten, a software engineer at Red Hat. I work on the Machine Config Operator, which is an operator in OpenShift, and I've been on the enhancements team since, I think, 1.17, being a bug triage shadow, an enhancements shadow a couple of times, and the enhancements lead last release. That was a great experience, and I think we'll talk about that a little bit at the end of the presentation as well, so if you have any questions about that process, we're happy to talk.
C: It's been quite a journey, all the things happening. So we're here today to talk about the Kubernetes 1.20 release, which we lovingly called the raddest release, for a whole bunch of reasons. First and foremost, this is a big, big release. We'll see some numbers in a little bit, but this was really one of the biggest, if not the biggest, Kubernetes releases in quite some time.
C: One of the fun things that a release lead gets to do is pick the theme, pick a mascot or a logo, and in this case I wanted to pay homage a little bit to Kubernetes 1.14. That release was known as Caturnetes and, if you hadn't seen it, there was this really great picture with the Kubernetes logo and a bunch of cats.
C: So I wanted to pay tribute to that and just have a little bit of fun. 2020 was kind of a rough year in a lot of ways, and I just wanted to end the year with a little bit of fun. So here's my cat, and we styled him up kind of like a 1990s school picture with lasers in the background. He's a very fun, happy guy, so it kind of captured my feelings: the release was really fun.
B: Yeah, it was definitely a great experience, and it was the thing I looked forward to a lot last year. Of all of my meetings and participation, it was just a really nice thing.
C: So today we are going to start off by giving you a little bit of an update on 1.21, what you can expect there and what you can expect generally around releases going forward. We'll talk a little bit about 1.20 in numbers, just to compare it to other releases, and then we'll run through some highlights and show you what's new from each one of the SIGs. We'll go through that in a fairly rapid fashion, because there are so many of them this time around, and we'll leave a little bit of time at the end for Q&A.
C: So first up, the 1.21 release updates. 1.21 is actually going on right now; it started a few weeks ago, and the next major milestone you really need to be aware of, if you're following along, is that enhancements freeze is going to come on February 9th, so coming up pretty quickly. That will set the stage for the release on April 8th, and in between you'll have a bunch of milestones; a really important one will be code freeze. Around those two dates you'll get a pretty good understanding of what's going on, and we'll talk about how you can dig into those things as we walk through some of these issues. But one thing to be aware of is that if you look back at 2020 as a whole, there were only three Kubernetes releases, so 1.20 was the third release of the year and it was the last release of the year. In previous years there have been four releases, so generally one a quarter, but because of all of the uncertainty and the turmoil of 2020, the 1.19 release became pretty long and extended, and that really ate up most of the year. So when we got to the 1.20 release, we didn't have enough time to do another release after it, and it ended up being at the end of the year. There's been some ongoing discussion about whether that's the right cadence or not: do we go back to four releases?
C: I think the decision on this is hopefully going to be made around enhancements freeze, so in a few weeks we'll have a better understanding of what it's going to look like going forward for the 1.22, 1.23, and 1.24 releases and when they'll land. But if you're interested in this at all and you would like to provide feedback, we've linked to a discussion on GitHub that you can go to. This is using the new GitHub Discussions feature, where you can provide your feedback and your thoughts on the release cadence.
C: We would really recommend and encourage you to go and add any feedback, positive or negative, on three releases versus four releases. When SIG Release and the community are building these releases out and pushing forward with new Kubernetes versions, it's really for the community and the people that are going to consume the releases, so we really want to make sure that we're satisfying the desires of the community and balancing that with the needs of contributors.
B: Sure. So, as Jeremy said, it was actually quite a large release. It was a bit hectic, but there was also a lot of pent-up demand for enhancements, so a good number of them were in a really great state by the time we started the release, and then we had the normal amount of enhancements that also went through the process one step at a time.
B: So we had 44 total enhancements in 1.20. To compare that to 1.17, which I believe covered the same sort of time period, there were 22 in that release, which seems a little low, but you can see that we had a lot more than the prior year. We had 16 stable enhancements, which are basically GA, and that's up from, I think, eight in the previous release. We had 15 graduating to beta, 11 new alpha features, and two deprecations, which we started tracking. And then something that I think is really cool to think about: there were at least five new authors, people who are new to the enhancement process and really getting involved with KEPs and adding features. I think that, as we get more new authors participating, it's going to decrease the load on everyone else, but also bring in some great contributors and great ideas. So I just wanted to highlight that in case anybody in the audience is interested in doing an enhancement.
C: I totally agree. I think there are two really important things to hammer in on there. One: this was an end-of-year release, and typically, if you go back and look at 1.17 and previous end-of-year releases, they overlap with holiday seasons, they overlapped with KubeCon; there are so many other pressing concerns that come in that the bandwidth for getting these things worked on and hitting code freeze on time has generally limited the number of things in the past.
C: I think this shows that the view that the end-of-year release is a bug-fix release, or a stability release, or kind of a waste, isn't necessarily true. With proper planning, and I think the extended 1.19 time frame gave all the SIGs more opportunity to get things planned out and ready to go, it doesn't really have to be a waste. And then, just generally, on your last point, that at least five new authors were responsible: enhancements are how we track new things coming into Kubernetes. So if you have an idea for something, the way to get that done is through writing a KEP, a Kubernetes Enhancement Proposal, and that tracks through this whole process: the new features, the beta things, and finally the things that have graduated to stable. So I think it's just really, really cool to see that number.
C: I think that's a great point. All right, so we're going to go through all of the various SIGs and show you the new things, the things that have gone to stable that you can start counting on. Just one quick thing to mention first: we've mentioned alpha, beta, and stable.
C: The real difference between those is that alpha features are not turned on by default. If you want to use those features, there are generally feature gates that you have to turn on on the API server, or configuration options you need to pass to the kubelet. Beta features are turned on by default, so you can turn them off if you need to, but by default they're on. And for alpha and beta things there's no guarantee of backwards compatibility.
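As a sketch of what turning a gate on looks like in practice, here is a kubelet configuration fragment. The gate name used, GracefulNodeShutdown, is one of the 1.20 alpha features discussed later; substitute whichever gate you actually need.

```yaml
# KubeletConfiguration fragment: enabling an alpha feature gate on the kubelet.
# Any alpha gate is switched on the same way; beta gates are already on and
# would only be listed here to disable them.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  GracefulNodeShutdown: true
```

On the API server and other control-plane components, the equivalent is the `--feature-gates=SomeFeature=true` command-line flag.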
C: Those things can go away, and there's actually a policy put in place, starting with 1.20, that things have to promote from beta to stable or they need to go away. I think we'll see some more deprecations down the line falling out of that. But once they get to stable, you have some guarantees that those things will be there for a much longer period of time.
B: And in the KEP we're also starting to include the production readiness review, which I believe also includes considerations about upgrades and downgrades, so we're trying to add more sort of safety measures before features are submitted into the stream.
C: Yeah, definitely. So let's jump into a couple of highlights, things that you should really be aware of from this release, before we dive into the specifics from each SIG. The first one, which I think everybody is aware of, is Docker.
C: The Dockershim predates the Container Runtime Interface and runtimes like containerd and CRI-O. It was a separate code path that existed in the kubelet, and it introduced another area that had to be maintained, kind of back-fitting the Docker engine into the kubelet. For anybody that wants to continue using the Docker Engine like that, Mirantis is going to work on a CRI implementation around it, so you'll be able to use the same kind of functionality. You can find a ton of information on the Kubernetes blog that points you in the right direction, and you'll see some more of this coming along.
C: But the big takeaway here is that Docker is not going away, and even the support for the Dockershim isn't going away immediately. Starting in 1.20, when you start up the kubelet on a node and you're using this feature, you'll just see a deprecation warning. Things will continue to work as-is for a few more releases, so you have time to get around this.
C: Okay, so there are two other areas that we wanted to highlight real quick: stability work, and then some cool new things. Playing off of that Dockershim deprecation, we see some foundational work along the CRI to move it toward beta; it's been alpha for so long.
C: It was introduced in the 1.5 release, and we're on 1.20 now; think about it, if we're normally doing four releases a year, that's been there for quite a while. The same thing for CronJobs: that was actually introduced as ScheduledJob before it was called CronJob, but in 1.8 it became CronJob and went to beta, so that's another one that's been there for a pretty decent amount of time.
C: Another stability sort of thing: exec probes. If you've ever set up an exec probe for a pod, there is a field for the timeout. That timeout was actually never honored, so this is a long-standing bug that's been fixed; we'll hit that one a little bit later. And then, just generally, SIG Node has had a lot of things in this release. There were something like 13 or 14 enhancement issues that were owned by SIG Node, and of those, five graduated to stable.
C: When the node is going to shut down, the kubelet can become aware of that and properly send signals to the workloads, instead of them just kind of going away. There are also better metrics for what resources are being consumed in the cluster: rather than having to cobble this together yourself, starting in 1.20 there will be a nice metrics endpoint where you'll be able to get a good view of requests versus limits and make better planning decisions from the scheduling point of view. Another really cool one, I think, is the ability to autoscale based on container resources instead of the pod. Generally, if you're using the HPA, it looks at the pod metrics, so if you have a multi-container pod and one of those containers is skewing that result, either positively or negatively, you couldn't really scale based off of the individual containers; starting in 1.20 you'll be able to do that. And then, finally, there's a bunch of security-related improvements that have come along. They're not really new-new, but they're new in the sense that they're fixing some problems and making things just a little bit better all around. Okay, so let's jump into the SIG updates, and for this one Kirsten and I are going to go back and forth and give you a little bit of an overview of these things. Kirsten?
B: Oh sure. API Machinery had, I think, four enhancements: two beta, one alpha, and one stable. We have priority and fairness for API server requests, which is now beta, and there's a ton of work that's been going into that; it's been really great. We also have the deprecation of the selfLink field, which went alpha in 1.16. They've been waiting a year between each stage, so four releases from now, I think they're aiming to finish that in 1.24. They're spacing that out and really communicating it through their KEPs and other communications, which is pretty great to see; and for any questions that you have on any of this, there are the tracking proposals, as well as the other comments that come out of the release. We also have defaulting for built-in API types, which is going to go into the Go IDL, be transformed into an OpenAPI default field, and then be routed to defaulting functions, so that defaulting can be done declaratively.
B: And in 1.20 we have the kube-apiserver identity, which is for HA clusters and is also a prereq for other HA features, so it's semi-foundational work. I think that's going to be really interesting: you have each kube-apiserver self-assigning a unique ID during bootstrap and storing it in a Lease object, which controllers can then use.
B: So I think we mentioned this one already: this was the previous ScheduledJob, now CronJob. We're trying not to have things sit in any stage forever, and to actually move them through the process, and this is one of those. This is for all time-related actions, like backups and report generation, so that each of the tasks can run repeatedly or at any given point in time. This is moving to beta, and I think it's going to be dual support, so the v1 of the controllers is still available.
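For reference, a minimal CronJob as it looks in 1.20, still served under the `batch/v1beta1` API while the feature is in beta; the job contents here are just a placeholder.

```yaml
# A CronJob that runs a simple job every night at 02:00.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: nightly-backup
spec:
  schedule: "0 2 * * *"        # standard cron syntax
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: backup
            image: busybox
            args: ["sh", "-c", "echo running backup"]
          restartPolicy: OnFailure
```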
C: Yeah, and I think an interesting thing is that we don't really track everything as a KEP, right? There are things that are kind of process-related, and this one, to me, feels kind of process-related, kind of internal.
C
That's
true
for
some
of
the
security
things
that
came
in
this
release
as
well,
but
you
know
it's
interesting
to
see.
You
know
the
work
is
happening
kind
of
behind
the
scenes
that
you
may
not
directly
use,
but
I
think
this
conformance
testing
stuff
is
super
cool,
because
they're
making
sure
that
you
know
these
releases
are
adhering
to
the
contract
that
they
say
they're
adhering
to
and
as
a
release
goes
forward,
people
can
still
meet
that
conformance
requirement.
It
was
really
cool.
C
Just
as
an
aside
to
to
see
all
of
the
the
work
that
the
conformance
group
had
been
doing.
You
know
and
identifying
things
that
hadn't
been
covered
by
any
testing
and
working
to
get
that
done.
For
you
know
during
the
120
cycle
and
there
they
have
a
website
that
you
can
hit
called
api
snoop.
It's
really
cool
and
it'll,
show
you
like
when
a
test
was
introduced
and
what
things
are
covered
and
what
things
aren't
covered.
C
So,
if
you're
on
the
gray
slack,
there's
a
conformance
performance
channel
that
has
a
lot
of
really
great
people
working
out
of
it
and,
if
you're
interested
in
any
of
that
stuff.
It's
a
great
place
to.
B
And
go
believe
the
api
snoop
actually
came
in
handy
during
one
sort
of
critical
critical
feature
that
we
were
trying
to
merge
so
like
this
extra
tooling,
is
so
valuable
to.
I
think
the
community
and
all
of
the
hard
work
that
goes
into
it
is.
I.
B
B
C: Oh, this one was one of our late-breaking issues; it kind of came in toward the end. I think it's pretty cool, because it's breaking out credential providers. There's a similar issue that we'll see for node, but really this is allowing you to specify different ways of doing authentication and to do these things outside of the tree, which, if you look back at the history of the kubernetes repo, is the direction things have been moving. Some more service account stuff: the ability to provide OIDC discovery inputs, which would be pretty useful. Okay, and then on to autoscaling.
C
So
I
mentioned
this
one
in
the
kind
of
overview
highlight
section,
but
this
is
the
ability
to
use
the
hpa
to
scale
based
off
of
an
individual
container
instead
of
the
aggregated
pod
usage.
It
does
this
by
adding
a
new
container
resource
type.
So
if
you're
familiar
with
the
ammo
for
defining
hpa,
then
under
metrics
you'll
be
able
to
define
new
metrics
that
are
at
the
container
level
and
make
those
scaling
decisions
based
off
of
that,
I
think
that's
really
cool.
As
you
start
getting
these
kind
of
complicated
multi
excuse
me,
multipod.
C
Sorry,
multi-container,
pods
things
with
side
cars
that
may
or
may
not.
You
know
help
you
with
that
kind
of
aggregated
view
of
the
resource
consumption,
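A sketch of what that looks like in the HPA YAML; the `app` container and Deployment names are placeholders, and in 1.20 this metric type sits behind the `HPAContainerMetrics` alpha feature gate.

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: ContainerResource      # new in 1.20: per-container, not per-pod
    containerResource:
      name: cpu
      container: app             # ignore the sidecar's CPU when scaling
      target:
        type: Utilization
        averageUtilization: 60
```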
C: All right, CLI. There are a few in CLI that are interesting to look at. This one is kubectl debug, and it's going to beta in this release. This is cool for me because I was the enhancements lead for 1.18, when this came in as an alpha feature, and I think it's really cool to see these kinds of useful things come to kubectl.
C: Maybe you're using a scratch image that doesn't necessarily have tools, and you might need to debug a production problem. That's where kubectl debug and the ephemeral containers work kind of come together, to allow you to add another container to that pod that will let you do some more debugging there, or maybe make a copy of it.
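As a rough usage sketch (these need a live cluster, and the pod and image names are just placeholders):

```shell
# Attach an ephemeral debug container to a running pod that was built
# from a tool-less scratch image:
$ kubectl debug mypod -it --image=busybox

# Or work on a copy of the pod instead of the live one:
$ kubectl debug mypod -it --image=busybox --copy-to=mypod-debug
```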
B: Sure. So support for the out-of-tree Azure cloud provider is something Jeremy talked about before, I think in the client-go progress slide, where we're moving certain things out of the k/k repo. I think that's going to be really helpful: different teams have trouble keeping up with the releases, or that cadence just isn't the best for them.
C: All right, Cluster Lifecycle. This one is kind of a new feature, but it's really a deprecation, and I think it's a really cool one to see in 1.20: starting to address the use of non-inclusive language. There's a community-wide effort with the Inclusive Naming Initiative and the Working Group Naming inside of Kubernetes to look at the terms we use and find better ones, things that are more inclusive and don't have bad connotations associated with the words. In this case, kubeadm is starting to replace some of the taints and labels that would previously have been applied to the master nodes, and that's becoming the control-plane node. This is marked as a deprecation because the existing master taint and label is being removed and replaced with this control-plane one instead. Because it's a deprecation, you'll be able to continue using the existing word for a while, but you should really start to migrate toward the new one, starting with 1.20.
C: Agreed. So that was the only one for that SIG; let's move on to Instrumentation now, and there are a few in here that I think are really cool. I think we mentioned this one earlier as well, and again this is just my bias toward the things that I think are cool, but as a cluster operator, one of the challenges we have is getting a really good view of who's using what. We have to go do specific queries.
C: We have to look at a lot of things that are running. Does this deployment, this single pod, actually need 16 CPUs, or could it be reduced and kind of right-sized? You know, all that stuff in the cluster; also figuring out what capacity we're going to need down the road.
C
It's
not
a
unified
single
picture
right
now,
but
what's
going
to
be
cool
in
120,
is
that
there's
this
new
feature
that
will
enable
a
new
metrics
endpoint
to
be
scraped
so
you'll
be
able
to
use
prometheus
or
whatever,
to
scrape
this
endpoint
and
get
a
view
of
all
of
the
resources
that
are
being
consumed
from
a
scheduling
standpoint,
so
the
decisions
that
the
scheduler
would
make
are,
you
know
reflected
by
requests
and
limits
and
what
the
node
has
available
you
know
is
the
node
over
committed
or
not,
and
all
of
these
things
will
be
bubbled
up
to
a
single
endpoint
that
you
can
scrape
and
get
a
much
better
view
of
what's
happening
in
the
cluster.
C: This is work that went into applying a logging filter across all the Kubernetes logging components, to make sure that sensitive information doesn't end up in logs. This is another really great one if you're running Kubernetes in production, and especially if you're in any kind of environment that has compliance concerns or security concerns, which is probably everybody; this will be a great feature to have. And then another one that is really more of a process sort of thing.
C
This
is
defending
against
logging
of
secrets
of
the
of
the
infrastructure.
So
when
a
job
runs
in
prow
or
in
when
you
make
a
pr
to
kubernetes
is
responsible
for
running
all
of
the
tests
and
whatnot,
and
what
what's
cool
here
is
that
this
is
the
ability
to
using
static
analysis.
To
figure
out
is
any
of
this
stuff
likely
to
to
leak
information.
If
it
is,
then
that
that
pr
will
fail.
C: All right, next up is SIG Network. Kirsten?
B: I think, just as a note on SIG Instrumentation: that's not a huge SIG, you know, and they landed three pretty large, substantive enhancements in one release. Sometimes, on the outside looking in, we overestimate how many people are working on things; some of these things are a few people doing this, right? I just wanted to call that out, because they've been doing a lot of hustling on these things, and it's pretty great.
B: So for Network we have a couple of things, well, more than a couple. For alpha we have IPv4/IPv6 dual-stack support, which I think a lot of people have been looking forward to. Graduating to GA we have SCTP support for Services, Pod, Endpoint, and NetworkPolicy.
B: It's not always that everything has just graduated; sometimes there are also significant amounts of work going in between the named status changes. So you might see something that is alpha and think, well, why didn't it go to beta? Why is it still alpha? Well, they might still be landing some things that they think are important to have in alpha, or they might be doing some sort of foundational work, and I think that's the case for this one: there's a lot of work going on.
B: And so we have another alpha, which is support for mixed protocols in Services with type LoadBalancer. This is alpha in 1.20, and it's behind the MixedProtocolLBService feature gate, like Jeremy was mentioning before about alpha features.
B
We
have
stable,
adding
app
protocol
to
services
and
endpoints
with
the
endpoint
slice
beta
released
in
1.17
app
protocol
was
added
that
would
allow
the
application
protocols
to
be
specified
per
port
and
kef
is
basically
adding
support
for
that
same
attribute
to
services
and
endpoints.
The
feature
gate
is
supposed
to
be
removed
in
121.,
so
yeah,
and
then
we
have
a
tracking
terminating
endpoint
so
that
we
can
handle
determining
endpoints
gracefully.
B
This
is
the
endpoint
slice.
Api
inter
includes
a
terminating
condition
and
again
it's
under
feature
gate.
If
the
feature
gate's
enabled
this
is
alpha,
and
then
we
have
disabled
node
ports
for
service
type
load
balancer.
C: There are actually a few more that didn't make it into this release that you'll see in 1.21. Just kudos to the node team for all of the stuff they've been doing, and kudos to Elana, who's been working on getting prioritization in place for 1.21 and really making sure that they have a great story and some planning going forward.
C: I think it's really impressive to see, and I'm really excited to see how much lands in 1.21 too.
C: So here is another deprecation. We had a deprecation back a little bit ago, and we also had the Dockershim deprecation; this one is simplifying down the number of streaming requests that can happen to a node. Again, this is an area where there are multiple code paths and it's complicated: configuration is hard on end users, and it just opens you up to more security problems, so this is condensing things down. You can read the tracking issue, and we'll make these slides available after the fact, so you can dig into this to see exactly what you need to be aware of going forward. And then we'll start with the stable things and kind of move to beta and alpha. There were 14 things in this release; pretty impressive.
C: This one is RuntimeClass, which basically allows you to have multiple runtimes in a cluster and specify which one you want to use for different workloads in the pod spec. That's going to stable, so it's been around for a little bit and you can count on it going forward. PID limiting: this one's really cool, another security-related thing graduating to stable, and it allows you to do PID isolation between pods, as well as node to pods. And adding pod startup probes: another one that's graduating to stable.
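With RuntimeClass at stable (the `node.k8s.io/v1` API), selecting a runtime per workload looks like the sketch below; the `gvisor` class and its `runsc` handler are just an example of a second runtime an administrator might have configured on the nodes.

```yaml
# Cluster-level object mapping a name to a handler the CRI runtime knows about.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc          # must match a handler configured in the CRI runtime
---
# A pod opting into that runtime in its spec.
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed
spec:
  runtimeClassName: gvisor
  containers:
  - name: app
    image: nginx
```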
C: So this one is going directly to stable. It's really a bug fix, but it's a bug fix that has pretty deep implications, and we actually saw this toward the end of the release. A report came from some of our friends at Azure: they had some exec probes in some of their pipelines, and those things took more than a minute, and previously that timeout was never honored; they would just continue forever. If you defined an exec probe and something took five minutes or ten minutes, it kept going. Now, starting in 1.20, the timeout is respected, and the default is one second, so if you don't specify a timeout for the exec probe, it will default to that one-second timeout, and things that previously had worked may no longer work. So there is a feature gate, called ExecProbeTimeout, that you can use to keep the old behavior, but that will go away in the future; this is really just helping you get over the hump of fixing your workloads that may or may not be impacted by that.
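The fix is easy to see in a pod spec: if a probe like this one legitimately needs longer than the one-second default, it now has to say so. The script path is a placeholder.

```yaml
# Container fragment: an exec liveness probe. Before 1.20 timeoutSeconds was
# silently ignored for exec probes; now it is enforced, defaulting to 1.
livenessProbe:
  exec:
    command: ["sh", "-c", "/opt/health-check.sh"]
  timeoutSeconds: 30     # set explicitly if the check can legitimately run long
  periodSeconds: 60
```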
C: Third-party device monitoring plugins: again, this is another thing where things have moved out of tree, and support for things that are out of tree.
C: This is finally going to stable as well. Next up, the beta issues. This one is the node Topology Manager, which allows the different kinds of hardware resources a workload uses to be aligned, and it's going to beta now, so it's turned on by default and you'll be able to use it kind of out of the box, which is pretty cool. Another one that's going to beta is allowing you to set the FQDN as the hostname for your pods.
C: Generally, you would have been able to set the hostname and subdomain fields before; now this new field, setHostnameAsFQDN, is just available in the pod spec, and you can use it going forward.
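The new field sits alongside the existing `hostname` and `subdomain` fields; names here are placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fqdn-demo
spec:
  hostname: myhost
  subdomain: mysub
  setHostnameAsFQDN: true   # new in 1.20: the pod's hostname becomes its FQDN
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
```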
C: This one is kind of a deprecation: it's removing some metrics that are really for GPUs. There are three of them being deprecated, so they're turned off by default in this release: memory total, memory used, and duty cycle. This will really only impact GPU users, so if you're using GPUs, this is a good one to be aware of. Support to size memory-backed volumes: this is another one. If you used emptyDir volumes in the past, the size limit was not actually used to bound them; it was used for eviction purposes.
C: Now it's going to be used to create a resource of that size, so that it's portable between cluster providers; in different cluster environments you might have gotten different behaviors, and this is simplifying that down. This one's an alpha feature, so to use it you have to turn on that feature gate, but in subsequent releases you'll be able to just take advantage of it. Graceful node shutdown is another really useful one that I'm looking forward to.
C: This one is basically making the kubelet aware that the node is going to shut down and propagating that signal down to the pods, so they can shut down cleanly instead of just being killed and unexpectedly going away.
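A sketch of the kubelet side of this, behind the `GracefulNodeShutdown` alpha gate; the grace-period values are arbitrary examples.

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  GracefulNodeShutdown: true
shutdownGracePeriod: 30s              # total time pods get on node shutdown
shutdownGracePeriodCriticalPods: 10s  # portion reserved for critical pods
```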
C: CRI support: this one was introduced, I think, in Kubernetes 1.5, way back when, and it's going to go to beta, probably in 1.21. There was some work that needed to happen before that could happen; part of that was deprecating the Dockershim, and a few other things had to happen too. So again, it's marked as alpha here and it's staying in alpha, but there is a lot of work that starts that train down the road. And then another alpha feature: adding huge page support to the downward API. The downward API allows you to project things about the pod into the pod.
C
You previously could not use huge pages with that, so this gives you requests and limits for huge pages through the downward API, which previously weren't available.
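A minimal sketch, assuming the alpha DownwardAPIHugePages feature gate is enabled (the names and sizes are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hugepages-demo          # illustrative name
spec:
  containers:
  - name: app
    image: busybox:1.32         # illustrative image
    command: ["sleep", "3600"]
    resources:
      limits:
        hugepages-2Mi: 100Mi
        memory: 100Mi
      requests:
        hugepages-2Mi: 100Mi
        memory: 100Mi
    env:
    - name: HUGEPAGES_2MI_LIMIT
      valueFrom:
        resourceFieldRef:       # downward API projection of the hugepages limit
          resource: limits.hugepages-2Mi
```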
Kind of going back to that client-go one we mentioned earlier: this allows the kubelet to use exec plugins to pull image pull secrets and credentials from external plugins. So two new flags come to the kubelet, and then there's a YAML resource where you can define how these things should work.
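As a rough sketch of that YAML resource (the plugin name and registry pattern below are hypothetical), the kubelet is pointed at a config like this via the new --image-credential-provider-config and --image-credential-provider-bin-dir flags:

```yaml
apiVersion: kubelet.config.k8s.io/v1alpha1
kind: CredentialProviderConfig
providers:
- name: my-credential-plugin    # hypothetical plugin binary in the bin dir
  matchImages:
  - "*.example.com"             # images this plugin can authenticate for
  defaultCacheDuration: "12h"   # how long the kubelet caches returned credentials
  apiVersion: credentialprovider.kubelet.k8s.io/v1alpha1
```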
C
Yeah — and when you think about it, that touches so many different things, so when those things go through, it may not just be signal that has to remove things. There may be...
B
Sure — this would be: add a configurable default constraint to pod topology spread. The spreading rules are defined in the pod spec and tied to the pod, so this is going to add defaults and allow cluster operators to define spread. This is beta, so it's available by default, I guess.
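One plausible sketch of such a cluster-level default, set in the scheduler configuration (the values are illustrative, not a recommendation):

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
profiles:
- pluginConfig:
  - name: PodTopologySpread
    args:
      defaultConstraints:       # applied to pods that define no constraints themselves
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: ScheduleAnyway
```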
And just as a note, as we're moving into storage: they've also put in a huge amount of effort to really refine their KEP-handling process.
B
Xing Yang and Michelle Au — they've just been doing so much work that, as an enhancements lead, I have to give a shout-out, because over the past couple of releases they've made it really easy to get the KEPs through, get the code reviewed, and get everything merged. There's a lot of organizational work going on behind the scenes as well that doesn't get acknowledged, but we really appreciate it.
B
This is beta: skip volume ownership change. This will allow a user to optionally skip the recursive ownership and permission change on a volume if the volume already has the right permissions.
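A minimal sketch of opting into that (the names are illustrative, and the claim is assumed to already exist):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-demo            # illustrative name
spec:
  securityContext:
    fsGroup: 2000
    fsGroupChangePolicy: "OnRootMismatch"  # skip the recursive chown/chmod
                                           # when the root already matches fsGroup
  containers:
  - name: app
    image: busybox:1.32         # illustrative image
    command: ["sleep", "3600"]
    volumeMounts:
    - mountPath: /data
      name: vol
  volumes:
  - name: vol
    persistentVolumeClaim:
      claimName: my-claim       # hypothetical existing PVC
```

This matters most for large volumes, where the recursive permission change can stall pod startup for minutes.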
B
And then there's the service account token for CSI drivers: basically, the pod's service account token gets plumbed down from the kubelet to the CSI driver.
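As a rough sketch (the driver name and audience are hypothetical), a CSIDriver object can ask for the pod's service account token like this:

```yaml
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: secrets.example.com     # hypothetical driver name
spec:
  tokenRequests:                # ask the kubelet to pass the pod's SA token
  - audience: "vault"           # illustrative token audience
    expirationSeconds: 3600
  requiresRepublish: true       # re-deliver the volume before the token expires
```

The driver then receives the token in its NodePublishVolume call and can use it to authenticate to an external system on the pod's behalf.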
C
This one's pretty cool, I think — another one I remember pretty distinctly from 1.18. Having been at Microsoft for a while, it's really cool to see the things that have been happening with Windows.
C
In Kubernetes, when you think Kubernetes you may or may not think Windows containers, and it's really cool to see this work happening. There were some more things that SIG Windows was trying to land towards the end of the cycle that unfortunately didn't make it in, but this one, I think, is a pretty huge win: you're getting CRI support for Windows, and that's a stable thing in Kubernetes 1.20.
C
Now, I think it opens up a lot of possibilities for people that, for whatever reason, can't migrate off of Windows, or are just Windows-based shops whose workloads depend on it. This is just making it more inclusive for them, so they can take advantage of all of the benefits of using Kubernetes.
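As a minimal sketch of running such a workload (the names and image are illustrative), Windows pods are typically steered onto Windows nodes with the standard OS label:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: windows-demo            # illustrative name
spec:
  nodeSelector:
    kubernetes.io/os: windows   # schedule only onto Windows nodes
  containers:
  - name: app
    image: mcr.microsoft.com/windows/nanoserver:1809  # illustrative image
    command: ["cmd", "/c", "ping -t localhost"]
```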
B
So I think we quickly rolled through all of those enhancements. In the slides that are going to be provided by Libby afterwards, we've included all the links to the KEPs and to the issue tracker, so you can dig into those and get a little more detail, or ask questions. But also, these are obviously substantive KEPs — these aren't just bug fixes going out at the end of the year.
B
This represents a ton of work that people have done throughout those months, especially in a really tough year. So it's been pretty amazing.
C
Yeah, I wasn't sure what to expect coming in to lead this, thinking about the rest of the year. I was the shadow for the lead — the lead shadow — for 1.19, and I saw all the turmoil and the changes that were happening and how we were responding to concerns from contributors.
C
At one point we asked whether we should even do a 1.19 release, with how the year was going, so I was super unsure how 1.20 was going to go. But in the end, I think it was super exciting. There were so many things done by so many people — just so much good work — that I'm really proud of 1.20.
C
Everything in Kubernetes is really the community. Some people are paid — full disclosure, my job allows me to do some of this work — but it's not my full-time job, and I volunteered back in 1.17 to shadow enhancements. That's really how you get started with this. So if you're interested in being on the release team for 1.22, or any of the releases after that, the way to start is the release team shadow program.
C
I've been through that, Kirsten's been through that, and Nabarun, who's leading 1.21 right now — we shadowed together in 1.17. There's lots of opportunity; there's lots of demand too, just kind of full disclosure there. But we wanted to give you a little bit of information about the release team program and the shadow program, and let you know where to look and when to look.
B
If you have any questions about it, definitely feel free to reach out. Like Jeremy was saying, the workload really varies on the team, so if you're interested in the program, try talking to people during a quiet time about what those expectations might be.
B
That's really helpful. But I'd also just reiterate what Jeremy was saying: people forget that the release doesn't just happen by itself. It's not just people doing random pull requests and that's it — there's a lot of work that the SIGs have to do outside of just coding, a lot of architectural and organizational work that they have to put in.
B
I think we don't appreciate that enough. And then the release itself doesn't really come to fruition without the entire release team — all of these shadows and all these other teams and all these things that you might not always think about, like docs, or release notes, or bug triage. We have a huge CI signal effort underway as well, to get the CI stabilized.
B
All of these are really important parts of getting a great release. It's not just "can this PR go in" or "can I get this feature" — there's a ton of work that gets done, and I think any help that anybody wants to provide would be welcome, because, to be corny, it takes a village, right?
C
Yeah, I'd definitely recommend this to anybody that's interested. One of the comments I remember from the cycle was from Rob, who was the CI signal lead for 1.20 — he likened the release team to a Montessori school for Kubernetes.
C
Montessori is a method of educating kids where you figure out what the kid is good at and what the kid wants to do, and they can go from thing to thing — it gives them exposure to a lot of different things. The release team is definitely that way. You might think that the release team is only experienced contributors, and that's not true. When we pick shadows, we definitely look for a mix.
C
When I was the enhancements lead, I picked a mix of people. I picked John Belamaric, one of the leads from SIG Architecture; I picked Kirsten, who had a lot of experience with OpenShift and Kubernetes; and I picked people who were brand new to the project — because those people get great experience, but you also get a lot of great insights that you may not otherwise have gotten. That kind of diversity in people really helps build up really solid teams.
C
If you go to the sig-release repo on GitHub — github.com/kubernetes/sig-release — you can find the handbooks for each one of these roles, and they'll give you a much better idea about the time commitment and what the actual job of each role is.
C
What does enhancements do? What does CI signal do? Each one of those handbooks gives you a really good idea about what that team does, and I think they're great because they can springboard you into doing more of that stuff. If you're interested in CI signal, it can set you up to do a lot of really great things with SIG Testing, because there's a really tight integration between what the release team's CI signal team does and what SIG Testing is doing.
C
Tracking down flakes in tests, figuring out "is this really a problem or not?" — that team was super critical to us at the end of the release. And again, that three-months thing ties back to our release cadence, which may change depending on what happens with the release cadence discussion going forward.
C
All right — I guess with that we can open up the questions now. We can scroll back to the chat and see if there's anything.
C
Yeah, so the first question: could the presenters opine on the fact that most enterprises...
C
...tend toward trailing the adoption, around 1.16. My team specifically just upgraded to 1.16, so I definitely feel that, and I think it really is an important one to consider for the release cadence. There was some work by a working group, the LTS working group — they were looking at supportability: how long does a given Kubernetes release stay in support? Right now it's three releases, so that's not that long. Their work was looking at...
C
...how do we shift that towards a year? There's a lot of things that go into that. Maintaining old branches is extra work.
C
If fixes come along to, say, Go and you need to rebuild all of the components, that's extra work, and a lot of that's done by SIG Release. But just in general, as this train continues, it's really hard to keep up, and I definitely feel that. I think that going to three releases, from a consumer standpoint, makes it a little easier for us — there's less of a train to have to keep up with. That 1.16 upgrade...
C
...if you haven't done it yet, is a challenge. There were a lot of breaking things that happened in there — things that had been deprecated for quite a long time were finally removed — so it was quite impactful. We tried not to do that in 1.19.
C
We were really mindful of the time of the year. But definitely, if you're one of those people, you should go and give comments on that issue — we linked the discussion issue.
B
Yeah, I would say I'm a developer, so I don't necessarily have the same pain points. But if you have an opinion, then I think you have to share it with the community. So I would definitely follow the link that's in the slides to make your voice heard, especially if you feel like there's something that's not being considered in the decision-making.
C
I see another question at the bottom here we can answer real quick: is Istio planned to be part of Kubernetes in 1.22? The answer to that is no. Kubernetes and Istio are separate projects; their life cycles are separate. Istio is something you can install onto Kubernetes, but they are not intrinsically tied together — Istio runs on Kubernetes, but you can run Kubernetes without Istio. So I don't think that would be planned to be part of 1.22. You would have to...
C
Next question: "I have kubectl installed on a third machine and I'm accessing the Kubernetes cluster with a kubeconfig. Is there any provision to check whether this third-party machine is legit or not?" Just trying to rephrase that a little bit: is there a way to verify, when you're using kubectl to access a cluster, that it's the right place? Do you think that's a good summarization of the question, Kirsten?
B
I think so — and maybe, if it's not, we'll find out.
C
Yeah, so for that one, certificates, I think, are the answer. When you're connecting to these things, if you look at your kubeconfig, inside of it is a certificate authority and/or certificate data, and the important thing there is that at some point you have to have that kind of trust relationship with those.
C
If you're using self-signed certs, you lose a little bit of that assurance, but in production use cases you can control what certificates the API server is presenting and whether the client trusts them or not.
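As a sketch of where that trust anchor lives (the cluster name and server address are made up, and the CA blob is elided), it's the certificate-authority-data in the kubeconfig's cluster entry:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: prod                           # hypothetical cluster name
  cluster:
    server: https://203.0.113.10:6443  # illustrative API server address
    # CA bundle the client uses to verify the API server's serving certificate:
    certificate-authority-data: <base64-encoded CA certificate>
```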
B
I just wanted to thank everyone for showing up — we weren't sure if anybody would show up to our webinar. So...
C
Oh, there's another question: is there already a timeline for cloud providers' adoption of 1.20? Again, that's a super good question. Having worked with a cloud provider before — it's difficult. There's a lot of work that goes into consuming these things, as a cluster operator and especially as part of a cloud provider. You know, the amount...
C
...of work that goes into making sure that we can ship — they have the same amount of work, probably more, because they have to make sure that it works across all of their infrastructure and fits into their existing tooling. So we don't coordinate with them to say, "hey, we're going to launch 1.20 — when are you going to have AKS 1.20 or GKE 1.20?"
C
If you look at the cloud providers now, they do lag behind, so I would expect that down the road you'll see early access to it, but there's no firm guarantee of timelines between when the release happens and when the cloud providers move to it.
C
Inside my team we use a non-open-source tool for a lot of container scanning, but I've also used Trivy in the past, which is a tool from Aqua Security. I can't make any firm recommendations either way, though.
C
I just want to say thank you again to all the contributors that made 1.20 successful — everybody that worked on a KEP, everybody that worked on tracking down test flakes towards the end of the release. A fun, quick story: I think API Priority and Fairness had an exception to the code freeze date, and we were super nervous because it's very impactful.
C
Every request basically goes through APF. We got to the end and it looked like it was good, and then we started getting test flakes over the weekend — and maybe more than test flakes. So Monday came, and we were two days away from the release: is this a problem or not? What do we do? It was a lot of effort by a lot of people to get over that line and make sure we were comfortable doing the release when we did it.
A
I think we'll go ahead and wrap up. These slides will be online later today, so take a look, and the recording will also be up, so you'll be able to re-watch — or watch, if you weren't able to join us live. Thanks again, everyone, for joining, and we'll see you soon at another webinar. Thank you.