From YouTube: Kubernetes SIG Node 20220809
Description
SIG Node weekly meeting. Agenda and notes: https://docs.google.com/document/d/1Ne57gvidMEWXR70OxxnRkYquAoMpt56o75oZtg-OeBg/edit#heading=h.adoto8roitwq
GMT20220809-170341_Recording_640x360
A
So let's start. Jing, I noticed that you are the first one.
Yes, yes.

B
Can you hear me? Yes, we can hear you. Oh yeah, sorry, I put my topic in the first place because there's another meeting conflict. So, the feature is local storage capacity isolation: it allows requests and limits to be set for local ephemeral storage. There is also another related feature, resource quota. Similar to CPU and memory, you can define a ResourceQuota in your namespace, and it will limit the total amount of limits or requests for CPU and memory, and similarly for local ephemeral storage.
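For reference, a minimal sketch of the feature being described, with placeholder names: a pod that sets requests and limits for local ephemeral storage, so the scheduler accounts for local disk and the kubelet can evict the pod if it exceeds its limit.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: storage-demo                   # placeholder name
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9   # placeholder image
    resources:
      requests:
        ephemeral-storage: 1Gi   # counted against the node's allocatable local storage
      limits:
        ephemeral-storage: 2Gi   # exceeding this triggers eviction of the pod
```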
B
But one behavior is different: once you set a resource quota, you must set pod requests and limits for CPU and memory, otherwise validation will fail, and for ephemeral storage we don't have that restriction. When I checked the code history, there is a comment saying it was a mistake to require CPU and memory requests and limits to be set when a resource quota is defined in the user's namespace, and I'm wondering what the reason behind that is. The comment says that for others, like storage or other new resources, we should not do that. But it's a little bit strange to me: if you set a resource quota and there is no such restriction on the user's pods, that means if you create a pod without any requests or limits, it is still allowed, even though you have a resource quota in your namespace.
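A minimal sketch of the asymmetry being described, with placeholder names: with requests.cpu and requests.memory quotas set, quota admission rejects pods that omit CPU or memory requests, while, per the behavior discussed above, a pod that omits ephemeral-storage requests is still admitted.

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota                  # placeholder name
  namespace: demo                      # placeholder namespace
spec:
  hard:
    requests.cpu: "4"                  # pods omitting a CPU request are rejected
    requests.memory: 8Gi               # pods omitting a memory request are rejected
    requests.ephemeral-storage: 16Gi   # tracked, but pods omitting it are still admitted
    limits.cpu: "8"
    limits.memory: 16Gi
    limits.ephemeral-storage: 32Gi
```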
C
It only applies to those areas where you get explicit resource guarantees, but not necessarily to those where you get best-effort access to a resource. And so a way was added to set up quota to also let you say that this quota applies to particular scopes. So you could write a quota that says you're allowed 10 best-effort pods in a namespace that also has an explicit CPU and memory quota. My memory of the reasoning in this space was basically that explicit quotas became more oriented towards the goal of quota-ing that which you guarantee your reservation around, but not prohibiting your access to best effort.
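A minimal sketch of the scoped quota being described (names are placeholders): with the BestEffort scope, the quota matches only pods that set no CPU or memory requests or limits, and constrains their count without touching guaranteed workloads.

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: besteffort-pods   # placeholder name
  namespace: demo         # placeholder namespace
spec:
  hard:
    pods: "10"            # allow at most 10 best-effort pods
  scopes:
  - BestEffort            # matches only pods with no CPU/memory requests or limits
```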
B
I see, I see. Then I think in that aspect it makes sense.
C
Sense,
let
me
just
pause
and
make
sure
on
the
same
spot.
Are
you
familiar
with
I?
Think
I
think
we
added
this
the
scope
token
to
a
resource
quota.
I
just
want
to
make
sure
you.
You
were
familiar
with
what
I
was
referencing
there.
You
can
say
that
this
this
quote
of
covers
pods.
This
quote
as
best
effort
resources
or
not
and
I
forget
the
other
Scopes
that
we
had
it's
been
a
couple
years
since
I
looked
at
too
closely,
but
yeah.
A
My memory also... it's been a while since we talked about resource quota. What I remember is that the quota is an aggregate limit: it's basically defined as the total capacity for a given namespace, like a virtual namespace or whatever, and that's basically the purpose it mostly serves, right? Okay.
C
So maybe just to make sure: your use case is that you want to ensure that every pod that comes into a namespace makes a storage request?
B
The
it's
not
my
request,
it's
just
like
why
that
we
should
follow
the
same
behavior
like
CPU
and
memory,
so
that
the
purpose
of
maybe
some
customer
they
want
to
set
resource
code
on
the
the
purpose
of
citing
resource
code.
That
is
always
like
make
sure
pause
right,
also
set
request
limits
so
that
they
can
control
how
much
resources
you
know,
consumed
yeah.
C
So I would say that the purpose of resource quota is to ensure you can constrain the amount of guaranteed resource that Kubernetes gives to your pods. And if I apply that to ephemeral storage: when you don't request ephemeral storage, it's not even a guaranteed resource, right? So.
C
Then
the
CPU
and
memory
thing
just
took
except
looking
at
it.
Now
we
you,
we
had
the
ability
now
with
quotas
to
say
if
this
quota
is
applying
to
time-bound
Resource
consumption
versus
not
Townline
resource
consumption.
So,
like
you,
can
write
different
quotas
that
say
pods
that
have
an
active
deadline
like
if
their
jobs
have.
You
know
a
distinct
quota
from
pods
that
don't
or
then
you
definitely
had.
We
had
use
cases
where
you
could
write
quotas.
C
That
say
this
is
only
applying
to
best
effort
access
to
Resource,
but
not
best
effort
access
to
Resource
and
then
I
forgot
that
we
even
had
this.
You
can
write
quotas.
That
say
it
only
applies
to
particular
priority
bands,
and
then
you
can
write
quotas.
That
say
it
only
applies
across
particular
pot
affinities,
but
for
the
use
case
of
demanding
that
every
pod
have
a
ephemeral,
storage,
I
would
have
thought
a
limit
range
or
like
a
gatekeeper
policy
would
have
been
the
type
of
thing
that
we
can
drive.
Of
course,
that.
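For the scopes mentioned here, a minimal sketch with placeholder names (it assumes a PriorityClass named "high" exists): a scopeSelector restricts a quota to particular priority bands, and the Terminating scope matches pods with an activeDeadlineSeconds set, such as deadline-bound jobs.

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: high-priority-pods   # placeholder name
  namespace: demo            # placeholder namespace
spec:
  hard:
    pods: "5"
  scopeSelector:
    matchExpressions:
    - scopeName: PriorityClass
      operator: In
      values: ["high"]       # assumed PriorityClass name
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: deadline-bound-pods  # placeholder name
  namespace: demo
spec:
  hard:
    pods: "20"
  scopes:
  - Terminating              # matches pods with activeDeadlineSeconds set
```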
G
I agree with that. I think limit range is what he's looking for: you can set a minimum and a maximum for any pod, and if the user hasn't specified it, limit range will default it, so it doesn't force the pod author. I think I was confused about whether you were saying that pods which are best effort are not allowed, because I don't have a test for that; I need to add a test case in mine. But it was more like: okay, the pods are allowed, but they shouldn't be. Is that your question?
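A minimal sketch of the LimitRange defaulting just described, with placeholder names: min and max bound what containers may ask for, and default/defaultRequest are filled in when a container omits its own values, so nothing is forced on the pod author.

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: storage-limitrange      # placeholder name
  namespace: demo               # placeholder namespace
spec:
  limits:
  - type: Container
    min:
      ephemeral-storage: 100Mi  # reject containers requesting less than this
    max:
      ephemeral-storage: 4Gi    # reject containers requesting more than this
    defaultRequest:
      ephemeral-storage: 500Mi  # injected when a container omits a request
    default:
      ephemeral-storage: 1Gi    # injected when a container omits a limit
```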
B
So
it's
a
busy.
My
question
is
the
behavior
right
in
terms
of
CPU,
a
memory
and
the
informal
story.
Right
now
is
different,
so
I
think
because
of
some
historical
reason,
the
CPU
memory
right
you
set
that
restriction
and
so
for
backwards.
Compatibility,
new
events
remove
that,
but
I
think
at
that
time,
when
locals,
informal
storage
is
implemented
and
we
didn't
follow
that
path.
C
I
would
say,
like
storage,
isn't
a
part
of
your
quads
class,
your
quality
service
class,
so
I
think
our
memory
was
anything
that
was
existing
in
Claudia
service
requirements
like
if
they
were
in
the
quota,
then
they
would
be
required
and
things
that
were
not
tied
to
quality
of
service.
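To illustrate the QoS point (placeholder names below): the QoS class is computed from CPU and memory requests and limits only, so a pod that requests only ephemeral storage still lands in BestEffort.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo                       # placeholder name
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9   # placeholder image
    resources:
      requests:
        ephemeral-storage: 1Gi         # not considered for QoS classification
# status.qosClass comes back as BestEffort: only CPU and memory feed
# into the QoS class, which is the quota tie-in described above.
```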
B
Look
at
it
yeah,
yeah,
yeah,
I,
think
it
makes
sense
and
also
like
you
mentioned,
the
limit
range
that
can
be
used
for
if
the
purpose
is
to
limit
make
sure
autopilot
has
like
some
limits
right
in
namespace
yeah,
then
you
can
use
that
for
that
purpose
and
for
result,
code
is
for
none
best
effort
paths
to
guarantee
the
resources
I.
Think
yeah
it
should
work,
is
there's
some
user
like
erase
this
question,
so
I
want
to
make
sure
we
address
and
the
let
me
communicate
and
I
think
yeah.
G
Hi, okay. First of all, thanks to a lot of people last week: Peter, Mike, and a couple of others all jumped in and did quick reviews on the code. What we did was take out the CRI portion of the code that was in the main PR and merge that. Hopefully this will unblock Raven's PR that's on standby, and also Peter for the CRI-O side.
G
That's the current state of affairs. Going forward, I had a couple of questions. One is: if there are no objections, for the kubelet code at least, there are a lot of commits in there; can I squash them into one single commit? That makes it a lot easier for me to deal with rebases, and it should be less of a problem now with the CRI change already in, but it will help. So that's the question.
G
Then
I'll
make
it
into
one
big
assume
that
all
the
comments
that
are
in
there
are
reviewed
a
few
times
and
there
is
no
and
I
know
there
is
one
change,
the
hash
one
which
we
need
to
take
it
out
in
GA.
We
can
do
that.
I
think
I
have
a
fair
idea
of
what
all
needs
to
go.
Go
out
from
that
once
we
are
there.
The
second
question
is
for
126:
can
we
do
the
API
code?
G
I know that we need to work on a couple of things for the main working code in the kubelet. One is cgroup v2: adding that support. I would be a lot more comfortable with that going in first, because CI is already on cgroup v2; we need to have that, it's not a beta item anymore. The other thing is that we need to get the support from CRI-O and containerd, mainly because that's what is used in CI, and I don't know what the timeline for that is going to look like. That will close the loop for end-to-end and flush out any issues that might be there. Last I looked was before dockershim removal, which is why I was keen on this getting in before dockershim got removed, but it's okay. So what do you think, Derek? Done.
G
So
I'll
I
think
I'll
in
that
case,
I'll
create
just
like
we
did
for
the
CRI
portion
of
it,
I'll
spawn
a
new
PR
for
the
API
and
have
Tim,
and
you
look
give
it
one
look
and
because
we
want
to
do
this
in
126
right.
G
No
I
think
I
got
really
nervous
with
the
late
breaking.
So
if,
if
the
scheduler
test
all
came
in
and
there
was
Zero
code
change
to
the
main
code,
I
would
be
you
know
yeah,
let's
do
it
because
it
just
validates
that
of
what
we've
been
doing.
All
along
is
correct,
but
we
found
issues
and
then
the
C
group
we
took
change
which
I
didn't
I,
didn't
realize
that
you
know
that
we
still
continue
to
do
the
V1
tests.
I
thought
that
we
even
switched
to
V2.
So
my
test
validation
is
not
there.
G
That
was
scary
for
me
and
I.
I
know:
Daniel
wants
to
move
this
test
to
the
node
e2e
node,
but
my
thinking
is
that
we
should
have
this
and
add
new
test
in
e2e
node,
because
this
is
this.
The
fact
that
this
test
break
broke
proves
that
it's
working
well
and
it's
kind
of
my
security
blanket.
If
you
will
I
know
it's
pathetic,
but
I
trust.
It.
G
Okay,
so
I'll
do
that
I'll
spawn
a
new
PR
for
the
API
change
only
and
one
for
the
scheduler.
Maybe
but
scheduler
is
not
a
big
deal.
It's
small!
It
can
be
managed
in
one
in
the
same
PR.
So
thanks
I
think
that
those
were
the
two
things
I
wanted
to
get
some
clarity
on
and
I
got
them.
A
Thanks
vinay,
so
next
one,
the
credit.
H
Yeah,
that's
me,
hello.
Everyone
can
you
all
hear
me?
Yes,
yep
good
good,
so
we
we
actually
got
a
lot
of
our
peers
merged
regarding
the
rename
of
SRO
into
kmmo.
H
We
are
ramping
up
into
pushing
the
code
Upstream.
The
the
only
missing
beats
that
we
have
is
that
we
are
not
yet
members
of
the
kubernetes
six
organization
on
GitHub
and
I
understand
that
we
need
sponsorship
to
to
do
that.
So
I
have
put
this
item
on
the
agenda
to
pretty
much
solicit
the
sponsorship
from
you
know
any
other
company
except
red
hat,
where
we
are
working
where
we
have
the
main
contributors
to
kmmo
to
you,
know,
get
our
applications
in
and
become
members
of
the
the
Sig
organization.
H
I
I
think
we
are
looking
at
the
the
two
biggest
contributors
at
the
beginning
and
then
that
we
would
add,
you
know
people
as
they
contribute
to
the
project.
H
I'm
modeling,
you
know,
I
was
reading
at
the
I
was
reading
at
the
process
there,
and
we
should.
You
know,
reach
out
to
people
reach
out
to
sponsors
before
we
we
file
the
application.
So
I
was
wondering
if
there
is
any
any
volunteer
to
that.
That's
not
working
at
red
hat
that
that's
willing
to
to
sponsor
our
requests
for
membership
happy
to
plus
one.
A
A
H
Okay,
so
I
was
tripped
on
that
one
to
have
a
reduced
set
of
owners
at
the
beginning
and
then
we'll
add
more
people.
You
know
as
they
become
members
and
add
contributions
to
kmml.
A
Thanks
and
let's
move
to
the
last
topic,
I
think
that's
Peter,
right
Peter
is
that
your
topic.
D
Hi, can you hear me now? Hi, yeah. So this is me. A little bit of context: there was a pretty low-severity CVE reported against Docker a while ago, and containerd and CRI-O also created one for it. It said that inheritable capabilities really shouldn't be specified inside of a container by the container managers, containerd and CRI-O, because it basically breaks the idioms of how capabilities are specified. So containerd and CRI-O dropped those capabilities, and all seemed well.
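As a rough sketch of where this surfaces for users, under my reading of the change rather than a definitive statement of runtime behavior (names below are placeholders): capabilities added via the pod spec previously ended up in the container process's inheritable set as well, which some non-root workloads relied on; after the fix, runtimes leave the inheritable set empty.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: caps-demo                      # placeholder name
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9   # placeholder image
    securityContext:
      runAsUser: 1000                  # non-root, so effective caps start out empty
      capabilities:
        add: ["NET_RAW"]
# Previously, runtimes also placed added capabilities in the inheritable
# set, so a non-root process could retain them across execve of a binary
# with matching file capabilities; with the fix, the inheritable set is
# empty, which is the regression discussed below.
```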
D
But
actually
we
found
a
couple
of
instances
where
that
regresses
and
causes
users
who
are
previously
getting
those
capabilities
to
not,
and
so
in
discussing
how
we're
going
to
handle
it.
Renault
brought
up
that.
D
Maybe
we
should
bring
it
up
to
the
larger
container,
the
larger
Sig
node
organization
and
also
maybe
coalesce
to
a
a
CRI
test
to
test
the
specific
behavior
and
I
also
think
it
makes
sense
to
kind
of
discuss
what
what
we
think
we
should
do
about
this
one,
because
basically
we
have
some
users
that
we're
relying
on
having
the
inheritable
capabilities,
which,
like
is
questionably
correct.
But
it's
not
super
clear.
How
fast
to
move
forward
so
I
guess.
My
first
question
is
people
from
the
continuity
Community.
D
Have
you
met
any
backlash
to
the
changes
of
dropping
the
inheritable
capabilities.
E
I
believe
so
Peter
it
had
to
go.
They
have
to
go
look
again
to
find
the
the
issue
that
was
open.
A
Yeah, okay, yeah, I agree with that. You and Peter found the issue; can you ping me through Slack? And we can also do some internal cross-checking to confirm it, because basically it's more about the usage, right, from the user perspective, yeah.
E
Yeah, I see Samuel's comment here; he remembers one issue, and that's probably the one. I had a vague memory of one issue; we'll take a look at it. I agree with Peter, we shouldn't flip back and forth. We probably need to check the old dockershim to see how it was, inheritable or not.
D
Well,
the
thing
is
I
think
Dr
Shim
was
inheritable
and
because
this
CDE
came
in,
you
know
in
the
last
six
months,
so
you
know
now
we
all
have
to
decide
how
to
handle
this
abnormal.
Your
next
situation
that
we
create
that,
like
you
know,
was
created
for
us
and
now
we
have
to
either.
You
know,
make
people
break
some
people
or
deal
with
this
abnormal
situation
so
yeah.
We.
D
Cool, yeah. I'll open an issue and then come up with a reproducer, and we can follow up with them. Thank you. Thank you. That's it for me.
G
One merge is waiting, yeah.