From YouTube: Kubernetes SIG Node 20230110
Description
SIG Node weekly meeting. Agenda and notes: https://docs.google.com/document/d/1Ne57gvidMEWXR70OxxnRkYquAoMpt56o75oZtg-OeBg/edit#heading=h.adoto8roitwq
GMT20230110-180554_Recording_640x360
A: Hello, hello. It's January 10, 2023, and this is the SIG Node weekly meeting. Hello, everybody.
B: Okay, hi. Paco and I have been working on this KEP proposing to add a kubelet-level limit on parallel image pulls. The KEP link is here; we are just asking for people to review it.
B: The idea is quite simple: we're just trying to add a kubelet configuration option that puts a limit on the maximum number of images that can be pulled in parallel. Any image pulls beyond that limit will be blocked until one of the in-flight pulls finishes. More details and the user experience can be found in the KEP, so please review it. Also, because this change and this idea are quite simple, we're not sure whether it really warrants a KEP or whether we can just go implement it. If you don't think it warrants a KEP, please let us know, so that we can make the process simpler. That's all.
D: I think we should definitely have an issue, and maybe some public discussion, but I'm not sure this really warrants a KEP, since it's not making any API changes that would need to progress through feature stages. I'm not opposed to it, but I'm also saying it might be a bit of overkill for this.
B: A KEP is actually okay with me; what might be overkill is the alpha, beta, and stable phases. I'm thinking that even if we do have a KEP here, maybe we can skip some of the phases and go directly to, say, a stable stage, because this is just a new configuration option, right? We are not changing any default behavior. That is something we can discuss in the KEP as well.
A: Yeah, I think the only concern here, the potential issue, is noisy neighbors when you run too many image pulls simultaneously and how that may affect things; just think through that scenario. Alpha and GA without a beta phase may also be fine. Okay.
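As a sketch of the user experience under discussion, the kubelet configuration might look something like the following; `maxParallelImagePulls` is used here as an illustrative field name for the proposed limit, not a committed API:

```yaml
# Illustrative KubeletConfiguration sketch for the proposed limit.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Parallel pulls must already be enabled for a cap to be meaningful.
serializeImagePulls: false
# Hypothetical field: at most 5 image pulls in flight; any further
# pull requests block until one of the in-flight pulls finishes.
maxParallelImagePulls: 5
```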
F: Yeah, just really quickly, I guess more of a question than anything else.
F: So it's not just some opaque thing passed through a set of annotations. We opted not to do it in the previous release as part of the alpha feature, because we wanted to make sure this was actually something that was going to make it in before we modified the CRI. But now that it's in for alpha, we want to actually make this addition to the CRI and then, obviously, eventually update the container runtimes to know how to consume it properly. My main question is that, because it's such a small change and it's something we already talked about in the DRA KEP, though we really glossed over it there and don't have the details of what the change would actually be, should we just update the DRA KEP with these new details, now that we have a better understanding of how we want to add it, or does this warrant an actual full-on KEP just to make this CRI addition?
A: This goes into the discussion about how much the kubelet needs to know about everything and how much the runtime should know about it. I think it's something we discussed as a group when talking about topology and CPU manager being external plugins: how much they would know about everything, and how much the kubelet would need to know about everything. I'm just making the comment that it comes back to this discussion about how much the kubelet needs to know and pass to the runtime, and how much the runtime will need to know itself.
G: It's exactly the same information that the DRA plugin reports to the kubelet. Right now it's passed as just an annotation to the runtime, and we are promoting it to a normal field; nothing more than that. Okay.
E: Okay, so maybe I can try to share my screen. Oh, I can't. Okay, so my proposal, too, is about an issue that you can see in the docs.
E: Sometimes you have a cron job that you run each week, for example, that needs to claim a JWT, but the JWT is only valid for, say, ten minutes, and you need to get it into an environment variable before running the container, because the container is not something you build yourself; it comes from a third party. So you use an init container, and in the end you always end up sourcing a file before applying an entrypoint script, which means you need to know what the entrypoint is and when the environment is going to be set. My proposition is to add a file key selector as an env source, like the config map key or secret key selectors, or things like that.
E: So you can tell Kubernetes: when you start this container, before running it, go to the file system, get this file, and add all of this environment before running the entrypoint. That way you don't need to modify the entrypoint of the app.
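The workaround being described, sourcing a generated file before handing off to the real entrypoint, can be sketched roughly as follows; the file path and variable names are illustrative only:

```shell
# Sketch of today's manual workaround: an init container writes a
# short-lived credential to a shared file, and the main container's
# entrypoint must be wrapped to source it before the real command runs.
set -eu
ENV_FILE="${TMPDIR:-/tmp}/env.sh"

# Simulate what the init container would have written.
printf 'export TOKEN=demo-jwt\n' > "$ENV_FILE"

# The wrapper entrypoint users end up writing by hand today:
. "$ENV_FILE"
echo "TOKEN is available before the real entrypoint: $TOKEN"
```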
A: Yeah, I see on the issue there is some discussion. Can you summarize what was brought up? I just cannot read it that fast. If there are some comments or ideas that people from SIG Apps were making, you can voice them.
E: So yes, I presented it in SIG Apps and they said to go to the SIG Node meeting, as they could not give me a lot of information on that. I have also read the code, and I think the part of the code I need to modify is in the SIG Node area. So yes, that's it.
H: So today, when you create a pod, you can source environment variables from, excuse me, field references, resources, config maps, or secrets, right, that might be referenced by that pod, and you're proposing a new reference location, which would be just an arbitrary file path. What I was wondering was whether there's a benefit to not making it an arbitrary file path, but making it an emptyDir file path reference. I don't know if you're familiar with the empty directories that you can declare in your pod spec.
H: Right, or you don't necessarily have to reference a path that might be in the container itself, or on a container's writable layer, because that's not going to be shared across containers anyway. So the convention could be to reference a path from a location in an emptyDir, and then, from a modeling standpoint, we already have an emptyDir volume source.
H: The concept makes sense. I was just trying to think through selecting a file inside the container versus sourcing the file from a separate directory that is shared for the pod lifetime, which would be an emptyDir. Because in your use case you said you were sourcing containers from third parties, which might mean you're not able to write in that container, the emptyDir felt like it made good sense for you.
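To make the suggestion concrete, a pod-spec sketch combining an emptyDir with the proposed env source might look like this; `fileKeyRef` and its sub-fields are hypothetical here, one possible shape a KEP could propose, and the image names are placeholders:

```yaml
# Sketch only: `fileKeyRef` is a hypothetical env source, not an
# existing Kubernetes API field at the time of this discussion.
apiVersion: v1
kind: Pod
metadata:
  name: jwt-consumer
spec:
  volumes:
    - name: shared            # pod-lifetime, kubelet-managed directory
      emptyDir: {}
  initContainers:
    - name: token-writer      # third-party init container writing env.sh
      image: example.com/token-writer
      volumeMounts:
        - name: shared
          mountPath: /shared
  containers:
    - name: app
      image: example.com/third-party-app
      env:
        - name: TOKEN
          valueFrom:
            fileKeyRef:       # hypothetical: read a key from a file in an emptyDir
              volumeName: shared
              path: env.sh
              key: TOKEN
```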
E: It creates an init container, but you don't see it when you create your deployment; then at the end you can add your own proper image. I think this is the most common use case for an environment variable generated on the fly: you create your own pod, your own container, but some third-party app creates an init container that creates some files for you, and sometimes what you need is in those files.
A: Are there maybe some security considerations when you reference a random file? You could end up exposing some file, or a secret of some sort, as an environment variable somehow.
H: My recommendation for, like, a next path: it makes sense to me that we might want to explore allowing you to source an environment variable from a different location than the set that we have today, and since this is a pod API change, it has to go through the KEP process. So if this is your first interaction with Kubernetes in that regard, it's an organizational process we follow that says: this is the change, and this is how the change will be implemented.
H: So a lot of what you wrote in the issue would have to get moved into a KEP and then tracked as an enhancement, but I think generally I would be supportive of improving the set of locations we can source an environment variable from. The emptyDir was just one convention we have today for other volume sources, so maybe in a KEP, which would be a next step, we could weigh the pros and cons of the approaches. But, I don't know; to me, generally, it makes sense.
E: If the file that I source is a Python script, for example, what must the behavior be? It must not stop the pod, I think, but that's a question I think we need to think through as well.
H: Yeah, so there might be error conditions we'd have to work through in the KEP. I guess, as a next step, I'd encourage you to write something up that explores those spaces, and then maybe in that write-up explore whether you think referencing env vars from an emptyDir is palatable for your use case or not, with maybe some pros and cons on that.
H: That's already today a kubelet-managed directory with the lifecycle of the pod, as distinct from any individual container, so there just could be benefits for you in exploring that as an option. I'm not sure there's much more to add on that, I guess, other than: please write a KEP and explore the ideas, and if anyone else wants to help explore that with you, please chime in now.
A: Yeah, we found an extra item here and lots of comments. Next, maybe we can go with Vinay first and then Derek. Vinay, do you want to give an update on in-place pod resizing?
A: Is he here today? Oh, he's not here. Okay, I'm looking through the status really quickly. If you still need it, I wonder whether the API change was merged.
H: Yeah, it looks like he has a number of open code requests for the smaller API pieces, so I don't have an update to call out on that.
H: Yeah, so I just wanted to give an update on some things that Dawn and I have been batting around through much of 2022, and really for the life of SIG Node. For those who don't know, Dawn and I have both been chairs and technical leads.
H: It's been a lot of work, but it's been very rewarding, and hopefully people feel like we've done a decent enough job shepherding the area. You might have noticed over the last year that other folks in the SIG have been taking more prominent roles in leading meetings and helping guide release planning, and that was because Dawn and I have both been trying to find ways to support the growth of folks in the SIG, and ideally to split the SIG Node chair and tech lead roles into separate roles, rather than today, where they're combined. For folks who might not be aware of Kubernetes governance, there's a concept of a SIG chair, and they tend to run the operations of the SIG.
H
A
lot
of
the
release
planning
activities,
Milestone
labeling,
our
commitments
out
to
the
broader
Community
with
respect
to
you,
know
health
of
Sig
reporting
that
type
of
thing
and
then
there's
the
concept
of
a
technical
lead
role
which
is
basically
intending
to
do
what
could
be
considered
by
some
folks,
the
fun
stuff
of
like
helping
figure
out
what
we
build
and
how
we
build
it
and
the
sub
projects.
We
sponsor,
don't
sponsor
that
type
of
thing.
H: Right now SIG Node, for the life of the SIG, has combined both those roles in both Dawn and myself, but it's a lot of work and takes a lot of time, and it's probably something the SIG can benefit from by separating them, as many other SIGs have distinguished the two roles. What we wanted to do in 2023 is have two to three people hold the chair role. Dawn and I wanted to nominate both Mrunal and Sergey to hold chair roles, recognizing their work; everyone here has seen them over the last year helping coordinate release planning and the general operations of the SIG, and closing out on one of our 1.26 retro items, which was to make sure that, as we do 1.27, we can better scale our milestone planning, with them taking on this new responsibility. As for the SIG tech lead roles, both Dawn and myself wanted to maintain that role, but we also wanted to scale it. Many folks may know that Mrunal has been very active in SIG Node for the last two years, and so we wanted to extend the SIG tech lead role to him as well, so we would have three people filling that role.
H: Mrunal has been a sub-project approver in the kubelet for the last two years, he ran the CRI-O project when it was a sub-project sponsored by the SIG, and he has been very active in the OCI, CRI, and container spaces, so we wanted to start the new year by recognizing that and recommending that he join us in the tech lead capacity.
H: So with that, I wanted to thank Mrunal for the last many years of engagement in Kubernetes. If there are any objections to any of these changes, like I said, there'll be a note that gets sent out to the SIG mailing list.
H: But hopefully everyone can support these folks being successful in these new capacities. We want to make these changes before 1.27 release planning is complete. I don't think there's any impact on our present charter, and I don't think there's any impact on any of our sub-project ownership structures or anything like that, but we think it would set us up for a good 2023. We thought about doing this last week, but we had lighter attendance, so this week is a good week to do it.
H: So that was the update I wanted to make. Like I said, I'll send a note out this afternoon with the details, but a big thank you to Mrunal and Sergey, and everyone else who participates in the SIG, and I look forward to better scaling in 2023. That's all I had.
C: Yep, thanks, Derek. Yeah, happy to continue supporting SIG Node and Kubernetes here.
A: Thank you. I'm glad to be here and helping out.
A: Are there any other comments or any other agenda items? I think next week we may come back to KEP planning and iterate again on which KEPs will be scoped for this release. We still have a few weeks left, so don't be alarmed that we'll only discuss it next week, but if you have something you're working on already, please make sure it is specced out in our document, and we will include it in our reviews.
H: That's a good question. In the note I'll send out, I'll link to the two things that describe the roles and responsibilities; I just threw them in the chat now. The approval of new sub-projects is a capacity of the tech lead; things like milestone labeling and release planning, some of that stuff, sit in the chair role. But you can see those two links and follow through what the individual responsibilities are.
H: But the chair is a far more operational role, and the tech lead role is more about the domain. But yeah, Kevin, hopefully if you click through those things you'll see the distinctions.
F: Okay, yeah, that makes sense. I think I was more curious, though, about the fact that right now there's an alias in the various OWNERS files for the SIG Node leads that needs to approve things. Will there now be a SIG Node tech leads alias for certain OWNERS files that will then replace the SIG Node leads one?
H: With SIG Node leads, we now need to distinguish that from, maybe, a given sub-project's leads. So I think it's possible that where we have SIG Node leads now, depending on where it's referenced, it should be the Kubernetes, I'm sorry, the kubelet sub-project leads. We might have to work that out. We're not intending to change anything around OWNERS file rights today, say on the kubelet, does that make sense? It shouldn't have any impact on that. The only impact I would see from this is that Sergey and Mrunal will both have milestone labeling rights, and Mrunal would have privileges to approve requests for new sub-projects.
H: These are basically governance roles within the operation of the project. Like I said, right now we're not tackling the individual OWNERS files. I will say that there is a goal of mine personally, and I know of Dawn's too, to add and scale new approvers, both at the kubelet level and at the enhancement level; last year we wrote the membership ladder discussion on that, Kevin, you may recall. So we're not making any changes on that, but I will say right now: if folks want to move up on that ladder, please reach out.
H: That way we can help nurture that desire, right. But for what's being described here, there's a lot of background work, Kevin, in just being the chair, like getting the YouTube videos uploaded and having access to that playlist, a lot of stuff that actually ends up taking a fair bit of time, and that's what you'll see in some of these chair-versus-tech-lead distinctions, for folks who haven't looked at the governance docs.
F: No, that makes sense. I think what I've found in the past is that a lot of times I'll do a review for something that requires a SIG Node leads approval, and it's not enough to just be in the OWNERS file for the kubelet. It'll reach this point where it's mostly ready, and then I'm blocked on actually getting either you or Dawn, in the past at least, to review it and approve it. So this is where I'm just curious whether there's some way to scale out that role, so others can do that final last-level approval that we tend to get blocked on, even though we have the ability, as SIG Node approvers, to bring it all the way to that last step until you guys are able to come in and take a look at it.
H: So the particular alias is something we'll have to review. That's one of the reasons for trying to make this announcement now, so we can take time to actually collect the right changes that need to be made in the corresponding organizational repos to get it right. With respect to enhancement approvers, I think, Kevin, if you recall, when we wrote that document last year we laid that out with a particular path to getting there.
H: I personally would be very supportive of a few additional individuals having further rights in those enhancement approver directories; that requires some folks to speak up and say they want them, though. So maybe the best thing to close on right now is to say: these are the changes I wanted to announce today, because they set the groundwork for how the operation works for the year, but that doesn't stop us from making additional follow-on changes, and if folks have particular things they would like to see, please reach out to me and Dawn. We want to use the new year as a time to scale out, but this was the most immediate, tangible thing I wanted to announce today, I guess.
H: No problem. And I'm a little worried here: my wife has COVID right now, and I'm sitting here thinking, oh man, am I getting sick as I'm talking about this? But yeah, please reach out, Kevin, if there are particular nuances that I might be missing. We definitely want to scale out in 2023, and this is just the first step in trying to help that. So thanks again for all the positive comments in the chat, and thanks, Sergey and Mrunal, for volunteering. So, anyway.