From YouTube: Kubernetes SIG Node 20210907
Description
Meeting Agenda: https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
A: Good morning, everyone. Today is September 7th. Let's start with the usual items. Since you are here, maybe you can cover both bug triage and also the testing status.
B: Yeah, as we just mentioned, many people are out of office right now, and we have fewer PRs created than usual, just 18. So if you wonder what's going on, check them out. Among the closed and merged ones, most of the closed PRs are either duplicates or work in progress, and I didn't find any rotten or otherwise forgotten PRs that needed to be taken care of. I think closer to enhancements freeze we'll have more and more PRs piling up, so we need to pick up the pace, and hopefully, now that September has started, we will have people coming back from whatever vacations they had. And over to Bhaktiash.
C: Yeah, hi, thank you. So, just a little bit of context: I brought this up last week as well. This concerns the ephemeral containers KEP. I was working on adding a new pod condition, but as the discussion progressed, there was discussion around whether the pod condition should also cover cases such as kubectl exec, because the intent of such a condition would be to signify that this pod is no longer pristine; whoever the cluster admin is could then decide what they want to do with it.
C: So there was discussion around whether such a pod condition should also cover exec use cases, and Derek brought up last week that if it covers exec, should it also cover ContainerNotifier-type use cases.
C: My main request for feedback was: what do people think about having a single pod condition cover all three? Should it even cover all three, or should it just be scoped to ephemeral containers for now and revisited later?
C: Maybe we later add another pod condition for exec use cases, or just have this one condition cover all three. I also spoke to Jing about ContainerNotifier, and her thoughts were that if such a pod condition were to cover exec, it should also cover ContainerNotifier, because that's just a more native way of doing exec.
C: So I just wanted to know what people thought about this, and if there is a clear direction to go in, I can probably update the KEP before the enhancements freeze for ephemeral containers, so that we can get that change in.
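For illustration, here is a minimal sketch of what such a condition could look like on a pod's status; the condition type and reason names below are hypothetical placeholders, not the names proposed in the KEP:

    // Hypothetical sketch: the condition type and reason are illustrative,
    // since the actual names were still under discussion in the KEP.
    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // A condition a kubelet could append to pod.Status.Conditions when an
        // ephemeral container (or an exec / ContainerNotifier action) mutates
        // the pod, signaling that the pod is no longer pristine.
        cond := v1.PodCondition{
            Type:               "PodModified", // hypothetical condition type
            Status:             v1.ConditionTrue,
            LastTransitionTime: metav1.Now(),
            Reason:             "EphemeralContainerAdded",
            Message:            "an ephemeral container was added to this pod",
        }
        fmt.Printf("%+v\n", cond)
    }

A cluster admin or an external controller could then watch for this condition and decide, for example, to evict or recreate pods that carry it.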
A: I do think that if we want to cover exec, we would want to also cover ContainerNotifier. But unfortunately we cannot have a formal discussion, because a lot of the people involved are out: for ephemeral containers, I did inform her, but the author of the original proposal is not here, and Derek is also not here; and for ContainerNotifier, both the author and the owner are not here today.
B: Now my question is: if we include exec and ContainerNotifier, how much does it change the scope, and how much does it delay ephemeral container promotion to beta? Can we add them later if needed? Was that discussed?
C: Adding it just for ephemeral containers isn't too big a change, especially since I already have a working draft PR for just ephemeral containers. If we were to add it for exec, for example, I suppose that would delay things by some amount, but I don't think it would affect ephemeral containers moving to beta, because the change to the KEP would be the same.
A: If it covers ContainerNotifier, I think it will definitely have an impact, but it doesn't make sense to only cover exec and not cover ContainerNotifier. That's the problem: ContainerNotifier itself still has open questions. I know at this stage we have more or less agreed on the API, but there are always details that remain, right? So that's a little bit of a concern.
A
So
maybe
maybe
we
can
carry
off
the
offline
discussion
and
make
the
other
people
involve
this
in
a
small
group,
and
so
we
can
discuss
and
come
back
report
back
to
the
signal
community.
A: Ryan and Clayton just joined. Do you want to talk about the static pod regression issue?
D: Sure. As we've been working through the impact of the changes to the pod worker to more rigorously control pod lifecycle, there are a couple of areas that were impacted. One is on the admission side. Admission is somewhat under-tested, which we've kind of realized over the last while: there have been a couple of regressions where the admission side's tests check outcomes but don't really exercise the core logic.
D: If a pod is rejected, does it stay rejected? Then there's a separate thing we've been tracking with the changes to how pod workers are handled: the difference between the desired state in the kubelet and the actual state. The change was largely about separating those out and making them clear; pod lifecycle logic was in a couple of places, and now it's consolidated.
D: The desired state, what the config says, and the actual state of walking each pod through its lifecycle are now much more rationalized. What we were working through and identified in the last week or two, as a regression slash previously undefined behavior that people were depending on, is that people are using static pod UIDs as a way of ensuring that if the static pod is terminated, the new static pod created after the update doesn't run at the same time as the old static pod.
D: You could change what volumes it's using, but it would have the same UID. A lot of parts of the kubelet only deal with things by UID, like the volume manager. So what we've realized is that because we allow you to keep that UID and make mildly incompatible changes, we were relying on a workaround. One of the places where this was detected: the old static pod would be deleted and the new pod would be created, which is fine.
D: It gets deleted, it has the same UID, and there's a remove and an add, but that never took into account the fact that the kubelet is really only tracking by UID in all of its loops. The kubelet has the main pod worker loop, and there's a bunch of other loops, the volume manager, the device manager, all of that, and they're only tracking things by UID.
D: They don't actually know that the old instance of the pod has shut down, and they could have wildly different views. So you've got the old version and the new version; they're completely different, they're incompatible, in ways we've never supported on the API server, like adding or removing containers.
D: The other loops have no idea which one is which, and they can't ask the rest of the kubelet which one is which, because the UID is the same. So as part of this, I'm starting to think that the use case people actually have is that they want their pods not to be running at the same time, and reusing the UID is how that was being achieved in a couple of places.
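To make the failure mode concrete, here is a minimal sketch, assuming a simplified model of a kubelet-style loop that tracks pods only by UID; all names are invented for illustration:

    package main

    import "fmt"

    // podInstance is a simplified stand-in for one running instance of a pod.
    type podInstance struct {
        uid     string
        volumes []string
    }

    func main() {
        // A UID-keyed map, like the ones the volume manager and similar
        // kubelet loops maintain.
        tracked := map[string]podInstance{}

        // The old static pod instance.
        tracked["abc-123"] = podInstance{uid: "abc-123", volumes: []string{"data-v1"}}

        // An updated static pod manifest reuses the same UID but changes the
        // volumes. The entry is silently overwritten: the loop cannot tell
        // that a different, incompatible instance replaced the old one, nor
        // whether the old one has finished shutting down.
        tracked["abc-123"] = podInstance{uid: "abc-123", volumes: []string{"data-v2"}}

        fmt.Println(tracked["abc-123"].volumes) // [data-v2]; the old view is gone
    }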
D: What we've found is that it's not safe at all in the kubelet to distinguish major changes between a previous instance of a pod and a new instance of a pod on just the UID, if we allow static pods to preserve the UID. So I'm going to fill out the doc a little bit more and send it around.
D: But I wanted to get folks who have previous experience with static pods to weigh in, because we're underspecified: there's no place to go reference to figure out what we actually think we should support. My argument at this point would be that all the static pod UID preservation stuff is wrong, and no matter what, we're going to have to change something.
D: Either you can't preserve a static pod UID under the covers, or we have to add a new uniqueness tracker, which would be really complicated and which I don't want to do, because fundamentally almost every loop in the kubelet keys on UID and makes some really strong assumptions about lifecycle and uniqueness, and about what changes are allowed, that just aren't going to work with static pods. Because with static pods you can, you know, have five static pods all sharing the same UID, and the pod config would randomly pick one of them.
D: You could have an old version with that UID shutting down and a completely different new version spinning up, and all the other parts of the kubelet have to figure out some way to agree on which version that is, and what we have today isn't enough. So it is a thorny problem. I need folks who are familiar with static pods, with what guarantees are being relied on by distributions, and people who are familiar with the initial implementations.
D: We actually need to talk through what we're actually trying to support and what people depend on, because it's not safe to reuse static pod UIDs the way we're doing it now, and something is going to have to change to work around it.
E: Oh yeah, I just want to say that I think the static pod UID is generated based on the pod; it is a hash.
E: Yeah, I just read the code, and if you hard-code the UID in the static pod's YAML file, we skip the hash generation. But I don't even know whether people use it that way.
E: My assumption has always been that whenever you touch any part of the pod's static YAML file, it generates a new hash and it becomes a completely new pod. That's the assumption in my mind.
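As a rough sketch of that generation scheme, hashing the manifest to derive a stable UID; this is a simplification for illustration only (the kubelet hashes the decoded pod object rather than the raw file bytes, and the details may differ):

    package main

    import (
        "crypto/md5"
        "fmt"
        "os"
    )

    // staticPodUID derives a deterministic pseudo-UID from the bytes of a
    // static pod manifest. Any edit to the file yields a different hash, so
    // an edited manifest is effectively a completely new pod.
    func staticPodUID(manifestPath string) (string, error) {
        data, err := os.ReadFile(manifestPath)
        if err != nil {
            return "", err
        }
        return fmt.Sprintf("%x", md5.Sum(data)), nil
    }

    func main() {
        uid, err := staticPodUID("/etc/kubernetes/manifests/etcd.yaml")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("derived UID:", uid)
    }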
D: The interesting thing is that pods in the API server should never have a duplicate UID. They can, because of things like a rollback; there's a bunch of cases where it is actually possible to reuse a UID in the kube API server, and we mostly work around it. It's almost impossible to end up with a reused UID in the kube API server; not impossible, but almost. Static pods are really not pods; they are a pod controller.
D: If you don't specify a static UID, what you effectively have is a pod template, like a replication controller of size one inside the kubelet, stamping out new instances. That's totally reasonable, and we can make that work. There are also some other actual bugs that we realized we need to go fix for when you reuse the static pod UID.
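A toy model of that framing, where each manifest version stamps out a fresh instance with its own never-reused run ID instead of sharing one UID; the types and names are invented for illustration:

    package main

    import (
        "crypto/rand"
        "fmt"
    )

    // instance is one stamped-out run of a static pod template.
    type instance struct {
        templateHash string // identity of the manifest version
        runID        string // unique per instance, never reused
    }

    // newInstance plays the role of a size-one controller inside the kubelet:
    // every call yields a distinct runID, so an old instance shutting down
    // can always be told apart from a new one spinning up.
    func newInstance(templateHash string) instance {
        b := make([]byte, 8)
        if _, err := rand.Read(b); err != nil {
            panic(err)
        }
        return instance{templateHash: templateHash, runID: fmt.Sprintf("%x", b)}
    }

    func main() {
        old := newInstance("etcd-v1")
        cur := newInstance("etcd-v1") // same template, distinct instance
        fmt.Println(old.runID != cur.runID) // true: loops can tell them apart
    }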
D: We didn't actually guarantee before that only one instance runs at a time, because you could restart the kubelet while the old version was still shutting down. So there's a bunch of missing code that would provide that guarantee.
D: We can't necessarily just change the core behavior of the kubelet, because there's a reason people need this: if you've got something that's managing unique access to an on-disk store in a static pod, and you're running two instances of the static pod, we'd be changing the behavior of the kubelet underneath you.
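For a sense of what that workload-side guarantee looks like, here is a rough sketch of process-level exclusion with a lock file, independent of pod UIDs; the path and names are invented for illustration, and the flock call is Linux-specific:

    package main

    import (
        "fmt"
        "os"
        "syscall"
    )

    // acquireLock takes an exclusive, non-blocking flock on a lock file next
    // to the data store, so a second instance of the same static pod fails
    // fast instead of corrupting the store.
    func acquireLock(path string) (*os.File, error) {
        f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
        if err != nil {
            return nil, err
        }
        if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB); err != nil {
            f.Close()
            return nil, fmt.Errorf("another instance holds the lock: %w", err)
        }
        return f, nil
    }

    func main() {
        lock, err := acquireLock("/var/lib/mystore/.lock") // invented path
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer lock.Close()
        fmt.Println("exclusive access acquired; safe to serve the store")
    }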
D: So what we need to do is figure out a way that you can signal this to us, and maybe we deprecate the old static pod UID behavior, or we just make it result in a unique identifier that you can use to guarantee exclusion of the two pods at the same time. The problem is, we never really documented what guarantees static pods were providing.
D: So one of my other statements would be that we need to go through and write down what we expect static pods to support, feature-wise, because there are some tests missing as part of this, and that will help close out what we actually expect. So, Lantao, what you described is exactly what I expect people to use, except for the exclusion thing.
A: Just quickly, while you were talking, I searched. Based on my comments and others' comments in the past, here's one comment in the issue; we basically talked about this many times. And here's some of the history from when Kubernetes was first founded: there was the idea of self-hosting, pods self-hosting the cluster, which would basically get rid of static pods.
A
So
this
is
why
I
have
a
lot
of
other
discussion,
but
it's
never
fly
so
we
know
we
understand
people
will
there's
always
have
like
the
buddha
strap
and
some
foot
strap
and
before
everybody,
like
even
cuba,
proxy
moved
toward
of
the
demon
side,
there's
the
static
power,
so
we
connect
the
have
a
way
to
stack
apart
for
next,
the
master
require
of
the
daemon
running
on
the
node
and
doing
the
boot
strap,
or
maybe
we
can
limit
it.
First.
A
The
cup
of
the
set
of
cluster,
like
the
api
server-
that's
connected
in
the
past
in
in
the
past,
what
we
are
doing,
but
we
want
to
have
that
visibility
for
the
static
part,
because
kubernetes
is
the
controller
there
and
just
start
the
cube
stack
apart.
So
we
needed
the
static
power
to
tolerate
all
the
non-execute
tolerance.
A
If
I
refresh
all
the
memory-
and
I
kind
of
insist,
we
introduce
of
the
priority
class-
and
if
you
remember
that
time,
because
people
talk
about
the
core
s,
I
said:
okay,
quality
of
the
services
cannot
solve
my
problem
for
importance
of
the
daemon
running
on
the
node
without
the
scheduler
schedule.
So
we
have
to
introduce
another
concept
which
is
called
priority
class.
So
for
right,
after
we
introduce
private
class,
we
make
the
static
part.
A
It
is
always
critical
and
always
have
to
schedule
by
the
node,
because
that's
out
of
our
control,
so
all
those
kind
of
things
so
so
basically
we
want
to
stack
apart,
is
make
that
a
minimum.
But
you
are
so
right.
We
didn't
really
see
guarantee
of
the
exclusive
after
your
uid,
because
we
think
about
that's
like
the
admins
job
at
the
node.
But
you
just
cannot
guarantee
that
click
cluster
wide.
A
D: Yeah, and I don't actually like the use case. Static pods, for better or worse, are part of the v1 API, and we've got a lot of machinery using them. I was mostly describing this in terms of going and documenting what we actually support on them: is it allowed to reference a PVC from a static pod? Are you allowed to reference CSI drivers from a static pod?
D: Are you allowed to change, remove, or add containers? Even just writing down the current state, what works or doesn't work, would help guide this. And honestly, as one of the consumers of it, Ryan and I kind of assume that we'll sign up for making sure the testing works, because we don't want to break it; we've been the ones who've broken it, for ourselves, but it's good because it actually is highlighting these issues. If we can go through and clarify what that minimal support is, I think that will also help.
D: There was another area related to this, which is vagueness about what admission means for static pods. We have features of the kubelet that depend on admission, like the CPU manager. Looking through it, the CPU manager was briefly broken by the regression in admission, which didn't have anything to do with static pods, but it raises the question of what it means to reject a static pod at admission, and that raised a couple of other issues with static pods.
D: Static pods have a lifecycle that is owned by the kubelet, not by the API server, so there are scenarios like: how long should the kubelet keep retrying? It's mostly about trying to pull together everything we guarantee into one spot, and then using that to go fix this issue and identify the tests that would prevent a regression in the future, because this has pointed to weaknesses in our testing regime around static pods and also in our testing of admission, which is what the previous regression was.
A: Think about it at the cluster level: static pods basically want to bypass the scheduler and do their own pod placement, but on the other hand they want all the magical things. When there's resource pressure on the node, you have to guarantee them and evict others; but when you evict the daemons, some of the daemons that run as static pods will say: oh my god, that's required by my master.
D: And actually there are other things too: we don't rerun admission for certain classes of thing, and there was some confusion inside the kubelet as to what the actual admitted pods were, because of what people were using to determine the admitted set. This was one of the bugs that came up as we worked through it.
D: As it is today, what people are using as "show me all the admitted pods" is not actually all the admitted pods; you have to filter that list based on certain rules. And one of the things that I think is a little unclear is that with the addition of more admission handlers in the device manager and CPU manager, we've kind of opened a door, and we don't really have a crisp definition inside the kubelet code.
D: I mean, to the code's credit, we haven't had to change it in five years, but as we bulk those handlers up, I think we're hitting some emergent behaviors that make it harder for people to reason about. So at least it's a good opportunity for us to go and clarify what we mean and offer guidelines for other people building admission hooks in the future.
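For context, kubelet admission follows a handler-style pattern along these lines; this is a simplified, self-contained sketch of the shape of such hooks, not the kubelet's actual interface (which lives in its lifecycle package):

    package main

    import "fmt"

    // pod is a stand-in for the kubelet's view of a pod.
    type pod struct{ name string }

    // admitResult mirrors the admit/reject outcome an admission hook returns.
    type admitResult struct {
        admit   bool
        reason  string
        message string
    }

    // podAdmitHandler is the shape of a kubelet admission hook: given the
    // already-admitted pods and a candidate, decide whether to admit it.
    type podAdmitHandler interface {
        Admit(admitted []pod, candidate pod) admitResult
    }

    // capacityHandler is a toy handler that rejects pods once the node is
    // "full", loosely analogous to CPU manager or device manager admission.
    type capacityHandler struct{ max int }

    func (h capacityHandler) Admit(admitted []pod, candidate pod) admitResult {
        if len(admitted) >= h.max {
            return admitResult{
                admit:   false,
                reason:  "OutOfCapacity",
                message: fmt.Sprintf("node already runs %d pods", len(admitted)),
            }
        }
        return admitResult{admit: true}
    }

    func main() {
        h := capacityHandler{max: 1}
        res := h.Admit([]pod{{name: "etcd"}}, pod{name: "kube-apiserver"})
        fmt.Println(res.admit, res.reason) // false OutOfCapacity
    }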
A: I totally agree with you, right. I can see that a lot of the principles we made have gotten loose, and a lot of things were forgotten, or maybe evolved with good intentions; but evolution after evolution, a lot of that surfaced existing problems that used to be hidden and exposed new regressions. So let's just redo the documentation.
A: So we do have all the docs; let me just look for them. But there's no KEP, and the reason is that the KEP system came way after all of those things landed; static pods came way before KEPs existed. After we finished all the major work, then came the boom of KEPs, but the major design is what's still being leveraged.
D: We're having that problem in other places too. The whole thing that triggered all of this refactoring was pods being force deleted, and that's a pre-KEP proposal that was written in 2015, right after one of those first meetings where we sat down, and we haven't ever gone back to it. So I was going to refresh that, because Jordan said: hey, if we want to change what the definition of force deletion is, we really need to specify it.
D: We need to have a place to argue about it. So I think it is useful for at least those three concepts. I'm planning on doing the pod safety one; Ryan and I can help with the static pod one; and then the admission one would probably be helpful too. It would at least give us a chance to write down the stuff that Lantao and I were debating in the PR: we wanted to separate the pod workers anyway, we want status to be separated, we want the pod lifecycle separated; some of those implications, get them down in one spot.