From YouTube: Kubernetes SIG Node 20210406
Description
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
A: Good morning, everyone. Today is April 6th, and this is our first SIG Node meeting right after... sorry, KubeCon Europe for this year. We don't have many topics today, but maybe we can spend the time to discuss more. Let's kick off, as usual, with the triage. Maybe Sergey and Elana can update us on the PRs and the backlog status.
C: Yeah, I was out of the office last week, so the only catching up I did was making sure to drag a bunch of the new SIG Node PRs onto the board.
C: I would expect they're mostly growing right now and not getting closed or merged, because we're in code freeze, so I have not been paying much attention to things. As soon as we hit the test freeze deadline, I think everything in SIG Node that needed to get handled got handled. But for the most part, help here would be very useful right now, because I don't have time to look at the PRs while I'm preparing for the next cycle, and I'm assuming that many other people are in a similar boat.
C: If you're looking for somewhere to get involved right now, there are lots and lots of PRs on the SIG Node board that need to be triaged. They need to have their priorities set; we need to make sure they're in the right column; and we need to make sure that, if they have been triaged correctly, the triage label has been set.
C: So if you're interested in helping out with that, you can go to the SIG Node board, where there are instructions in the cards at the top of the columns, and you can try to help out. You don't even need to be an org member to do some of this, for example setting priority and closing things that are no longer applicable, that kind of thing. So good luck, and hopefully we'll see some new folks helping out with that. But other than that, yeah.
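[Editor's note] For anyone who wants to try the triage workflow described here, a minimal sketch using the GitHub CLI might look like the following. The `sig/node` and `triage/accepted` labels are the real kubernetes/kubernetes labels, but the exact query, and whether the board relies on it, is an assumption:

```shell
# Sketch: find open SIG Node PRs that have not been marked triaged yet.
# Assumes the GitHub CLI (`gh`) is installed and authenticated; the label
# names below exist in kubernetes/kubernetes, but adjust to your workflow.
repo="kubernetes/kubernetes"
query="is:pr is:open label:sig/node -label:triage/accepted"
cmd="gh pr list --repo $repo --search \"$query\" --limit 30"
echo "$cmd"   # print the command; run it yourself to page through the PRs
```

Setting a priority or closing a stale PR can then be done with a `/priority` or `/close` comment, which the bot honors even for non-members in many cases.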
A: Thanks, thanks, Sergey and Elana. So let's move to the topics. Oh, by the way: I hope we can discuss the next milestone, the KEPs for 1.22, next week. I know we just had code freeze and we also have KubeCon, so we don't have the time now, but there are many projects carried over from the last milestone to the next. Also, several projects were raised during the last cycle, and we suggested they go to the next milestone.
A: So that's why we have to go over those kinds of things; let's talk about those next week. So the first topic is to be named... we need the people here.
B
Sorry
sort
of
like
I,
I
also
want
to
remind
that
pretty
soon
the
cherry
pick
will
close
for
april
release
and
I
think
it
will
be
last
release
for
118
or
something
like
that.
Maybe
if
I
count
it
correctly,
so
if
you
want
something
for
118,
please
go
ahead.
C
Yes,
I
believe
the
cherry
pick
deadline,
for
that
is
this
friday.
So
it's
got
to
be
done
by
the
end
of
this
week
and
don
I
added
for
122
kep
planning.
I
added
that
to
the
agenda
for
next
week.
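[Editor's note] For anyone with a fix that needs to land in 1.18 before that Friday deadline, kubernetes/kubernetes ships a cherry-pick helper script. The PR number below is a placeholder and the exact invocation can vary by release, so treat this as a sketch:

```shell
# Sketch: prepare a cherry-pick of a merged PR onto the release-1.18 branch.
# hack/cherry_pick_pull.sh is the helper in the kubernetes/kubernetes repo;
# the PR number is a placeholder, and GITHUB_USER must point at your fork.
branch="upstream/release-1.18"
pr="12345"   # placeholder: the merged PR you want to cherry-pick
echo "GITHUB_USER=<your-github-user> hack/cherry_pick_pull.sh $branch $pr"
```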
A
Cool,
thank
you.
Thank
you.
So
next
one
is
the
villain
release
the
in-place
part
of
the
vertical
scanning.
We
know
it
won't
be
here.
I
believe,
and
I
revealed
his
cap
and
I
think
that
we
mentioned
a
couple
times.
We
are
okay
with
the
new
proposal,
which
is
came
from
the
team,
hacking
and
there's
the
several
of
nine
discussing,
and
also
here
I
saw
darek
just
joined
derek.
A: I just reviewed it and gave the looks-good-to-me, but not the approval, because I remember last time you raised some concern about the resize sub-resource, which is not in the current proposal. So I left some comments there; please take a look. If you are okay, let's move forward, and hopefully in 1.22 we can make more progress on this in-place VPA work.
D: Yeah, I definitely want to make progress on this in 1.22, and I'm just coming back from PTO, so I will get to that. Oh, thanks.
E: So, Dawn, did you get a chance to take a look at the issue and the proposed changes to the test?
F: We just posted the PR today, so it's sort of late-breaking. This was a result of what Derek had suggested on the issue, which is basically: let's just make a change to the test.
F: And so the change that I've got in the PR is basically just this: where it was checking, you know, does propagation go to the host OS, instead of checking the host OS it specifically looks at the PID where the kubelet is running, whatever namespace that is, and makes sure the propagation goes up that high, but doesn't necessarily check a namespace above.
F: So this should work regardless of whether this is the traditional current deployment, where the kubelet is in the same mount namespace as, you know, PID 1, or whether it's in a sub-namespace, in its own separate namespace; it'll be exactly the same. The test should pass exactly the same regardless, and really just enforce the condition of containers passing mount points to each other, and not necessarily anything at all about the host OS.
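[Editor's note] The revised check described here can be pictured with standard Linux tools. This is a sketch of the idea, not the actual e2e test code, which is written in Go; `nsenter` and `findmnt` are real util-linux commands, but the mount path is a made-up placeholder:

```shell
# Sketch: assert that a mount is visible in the kubelet's mount namespace,
# whichever namespace that is, instead of asserting host-OS visibility.
kubelet_pid="$(pidof kubelet 2>/dev/null || echo '<kubelet-pid>')"
mount_path="/var/lib/example-propagated-mount"   # placeholder path
echo "nsenter -t $kubelet_pid -m -- findmnt $mount_path"
# If findmnt succeeds inside that namespace, propagation reached at least
# as high as the kubelet, with no requirement on namespaces above it.
```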
D: Specifically, yeah. So I was personally comfortable with that change. Obviously, this is an issue that Jim and others have been looking at.
D: If there wasn't any objection to ensuring that mounts were propagated up to the same visibility that the kubelet itself had, this seemed like a good end state with respect to the API promises that Kubernetes itself would provide.
H: And hey, Jim, Derek, one problem here with the PR is probably finding the path of the kubelet so you can do the pidof, because the path /usr/bin might not be the place where other people keep the kubelet. So that's...
D: A fair point, yeah, good point. I haven't looked at the code in depth with Jim. Aside from that, though, I see you're iterating on this stuff now, but I don't know, Jim, if you want to recap it; I think you said it's something like a 50% CPU savings in systemd.
F: Yeah, so basically there are three points at which we can compare numbers. One is the current state, like RHEL 8 systemd without the backport; with that, systemd will max out using 100% of one core in the worst-case scenario, where you've got a huge number of mount points. Then there are the systemd upstream changes that are in today, which are also being backported to a soon-to-come RHEL 8 variant. The estimate I saw said that should improve it by 30 to 50%, which means that worst case we're still looking at 50 to 70% CPU utilization, on a single core only, but still, that's not nothing. And with this e2e test change in, and then subsequent changes to actually move the kubelet and CRI-O into their own mount namespace, that goes to zero. So that is, you know, a significant difference over the 50 to 70%.
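[Editor's note] One rough way to observe the effect being quantified here is to watch PID 1's CPU usage alongside the size of the mount table. This is an illustrative sketch, not how the numbers above were measured; `pidstat` comes from the sysstat package:

```shell
# Sketch: observe systemd (PID 1) CPU usage versus mount-table size.
# The 100% / 50-70% / ~0% figures quoted in the meeting came from RHEL
# testing, not from this snippet.
cmd_mounts="findmnt --list | wc -l"
cmd_cpu="pidstat -p 1 1 5"
echo "$cmd_mounts    # number of mount points systemd is tracking"
echo "$cmd_cpu       # per-second CPU usage of PID 1 (systemd)"
```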
H
So
one
question
I
had
jim
is:
you
were
talking
about
rel
and
cryo,
and
so
how?
What
is
the
guidance
that
we
can
give
to
other
deployers
and
other
operating
systems.
D: The guidance would be to run the kubelet in its own mount namespace, separate from the host.
F: And I'd be happy to write up something specific. I mean, I have an example on GitHub that people can look at if they're curious, but basically there's a set of three things that you need.
F: That's at the kubelet and CRI-O level, and then the last piece is just in your init system, where you create this mount namespace: you have to make sure both your container runtime and the kubelet go into that mount namespace. Otherwise, you know, they won't see each other's mounts, and that would be bad.
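[Editor's note] The init-system piece described here could be sketched as follows. This is a hypothetical arrangement, not the actual RHEL/OpenShift implementation or the GitHub example mentioned: a mount namespace is created and pinned once at boot, and both the runtime and the kubelet are started inside it with `nsenter`, so they share mounts with each other but not with PID 1. `unshare --mount=<file>` and `nsenter --mount=<file>` are real util-linux options; the paths and unit wiring are assumptions:

```shell
# Hypothetical sketch: one shared, pinned mount namespace for kubelet + CRI-O.
ns="/run/kubens/mnt"   # illustrative path where the namespace is pinned
# 1. A oneshot boot step creates and pins the namespace (slave propagation
#    so host mounts still flow in, but new mounts don't leak back to PID 1):
echo "unshare --mount=$ns --propagation slave true"
# 2. The container runtime's systemd unit starts inside that namespace:
echo "ExecStart=/usr/bin/nsenter --mount=$ns -- /usr/bin/crio"
# 3. The kubelet's unit joins the very same namespace:
echo "ExecStart=/usr/bin/nsenter --mount=$ns -- /usr/bin/kubelet"
# If only one of the two joins, they stop seeing each other's mounts,
# which is the failure mode called out in the meeting.
```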
H: Yeah, perfect, Jim. It's just that usually we do this in a KEP, but, you know, let's just write it up somewhere, so we can tell people like, for example, the image-builder team. They will need to know what they need to change so that all the people that use Cluster API get this performance boost.
D: Yeah, Dims, I think it's good that we give guidance; I 100% agree with this. And just even from a Red Hat standpoint, my own guidance for us at Red Hat is that this is only pertinent in environments where you need to preserve as much CPU as possible for the workload. That's probably important in particular industries; you can imagine telco or media, that type of thing. For general-purpose clusters, this is good, but
D
It's
probably
noise
that
people
wouldn't
notice,
but
if
you're
doing
industry
specific
issues,
I
think
we
don't
have
a
great
place
in
the
project
to
document
like
industry-specific
config
but
like
it,
I
would.
I
would
anticipate
like
an
intel
white
paper
on
dbdk
would
say
that
this
is
best
practice
for
the
reasons
that
jim
is
alluding.
A: I think maybe a KEP is not a bad idea, because this is a best practice and it will also change our behavior, right? And the KEP actually has a built-in process for us to say: here's what we expect, and what we expect for it to graduate, at least those processes. A lot of things we can maybe just skip there, but that means the problem statement,
A: what kind of problem we're trying to address and what the solution is, can be captured in one place. It can also target a milestone and a release, and SIG Node can sponsor it and make sure it has a reviewer and an approver. So there's a whole process, including documentation at the end, and the KEP already has that built-in process; we'd just be using it. We could relax many of the other requirements; skipping certain things isn't bad.
D: So I think the tension is just that a KEP for this wouldn't be coupled to a feature gate. We used to just have general design docs in kube, right, where you could write down the way something would be structured; I think back to Dawn's node-allocatable design doc.
D: There's not really a code change beyond this e2e test that we've identified as pertinent to address.
H: Even the community repo has, you know, directories for SIG Node; we'll just create a new .md file there. That should be fine. Awesome.
A: So once it's ready, we can talk, discuss, and review, and once this kind of converges, we can move forward. Yeah.
H: Yes, so I was going through my older notes to figure out, you know, what kinds of things we have yet to tackle, and one of them that came back was dynamic kubelet config, which has been stuck in beta since 1.11. I think we need to pull the plug on it one way or another, and I have a feeling that we should deprecate it, get rid of it, and eliminate that code.
H: I opened an issue basically to see what we wanted to do. That was the first one; any thoughts there?
A: I think we all agreed in the past that we should deprecate and remove that. With the kubelet config work, certain things from that process did get implemented, like file-based config; the component configuration is already there and in use. But the dynamic part can be removed; we all agree on that. Hopefully we can make progress and remove that code in 1.22.
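[Editor's note] The file-based configuration that stays behind, as opposed to the dynamic ConfigMap mechanism being removed, looks roughly like this. The `kubelet.config.k8s.io/v1beta1` API group and the kubelet's `--config` flag are real; the specific field shown is just an illustrative subset:

```shell
# Sketch: the non-dynamic, file-based kubelet configuration that remains.
# A KubeletConfiguration file is handed to the kubelet at startup.
cfg='apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  memory.available: "200Mi"'   # illustrative field, not a recommendation
printf '%s\n' "$cfg"
echo "kubelet --config=/etc/kubernetes/kubelet-config.yaml"
```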
A
Exactly
so
yeah,
so
this
is
a
while
next
week
we
should
talk
about
1.22,
owner
and
and
the
reviewer
and
the
uproar.
D
Sounds
good
yeah
and
the
second
yeah
like
what
is
the
I'm
trying
to
think
of
of
an
equivalent
we've
done
where
we
had
something
beta
and
then
deprecated,
but
is
the
step
in
122
anything
more
than
labeling?
It
deprecated.
H
I
think
that's
fair,
just
calling
it
deprecated
should
be
fine
for
122.
yeah,
but.
D
That's
fine
yeah.
H: The second one was: I was looking at kubenet use cases, the usage of kubenet in different places, and I realized that kOps uses kubenet as the default. So I started an issue there, and there was a lot of action on that issue about what exactly we are deprecating and what we are not.
H: Apparently the Docker implementation is just one implementation of kubenet, and there are other implementations of kubenet where people construct CNI configs and apply them directly; that's basically what that thread is about. I wanted to make sure that folks know that when we remove dockershim, there is an option called kubenet that we'll be removing, because that's under dockershim. So this was a follow-up for the dockershim removal.
H
Cops
is
the
biggest
culprit
at
this
point
jack.
There
is
a
link
in
the
meeting
docs.
A
Yeah,
we
need
maybe
double
check
who
is
also
adjusting
and
on
the
cops
too
and
make
sure
and
if
needed.
Maybe
we
can
ask
him
to
give
us
the
more
data
and
how
we
move
forward
on
this
manual.
G
Don
could
we
jump
back
to
the
first
agenda
item
for
just
a
minute
about
the
in
place
of
pottery
sizes?
Yes,
so
I
saw
that
you
tagged
the
enhancement
pr.
I
just
pasted
into
the
chat
with
lgtm,
but
I
think
it
still
needs
an
approve
from
either
you
or
derrick
anxious
to
get
that.
A: Yes, I just mentioned that at the beginning of the meeting. I said I gave the looks-good-to-me because I agree, and I just want to leave a little bit of buffer time for Derek to take a last look, because last time we thought Derek still had some questions on the resize and the sub-resource. I hope we can punt on that separate sub-resource definition, but I still want Derek to take a last look at that proposal.
C: I think there was something happening, but yeah, I was pinging anybody who maybe wanted to add something to the recording. We submitted the recording yesterday, so there will be a KubeCon node maintainer track talk that Sergey and I recorded last week, with all of the updates from the last year.
E: We said we'll do it for next week, so I think I can get the doc ready, work with Elana, and we can have some starting point for next week.
A
The
calendar
is
next
week,
so
somehow
I
saw
that
this
week
last
week
is
the
is
this:
is
the
community
meeting?
So
that's
why
I
thought
okay
this
week.
Maybe
it's
too
rush
so
next,
otherwise
I
will
try
to
get
we
talk
about
discuss
this
week.
Let's
say:
let's
talk
about
next
week,
yeah
discuss
that
at
the
beginning
of
the
meeting
also.
D: Understanding who we want to potentially align with, and having their time available to help drive the things that folks may want to drive. But yeah, okay, I'll wait for Mrunal, if you want to send that out, yeah.
E: I'll update the doc and I'll send it on the channel, so folks can start looking at it before next week. Yeah.
A: So, because there are so many e2e tests, we want to build out the team and area ownership, so people can help with those test failures. Otherwise it's kind of only, like, Sergey and Elana, and that's not sustainable. So maybe we also want to talk about how to build those things and build that ownership.