From YouTube: Kubernetes SIG Node 20211005
Description
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
A
Hello, everyone, and welcome to today's slightly belated edition of SIG Node. It is Tuesday, October 5th, 2021, and I believe the first item on our agenda today is: what's going on with PRs, Sergey?
B
Yeah, if you had any FOMO about what's happening, don't fear: nothing major is happening. We have a few PRs open, I think a little bit less than usual, but pretty close to average, and of the closed PRs only one went rotten. It tries to increase a buffer for bare-metal machines running the kubelet, and it just rotted away. So if you want to fish it out of rotten and take a second look, please do. And from merged PRs we have one bug fix, where we reverted the change with...
B
...with how we compile the kubelet and consume more memory. I don't see many other very interesting ones; I mean, there are quite a few PRs with notes, but I think this is the very interesting one. So if you're interested, please take a look. Yeah.
A
For those who aren't familiar with that one: I believe SIG Release, for the 1.22 release, turned on ASLR in the kubelet and all Kubernetes components, and as a result they consume significantly more memory. So we've reverted that until we can have a little bit more discussion. Let's see, anything else, Sergey?
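For background, enabling ASLR for a Go binary means building it as a position-independent executable (PIE). A rough sketch of the build difference being discussed, with illustrative commands rather than the actual release tooling:

```
# Illustrative only: ASLR requires a position-independent executable.
go build -o kubelet ./cmd/kubelet                 # default buildmode
go build -buildmode=pie -o kubelet ./cmd/kubelet  # PIE build; observed to report notably higher memory use
```

The revert mentioned above returned the components to the default build mode until the memory impact is better understood.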
A
Cool. Next thing on the agenda, I have a reminder. We as a SIG suggested that we would have a soft code freeze in two weeks, a little bit under two weeks now, and also next week we have KubeCon. Now, I haven't heard any pushback from anybody on this soft freeze deadline, where we're hoping that everything that's going to beta will be merged, but on the other hand, I haven't seen PRs for the majority of the things that need to go to beta.
A
So I feel like, if you are not able to make this deadline, please say something now, so we can push it or adjust accordingly, because it would be really nice if at least we had everything up and ready to review, with work-in-progress PRs open and whatnot.
A
Also, on the topic of KubeCon next week: SIG Node is having a session. It is a virtual session, so you can go ahead and attend, or watch the recording if you are so inclined. I included the link in the agenda. Any questions?
C
I just had a comment about the soft freeze deadline. Given that KubeCon is going to be next week, it might be a bit challenging to solicit reviews and approvals from the SIG Node reviewers, maintainers, and approvers. So I was wondering: does it make sense to maybe push it back a week, given that KubeCon is next week?
A
Does anybody else have any thoughts on pushing back the deadline a week? I have no problem with doing it. It looks like Derek is plus one in the chat, as is Mrunal; lots of plus ones in the chat. Okay, we'll push this back one week. Thank you very much. I'll take a note on the agenda.
B
Yeah, let me try to share, and I will start talking. So, the reason I'm talking about it now is that we plan to remove dockershim completely from the kubelet in 1.24, so 1.23 is the last release when we can do some major changes, so customers can try it out and give us feedback if something is not working or something needs to change. I prepared this little status document, mostly a brain dump on what's going on and how we're doing on documentation and deprecation, and at the end of this document I also prepared a questionnaire that I'd really like to distribute among end users to sense the state of the world: what people may be complaining about or have as blockers.
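For readers following along, the migration being discussed means pointing the kubelet at a CRI runtime directly instead of the built-in dockershim. A minimal sketch of the 1.23-era kubelet flags, assuming a stock containerd install (the socket path varies by distribution):

```
# Illustrative kubelet flags for running against containerd instead of dockershim.
--container-runtime=remote
--container-runtime-endpoint=unix:///run/containerd/containerd.sock
```

The cloud offerings discussed next mostly amount to rolling out this kind of switch on managed nodes.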
B
I know I have some visibility into what's happening in GKE, but I'd really like to understand what's happening across the community, wider than just GKE. And to start off, I want to say that the major clouds, I mean the three clouds that I listed here, have already properly switched, like have an offering without dockershim. Interesting thing, and this is lexicographical order, no preferences: AKS is...
B
Starting
support
of
docker
container
dn119
and
it's
a
hard
switch
for
linux,
so
customers
have
no
choice,
they
need
to
transition
to
continuity,
and
this
is
a
kind
of
clean
transition
on
windows,
though
it
happened
like
on
linux.
That
happened
in
july
on
windows.
It
happened
quite
recently
in
may
and
it
started
with
120.,
so
we
just
started
getting
some
customers
transitioning
to
windows
and
as
far
as
I
can
tell,
there
is
not
much
feedback
yet
so,
hopefully
we
will
get
more
feedback
and
more
validation
before
we
go
into
124.
B
On
eks
situation
is
a
little
bit
worse
because
customer
just
got
the
option
to
transition
to
120
to
continuously
in
121,
and
it
happened
in
july.
So
this
is
pushing
it
a
little
bit
because
starting
july
is
it's
it's
available.
So
assuming
customers
started
testing
it.
B
I
don't
know
how
much
feedback
we'll
get
from
ets
customers
by
the
code
fees
of
123.,
but
let's
see
maybe
maybe
we'll
have
some
good
feedback,
or
maybe
everything
just
works,
and
I
don't
know
windows
situational
ets
to
be
fair.
I
didn't
look
any
deeper
on
gke.
We
had
a
contingency
support
for
quite
a
while.
B
It
started
1
14.,
so
we
had
some
customers
already
running
on
continuity
in
119
we
switched
to
container
ds
default
and
recommended
runtime,
and
we
have
quite
like
quite
reasonable
chunk
of
contingency
customers
now
and
many
customers
when
went
through
transitions
smoothly,
but
we
uncovered
a
lot
of
issues
with
transition
that
most
of
them
are
fixed,
now,
windows,
same
situation
as
with
aks.
We
just
offering
it
in
june,
started
offering
it
in
june,
but-
and
we
don't
have
many
windows-
customers
transition
to
continuously,
yet
not
even
trying
it.
B
The
situation
yeah
and
I
said
the
most
alarming
thing-
is
not
linux,
it's
windows,
but
hopefully
we
can
get
some
feedback
by
124.
B
And
another
thing
that
I
wanted
to
mention
here
is
mirantis
announced
craig
dougherty,
I
don't
know
the
usage
of
it.
It's
just
been
announced
and
there
is
a
repository.
So
if
anybody
knows
any
information
about
how
it
works,
comparing
to
docker
shim,
does
it
have
any
issues?
Can
we
very
committed
to
people
that
would
be
really
nice
to
highlight
and
maybe
even
highlight
in
our
documentation
on
like
kubernetes
io,
so
people
will
be
aware
of
this
option.
B
So
this
is
a
state
of
the
world
and
among
known
issues
I
mean
we
in
geeky
switching
to
container
d,
so
we
uncovered
quite
a
few
container
d
issues.
If
you
look
at
container
d
release
notes,
you'll
find
some
known
issues
like
some
of
them.
B
On
top
of
my
mind,
is
environment
variables
were
shared
between
containers
in
the
port
unintentionally
of,
of
course,
and
summoner
variables
were
meeting
so
that
was
fixed
in
and
cherry
picked
in
all
continuity
versions,
all
support
container
versions,
but
there
were
more
issues
like
that
right
now.
We
also
tracking
some
other
issues.
One
known
issue:
we're
always
saying
that
the
continuity
like
migration
from
docker
shim
is
no
open.
Runtime
will
be
exactly
the
same
turn
out.
B
There
is
a
very
old
legacy
thing
that
docker
shim
supports
this
media
type
for
images
and
container
d
doesn't
support
it.
I
don't
know
about
cryo,
so
this
is
kind
of
like
not
compatible
specification,
not
oci
compatible
specification
for
containers,
but
we
still
have
containers
with
the
specification
and
docker
sim
perfectly
fine
with
that
they
just
supported
backward
compatibility
and
the
newer
runtimes
doesn't
support
it.
So
there
is
not
like
even
our
statements
that
in
run
time,
it
will
be
exactly
the
same.
B
It's
not
completely
true,
but
it's
not
a
major
issue
like
I
think.
Most
of
the
recent
images
are
already
compiled.
Fine
we've
met
couple
customers
with
older
images,
so
itself,
then
exact,
prop
timeout
difference
in
execution
how
it
behaves.
We
changed.
We
fixed
the
bug,
so
whenever
boxes
back
is
fixed,
the
behavior
is
the
same
across
docker,
consumer
and
continuity.
B
But
if
this
flag
is
not
set
to
true,
so
people
still
running
with
a
box.
So
if
they
don't
want
exact,
prop
timeout
to
be
expected,
then
behavior
will
be
different
on
docker
human
container
d
and
it
may
result
in
some
probes
and
containers
unexpectedly
marked
as
failed
when
before
they
weren't
marked
as
failed,
because
docker
reacted
on
exact,
prop
timeout
differently
and
lastly,
yeah.
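The flag being referred to here appears to be the ExecProbeTimeout feature gate, which makes the kubelet actually enforce timeoutSeconds on exec probes. A minimal sketch of a kubelet configuration keeping the gate at its default:

```yaml
# Illustrative KubeletConfiguration fragment. With ExecProbeTimeout: true
# (the default since 1.20), exec probe timeouts are enforced; setting it to
# false preserves the old dockershim-era behavior of ignoring them.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  ExecProbeTimeout: true
```

Clusters that explicitly set the gate to false will see the runtime-dependent probe behavior described above.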
Lastly, there is an issue that we observed a lot, and we don't know the root cause yet.
B
It is "failed to reserve container name": the kubelet, through CRI, tries to create a sandbox, the creation of the sandbox fails, and then it retries again, and then containerd complains that the name is already taken.
B
We
don't
know
whether
it's
continuity
issue
or
like
theoretically,
it
should
be
the
same
for
docker
stream
and
continuity,
but
we
see
it
a
lot
more
often
on
container
d,
so
it
may
be
containing
the
issue,
not
a
couplet,
but
we're
still
investigating.
B
And
then
there
are
some
linux
specific
problems,
windows,
specific
problems
on
linux.
We
start
seeing
more
and
more
what
we
expected
that
not
all
vendors
support
the
continuity,
yet
one
notable
example
is
new
relic,
for
instance,
they
don't
support
continuity
or
any
other
runtime
to
be
except
docker
shim
for
monitoring
windsor
agent.
They
plan
to
support
it
by
end
of
year,
at
least
what
what
I
heard,
but
it's
not
there
yet,
and
that's
obviously
blocking
some
customers
to
adopt
containers
and
try
it
out
and
uncover
more
issues
with
that.
B
And
then
there
are
some
differences
differences
in
how
she
advisor
behaves.
There
are
some
missing
metrics.
Some
labels
that
are
incompatible
have
different
format
between
containers
and
docker.
Sim.
I
listed
couple
issues
I
think
we
fixed
most
of
the
bugs
and
c
advisors,
and
there
is
a
c
advisor
build
now
available
that
behaves
exactly
the
same
as
docker
shim
so
collects
image.
Fs
metrics
collects
all
the
proper
labels
and
we
really
hope
to
merge
this
c
advisor
built
into
123
of
container
d
or
for
kublet.
B
So
we'll
have
this
bug
fixed
for
everybody
who
doesn't
use
standalone
c
advisor
and
yeah
that
that
wasn't
quite
trivial,
because
we
took
a
depends
on
cri
and
there
is
like
cross
dependencies
and
david
is
doing
great
job
of
like
navigating
this
rendering
dependencies.
B
Then
another
legacy
issue
in
with
docker
shim,
a
docker
shim
at
some
point,
took
a
have
a
bug.
That
say
I
mean
not
about
behavior,
that
they
set
ipv6
or
disable
ipv6
or
enable,
I
think
it's
disable
ipv6
and
the
other
runtimes
doesn't
do
that,
because
it's
not
required
to
just
how
docker
shim
doing
stuff
so
yeah.
This
is
some
something
that
will
be
different
across
docker
scheme
and
other
runtimes,
because
docker
shim
has
a
specific,
behavior
and
other
times
needs
to
accommodate
for
this
difference.
B
If
they
want
to
then
yeah
another
like
canada
ignores
some
privileged
ports.
Maybe
it's
contingency
specific
and
then
there
is.
There
is
an
ip
leakage
that
was
fixed
recently
on
continuity
again,
when
sandbox
creation
fails,
like
third
down,
fails
and
something
ip
maybe
leak
leaking,
and
it's
because
of
some
race
condition
and
how
ordering
of
creation
of
network
and
the
creation
of
sandbox.
B
So
yeah
working
it,
I
listed
some
work
items
here
that
needs
to
be
done
and
some
work
on
that
already
taking
dependency
on
docker
simultaneously,
so
people
saying
124,
it
will
be
gone.
So
let's
not
fix
the
issue.
So,
let's,
if
anybody
interested
to
help
out
with
these
items,
please
do
it.
It
would
be
great
to
have
everything
fixed
by
123,
so
end
users
will
be
able
to
try
it
out
and
make
sure
that
124
will
save
to
remove
it
and
then
questionnaire
wise.
A
Hearing no questions, let's move on to the next agenda item, because we've got a few things left. Paco, who is not here, I assume because of time zones, linked a PR and says: "My application to be a SIG Node reviewer has not been responded to after a few weeks. I'm not sure what that means. Does it mean no objections, or declined, and how could I make it happen? I would like to get some advice on how to be a qualified reviewer." So I think that's a question for Derek.
D
Yeah, so I commented on this PR. Tucker had reached out...
D
...while we were trying to establish some go-forward guidelines on how SIG Node would handle reviewers and approvers. So, for awareness, we did send a draft out to existing sub-project leads on what qualifications we would look for, to provide a basis for that going forward. I think comments on that doc have largely dried up in the last week or two, so I think we can put it forward for the rest of the project to review, by the standards outlined in that document.
A
Is that doc going to be shared with the SIG at some point soon?
D
Yeah. If those folks who had it shared with them previously can weigh in, which is particularly the sub-project leads, please do, and if there are no comments, I guess we can share it out to the SIG before next week's meeting.
A
I should add: next week we are not meeting in the main SIG meeting, but we will still be meeting in the CI sub-project, as we are doing our alternate time that's a little bit more APAC-friendly, so we don't expect those people to be attending KubeCon; we will keep that meeting time because KubeCon is in a time zone that they can't make. So, I guess, next item on the agenda. Sergey, you want to talk about node test tags cleanup?
B
Yeah, we discussed this a couple of times already. We were looking at node conformance, what it means to be a node conformance test, and we've been discussing what's the difference between NodeFeature and Feature. I was analyzing the tests and the testgrid that we have, like the tests and test cases that we have, and the result of this investigation is this document with a proposal for how we can clean up all the SIG Node tests. I will entertain it through SIG Testing, and maybe SIG Architecture as well for the node conformance part.
B
If
you
have
any
comments
or
feedback,
please
read
it
up
and
give
your
opinion.
It
also
has
this
very
nice
spreadsheet
I
come.
I
took
all
the
test
cases
that
we
have
today.
There
is
a
name
of
test
case
and
then
you
can
filter
it
by.
Is
it
disruptive,
for
instance,
or
is
it
like?
B
You
can
feel
there
are
disruptive
and
not
conformance
at
the
same
time,
you
see
there
is
no
deceptive,
not
conformance,
which
is
good,
so
yeah
played
play
with
that
if
you're
interested,
it
really
helped
me
to
run
this
analysis
yeah.
That's
all
I
want
to
say:
please
read
it
up
and
give
your
feedback.
A
Sounds
like
no
last
item
on
the
agenda.
I
just
wanted
to
paste
this
in
for
my
email
with
respect
to
soft
code
freeze,
so
we
have
a
bunch
of
caps
up
and
I
took
a
look
at
every
cap
and
then
pasted
any
prs
that
were
linked
to
the
cap.
A
So
if
you
are
an
author
of
one
of
these
things,
I
will
send
an
email
to
the
mailing
list
as
well,
but
like
please
feel
free
to
either
go
and
fill
this
out
right
now
in
the
agenda
or
go
ahead
and
link
back
your
pr
I'll
spend
a
little
bit
more
time
hunting
for
these
things
later,
but
right
now,
this
is
basically
the
state
of
the
world
in
terms
of
what
the
release
team
knows
about
and
therefore
what
we
would
know
about
if
we
weren't
like
going
and
digging
in
the
kubernetes
kubernetes
repo-
and
this
is
like
that-
it's
clearly
not
all
work
that
we're
planning
for
this
release.
A
So if you could surface that, that would be much appreciated.
A
I see Francesco's adding some notes.
E
Hey Elana, for some of the Windows KEPs that do have some significant node overlap, should I add those to this list here too, to track, just to make sure that there's PR traction for some of those?
A
I think that as long as the SIG Node approvers are, you know, properly tagged and informed, and you're tracking the PRs correctly in the enhancements repo, I don't think there should be any issue.
A
I think we're also not going to be subjecting SIG Windows to our own internal deadlines, because we can't do that. So, okay. We'll still try and have all the PRs up; that would be awesome. It'll make it easier for the release team too; this will make everyone's lives easier. So yeah. I know, for example, there's nothing under swap support right now.
A
I
think,
because
I've
been
so
heads
down
on
fixing
blocking
bugs
that
I
haven't
had
a
chance
to
work
on
feature
stuff
so
like
I
know
that
one
is
definitely
behind,
but
according
to
like
the
the
caps.
This
is,
you
know,
kind
of
what's
been
linked
in
there.
If
you're
adding
things
to
this
list,
please
make
sure
that
they're
also
updated
in
the
enhancements
repo
item,
because
otherwise
no
one
will
be
able
to
find
it
again.
B
The template for the PR is a little bit confusing, because it has "KEP" and just the square brackets, and then people typically don't fill it in.
A
Sounds like no. Thanks for joining us today, everyone, and thanks for joining a little bit late. Hope to see you all next week, or, well, not next week: the week after, after KubeCon. Cheers, everyone.