From YouTube: Kubernetes SIG Node 20200930
Description
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
B
All right, so we can continue the discussion. What's happened so far is that Mike and I started putting together a KEP and started thinking about all the outstanding items. So we can go over the outstanding items, and then I think we need to make a call on whether we want to move to beta or GA. Just a sec, I'll try to share my screen.
B
Sorry, Sergey, can you see these items, or did you want to discuss something else?
C
No.
B
Okay, all right, perfect. That sounds good, thank you. Then there were Windows privileged containers. So for cgroups v2, I took a quick look, and it looks like we should be able to easily add a Linux resources v2 without changing our current plans. But the blocker to adding that right now is that we don't have enough information to know which fields we want to expose: do we want to expose all the memory fields, and so on?
B
We have assigned some time at Red Hat to look into that over the next couple of months. So are we okay if we spike on that, and then we come back to SIG Node and state that as a recommendation?
B
Okay, all right. Then the image pull in sandbox: we have the field already, so the only thing that remains is the implementation on the runtime side to do the pulls inside the sandbox cgroup, and again, I don't think we should block on that for graduating.
D
Yeah, I'll agree with that. We're still trying; I think this is kind of a nice scenario for Microsoft, for the Windows containers, and we haven't had a chance to really follow up with this. We've been looking at the privileged container work instead. So that's perfectly reasonable.
B
Okay. So for user namespaces, the KEP is still underway. We will again come back to SIG Node next week, and we are looking at potentially adding new fields to the pod security context, the user namespace modes. And again, that is an addition, so again, not a blocker, especially given all the unknowns there right now.
A
Yeah, I think there was nothing in the KEP as presented thus far that made us think there was a backward-incompatible issue. So I think that's fine.
C
I guess the question I would have for all the additive changes is: how do we communicate those new features to the corresponding runtimes? If we have a clear way of saying that these new features will be picked up, and people will know about them, then it's fine to add new features, say, after GA.
B
You mean the runtimes? I mean, the containerd and CRI-O teams at least are involved in SIG Node, so we would expect those teams to implement. Are you talking about anything beyond release notes or feature announcements, Sergey?
C
No, I'm just thinking those two are the big ones, but I'm not sure if there are others. Once it's GA, I think it should be a little bit more formal, and I just don't know how that works.
A
I would say for a couple of these features, like user namespaces or cgroups v2, we could take it on a feature-by-feature basis. But my default feeling would be that demonstrating it is implementable on more than one runtime provider is a fair forward-looking statement, right? And I feel like we've been doing that now with the CRI and alpha to some degree, so I don't know why we would change our position.
E
And Sergey, for announcement channels for reaching our runtimes, we can use the CNCF and runtime communication channels, like mailing lists, meetings and so on, to emphasize those new features.
B
Great. So for the next two, I would want the Windows folks on the call to chime in. Mike and I chatted, and we didn't feel a need to block on the image pulls if there's a workaround to do it right now, and if later we expect it to be an additive change.
D
Yeah, if we're okay having additive changes go in to support these scenarios, I don't think this should block, especially going to beta, and probably not even GA.
B
And I think the next two are the more contentious ones. There's the Docker image pull progress indicator, and then cAdvisor versus CRI stats performance. We at Red Hat did one test, and it looked bad from a CRI stats perspective, and we need to follow up and figure out whether it's an issue in how we have implemented the stats, or an issue with how CRI stats works compared to cAdvisor.
A
So today in the kubelet we still use cAdvisor stats for the CRI-O runtime. I don't know if we did it for containerd or not, I'd have to check, but we allowed using legacy stats behaviors for particular runtimes.
A
Whereas right now, in the present state of the v2 work, we have the kubelet translating to a v1 form. And the only reason I say that is that when we talk about things like cAdvisor or the other monitoring tools, those same monitoring tools are going to be seeing a cgroup v2 host. So I think, at least in the discussions in the past with Dawn and other individuals from Google around cgroups v1 and v2, the idea was that the kubelet would tolerate either.
E
My single point was: should we really put implementation-specific parts inside the API? Why not design the API to be implementation-independent, in a sense?
A
We structured a lot of the CRI to align to the OCI runtime spec itself, so I'm assuming, yep, we would keep that alignment.
B
Right, so just to update everyone on the call: for v2, we simplified the spec so that we just have a key-value map for unified settings, so we don't have to keep tweaking the spec again and again. So the v2 spec is much simpler than the way we had it in v1 over the CRI.
E
But anyway, it's just a spec which will define how to treat those values as objects as well.
B
Yeah, I think that's a good point, and maybe when we come back to adding the new fields we can recall that discussion. To begin with, I don't see a model where we want to support different versions, unless there's a strong case presented by the VM community saying that, no, you want a separate, different version from the one you have on the host. Just to keep things simple, it might be easier too.
E
It might not be an existing use case, but I can easily come up with a scenario I would be interested to try. For example, customers are running the old CentOS 7 kernels on the host system, so we are limited in which cgroup features we can use by what the kernel allows.
E
But, for example, we have an application where we want to use memory pressure adjustments on the fly, and the knobs are available only in cgroups v2. So the easiest way to run this workload is inside a VM: run it with cgroups v2 and apply it with PSI. That's just an example, and on top of that, we don't know how many of those there will be in the future.
A
My view on this is kind of blunt, which is that we support what we test. It's hard enough for us to get a cgroups v2 test suite going, and we have cgroups v1 testing suites going. For us to take on a mixed mode would mean we would need someone in the community to take on ownership of that test, and so that would be the cost. Is it worth us doing?
A
So if it's a high enough bar to maintain that infrastructure, then let's do it. But if we don't maintain that infrastructure, then we don't even know if it works.
A
I missed the first part of your statement. I recall reviewing a KEP with Tim Hockin on whether we should give people pull progress event updates.
B
Yeah, so I remember seeing some code in dockershim. I haven't looked at it since yesterday, I didn't get the time, but there's code where we are saying: okay, don't cancel this request, because my pull is still in progress. Docker is able to communicate that back.
A
We can't communicate that back to the end user. So there was a KEP asking for there to be an event for image pull progress, so people would know why their pod is not yet starting. I think we didn't move forward with that idea, but it would improve reliability between the kubelet and the CRI to let us know that the pull is still in progress.
B
You'll see some timeouts, and eventually it will succeed. The user will end up seeing a few hangs for the pull.
A
This used to be more concerning when we couldn't do concurrent image pulling, but I'm trying to think whether there's a real concern now. Are there any unique issues in Windows image filesystem storage that we don't know about that make this important?
A
I'm sorry, could you repeat the question? Some historical context: in the early days of Kubernetes and Docker, if I recall (and if anybody watches this recording later and says Derek's completely wrong, I'm sorry), we originally did concurrent image pulls, and then there were issues with filesystem corruption that could occur when you pulled two images at the same time. So then it got moved to supporting a serialized image puller in the kubelet.
A
So we only pulled one image at a time, and then the issues in the runtime container storage layers got resolved, and so now we can go back to concurrent image pulling. The only thing I'm curious about is...
D
Whether it's problematic... I don't have some of that context right now, I'll have to check. I do remember the issues with the serial image pulls, and I believe that has been resolved. I'll have to follow up with some other folks about that.
D
I know that the way image pulls work was kind of completely reworked, and there's a different image store for HCS v1, which is what Moby is using, and HCS v2, which is what containerd is using right now. I'm not aware of any issues that we've seen with the HCS v2 image pull functionality right now, but yeah, I'll reach out and see.
B
And on the Linux side, I guess I can take an action to just test out a few scenarios and see what the behavior looks like with large images and timeouts.
A
I mean, I view it probably more as: we have to expect failures will happen, and whether it's an image pull timeout or a gRPC timeout, we would always have to call again anyway. It's the nature of Kubernetes. Okay.
B
And there's one more item, from Sascha. Sorry, I forgot to add it here.
B
Seccomp in the CRI. Right now the seccomp field is not fully typed, and the proposal is to make it typed: have runtime default, unconfined, and localhost as an enum, and then define a string for the localhost path if you choose to use that. Is that something we want to tackle for GA? It should be relatively straightforward to add if we want to make the seccomp field typed.
A
Yeah, I thought when seccomp went to GA, the other two failed validation. We should go check. Okay, so I think there's an admission check now that basically says it has to be runtime default. My memory is escaping me.
A
Yeah, I'm having one of those moments where my memory might be failing, but I think making seccomp a first-class enum is a good thing. I just can't recall what we did when promoting it to GA around the three different options; I thought restricted really had made runtime default the only allowed field.
B
And then there's the longer-term question of whether we'll ever go down the path of actually passing the full OCI types, which I'm less certain of. I see seccomp more as profiles from Kubernetes, rather than worrying about the syscall-level behavior.
E
One thing we've seen that is similar to the seccomp pattern is the block I/O kind of settings. The workload can say: I want block I/O class XYZ, and this XYZ on the host expands to actual cgroup settings, like a specific, say, amount of IOPS or something like that.
B
So this would be a new thing that we could explore adding to the CRI, I guess. I mean, Alex, if I remember correctly, we discussed this in the CDI, right? Yeah.
E
Similar to that, there is also technology for limiting RDT, for L3 caches and memory bandwidth, and it also fell into the same pattern. You have a limited number of classes which represent some settings particular to a specific node, and the workload can say: I belong to this particular class.
B
Okay, I think that's fair, but probably those would be additions, right? Unless we want to think about a unified way to handle all these profiles. And with profiles, I think the challenge is that it is less portable, because if you have some seccomp policy defined in your cluster, it may not be the same on a different cluster.
B
So I think we've covered all the outstanding topics from last week. Now the question for you, Derek, is: should we go to beta or GA? I think that's what is blocking us from starting up a PR for this.
B
I feel that beta may be better, given the uncertainty around the stats performance, but I would like to hear thoughts from folks on the call.
A
I guess there's...
B
I guess, yeah, beta would be adding that seccomp enum and then promoting the rest of it, and just doing cleanups. Mike and I identified some to-dos and some things that aren't valid in the CRI anymore.
B
There are some outstanding questions, like should we drop the stdin/stdout/TTY fields, things that we have clarified but probably need to keep supporting. So: clean up to-dos and outdated comments that aren't valid anymore in the CRI, add the seccomp changes, call it beta, and then use the beta period to dig into the performance issues, if any, and find fixes or workarounds.
A
Yeah, so for dockershim removal, I think the hope here would be that we can announce deprecation of dockershim, or basically announce no feature evolution in it, and at the same time announce CRI going to beta. As for actual removal of dockershim, I don't know if Dims is on the call, but I think we wanted to still allow three releases, so the first clock we just want to start here is the deprecation clock.
A
And given that there are some API changes we want to make and that type of thing, allowing that phase seems good.
B
And in the new KEP template there's a bunch of questions around things like upgrade and rollback, and our answer there is that we just match the version of the kubelet with the runtime. Is that a fair answer to those kinds of questions? I guess so, because I don't anticipate anyone supporting an in-place upgrade from Docker to a CRI runtime on a cluster. They would have to plan a node-by-node rollout, or they'll probably end up creating a new cluster with the new runtime.
A
Yeah, so SIG Cluster Lifecycle might have a path that could make it doable, but whatever steps were identified would be up to kubeadm or another lifecycle solution to choose to take on. For this KEP, I don't think so, yeah.
B
Yeah, so, Derek, I'll try to open up the issue and an initial KEP here today, and then I'll just paste the link in SIG Node for folks to start adding comments. I think we want to get it merged before the 6th, right? So we don't have much time.
B
All right, anything else? Otherwise, we're done.
C
I would suggest posting this decision in the SIG Node Slack channel, at least.