From YouTube: Kubernetes SIG Node 20200929
B
Okay, thank you. It's the SIG Node meeting, September 29th. Hi everybody, good Tuesday. We are making stable progress, downwards on the number of PRs, but it's not super fast progress. This week most of the PRs that were closed were actively closed, which is good; very little went rotten. Yesterday at the SIG Testing meeting we discussed that test-related PRs are not being approved very quickly, so I will start poking folks about the test PRs. Please pay more attention: we have three pages of PRs in the test area, and they are just slowly rotting away. I will start poking people about those PRs, because they are quite important for the release. Thank you.
A
Thanks, Sergey. So for those test PRs that are blocked, what is needed for the approval? Does it come from our SIG, or from SIG Testing?
D
Yeah, sure. Can I maybe share my screen, if that's okay? Can you enable that?
D
All right, it should be sharing, I think. So I want to talk a little bit about this KEP. Thank you so much, Derek and Dawn, for already taking a look and pulling in a couple of other folks. A little background: this is something Mrunal and I have been working on, so I'll go over it briefly.

The idea here is that we want to add support in the kubelet to make sure the kubelet is aware of the underlying node shutting down. Today the problem is that when the machine shuts down, the kubelet is not aware of it, so pods just terminate: the underlying init system on the node will basically kill the kubelet and kill all the pods, and things like your preStop hooks and the termination grace period are usually not respected.
D
There's more detail here; I won't go into every piece of it, but basically the idea is that on Linux there are a lot of ways to shut down a machine. You can just call the shutdown command, or, if you're using systemd, you can do systemctl poweroff, and each cloud provider also has the ability to stop your underlying VM if you're on a cloud provider. But usually what happens underneath is the same: it results in an ACPI event, and ACPI events were traditionally handled by a daemon running on the Linux machine. It used to be something called acpid, and now it's systemd; specifically, logind from systemd handles that event on most distros.
D
So the proposal is to make use of inhibitor locks, which is a feature in systemd that was released in systemd 183, in 2012, so it's been around for quite a while at this point. This feature allows an application to request some amount of time from systemd to delay the shutdown by, and also to get a notification that the shutdown has begun.

It can delay the shutdown by some number of seconds, and you can do whatever you need during that period. So the proposal is that the kubelet will use this inhibitor lock mechanism to ask systemd to delay the shutdown by some configurable period of time, and during that period it will gracefully terminate all the pods on the machine. It will give them a certain grace period, which we'll talk about shortly, and then the machine just proceeds with its normal shutdown.
D
So that's the idea. We're going to introduce a new field in the kubelet config, shutdownGracePeriod, which the user, or rather the cluster administrator, will specify.

Basically, this is the time we'll try to request from systemd to delay the shutdown by, and then each pod will get its own budget: from the pod spec we'll take the terminationGracePeriodSeconds, as described, and we'll give each pod the minimum of its termination grace period and the shutdown grace period.
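The per-pod rule just described (each pod receives the lesser of its own terminationGracePeriodSeconds and the node-wide budget) can be sketched in a few lines; this is illustrative Python, not the actual kubelet code:

```python
def effective_grace_period(pod_termination_grace_seconds: int,
                           shutdown_grace_period: int) -> int:
    """Each pod gets the lesser of its own terminationGracePeriodSeconds
    and the node-wide shutdownGracePeriod, as described in the proposal."""
    return min(pod_termination_grace_seconds, shutdown_grace_period)

# A pod asking for 60s on a node with a 30s shutdown budget gets 30s;
# a pod asking for 10s keeps its 10s.
print(effective_grace_period(60, 30))  # 30
print(effective_grace_period(10, 30))  # 10
```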
D
The idea is that we want to handle cases like preemptible VMs on cloud providers, which give you only 30 seconds for a VM to shut down (each cloud provider has some equivalent of that, 30 seconds or two minutes or so), while your pod spec might actually request more than that.

So the proposal is that we'll just give as much as we can: either the shutdown grace period the cloud provider actually provides, like 30 seconds, or whatever the pod spec says, whichever is the minimum of the two. Another thing the KEP touches on is that we want to shut down in order, a little bit. When you're shutting down all the pods on your machine, some are actually critical.

Take your logging daemon: you want to make sure it exits last, so that during shutdown all your logs get propagated. So the idea is to make use of priority classes. Pods today can have a priority, and there are well-known priorities called system-cluster-critical and system-node-critical. We expect things like those logging daemons to have a priority class like that, and for those specific ones, we'll shut them down
D
Last: we'll allocate some period of time at the end, first shutting down your user pods and then shutting down the cluster-critical pods. Basically, we'll use some existing functionality in the kubelet, like the killPod function, which is used in general to kill pods; that already handles preStop hooks and the expected pod termination lifecycle, SIGTERM and all that.
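The two-phase ordering described above (regular pods first, then the critical pods in a reserved slice at the end of the window) might look roughly like this; names and data shapes are illustrative, not the kubelet implementation:

```python
CRITICAL_PRIORITY_CLASSES = {"system-cluster-critical", "system-node-critical"}

def shutdown_order(pods, shutdown_grace_period, critical_grace_period):
    """Return (pod_name, grace_seconds) pairs in kill order: regular pods
    first, using the budget minus the slice reserved for critical pods,
    then critical pods with the reserved slice.  A sketch of the
    two-phase idea, not the kubelet implementation."""
    regular_budget = shutdown_grace_period - critical_grace_period
    order = []
    for name, priority_class, requested in pods:
        if priority_class not in CRITICAL_PRIORITY_CLASSES:
            order.append((name, min(requested, regular_budget)))
    for name, priority_class, requested in pods:
        if priority_class in CRITICAL_PRIORITY_CLASSES:
            order.append((name, min(requested, critical_grace_period)))
    return order

pods = [("logging-agent", "system-node-critical", 30),
        ("web-app", "", 45)]
# web-app goes first with min(45, 30 - 2) = 28s, then logging-agent with 2s.
print(shutdown_order(pods, 30, 2))
```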
D
So what I wanted to discuss with the community here is a couple of questions. First of all, if anyone's interested, I would love more feedback. There are a couple of things we wanted to discuss that I saw questions about.

One of the things I wanted to do is minimize the amount of design, or configuration, that's required, so we only provide one tunable knob, which is shutdownGracePeriod. However, there's a trade-off, because we want to allocate a different amount of time for the critical pods than for your user pods.

So what I suggested in the KEP is to hard-code two seconds as the default shutdown period for system-critical pods. Two seconds is defined as the minimum in the kubelet: if you don't specify any grace periods, it's two seconds by default, which is what we default to today. So I wanted to understand whether people think that's worth configuring as a separate option, or whether we should stick with the hard-coded value. That was one question. Yeah.
F
Yeah, sorry, I missed that when I read it (this is Derek), or I didn't connect the dots the way I should have. I had assumed the proposed flow was basically: the kubelet gets the shutdown event, and it goes and tells systemd to hold off for X period of time.

Second, I'm trying to think whether two seconds would be bad. Thinking through what many people would view as critical pods, it'd be things like logging daemons and the network, where, if you don't have ordering among them, maybe two seconds is too harsh to shut them down, depending on what the work is. But if you can't depend on two of them together, maybe it's not the end of the world either, I guess, for me.
D
Yeah. I guess the issue is that on some cloud providers, for example on Google Cloud with preemptible VMs, we get 30 seconds to shut down, so every second counts, and you want to prioritize users' workloads getting the most time. But I understand that on other providers, or on bare metal, you maybe have more flexibility. So I don't know; maybe it should be configurable, but then it requires more flags. I don't know.
G
Just go through the list of pods that are critical on the node, figure out what the maximum terminationGracePeriodSeconds among them is, and subtract that from the total time, so that you have the longest-grace-period critical pod as the remainder at the end. In most situations normal users can't create cluster-critical pods, so you'd be guarded against someone just creating a pod with a really long terminationGracePeriodSeconds.
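This suggestion amounts to replacing the hard-coded two seconds with a value derived from the critical pods themselves; a hypothetical helper, not kubelet code:

```python
def split_budget(total_seconds, critical_pods_grace_seconds):
    """Reserve the maximum terminationGracePeriodSeconds among the node's
    critical pods at the tail of the shutdown window, instead of a
    hard-coded two seconds.  Returns (regular_pod_budget, reserved_critical_budget).
    Hypothetical helper for the suggestion above."""
    reserved = max(critical_pods_grace_seconds, default=0)
    return total_seconds - reserved, reserved

# 30s total, critical pods asking for 5s/12s/3s: regular pods get 18s,
# critical pods get the final 12s.
print(split_budget(30, [5, 12, 3]))  # (18, 12)
```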
A
Yeah, in the KEP review I did question the two seconds, but I understand where it came from: it came from the previous release, where two seconds was predefined for this. But, David, since we are redesigning this shutdown logic, I'd think about whether that's acceptable for these use cases. That's what I think we should consider.
D
Yeah. So another thing to bring up, as Dawn mentioned: the way this works is that we need to request some amount of time from systemd to delay the shutdown by, and the default in systemd, I believe, is five seconds. It's this InhibitDelayMaxSec.
D
It's a property under logind that's configured per distro, and the default I've seen on Debian and Ubuntu and so on is five seconds. But if the user passes in something else in the kubelet config, the proposal is that we'll write a config file for logind to update that value, and send a SIGHUP to logind to refresh it.
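For illustration, the drop-in might look like the following; the file path and the 30-second value are assumptions for this sketch, but InhibitDelayMaxSec is the real logind setting under discussion:

```ini
# /etc/systemd/logind.conf.d/99-kubelet.conf  (illustrative path)
[Login]
# Raise logind's cap on shutdown inhibitor delays so that it covers the
# kubelet's configured shutdownGracePeriod, e.g. 30s instead of the 5s
# default seen on Debian/Ubuntu.
InhibitDelayMaxSec=30
```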
I
Oh, hey David, I had a quick question: would it be possible for the kubelet to send an event out saying that the node is going to shut down?
I
I think a lot of controllers or operators around Kubernetes could accept an event, maybe on a channel or something. That would be a great addition, maybe.
I
And is this done using Linux priority-based shutdown, with, say, minus 19 being the highest priority for stuff to get done?
I
I know that Linux has a priority level set between minus 20 and plus 20, and the negative values seem to be the highest priority an application takes.
D
Oh, I see what you're saying. I don't think this will actually change that, because it'll simply ask systemd; everything will continue running as is, and it'll just delay the shutdown by that period of time. So it shouldn't change any of the priorities of the underlying processes or anything like that.
D
Yeah, regarding events: that's actually another thing we wanted to bring up briefly. If you're referring to an event inside the kubelet, we can probably make that something other components can hook into. In terms of events that could be seen on the API server, or by controllers running elsewhere, one idea we also had was to add a taint, or potentially a condition. I guess this is another question I wanted to bring up.

We wanted to make sure that during shutdown new pods aren't being scheduled; since the node is going to shut down anyway, we don't want new pods scheduled there. So the question is: should the kubelet maybe taint itself? I haven't seen that kind of pattern, and then the question becomes what would remove the taint later. Or it's potentially worth adding a new node condition, a node-shutting-down condition or something like that, and controllers could hook into that by reading the taint, or the condition, on the node.
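Nothing was decided here, but purely as an illustration, such a condition might surface in node status as below; the type and reason names are invented for this sketch and do not exist in Kubernetes:

```yaml
status:
  conditions:
    - type: Shutdown            # hypothetical condition type, illustration only
      status: "True"
      reason: GracefulShutdownStarted
      message: Node received a shutdown event; pods are being terminated
```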
D
So I wanted to ask another question, maybe Dawn or Derek, if you have any thoughts, in terms of communicating this up to the API server: what's the pattern here? I haven't seen the kubelet taint itself as a pattern. Do you think a new node condition would work, or should the kubelet try to taint itself? Any thoughts on the best way to represent that?
A
There was an effort around this; people wanted to know the condition. That's more from the API perspective, but we never reached consensus at that moment. I saw a comment come through the chat with a question: do you want to clarify the question here?
J
Sure. It seems like there are a couple of things that could be done. The kubelet, when it detects it's shutting down, could cordon itself and set itself to drain, and it should automatically clean up all the pods off it at that point. The other thing would be to go ahead and set a node condition that marks it as not ready. That's about all I can think of.
F
Yeah, so I think we have to separate the two use cases that might motivate this action. One would be anyone who's running Kubernetes on metal, for example, where they would drain but ignore DaemonSets, and then still power-cycle the box. In that flow the node is probably already cordoned; it's already gone through a drain.
F
It's just that the remaining daemons running there are not integrated into the shutdown sequence, and so from that perspective, giving that maintenance activity two seconds for things that are critical is kind of artificial.
F
Yeah, and there are a lot of folks running Kubernetes in that fashion, so one should account for them. The other case is the preemptible one, where you are time-bound; that's the scenario where you are likely not to be drained. But that varies per cloud, actually, so I can't remember.
D
Yeah. In terms of the cloud aspect, usually there isn't enough time to actually drain: by the time someone detects that the node is shutting down and actually drains it, it's too late in those time-bound situations. That's partly why the proposal makes the kubelet the one that's actually aware of the event. In terms of cloud providers, each one often has a metadata server that the VM can reach out to, but that's cloud-provider-specific. The benefit here is that this looks directly at the ACPI event, which works across every cloud, and on bare metal as well. That's the idea.
F
I don't view it as a blocker to making progress. I think it's worth getting feedback on how people choose to monitor power cycles across particular environments, and whether that condition would actually provide them value, because in the bare-metal case, if that machine reboots 15 minutes later, it might already go NotReady, or at least node status Unknown, and that already integrates into their monitoring environments. So basically I would table that to a separate topic.
D
Okay, that makes sense, yeah. We can discuss offline whether it's worth creating a taint or condition. Okay, yeah.
A
What people suggest is just: once you receive this signal, mark it, send graceful termination to the non-critical pods, and then you should be able to send a status back and mark the node as not ready.

Right, so we don't need to introduce a new mechanism here, no new condition; use the existing NotReady. I remember we do have a message to clarify in detail why it is not ready, so you could totally say it received a shutdown event. There's another point, the Windows question mentioned in the chat; I think that's real, so let's include that one, at least as a to-do for later, unless it's a blocker for the alpha feature.
D
One more thing to mention in terms of the design: we're making use of the systemd inhibitors, but it makes sense, as Mark mentioned in the chat, to make this a little more extensible, putting it behind an interface or something like that, which could be implemented by other init systems, or on Windows. So that's the other thing I'm thinking about.

Cool, okay. I don't know if there are any other questions; otherwise we can just take it offline. Yeah.
F
I guess: is this something that we're able to make forward progress on in 1.20, if we can iron out the last remaining questions? Is this something, Mrunal or David, you'd be able to help push the implementation forward on?
D
Yeah, I'm definitely happy to help on the implementation side. I'm not sure, timeline-wise, whether 1.20 is realistic or not, but definitely 1.20 or 1.21 is what I'm aiming for.
F
Once the KEP is merged, add it into the 1.20 milestone.
A
So if I understand you correctly, you're asking to publish this update as Kubernetes events to the API server. We totally can do that, and it should be straightforward, but my concern is if you're also basing some logic on that event. I don't want to say the Kubernetes event is the best mechanism; I want to make sure nothing really depends on the event, because delivery of Kubernetes events is not guaranteed today. Oh, I lost everyone.
I
Yeah, I think I got the gist. The use case I had in mind for eventing is that before the node shuts down, it can just tell any controller, which might be running on the master node, and if the controller knows, it can decide what to do with the CRDs or whatever it's controlling. That's what I had in mind.
A
So I'd just say you cannot depend on the event. But we did just talk about maybe sending one more status update, telling you that the node is not ready and giving you the reason why. Maybe you want to base your controller on the node status, please, not on the events; events are not guaranteed, right?
F
Yeah. I don't know if you want to go through the updates; I haven't seen the latest iteration since my prior review, but if everything was updated, that's fine.
C
Again, from what I saw, it can bring additional problems if you just disable it via kernel arguments. I know that kernel 5.9 should already include the fix, and it probably should be backported.

But until then, we can provide a section under the resource guarantees to let people know that they will probably need to request more memory than the real workload in the container needs, because kernel memory is also calculated as part of the container memory. And if you have a lot of CPUs, you will probably have a lot of kernel memory used and summarized under the container. So I can prepare a section in the documentation, if that's good enough. What do you think?
F
Okay, I'll review the latest comments. It's good to know that things will be improved in a later kernel, and I'll take a look at that. But on the question of whether the overhead of kmem accounting was reduced when cpusets were being restricted, I don't know if that was resolved; I'll take a look either way.

Some of these issues are still pertinent to how normal memory accounting works in kube, so it wasn't necessarily a blocker, but more a thing I want to make sure we're aware of as people get ever-increasing guarantees from Kubernetes. So anyway, I'll take a look. Thanks for the updates.
K
A small couple of comments as well, which were not addressed in the latest iteration of this KEP. Sure.
C
Okay, I will check the comments and answer. Sorry if I missed them.
L
Sorry, there was also another issue, related to exposing metrics in the kubelet to show NUMA-related information; it was targeted for the beta release of the memory manager. So if you could also look at that, besides the kernel memory accounting issue: I think these are the two major issues that should be discussed for beta. That's everything I wanted to highlight. Okay, thanks.
F
Okay, so I'll take another review pass over the latest updates, either this afternoon or tomorrow morning, and we'll try to at least get this KEP into a mergeable state to meet the alpha target. I know there's a lot of desire to move forward on this, and looking at the comments here, I can see some things got recommended for moving to the beta phase and that type of thing.
M
Yeah, I wanted to give a quick update. We have the KEP; it's currently still in a Google Doc, but it's getting converted to an .md file for us to put in the PR, hopefully sometime today. There are some big updates on the design and investigation front.

One of the original design components, as currently listed in the KEP, was that we were only enabling all-privileged pods, i.e. not enabling mixed privileged and non-privileged pods.
M
However, we also discovered a separate scenario, around service meshes, where there are ephemeral init containers that are privileged containers, which might be deployed to non-privileged pods and may require host network access. So if we do succeed in the first step, aligning privileged containers with the network compartment of a non-privileged pod, that might block this other scenario with init containers, where host access is still required.

So we're trying to address these two investigation points, where one leads to another; that's an ongoing investigation. Our current plan is still to move forward and see if we can at least get our KEP into a state where we can PR it, and find ways to close out some of those items. That's the current status update.
M
Yeah. As part of that, I'm making edits in the Google Doc, which I then transfer into the .md file that's being created now. Hopefully, after today, after that conversion, we're just going to go off the .md file and not work with the Google Doc anymore. So those things are slowly getting incorporated; I'm making my way down the doc right now.
F
Okay. And the hope was to get this in in this coming release?
M
Yeah, that's still the hope, though, as I mentioned, these two scenarios that came up with the pod networking were things we really had to take a look at, so it is coming in rather hot for 1.20. We're still going to try to pursue it.

But if not, we do want to still have this KEP in a provisional state, or see if there's some way to provide this to the community in some state within this time frame. We're still trying to assess whether these networking issues, or whichever path we take to remedy them for now, would require significant API changes. We don't want to have to roll those API changes back, whichever path we choose. If we can find a way to do this in 1.20 that lets us address these issues, either now or in the coming time frame, without significant changes to those APIs, that's what we would hope for; but that's still under assessment, since it came up recently.
M
We're having some discussions about what it would take to make those things possible, and I think one thing that would help us, and we brought this up in SIG Windows as well, is this: we're familiar with one init container scenario, where an ephemeral, privileged container is deployed to a non-privileged pod. We're looking to find other scenarios that might pertain, like other service meshes, or other scenarios of that nature.

That way we can get a broader picture of these two scenarios, which seem a little bit at odds with each other in terms of the implementation we might be able to take. So I think that would help. They are critical, which is why we're trying to assess whether we see a path forward in this time frame or not, and, if we don't, what we can do in the interim to still have some of this functionality continue to be refined, tested, or provided to people.
A
Yeah, I agree with Mark here. And also, I know we don't support all the use cases, but could we scope out an initial release and support a certain set of end-to-end use cases, so that we can present it to our users? That's kind of what came up earlier; Amber, you just mentioned there are certain use cases we're still trying to explore.

So if we could do something, then we could maybe gather more feedback, more customer input; that's also valuable. But it looks like right now we haven't even figured out what the initial scope is, which concerns me a little, because we just have a couple of days before the deadline to accept this one. So we will see.

I agree with Mark: at least present this in KEP format, the standard KEP format, mark it as provisional, and then allow the community to share more use cases and more feedback there. So maybe that.
E
Yeah, and a lot of this work is going to need changes both to the Kubernetes API, the objects users interact with, and to the CRI. I think right now we're trying to make sure we can identify all of the proposed CRI changes, so that we don't have to change them between an alpha and a beta implementation.
F
Yes, maybe a special call-out on that: we did meet on the CRI discussions, and Mrunal and Mike Brown were putting together a doc. If there's anything you find that's backward-incompatible, it would be good to raise that as soon as possible, but I'm assuming, I think, everything that was there was hopefully forward-compatible changes.
E
Yeah; but the use case that Amber brought up may, I think, change some of the proposed designs for the CRI, so we're trying to reconcile that with any CRI changes we would need, and how we would pass the information through. But yes, we'll keep that up to date.
N
Yes, hi Quinn. I think, given the time limits today, since we are running out of time, I would just invite everyone to check out the KEP and leave comments there. I'm also trying to include everything we discussed last time, and a more detailed proposal. I'm not completely sure about the process here, but I assume some comments and reviews should be done, so I would just invite those, if anyone can do that.
A
If I remember correctly, one of the questions last time was to be clearer about the use cases, so: put the use cases, the user stories, into your KEP. Do you want to talk about that? Or maybe we don't have to, so we can move forward. Yeah.
N
Yeah, sure. Let me share my screen then; I can quickly show them as well while I talk. Do you see GitHub?
N
Okay, perfect. So I mostly described two user stories. One is really about debugging and logging support, so that the container itself can send logs to the end user, or to some logging system, and can include information about exactly which image is running.

The other user story is about reproducibility, in the sense that if you have a science community, like we have at our university, running Kubernetes as a job system to trigger jobs to run on some data, generally they reuse job descriptions with some :latest tag on the Docker image. On the other hand, you want to look inside the container and see exactly which image was picked.

:latest does not always refer to the same underlying image, so if this information were available easily through the downward API, for example as an environment variable, the underlying container code could log it, or store it together with, for example, information about what the input to the job was.
A
I haven't read your KEP yet, and I need to internalize your use cases more. So, does anyone have questions?
N
I mean, it's not the image data; it's literally the digest of the image, the ID of the image, the hash, so that at a later time you can rerun the same thing, or really know exactly which image was used. So if you are debugging something, say a user reports an issue and you want to bring up exactly the same environment they had when they hit it, the easiest way is if you have a hash-pinned Docker image reference.
N
Okay, so the issue is that it is available outside of the container, and what I'm proposing is to allow it to be available inside the container. You already have the downward API in Kubernetes, and it allows a nice list of things to be available downstream, like resource limits and such, but you cannot get information about the image which is being run. So you have an image being run that you cannot get information about.
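For context, the downward API today exposes pod metadata and resource fields as environment variables; the image field being proposed does not exist yet, so it appears below only as a commented placeholder:

```yaml
env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name   # supported by the downward API today
  # Proposed (hypothetical; no such fieldPath exists yet): expose the
  # running container's image name and image ID/digest the same way.
```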
N
Yeah, and so I would like the image and the digest of the image, or the image ID; I mean both the image name and the image ID, just for completeness. I think it's pretty straightforward.
O
Yeah; have you taken a look at the container inspect output, to see if it's there for the container?
A
Okay, so it looks like we need to review your KEP and think about it more. Thanks, Mitar; we will follow up on your KEP. Is that okay?
A
Thank you. And next, Sergey, do you want to talk about the two topics you proposed here?
B
Yeah, we don't have much time left in the meeting, but these are very small topics. First one: Andrew wrote this KEP, and it's exactly what we discussed two or three meetings back. It's about timeouts on exec probes: we want to start respecting them, and this KEP introduces a flag so that the fix for this bug, as we identified it, can be rolled out gradually; you can disable it and return to the previous behavior.
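For reference, the timeout in question is the standard timeoutSeconds field on an exec probe; before this fix, the exec-probe implementations discussed here did not enforce it:

```yaml
livenessProbe:
  exec:
    command: ["sh", "-c", "cat /tmp/healthy"]
  timeoutSeconds: 5    # previously not enforced for exec probes
  periodSeconds: 10
```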
A
Before this meeting I told Sergey I want to take one more look at this one. Okay.
B
So, Derek, you weren't at the meeting when we discussed it, so it may be beneficial just to glance over it. In a nutshell: neither dockershim nor containerd respected the timeout specified on probes for exec probes. We just want to start respecting them, with the caveat that the process run inside the container will not be killed. That is the only gotcha people need to be aware of.
F
Okay, yeah. I think I was at least at one of the conversations about this. Anyway, Dawn, I'm happy to let you take the final look.
B
Thank you. And the next one is RuntimeClass GA. We have a lot of interest in taking RuntimeClass to GA. I put together a document outlining some blocking issues and the test coverage we want to increase. From the test coverage perspective, we have some ideas of what to do to improve it even further, and I started a conversation with the gVisor team.
B
Maybe we want to run more tests with gVisor, and that would also increase test coverage, but it's all up for debate. I think what we propose in this document is good enough already, and we have all the pull requests out, so once they are approved we will have good test coverage. So the only question: we identified this feature request that may be blocking, and I'm not sure if Tim is on the meeting. I'll ask you... oh, Tim, you're here.
P
Yeah, I think the only thing I would add to that is we sort of have a choice, if we wanted to have a default runtime class: either kind of punting that out to a third-party mutating admission controller that sets it based on policy, where you can have finer-grained controls, like this namespace has a different default from another namespace or whatnot, or making that default selection a built-in part of RuntimeClass.
P
This doesn't necessarily need to be something that blocks GA, but I thought that, in case it ends up being more impactful on the API, it might be something that we should at least discuss before moving forward with GA.
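For context, the mechanism being debated: a RuntimeClass object names a runtime handler, and a pod opts in via `runtimeClassName`; when the field is omitted there is no built-in default. A minimal sketch (handler name and pod details are illustrative, not from the meeting):

```yaml
apiVersion: node.k8s.io/v1beta1   # API version current at the time of this meeting
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc          # must match a handler configured in the CRI runtime
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-pod
spec:
  runtimeClassName: gvisor   # omit this field and the runtime's own default applies
  containers:
  - name: app
    image: registry.example.com/app:1.0   # placeholder image
```

A mutating admission webhook, as Tim suggests, could inject `runtimeClassName` per namespace; the open question is whether such defaulting should instead be built into the API.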
A
Yeah, I kind of agree with you, Tim, on this one, because I think, when we started this, the default was kind of one hot topic that we were talking about, and...
A
And I want to hear other people's opinions, but I think at that time we did think that this was blocking for GA, and I'm not sure how we are going to handle that, how long to punt on that one. And I have a concern that if we don't define those kinds of things...
A
So then we may end up missing something after graduating to GA, and...
A
And we end up... because we have had cases like that before: we graduated something to GA, but we didn't really think through all the default use cases and other configuration, and then later we found it didn't really fit generic use cases. So that's why I'm a little concerned. I don't know; I want to hear other people's opinions on this one.
F
Yeah, so RuntimeClass to me is an opaque concept, independent even of the alternate runtime integration choices that people use the feature for, right? If you're using gVisor or Kata or that type of thing. But RuntimeClass is really, in my mind, the back door to pass an opaque set of attributes to some other system, and that opaqueness makes it really hard to say there should be a default.
F
Does that default differ if it's a Windows versus a Linux pod? Yes, and so then you have two defaults, and the matrix will grow. I'm thinking back to one of my hesitations when we did RuntimeClass, which was ever tying specific behavior or meaning to a particular runtime class, because it was basically an extension point for vendors to innovate, and I don't know how we tie very specific behaviors to this opaque extension point.
A
My argument is: it is what it is, the conformance test, right? So what do we provide? An end-to-end experience. I thought that was also the CRI project, which we started initially, like, two or three years ago. We basically have something like a hint: some of the implementations say what can be used to conform, right? So without something like a default level of runtime, forget about RuntimeClass, this new concept, there is still some default runtime, right?
A
So then that's part of the conformance test. So Windows actually is not conformant, right? That's kind of why, when we did the Windows work earlier, for the Windows alpha to GA, the question was how to unblock Windows containers. We tried to propose: okay, here's the additional feature beyond the conformance test, because the huge pushback from SIG Architecture was that the control plane cannot run in a Windows container. So I went there to talk to them.
A
So now it only supports, kind of, a Windows node, and a Windows control plane is a totally separate project that we can talk about later if needed. But I wanted the Windows node supported so it can run Windows applications, the workloads. But then we had the question: what is the conformance test then? So now we came up with the SIG Node testing proposal, which has required features and additional features: the master has the feature, plus additional features, all these kinds of things. So Windows is there. So forget about RuntimeClass.
A
What is the default? I just want to say: we do have something that implies a default container runtime, but then we have also endorsed additional container runtimes, because we did the testing, we did the validation that they are CRI-compatible, and then we have, like, additional features. So those are the different levels of support SIG Node tries to give to users. Forget about all those vendors; that's kind of what the community...
A
We try to agree, and if Windows can pass the conformance test, then it's built in, like a default. So those are the different levels: we try to maintain velocity and support more features, but at the same time there are the basic out-of-the-box things we can give to our customers, give to users. I mean not customers; customers have a business. Just users; that's the open source perspective.
F
Context... but maybe, I know we're over time, we can discuss this separately, I guess. But to me, the pod spec is the default runtime class, and if it's not expressed on the pod spec, then...
A
Okay, so we need to follow up; we will follow up after this one. And I believe there's the other topic, which is Sergey's, basically just to call out attention and remind everyone.
B
Yeah, I want to remind everybody that there is a sidecar container meeting doodle out, so, Derek, I think we were waiting for you to fill in your availability. We really appreciate your time here. And I wanted to remind everyone that we already had the first CRI alpha-to-beta meeting, and the second one will be tomorrow, 11 to 12 PT, Pacific Time. There are notes documents to check out what happened in the first meeting. Now, is it still up?