From YouTube: Kubernetes SIG Node 20190305
Description
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
A: Can you see? All right. So, I'm working with legacy applications, and they have a tendency to take one to two minutes to start, and it can be even worse when there is some load on the nodes. I found it very difficult to tune the settings for the probes, especially the liveness probe, because it would kill my container before it had finished starting. So, in this KEP... actually, before doing the KEP, there was a pull request dating from probably eight months ago now, and suddenly nothing changed for a few months. And finally, you decided to contact me, so: give me the KEP and let's talk about it.
A: So basically, I just proposed to add a new setting to the probes that would allow specifying a higher number of failures before the probe succeeds for the first time, and then it all reverts to the same behavior as before.
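
As a rough sketch of the proposal, a pod spec might look like the following. This is illustrative only: maxInitialFailureCount is the field name used in this discussion (a proposal, not a shipped Kubernetes field), and the image is a placeholder.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: legacy-app
spec:
  containers:
  - name: app
    image: example.com/legacy-app:1.0   # placeholder image
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 10
      failureThreshold: 3        # normal threshold once the probe has succeeded once
      maxInitialFailureCount: 12 # proposed: failures tolerated before the first success
```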
A: The initial delay still exists, and then after this initial delay you start running the probes. If the probe succeeds, then we mark somewhere that the container has started successfully at least once, and then we are back to the normal behavior of the probes. But if, during the initial start, it fails this maxInitialFailureCount times, then it is killed.
A: Yeah, here you can see that if my container, when it starts, will take at most T seconds to start, then I have to make maxInitialFailureCount times periodSeconds greater than T, to allow my container to fully start before it is killed by the liveness probe.
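
A worked example of that sizing rule, assuming a container that can take up to T = 120 seconds to start (maxInitialFailureCount is the hypothetical field from the proposal above):

```yaml
# Need: maxInitialFailureCount * periodSeconds > T
# With periodSeconds: 10 and T = 120s, any value above 12 works:
#   13 * 10s = 130s > 120s  -> the container survives its startup window
livenessProbe:
  periodSeconds: 10
  failureThreshold: 3
  maxInitialFailureCount: 13
```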
A: So there was something in the PR: there were some concerns about storing in the kubelet the state of the container, I mean of the probe, sorry, and that the kubelet doesn't persist this. So if at some point the kubelet restarts, then we forget that we already succeeded once.
A
If
there
is
like
a
very
small
chance
that
if
you
are
real
and
lucky,
if
the
the
cubelet
restarts
just
before
after
your
container
has
deadlocked,
then
you
will
have
to
wait
this
tea
time
before
it
is
killed,
but
otherwise
for
most
of
people.
If
you
don't
use
this
setting,
it
will
not
change
your
life
and
if
you
use
it
wisely,
it's
an
improvement
rather
than
yeah
Tennessee.
B: This is really good, actually. Derek, maybe we could do this as soon as possible, since we already have a list of the reviewers, and this is separate from the current KEP; we know that one will take time, and there's a reviewer assigned for this one. Instead, right now, there are some people who always do reviews and some people who never come to review, so it could take time.
B
Like
the
people
said,
we
already
have
the
couple
area
right
for
the
signal,
and
then
we
define
those
area
and
people
register
through
the
want
to
be
the
reviewer
for
the
different
area
and
all
the
people
be
the
signal,
the
video
and
they
can
register.
And
then
once
we
have
the
car
and
the
proposal,
even
at
the
PR
just
fix
the
issue.
We
could
adjust
to
have
the
system
to
awning
like
the
one
by
one
you'll
pay
to
be
the
reviewer
I.
Don't
know,
I
just
accept
me
from
the
cab.
G: I'd love a way of using this to grow reviewers and then grow approvers afterwards. I just want to make sure that, like, when we say we target something for 1.15, we need to actually make sure people put eyeballs on it.
I: So thanks to Derek, the KEP has been merged, but I'm not sure if we still need to submit an exception request to track this feature for 1.14. I've put in a link to the... I created an issue to track all the PRs. We split up the PRs for the code to make it easier to review. The code hasn't gotten much review at all yet, so I'm just not sure of the next steps: do we submit the exception request and continue with code review, or what do we do next?
G: Yeah. So, Lisa and Connor, and the broader team: I feel like this particular enhancement fell through the cracks when we did 1.14 planning. I know the original design went to the community repo, and then I don't think we did the best job figuring out feature planning for 1.14, so I think we can aspire to do better the next release. But I personally would not be comfortable trying to pump this into 1.14 this late.
I: Yeah, I think that's fair enough. So for 1.15 then, do we just get the reviews done? Is there anything additional I need to do at the moment for 1.15, or do we just start reviewing the code? Is that the next step? Yeah.
G: I'm happy to shepherd on it, and probably would ask others to assist who were interested and had looked at this in the past. I know Vish had looked at this area in the past; I don't know if Vish is here, if he wants to jump in, or David. But I'm sure, across the set of us, I'd like to unblock you guys in 1.15.
C: So the biggest change is that, per a decision from SIG Architecture, we're moving the RuntimeClass API from a CRD, like a core CRD, to a built-in API. This is kind of mirroring the decision around CSI, and the reasoning around it is kind of complicated, but basically they just decided that CRDs weren't ready to support core APIs in their current state, and that will probably be revisited.
C: And so then, in terms of moving to the beta API, I think there are a couple of changes to the API. One is getting rid of the spec, so runtimeHandler now becomes just a top-level field on RuntimeClass. This is basically to make the API more consistent with the other class objects, StorageClass and PriorityClass: deciding that status doesn't really have a place on this, spec just becomes an unnecessary layer of nesting.
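
Concretely, the shape change looks roughly like this (a sketch, assuming the beta lands in node.k8s.io/v1beta1; the gvisor name is just an example):

```yaml
# Alpha, CRD-based shape: the handler is nested under spec.
apiVersion: node.k8s.io/v1alpha1
kind: RuntimeClass
metadata:
  name: gvisor
spec:
  runtimeHandler: gvisor
---
# Beta, built-in shape discussed here: spec is dropped and the handler
# becomes a top-level field, mirroring StorageClass and PriorityClass.
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: gvisor
handler: gvisor
```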
C: If you have a pod that doesn't request a runtime class, that'll still be piped through as blank to the CRI, but the expectation is that CRI implementations should now have a runtime handler set up for the default case. In other words, instead of just passing in an empty string to represent the native runc runtime handler, we would actually set up a runtime handler in the CRI called runc, or something like that, and have that be explicitly stated on the RuntimeClass.
G: Out of curiosity, I wasn't at the SIG Architecture meeting where this was discussed, but I'm just trying to think through the mechanical process. So we had a feature gate called RuntimeClass, and it had alpha storage in one format, and now we're changing the storage format to another in beta. Was there a discussion on creating a net-new feature gate name, basically because you're destroying all previous iterations of it? Or is it just the idea that if you were using alpha stuff, you should have destroyed the cluster anyway?
C: It actually doesn't completely destroy it. For instance, if you have a version skew from your master to your nodes, those, you know, 1.12 nodes using the alpha CRD-based API will actually still work with a 1.14 master, so they'll still be able to read that v1alpha1 API. It just means that you need to recreate the runtime classes so that they're stored in this new format. The idea of using a different feature gate didn't come up.
C
Well,
so
the
the
initial
runtime
class,
like
CRD
piece,
isn't
even
really
automated.
It's
part
of
addon
manager,
but
not
all
distributions
use
addon
manager.
So
the
answer
is
no,
but
you
know
distributions
that
are
creating.
Runtime
classes
can
just
set
up
code
to
recreate
those
one-time
classes
an
upgrade
or,
if
or
if
they're,
using
something
like
a
done
manager
to
continuously
apply
that
spec,
then,
as
soon
as
it
upgrades
it'll
get
applied
and
recreated
automatically.
C: And I guess the last update is: I have a couple of WIP PRs to add an e2e test for a non-default runtime handler. This is a bit complicated because it requires some custom cluster configuration, so we decided to just add a new e2e test and configure the containerd nodes with a test handler defined, and we can gradually expand the set of configurations that have a handler defined, so we can test a broader range of things.
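
The e2e setup described here might look something like the following, assuming a handler named test-handler is defined in the containerd configuration on the test nodes (names are illustrative):

```yaml
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: test-handler
handler: test-handler   # must match a handler defined on the node's CRI runtime
```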
G: This always felt to me like something where we should not mandate every distributor of Kubernetes to support, or not support, particular names of runtime classes or variants of it, all the way back to when we called them secure containers, which I thought was a misnomer. So I guess that's always been my concern with runtime class: whether there was going to be a bar that required this, to get elevated, to be broadly supported by every distribution in the world, which I did not think was ever the intention in the SIG. Yes.
B: Any other comment, any other question related to this one? So basically there's good news and bad news. The good news: this built-in API simplifies a lot of things in the long run. The bad news is, maybe we missed the 1.14 window to promote this to beta. We were also hesitant, honestly, to ask you to promote this to beta, but we'd like to have this together.
B
I
think
the
Patrick
Patrick
you
you
use
and
you
have
cause
you
just.
F: I had a quick question. So, for SIG Windows, we've got a list of PRs that we're basically trying to get closed for 1.14, and a couple of those overlap with needing review from SIG Node. Do you have some sort of tracking list that you're looking at for what you're going to focus reviewers on before Friday?
B: So far we don't have one, but we could talk about the process, if we think that's more efficient.
G: I think for 1.15, when we have a better, crisper planning process, we'll go back to our normal order, which has been: these were the things that we were targeting, and let's have SIG meetings leading up to the end of the release saying, are we on track for where we wanted to be? I feel like we did a good job on that in releases before 1.14, and probably life's gotten away from all of us this time, I think.
B: You know, we changed the format for 1.14 just because of the KEP process, which has much more procedure. That's why we moved away from the old way. From that perspective, we used to always share the planning upfront and then talk about those items, but this time we have the much more formal KEP process, so basically we felt that it could be a nice match. I personally thought it could replace the old process; I don't know.
G
Disagree,
though,
and
I
just
think,
even
in
my
own
experience,
the
definition
of
the
kept
process
was
fluid
up
until
the
Future,
Free,
State
and
I
know.
Many
of
us
were
writing
caps
for
things
that
have
been
in
flight
before
and
I
just
think
the
transition
to
new
process
at
1:14
as
new
learnings
for
all
of
us.
Yes,
in
115
I
hope
we
have
a
list
of
caps
that
we've
identified
as
implementable
or
tracking
to
goals
and,
as
we
close
out
115,
we
can
checkpoint
the
status
on
those
caps.
J: The proposed solution, basically, for fixing the CVE: previously they copied the runc binary away into memory (a memfd) directly, to prevent people from overwriting it, but that added real memory usage and caused issues, and apparently there's a new proposed approach: temporarily create a read-only bind mount of the runc binary instead.