From YouTube: Kubernetes SIG Node 20200908
B
Yes, okay, so yeah, I put up the basic statistics. And what I did this time — I really liked what you did last time, going through all the rotten PRs and trying to fish out whatever was rotten unintentionally — so I found two PRs that may be rotten unintentionally. If you can take a look at those, maybe you need to reopen and merge them; both seem pretty safe and useful. They're minor, though.
A
Yeah, thanks Sergey, for keeping the community here and our PRs up to date. And also, I noticed that because we kicked off the discussion last week about the probe timeout, you closed all those related PRs, since they are out of date and not aligned with what we're discussing. So thanks for that effort.

A
So for the next one, let's follow up with Andrew. I'll make you the co-host if you want to share anything. So let's move on to the topic.
C
Sure — can folks hear me okay? I've been having some audio issues this morning.
C
All right. So last week Sergey and I had brought up the issue around the kubelet not respecting exec probe timeouts. There was a bunch of back and forth, and I don't think we reached an agreement, so Dawn had asked me to summarize the state of things and potentially come up with a proposal. And at the end of this call, if we can't agree to it, I'm more than happy to create an official KEP for this.

C
We can discuss this further in the KEP, but ideally we can agree to what's being proposed here. One source of confusion I found from last week's meeting: the issue with the kubelet today is that it doesn't respect the exec probe timeout, but that doesn't mean the exec timeout is not respected by the actual runtime. I think that was the source of confusion, where people were wondering whether a runtime like containerd actually respected the exec timeout.
C
With dockershim, the probe timeout we pass into dockershim is not respected at all — there's some hard-coded logic in there that will run the exec and then inspect the state of that exec five times over 10 seconds. After those 10 seconds, any probe that times out is reported as successful.
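(For illustration, here is a minimal Go sketch of the kind of hard-coded polling loop being described. It is an assumption-laden illustration, not the actual dockershim source: the function name and the five-checks-over-ten-seconds timing are taken only from the description above.)

```go
package main

import (
	"fmt"
	"time"
)

// waitForExec is a stand-in for the hard-coded dockershim behavior described
// above: poll the exec's state a fixed number of times, then give up,
// regardless of the probe's configured timeoutSeconds.
func waitForExec(inspect func() (exited bool, exitCode int)) (int, error) {
	const (
		attempts = 5
		interval = 2 * time.Second // roughly the 10-second window mentioned in the meeting
	)
	for i := 0; i < attempts; i++ {
		if exited, code := inspect(); exited {
			return code, nil
		}
		time.Sleep(interval)
	}
	// Still running after the fixed window: no error is surfaced, so the
	// caller ends up reporting the probe as successful instead of timed out.
	return 0, nil
}

func main() {
	code, err := waitForExec(func() (bool, int) { return false, 0 }) // an exec that never exits
	fmt.Println(code, err)
}
```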
C
Then in containerd, or basically any other implementation of CRI, the timeout that we pass into the exec is respected, so after the timeout the process is killed and so on. But the kubelet probe worker that is checking the error from the exec probe is not checking the error properly, and so what's happening is that the error is being ignored by the kubelet. Any questions so far, before I talk about next steps?
E
A good question, Andrew. Is containerd returning a deadline-exceeded on timeout right now, or is it returning an exit code? Because in CRI-O, when we were implementing the CRI, we had to make sure to match what the kubelet was expecting. We were not able to return a gRPC error, because that didn't work as expected, and we had to craft the exact response that the kubelet side expected, which was returning an exit code or something — I'd love to check.
C
Yeah — I do believe, and containerd folks who are more familiar can correct me, but I do believe we are returning deadline exceeded. Okay, but then the kubelet has no special checks for deadline exceeded, so it's just treating it as a standard error, which the kubelet prober then considers as just an unknown state; it doesn't register as an actual failure, and that's really kind of where the bug is. Okay.
C
Okay. So what I'd like to propose is that we merge the current PR that's open, which fixes the exec probe timeout by adding logic in dockershim to actually respect the timeout and return a proper error, and then, for the remote CRI runtime, checking specifically for deadline exceeded and returning the proper error that the prober is expecting. It's also going to re-enable the e2e test for liveness exec probe timeouts and add a new one for readiness exec probe timeouts, so that we can catch this regression next time.
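(As a rough illustration of the remote-CRI part of that proposal, here is a hedged Go sketch. It assumes the runtime surfaces the timeout as a gRPC DeadlineExceeded, per the discussion above; the Result type and function name are stand-ins invented for this sketch, not the kubelet's actual code.)

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// Result is a stand-in for the kubelet prober's own result type, included
// only to keep this sketch self-contained.
type Result string

const (
	Success Result = "success"
	Failure Result = "failure"
	Unknown Result = "unknown"
)

// interpretExecSyncError sketches the direction of the proposed fix: map a
// gRPC DeadlineExceeded from the CRI ExecSync call to a probe failure,
// instead of letting it fall through as an unrecognized error.
func interpretExecSyncError(err error) (Result, string) {
	if err == nil {
		return Success, ""
	}
	if status.Code(err) == codes.DeadlineExceeded {
		return Failure, "exec probe timed out"
	}
	return Unknown, fmt.Sprintf("unexpected exec error: %v", err)
}

func main() {
	res, msg := interpretExecSyncError(status.Error(codes.DeadlineExceeded, "context deadline exceeded"))
	fmt.Println(res, msg)
}
```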
C
One concern that was raised was that, because the existing behavior today pretty much ignores the timeout, there were some worries about applications that rely on this existing behavior — specifically with liveness probes: when the kubelet gets upgraded, my kubelet might start to aggressively kill those containers, because it's now enforcing a default timeout of one second.
C
So I'd like to propose that we also add an exec probe timeouts feature gate, which will be GA and on by default from the start, but which allows users who run into this issue to disable the new behavior if they need to. We would also document, in the "action required" section of the 1.20 release notes, that there was a bug in the kubelet where exec probe timeouts were not respected and that we are now enforcing that timeout. Yeah, that's it.
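(A minimal Go sketch of the escape hatch being proposed. The map below is a stand-in for the kubelet's real feature-gate machinery, and the gate name simply follows this discussion — treat both as assumptions rather than the final implementation.)

```go
package main

import "fmt"

// featureGates stands in for the kubelet's feature-gate machinery; the gate
// name follows the proposal in this meeting and is illustrative only.
var featureGates = map[string]bool{
	"ExecProbeTimeout": true, // GA and on by default, but can be disabled as an escape hatch
}

// effectiveExecTimeout returns the timeout the kubelet would enforce for an
// exec probe: the configured timeoutSeconds when the gate is on, or zero
// (meaning no kubelet-side enforcement, i.e. the legacy behavior) when it is off.
func effectiveExecTimeout(configuredSeconds int32) int32 {
	if featureGates["ExecProbeTimeout"] {
		return configuredSeconds
	}
	return 0
}

func main() {
	fmt.Println(effectiveExecTimeout(1)) // the default probe timeout of one second
}
```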
B
Yeah, the only comment I left in the document right now is: when the flag is used and the old behavior is preserved, we still need to notify customers that the timeout needs to be adjusted — that pretty soon this flag will be removed, and they need to make sure they're ready for that removal.

B
And another question: do we want to preserve the behavior where we keep executing the payload on dockershim without returning when the flag is on, or do we want to time out but then not fail? What's the idea here?
C
I had this same question when I was digging into this, because it surprised me that we never really respected the timeout. And it does seem like, yeah, the kubelet never respected exec probe timeouts, and there were multiple PRs over several years — which Sergey helped triage — where we were trying to fix this exact same problem. There was also a comment in one of the e2e tests that was skipping the test because dockershim didn't natively support exec timeouts, even though CRI did, I guess.
F
The thing I'm wondering about is the impact on someone who has a really long timeout now. Right — we had some kind of default timeout behavior, but now there's nothing that would prevent excessively long ones.
A
Yeah, I hear your concern — that's actually the question to ask. Let me also share what I roughly remember from the history. If I remember correctly, when we first had the CRI we planned to respect the timeout; Docker itself doesn't respect it in exec sync, if I recall correctly, and we logged that issue. So when we started off the CRI and the containerd work, we did plan to respect it, and that's why containerd, from about two years ago, does respect it.
A
That's right — here, I just shared from the chat, because that's the comment I had in the PR about how we'd fix that problem. I remember Lantao fixed that problem, and internally containerd does respect the exec sync timeout. But then we also reached this question — at SIG Node we discussed it and we didn't agree on the semantics at that time, back then. That's why the problem has kind of carried over to today; there have been several attempts to fix it, but I remember all those issues.
A
The approach in that fix wasn't followed up on in the comments, because there were other concerns. So even —

F
Do the container runtimes, whether it's containerd or CRI-O, have a timeout on their end that they basically set as an upper bound?
A
Yes, we also talked about this last week. We said we'd like to have a really long, catch-all type of timeout at the container runtime level — we don't have that.

F
Yeah — do you know if CRI-O has one? We've hit issues with probes causing reaping problems, honestly, so I'm just trying to map all of these issues back.
A
Unlimited, yeah. So that's why, Derek — we discussed this, and I also brought it up again in the PR at last week's meeting — but that's more at the API level, the Kubernetes API level. Eventually we would change the API so that the semantics are defined: what's the maximum timeout value we will allow for a probe? We roughly talked about that, but each one — liveness, readiness and all the probes — may have a different requirement.
A
So that's why we kind of punted and said: okay, can we narrow down the scope so it only affects today's problem, before we reach out to the bigger part — that other level of API problem and the semantic problem for Kubernetes.

F
That makes sense — I'm just thinking it over in the background, but...
F
Yeah, mm-hmm. For what's proposed — I think we need to do the right thing here, and it's a bug that we're not doing the right thing — so I don't object to what Andrew is presenting. I'm just mostly wondering whether a runtime might want to clip to —

H
— a maximum time. What I brought up yesterday is that any type of maximum time is very hard to pick, and this should be left up to clients to specify, because a two-minute max time on a Raspberry Pi is not that long compared to a 10-second max time on a big compute node.
A
Yeah, I think I followed up on that question last week, and I agree with that from the container runtime level perspective. But thinking about Kubernetes at the API level: when we define the probe, we could define a catch-all timeout, which is not the same thing as the default value.

A
Instead of the default value — today we do have a default value, which is really short — a catch-all would be relatively long, like one hour rather than a couple of minutes, so that we don't leave running processes sitting there, and so we can kick out and remove unnecessary deadlocks and long-lived processes that aren't making any progress. Something like that, so we can reach some agreement.
A
Obviously, last week we didn't reach agreement, and that's why I suggested Andrew come back, narrow down the scope, and fix today's problem. And one clarification on what Michael Brown said earlier: actually, containerd did fix this — when we worked on the CRI and containerd, we did try to fix the problem that Docker has, because Docker doesn't respect the timeout after executing.

A
On the Docker side, maybe it's fixed today, but I believe it's not fixed — this is why Andrew started this PR in the first place — while containerd did fix the problem. The thing is, we didn't fill the gap on the kubelet side: after we fixed this at the CRI and containerd level, we didn't really define that semantic for Kubernetes, and that's why we just treated it as an error — an unknown error — and didn't handle it at the Kubernetes level.
A
So Andrew's proposal is just to fix that problem. But then we still have the semantics at the API level: how we are going to handle the customers' use cases is unknown — it's undefined.
C
I forget that. But no — the problem with dockershim is that it completely ignores the timeout. Even with this PR, it'll return the correct error for the timeout, but nothing is actually killing the exec process. So if you have a probe that's very frequent and it times out a lot...
A
Well, I think the SIG Node community agreed that we want to deprecate dockershim as soon as possible. It takes time, but we actually agreed on that a long time back; that's why we put a lot of effort into the CRI and the other CRI-compatible container runtimes rather than dockershim.

A
It was a milestone, because back then the only runtime we had in production was Docker, so we had to support Docker. But now we're at a turning point; next, we should —
C
Okay, that makes sense, thanks. One more thing before we end this topic: I think the reason this was never caught is that, when we originally added the exec timeout e2e test, we skipped it because dockershim didn't natively support it anyway, and then we just never remembered to unskip it once we added other implementations.
A
Thanks — Andrew actually raised a good point, and currently there's a discussion about this. Mrunal, do you want to share something? There's been some discussion with Mrunal about how to add the CRI-O e2e tests to our public SIG Node testgrid, and Sergey has also been looking into how to improve the overall CRI-O tests and the containerd-related CRI tests. Those two efforts can help us move forward and speed up the deprecation of dockershim.
E
Yes, sure. So, last on that topic: I know we merged that PR — I'll have to check on the status of it — and yeah, we are definitely up for getting that test into a good state and making it part of the grid.
A
Okay. I think I need to talk to Sergey offline and figure out the status of the CRI tests in general, and then we can define something for container runtimes — we already have the CRI conformance tests to some extent, but we need to formalize that. Then we can come up with the deprecation plan for dockershim and move forward from there.
A
I think it's both, because the CRI tests are good, but there are certain things they don't cover — they don't cover everything, right, so implementation details still matter. That's why we care about both the containerd and the CRI-O node-level e2e tests. We need to define the full set of conformance tests for container runtimes, and once we define that, I think we can really rely on it.
B
And for the previous topic — did we agree on the proposal from Andrew? Do we need to move it forward?

F
Yeah, I don't think a KEP is needed or anything — I don't even think the feature gate is needed — but I agree it's a useful fallback for unintended behavior, to give people time to update. I just don't see a KEP as really being required.
F
I think you might need a KEP, though, if you do a feature gate, to get it through the release process — and it might be the shortest KEP that we write. I think if you introduce a new feature gate, SIG Release tracks whether there's a corresponding KEP, so if we do the gate, I think you need the KEP, but the KEP can be really brief.

B
Yeah, because the KEP also drives documentation that needs to be put on the website, right?
F
Yeah — I ran into this with the topic I was going to bring up later on emptyDir volumes. I think I ended up with the same thought process: it didn't really need a KEP, but I had a feature gate because I wanted to protect against unintended behavior. So if we write a really short KEP, it should be fine.
A
Okay. So Mark also made a point in the chat: actually, the biggest reason we didn't put the effort into deprecating dockershim faster is Windows support. That's so true — but even for Windows support, a lot of the new features are actually being added to containerd. The reason we want to start talking about dockershim —

A
— is just because issues like this keep coming up all the time. We add new features into containerd, and at the same time into CRI-O, but at the same time, with Docker, we keep finding new issues, new bugs, feature parity issues and compatibility issues. So the community has to have engineers trying to move dockershim along with everything else, and that kind of distracts us and the community.
J
Hi, hello. So since the last time I was here, the PR was merged. I wanted to ask if there are any remaining major concerns about the sidecar KEP. As was said in one of the comments of the PR that was merged, I was planning to open a follow-up PR removing the termination hook and the two fields that were discussed in the "items to discuss" section of the proposal, and updating the proposal to not change the pod phases.
J
We agreed it made sense, yeah. And if there aren't any major concerns, I wanted to ask about these two call-outs.

J
Basically, one is the time spent to kill a pod, and the other one is how to split that time. On that second call-out there are several alternatives and suggestions, so if we agree with the suggestion, that should be fine. But on the first call-out, on the time to kill a pod, there is no alternative proposed — just the problem is stated.
J
Basically, the problem is that the sidecar KEP was loose about some details of how to kill a pod, and the implementation that was created by Joseph Irving basically, unintentionally, increased the time to kill a pod by around three times — 3x. So yeah, I was thinking about what the best way is to kill the pod without increasing the time three times.
F
I was just curious if we knew, from the clients who are targeting this behavior, what they would anticipate as representative shutdown times.

J
Sorry — representative shutdown times for what? I'm not sure I followed, sorry.
J
Okay, yes, for the other question: I think the suggestion there is basically to say we might take four seconds to kill a pod, and do the same behavior that we're doing now. What we're doing right now, when we don't have sidecars, is we send SIGTERM and we wait for a minimum of two seconds; what I'm proposing to do with sidecars is the same thing — first for the non-sidecars and then for the sidecars.
J
The alternative here says six seconds, because it's taking into account that we add the termination hook thingy. But if we don't add that, four seconds should be enough. Does that make sense to you, Derek?

F
Yeah, I'm just refilling my cache on what alternative four was.
J
Let's call it grace time. Basically, you go and try to kill the non-sidecars, but if that fails — if they don't finish on time — you send the kill, and then you send SIGTERM to the sidecars with a two-second minimum.
F
I'm going to have to reread the various links here, and I apologize — my mental brain power right now is not as strong as I'd like it to be — but I'll comment on the doc.
B
Last time you mentioned there may be concerns about injected sidecar pods, and that maybe some service mesh team, or Red Hat, knows a better way with CNI. Do you still want this meeting to discuss it? I discussed it with the Istio team in Google, and they believe that pod injection will be here with us for a long time.
F
Yeah, I still need to follow up with our service mesh folks. But for the particular questions — I just remember this was a long KEP, so I'll have to re-read each alternative to say what we want to go forward on. But maybe, Rodrigo, if you want to update the KEP with the definitive choices, we can just iterate on that PR — that would be the right next step.
J
Okay, okay, yeah. So for all the things that we want to discuss that come with a suggestion, I'll update the KEP to reflect that, and we'll discuss it in that PR.

J
For the one on the time to kill a pod — should I create a PR with a small diff there, so it's easy to comment on?
J
Yeah... I'm not so sure. Maybe I can ask a more specific question, or, if you haven't —
B
I think maybe a good idea here would be to have a known-issues section, and there we can have a PR saying termination time may increase three times, and then on that PR we can comment and discuss.

B
It wouldn't be an alternative, it would just be a statement of the fact that termination time may be increased three times and we know about it — that kind of known-issues section.
J
Well, I think we need an alternative, because basically what we agreed — in May or something like that — is that we don't want to move this forward, to merge it, to move it to beta stage, until the kubelet graceful node shutdown is there, basically so they interact correctly. We have been meeting with the folks working on that, and to interact correctly, what we would want is to respect the time that is set by the kubelet.

J
No, sorry — maybe the document is too long; maybe I can just share the link.
J
Yes — alternative four is for how to split the time when killing the containers in a pod. Right now I was talking about the time to kill a pod being increased three times.

J
What I understood is: do a PR with the suggestions and we can discuss them there — and this is why I was bringing up this call-out, because it doesn't have any suggestion.
B
We have the four-seconds proposal, and that's a proposal that can be commented on. The other one is that the time will increase three times just because of the nature of the feature we're implementing: as a natural part of the feature we want waved termination, so we have a first wave and a second wave, and that's why termination may take longer. So I don't think we have a proposal to address that second problem.
J
No, no — the second problem is not that; it's an implementation detail. The thing is, the killContainer function ignores the grace-period-override parameter for the preStop hooks. So basically, when you're calling it, you're now calling it at least two times, and then you're running the preStop hooks two times, and that can take up to the termination grace period around the preStop hooks each time.

J
Then, yeah, one option would be to use the pod's deletionGracePeriodSeconds, since that is respected by the killContainer function, but I wanted confirmation that it's possible to set this, because the code assumes it might not always be set, things like that. But yeah, maybe I can move this to known issues or whatever, and we can comment on the PR there.
J
Okay, yes, sorry for the delay. I was trying to aim this for Kubernetes 1.20, but I'm not sure if we all feel comfortable with it — what do you think about it?

J
Yeah, that's October 5th or something like that. Okay.
M
Yeah, I just had a quick comment around the whole preStop hook not respecting the time, for the shutdown stuff. I think it would be ideal, actually — and just for the sidecar case, maybe you can answer, but also just in general: would it be helpful if the timeout were respected by the preStop hook? Is that something we actually want? It sounds kind of similar to the first issue we discussed earlier today in the meeting, around the probe timeouts not being respected.
J
Sorry — so yeah, yes, I think it would be helpful, but I'm not sure if we can change it. There's a tricky thing, because the killContainer function takes a pointer to an int with a grace period override, and this is what it doesn't really respect.
J
But as far as I've seen looking at the code, the grace period override is a parameter that's always nil — the pointer there is always nil — so I think it might be safe to just change it and do that. Because with the grace period override, if you do kubectl delete pod --grace-period and you set something, I would expect that to be set, but it isn't; that's been the case at least since Kubernetes 1.17.
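(To make the mechanics Rodrigo is describing a bit more concrete, here is a simplified, hedged Go sketch of that kind of grace-period handling. It is not the kubelet's actual killContainer code — the function name, the two-second minimum, and the subtraction of preStop time are assumptions drawn only from this discussion.)

```go
package main

import "fmt"

const minimumGracePeriodSeconds int64 = 2 // the small minimum mentioned earlier in the call

// effectiveStopGracePeriod illustrates the logic under discussion: fall back
// to the pod's terminationGracePeriodSeconds when the override pointer is
// nil, and subtract the time already spent in the preStop hook before asking
// the runtime to stop the container.
func effectiveStopGracePeriod(podGracePeriod int64, override *int64, preStopSeconds int64) int64 {
	grace := podGracePeriod
	if override != nil {
		grace = *override // the parameter noted above as being nil in practice today
	}
	grace -= preStopSeconds
	if grace < minimumGracePeriodSeconds {
		grace = minimumGracePeriodSeconds
	}
	return grace
}

func main() {
	fmt.Println(effectiveStopGracePeriod(30, nil, 10)) // prints 20
}
```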
M
Got it, I see — cool. Yeah, I'm just thinking: maybe it makes sense for us to consider something similar to the first case, with the exec — sorry, the probe — timeouts not being respected. Something like: we also make a release note and start respecting the timeouts for preStop hooks, because it seems like that was also kind of a long-standing bug type of thing.
A
I'm sorry, I have to interrupt because of the time — we can follow up on the PR, and I believe there's an issue, documentation, all these kinds of things, also the enhancement, so I need to follow up on the PR and the KEP. Thank you. And we have three more topics — Derek, do you want to talk about this one?
F
This one's really cool. We're having more users run Kubernetes on larger node sizes, and this was an issue that got brought to my attention by one of our AI-oriented teams. The background is that the size of a memory-backed emptyDir gets defaulted to 50% of the available memory on the Linux host, and this produced less-than-desirable outcomes when pods used more memory and wanted larger emptyDirs than that 50% ratio. So I put a PR out there that basically changed the behavior.
F
The size of the emptyDir memory-backed volume is now defaulted to the allocatable memory available to that pod. So if the pod had memory limits on all of its containers, it maps to the cgroup value that is enforced on the pod cgroup; if it didn't have a bounded memory limit at the pod level, it gets bound to the node allocatable level. And then, for the scenarios where people want to partition memory to be less than that for the size of the emptyDir —
F
— it goes in and respects the sizeLimit field for emptyDir volumes, to let you size it lower. I put that behind a feature gate, because I wasn't sure, per the earlier discussion, whether people would hit unknown unknowns when rolling that out and seeing sizes of emptyDirs either grow or shrink accordingly.
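(For clarity, here is a hedged Go sketch of the sizing rule just described. It only illustrates the precedence — sizeLimit, then the pod-level memory limit, then node allocatable — and is not the actual kubelet or volume-plugin code.)

```go
package main

import "fmt"

// memoryBackedEmptyDirSize sketches the sizing precedence described above:
// prefer the volume's sizeLimit, then the pod-level memory limit enforced on
// the pod cgroup, then the node's allocatable memory.
func memoryBackedEmptyDirSize(sizeLimit, podMemoryLimit, nodeAllocatable int64) int64 {
	if sizeLimit > 0 {
		return sizeLimit // user explicitly asked for a smaller tmpfs
	}
	if podMemoryLimit > 0 {
		return podMemoryLimit // every container has a limit, so the pod cgroup bounds it
	}
	return nodeAllocatable // unbounded pod: fall back to node allocatable
}

func main() {
	const gi = int64(1) << 30
	fmt.Println(memoryBackedEmptyDirSize(0, 8*gi, 64*gi) / gi) // prints 8
}
```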
F
I don't know if other people were hitting similar issues, but I was just trying to help push the topic forward. I think David reviewed it — I don't know if they're on the call — and I still have to add some unit testing and e2es, but I just wanted to draw attention to it in case there were any negatives or alternatives that people run into with emptyDir volumes that they wanted to raise.
N
Hi Derek — this was an old problem; it's been lingering for a really long time, so thanks for shining a light on it. I reviewed both the KEP and the PR, and it looks really good.

N
The last thing I saw was that the medium needed to be checked to be Memory — that was one of the comments.
F
We're putting a hack on it, but yeah — thanks, Tim. The only reason I threw a new PR out there was that the original PR dated back to 2018, and I know both David and I had given feedback and hadn't seen it adjusted. So I was just trying to help kick the KEP forward. But if the original author of that first PR is watching the recording, thank you for raising the topic to the community.

N
Yeah, the only question I had was around the language of the description of the field itself, and you already commented that it matches the implementation. So I'm good — thanks.
O
It's me — hey, hello, hi. It's Francesco. Very quick, it will take a minute. In the context of topology-aware scheduling, we have identified a couple of extensions to the pod resources API. We posted the KEPs, and the links are in the meeting notes. So please review, because, if possible, we would really like to have those extensions ready for 1.20 — but please just review, so we can keep them progressing.
L
I was hoping to make it a short presentation, but then I realized there are a lot of discussions going on in multiple subgroups — we had a meeting with Michael Crosby — and so practically we ended up with a list of ideas. After some discussions and synchronization between multiple projects, we might end up with several KEPs: to Kubernetes, to containerd, to CRI-O and to the runtimes. It's not yet finalized, but we have work-in-progress notes; I posted the link in the meeting minutes.
A
Cool, thanks. So, Alexander, last week we missed the item where we were going to follow up about the resource requests and limits related to the downward API. It looks like you need more discussion before presenting to this group — yes, is that true? Okay, thanks.
L
Yeah. If you start to touch one thing, we found out, it starts touching other things. For example, there's a PR created which adds huge pages to the set of possible selectors in the existing downward API, but it completely does not support all the extended resources — and if a pod or container wants to consume extended resources, there is no way to get that information from the kubelet.
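(For context, here is a hedged Go sketch of the kind of downward API usage being discussed, using the corev1 types. Today resourceFieldRef only covers cpu and memory, plus ephemeral-storage; the hugepages selector shown below is hypothetical — it illustrates what the proposed extension would allow, not something that works in current Kubernetes.)

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Exposing a hugepages limit to a container via the downward API's
	// resourceFieldRef. The "limits.hugepages-2Mi" selector is hypothetical
	// here; whether selectors like this (or extended resources in general)
	// become valid is exactly what the KEPs under review propose.
	env := corev1.EnvVar{
		Name: "HUGEPAGES_2MI_LIMIT",
		ValueFrom: &corev1.EnvVarSource{
			ResourceFieldRef: &corev1.ResourceFieldSelector{
				ContainerName: "app",
				Resource:      "limits.hugepages-2Mi",
			},
		},
	}
	fmt.Printf("%+v\n", env)
}
```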
L
So the downward API was one thing, and the second thing is that runtimes are interested in getting extended resources as well. If we look overall at what resources are passed down to the runtimes, we have a similar problem with VM-based runtimes: for a VM-based runtime, at the time when the sandbox or pod is created, it would be good for the lower layers to get information about all the container resources — like how many containers will be in it.
L
How many containers will there be in this pod? How much is requested, and what are the limits, for this pod? Because then we can properly create a VM which will appropriately fit the containers, and so on. So while we started to discuss it, it started to grow like a snowball.

L
So we probably want to have some open discussions before we present something.
D
Yeah, when we did the pod overhead KEP, one of the things that we included, and kind of nodded at, was that it would be good to pass the information down — with respect to what the pod overhead values are — through to the CRI runtime. But that didn't get done; there wasn't an immediate use case for it, and —
D
— as we dig into this a little bit more, and into how to do more optimal CPU utilization on the host, it becomes apparent that it's important for the runtime itself to have this information. So that's kind of part of what Sasha's saying, and yeah, the whole sandbox-level information would be great. It's a bit naive, you know, because these things could change with vertical pod autoscaling and things like that, but at least the overhead is necessary.
A
The customer could understand their limits. Of course, today you could do those kinds of things by reading the cgroup values directly, but they don't.
A
I never raised this one, because — we do have some use cases like that, I do see them, but I just give suggestions to customers, because it's hard with the way today's Kubernetes supports it. I'm just sharing with everyone here, and this is why I'm really interested in that one. But since no customer has really asked for it, it's just one way we could handle certain performance-sensitive applications — they could do a better job at the application level — since we talked about the downward API for resources.
F
Anyway, all right — the idea was that any native resource that Kubernetes understood should be a valid candidate for the downward API. I have no real objection to having a mechanism to provide any extended resource down through the downward API as well; that would just be a thing we need to get a proposal on. To me, it's no different from how quota originally only supported resources that kube understood, and then we added generic object-count quotas.
L
Yeah, yeah — KEPs will follow after we actually end up in agreement on what we want to get. But, for example, my example with extended resources: it's not only about knowing them inside the container, but also about passing them down to the runtime level. So, for example, our custom version of runc can set the limits on GPU memory usage.
A
Okay, so thanks — thanks both, Derek and Alexander. And yeah, I saw your comment about the limit; if we're aligned, maybe you can give a demo next week or the following week and then we can talk. Yeah, I agree with you — the hugepages limit always had kernel issues in the past; that's why we didn't support it seriously.
A
Okay,
we
can.
We
can
carry
on
discussing
and
yeah
that's
all
for
today
and
thanks
everyone
for
attending
today's
meeting,
and
this
is
really
good
discussing
and
we
need
to
follow
up
to
several
things
and
include
the
set
car
ordering
and
also
the
time-out
value
for
the
prop
and
just
and
also
neck,
the
downward
ap
for
the
resource.
So
looking
forward
next
week,
bye-bye.