From YouTube: Kubernetes SIG Node 20200901
B
I realized that, basically, exec timeouts for any container runtime are not being respected, and then, yeah, Sergey had informed me, and I had no idea, that there were actually many attempts in the past from other people to fix the same issue. So we figured it'd be good to raise this in the SIG and figure out: what are the next best steps?
C
Yeah, just to add some color to that: previous attempts were trying to solve this problem, and they were trying to solve extra issues that can be caused by this problem. So, like, one PR attempted to kill the process in Docker: because Moby doesn't support a timeout on process execution, they were trying to kill the process from the kubelet.
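The client-side pattern that PR reportedly took, enforcing the deadline from the caller because the runtime offers none, looks roughly like this minimal Go sketch (the command and timeout are illustrative, not the actual kubelet patch):

```go
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Bound the exec with a deadline, since the runtime won't do it for us.
	ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
	defer cancel()

	// CommandContext kills the process when the context expires.
	out, err := exec.CommandContext(ctx, "sh", "-c", "sleep 10; echo ok").Output()
	if ctx.Err() == context.DeadlineExceeded {
		fmt.Println("probe timed out; process was killed by the caller")
		return
	}
	fmt.Println(string(out), err)
}
```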
C
We also saw people raising questions about readiness probes that have a process stuck in a container: the accumulating probes can overwhelm the container and kill it, and yeah.
Finally, we have this issue discussing that people who wouldn't expect timeouts to be enforced in a new version may be affected by that.
C
So we need to raise awareness, and the question is how we can raise this awareness, and, if we have this risk of affecting customer workloads, do we want to do it gradually, or somehow approach it differently: maybe start with reporting errors, and then in the next version we can enforce the timeout properly. So, yeah, we need to discuss all those questions.
B
We let the process continue running and then return with a nil error, which signals to the prober that things passed when they didn't. So, by introducing a timeout all of a sudden, even though we're now respecting what the API says we should do, if there are workloads that are depending on this odd behavior of just letting a process run forever, and then we're all of a sudden killing the process, that could lead to some weird scenarios. And so, like, in my PR...
B
I'd ask: do we know where we're at with CRI errors? Because I think a large part of why this bug exists is that different runtimes return different errors for the timeout behavior. With containerd, it's specifically expecting a gRPC error code, whereas with dockershim I forget what it's doing, but then there's also a generic util exec error that we're expecting from the prober, which every runtime should return.
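The containerd-style check being described boils down to inspecting the gRPC status code carried by the error; a minimal sketch, assuming the google.golang.org/grpc module (not quoted from the kubelet source):

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// timedOut reports whether err carries the gRPC DeadlineExceeded code,
// the shape of timeout error one runtime may return while another
// returns a plain Go error: exactly the divergence described above.
func timedOut(err error) bool {
	return status.Code(err) == codes.DeadlineExceeded
}

func main() {
	err := status.Error(codes.DeadlineExceeded, "exec timed out")
	fmt.Println(timedOut(err)) // true
}
```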
B
I
think
the
problem
arises
because
we
are
like
the
actual
prober
is
swallowing
the
error
and
ignoring
it,
whereas
we
should
be
having
like
well-defined
errors
for
like
timeout
scenarios.
Why
not.
A
Yeah, I saw Alexandra's comment on the deprecation of dockershim. Yes, that's what we wanted to do for a long, long time, and yes, I agree it is overdue. But based on what Andrew and Sergey said here, I think that's the problem. I believe Andrew brought this to the SIG, if I remember correctly, and it looks like the problem we need to fix is only in dockershim, while containerd handled it well.
B
Yeah, exactly: we return an error on timeout, but the prober expects a nil error with the result set to failed; we were returning failed and an error, and so it was ignoring it. And it changes based on the type of probe, because, for example, readiness probes are not ready by default, and so it would be not ready forever, whereas liveness probes start as successful and then they turn to failure to restart pods, and so liveness probes would be successful regardless of the timeout.
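A minimal sketch of the failure mode being described, with simplified stand-in types (not the kubelet source): the timeout surfaces as a non-nil error rather than as a nil error plus a failed result, so the caller keeps the probe's previous state.

```go
package main

import (
	"context"
	"fmt"
)

type Result string

const (
	Success Result = "success"
	Failure Result = "failure"
)

// runExec stands in for the runtime exec call; on timeout it returns a
// non-nil error instead of (Failure, nil).
func runExec() (Result, error) {
	return Failure, context.DeadlineExceeded
}

// probeOnce mimics the buggy flow: any non-nil error is swallowed and the
// previous state is kept, so a liveness probe that started as Success
// never flips to Failure on timeout.
func probeOnce(prev Result) Result {
	res, err := runExec()
	if err != nil {
		return prev // BUG: timeout error ignored; Failure never recorded
	}
	return res
}

func main() {
	fmt.Println(probeOnce(Success)) // prints "success" despite the timeout
}
```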
D
Hey, this is Michael. I think, for a lot of this, we need some type of typed errors in the CRI API, because, if it's just implicit that different CRI implementations are supposed to return consistent things, it's never going to work. If we could have these things, where the CRI defines gRPC error codes that the runtime implementations can implement, then that makes it a lot easier, I think, and would solve a lot of this.
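One way to read "typed errors" here, sketched with a plain Go sentinel (a hypothetical name, not an actual CRI definition): every implementation returns the same well-known value, and callers test it with errors.Is rather than parsing runtime-specific strings.

```go
package main

import (
	"errors"
	"fmt"
)

// ErrExecTimeout is a hypothetical shared sentinel a CRI-style API could
// define so that every runtime reports timeouts the same way.
var ErrExecTimeout = errors.New("exec: deadline exceeded")

func execSync() error {
	// A runtime wraps the sentinel with its own detail.
	return fmt.Errorf("runtime-specific detail: %w", ErrExecTimeout)
}

func main() {
	if err := execSync(); errors.Is(err, ErrExecTimeout) {
		fmt.Println("timeout detected consistently across runtimes")
	}
}
```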
B
Yeah, I agree. I think the current PR I have open is kind of looking to be more like a stopgap solution, to make sure that exec timeouts are respected with existing runtimes, but I definitely agree that there should be generic CRI errors that we can use for timeouts.
E
Actually, on the low level, on the OCI level, if you do, like, an OCI-compatible runtime exec, you are not able to specify an exec timeout, so it will run until it finishes.
A
This has been kind of a debate for a while, because, as I just shared here, what people worry about is exec, because exec is a little bit free-form and ad hoc. One side of the debate worried that the exec probe itself could have a bug and then cause the real application container to fail, and other people just think about...
A
Okay, they want the exec probe to make sure the container is running and also ready to serve demand, and so, when the exec probe fails, they want a restart of the container: just give the error and restart the container. So this is to refresh memory: basically, we didn't really finish that discussion, or the work for the CRI, and I did the follow-up.
A
I need to look into the previous discussion, where we ended, and then we can make decisions here, but you just refreshed my memory about the previous debate and where we ended there, though, yeah.
C
So was the debate that we don't need timeouts on the OCI level, or was the debate how we need to run exec?
A
The former: do we need the timeout value? Initially, the documentation said, okay, since we haven't agreed upon the handling, we basically turned off that timeout value in the exec implementation.
A
So it was kind of carefully neglected when you exec and set a timeout value, but now it is time to revisit this problem. So we need to define it on the CRI level and see what we want to do.
C
So, if we want timeouts to be respected, and I think it has a lot of value, we can go ahead with the latest PR from Andrew. The only question is: do we need to raise awareness for people who are currently relying on the current behavior, maybe not even realizing that they rely on it, and whose workloads can be affected?
C
So the most critical issue that we found is startup probes: if somebody implemented a startup exec probe and didn't realize that this probe runs longer than the timeout, we will suddenly start failing on timeout, and the container will never start.
A
We do have other knobs, right? We do have another field, right, the initial delay, and the startup probe. So that's configurable, right?
C
It is configurable, but people may be affected when they deploy a new version: with a new version of Kubernetes, a workload that used to be working, even though it was relying on something that was basically a bug in our system, will stop working.
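For context, these are the knobs under discussion, shown with the Go types from k8s.io/api/core/v1 as a sketch (the command and values are illustrative; the embedded handler field is named Handler in client-go of this era and was later renamed ProbeHandler):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// A startup exec probe. TimeoutSeconds defaults to 1 when unset,
	// which is exactly the value this discussion worries about once it
	// starts being enforced.
	startup := corev1.Probe{
		Handler: corev1.Handler{ // ProbeHandler in newer API versions
			Exec: &corev1.ExecAction{Command: []string{"sh", "-c", "/check-started.sh"}},
		},
		InitialDelaySeconds: 10, // wait before the first probe attempt
		TimeoutSeconds:      5,  // per-attempt timeout (the contested field)
		PeriodSeconds:       10, // how often to probe
		FailureThreshold:    30, // allowed failures before giving up
	}
	fmt.Printf("%+v\n", startup)
}
```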
B
Yeah, and I think, in addition to startup probes, liveness probes could be pretty problematic as well, because they start as successful, and, if you're running an exec probe that never timed out, then that pod would never be restarted. But then, if we enforce the one-second timeout, then on a new version those pods could be restarting quite often.
A
Yeah, this is the original debate. And another thing: if you fail, do you just retry, or, when you reach the timeout error, do you need to restart the container? The different cases have different requirements.
B
It's
just
a
weird
state,
because
there
was
a
bug
that
never
respected
it,
and
due
to
that
there
are
workloads
that
might
rely
on
the
buck.
So
the
the
bug
I
the
blog,
has
become
a
feature.
I
guess
in
the
sense
so
like
I'd,
be
in
favor
of
I
think
the
end
state
we
want
to
get
to
is
that
the
timeout
is
respected.
B
Maybe
it'd
be
enough
just
to
have
in
the
release
notes
like
a
big
warning
saying
this
timeout
was
never
respected.
Now
it
is
so
make
sure
your
workloads
are
adjusted
for
that.
But
I
don't
know
if
you
have
to
be
more
careful
than
that.
A
I'm
not
a
little
bit
confusing
andrew.
What
did
you
suggest
so
we.
A
We
respect
that
time
of
the
venue
and
when
the
time
out
and
we
just
kill
the
container
and
restart
the
container
or
you
suggest
we
just
announce
that
time
out
because
I
do
know
we
have
the
failed
and
and
along
that
time
out,
it
is
deprecated
more
clearly
and
then
we
just
told
the
user
to
implement
time
out
in
their
own
prop.
B
No,
I'm
saying
that
we
should
respect
the
timeout,
but
but
not
kill
the
exact
probe,
but
like
run
the
exact
for,
however
long
it
needs
to,
but
like
the
timeout
indicates
whether
the
probe
is
going
to
return
a
successful
result
or
a
failed
result.
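A minimal Go sketch of that semantic, under the stated assumption that the process is left alone: the probe races the command against a timer, and the timer only decides the reported result, it does not kill anything (illustrative code, not the proposed patch).

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// probeOnce reports failure if cmd has not finished within timeout,
// but deliberately leaves the process running either way.
func probeOnce(cmd *exec.Cmd, timeout time.Duration) string {
	if err := cmd.Start(); err != nil {
		return "failure"
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		if err != nil {
			return "failure"
		}
		return "success"
	case <-time.After(timeout):
		return "failure" // timed out: report failure, don't kill
	}
}

func main() {
	fmt.Println(probeOnce(exec.Command("sleep", "5"), time.Second))
}
```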
A
So
so
that's
why
the
mackerel-
and
I
just
both
read
some
question,
which
is
what
we
debate
in
the
past,
so
your
time
out
your
time
out
that
prob,
but
it
could
be
like
the
time
out.
It
is
that
the
proper
implementation
have
problem
could
be
just
curve
too
many
things
or
be,
or
maybe
some
other
and
oh,
maybe
it's
just
because
that
time
of
the
value
set
too
small,
and
you
upgrade
your
application
right
now
and
that
to
take
a
longer
time
than
before.
A
So, in this case, should we time out at the timeout and then kill off the container, or just say, okay, this just timed out? So, to me, that's more of an API-level discussion, to understand more context, and also how we are going to address that one.
C
So, Dawn, are you suggesting that we have a new setting saying whether the timeout is fail-or-continue? Basically, whether a timeout is a failure or not, and then we can introduce this change with this setting, and people who want the old behavior can revert back, and people who want the new behavior will keep the default.
A
No,
I
actually,
I
would
just
think
about
the
there's.
The
good
reason
behind
we
ask
people
to
implement
of
that
time
out
in
the
proverb
and
so
then
treat
the.
If
you
have
to
do
that
and
then
treat
the
time
out
as
the
finger,
then
we
kill
the
container.
A
So
then,
basically,
it
is
pumped
this
one
into
the
decision,
it's
more
to
the
application
owner
and
also
more
to
like
the
each
application,
not
just
the
owner,
like
the
implementation,
the
developer
and
also
even
to
like
the
sie
application
si,
because
when
they
operate
new
application,
so
they
need
exam
that
new
version
of
the
application
container
and
then
reside
this
time
out
by
re-implement
of
the
execute
the
problem.
So
that's
the
reason.
Of
course
I
didn't
see.
A
So
we
need
the
answer
and
I
have
to
say
that
we
answered
that
yet
and-
and
so
so,
unless
we
answer
that
one,
I'm
not
sure
how
we
are
going
to
move
forward
on
this
one
because
definitely
have
the
regression
right
to
the
existing
use.
Use
cases.
C
I'm sorry, I didn't quite get it. So are you arguing that we don't have to respect timeouts, like, if we were redeveloping from scratch, would we have to respect timeouts? And are you trying to solve the problem of the very next version, how do we introduce timeouts back, or are you talking about the long term, whether we need this timeout at all, and maybe on the API surface it will be a separate setting whether a timeout is critical or not?
F
I think, if we didn't already have the field, then, Dawn, maybe I can see not respecting timeouts ever being a better option, but it does seem, I think, in the issue, that what people are pointing out is a bug: that it isn't respected. It almost sounds to me like our defaults are bad, if fixing this behavior is actually going to result in a large number of users being hurt by it.
A
On the default timeout: oh, I agree with you at a high level. But back to Sergey's original question: if we built this from scratch, I definitely don't want the default timeout value to be one second; for me, the default timeout value should be way longer. That's kind of the catch-all, like, for the worst scenario, because you don't want it that way.
A
It's
been
half
hour,
so
it
will
be
definitely
set
at
a
reasonable
time
and
so
in
general
and
but.
A
We maybe didn't want people to implement something like this arbitrary exec probe in some form; this is what we also argued initially, which I kind of don't agree with. But that's the way we implemented it, and that's the way people are using it, because we gave up: we compromised just for adoption, for ease of use. That's why we gave up, to try to stay more aligned with Docker at that time, back then. And that's today's reality, but I just share it.
A
Even though we have the timeout value, only certain cases respected it, and I believe that's the case: we still respect it in some cases, but some use cases don't respect it, and that's the bug, because of the use cases behind the thing. So maybe we could change this one: make the default value much longer, and then we can start to respect it.
D
Would it be possible to not have a default and push this timeout strictly to the user? Because, like, I remember from the past, like the Docker days and stuff, default timeouts are extremely hard, because it depends on the machine and how much work it is doing. Maybe one second is quick on a high-compute instance, but one second is long on a Raspberry Pi.
B
But that's really hard. Changing the default, either by enforcing that it's required or by making it longer: could that also be considered a breaking change in some way?
A
Yes, yes. But, underneath, even with the breaking change, it is still not like the way we used to handle certain things.
A
Relatively, you have to think about the risk, right? So, given the risk, from the user's perspective it is a breaking change, so we still have to make sure that it is announced properly, and also it is an API semantics change. But to consider just dropping any default value and simply saying, okay, it's the user's configuration, and now we don't set one: that may end up causing more problems in the users' production.
A
Even if we make a lot of announcements. So this is more from the cluster-health perspective: thinking for the user, before they understand what the change is and why we're making such a semantic change, at least the catch-all handles those use cases for them.
B
Okay, because I do see a scenario where expanding the timeout could become just as problematic as enforcing the timeout. For example, if you have a readiness probe and you relied on short timeouts to make sure traffic to that pod is, you know, quickly failing over, and then you expand that timeout to, let's say, five seconds, and you were relying on that default one second, that could be problematic too.
A
Yeah, that's true. So this is another topic we used to talk about, similar to the restart policy. Since Sergey asks what we'd do from scratch: our initial implementation actually had a separate timeout value and also the delay time, but, for simplicity, because people thought the pod spec was getting a little bit overcomplicated...
A
...we compromised and made them the same, similar to the restart policy. The restart policy, initially, what we had, was also per container, and then, like, per pod as well; but, also for simplicity and ease of use, we compromised and made it per pod. And so, yeah, Andrew, those are really valid use cases, yeah.
C
Should we start with soft failures? Like, instead of failing on timeout, you can just start writing errors indicating that in a future version it will be enforced, and maybe have a feature flag that will enforce timeouts as failures. And then, in future versions, we can safely assume that people already saw these error messages and will start setting the values, and then we can switch this feature flag on by default.
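A minimal sketch of that staged rollout, with a hypothetical gate name (this is the shape of the idea, not an actual kubelet feature gate):

```go
package main

import (
	"context"
	"errors"
	"log"
)

// enforceExecProbeTimeout is a hypothetical feature gate: off by default
// in the first release (warn only), flipped on in a later release.
var enforceExecProbeTimeout = false

func handleProbeErr(err error, prev string) string {
	if errors.Is(err, context.DeadlineExceeded) {
		if !enforceExecProbeTimeout {
			// Stage 1: surface the problem without changing behavior.
			log.Print("warning: exec probe exceeded timeoutSeconds; this will become a failure in a future version")
			return prev
		}
		// Stage 2: the timeout is a real failure.
		return "failure"
	}
	return "success"
}

func main() {
	log.Println(handleProbeErr(context.DeadlineExceeded, "success"))
}
```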
C
So, basically, do we need to develop some rollout plan? If we all agree that we want to go in this direction, we need to understand how we notify users.
A
Yeah, so, since we also have the initial delay value, let's think about all those probes. We also recently added another one, the startup probe. So, do we think the combination of the initial delay plus this exec timeout can cover all those scenarios?
A
Can we take a look at the liveness probe, the startup probe, since we added that new one, and the readiness probe, and also the failure threshold, the initial delay, and the timeout, and think about all the combinations, how the user can configure them? Then we could at least provide, here's what we are thinking from the SIG community, high-level guidance to the user. Can we see that we cover all those things?
B
Sure, yeah, I can try to put together a summary of the current state of this, and maybe I could share it next meeting.
G
Yeah, hey, my name is Billy McFall. I did have a topic, and I don't know if it's for this group or not; it's the first time I've been here, but I'd like to bring it up, and then you can just tell me if it's not.
G
Fine. And I couldn't figure out how to move that bullet left; it looked like it was up under the other topic, and I couldn't figure out how to get it left. So: there was an existing PR from Derek Carr on exposing huge pages through the downward API, and it died off due to inactivity.
G
I was just curious if there was a technical reason. Did it just die off because no one had time to work on it, or was it abandoned because other methods or other thoughts were out there on how to do this? I looked above in this document, and some of the future work did have some huge-pages stuff for 1.18.
G
So
I
was
just
curious.
You
know
if
I
were
to
pick
up
this
pr
and
push
it
back
up
and
try
to
finish
it
out.
Would
it
get
rejected
or
is
there
people?
Is
there
any
interest
in
it.
A
I
believe
that
is
abundant
because
we
believe
there's
the
other
way
to
solve
the
original
reading,
but
I
because
I
just
opened
it
here
and
and
but
I
forgot
the
detail
that
unfortunately,
derek
is
not
here
so
who
is
the
original
author
is
derek
here
already
yeah
he's
not
here
because
yeah
so
so
we
can.
We
can
talk
about
this
one
you
can.
You
can
ask
her
through
the
signal.
This
is
the
right
community
and
right
meeting
to
talk
about
this.
A
Obviously,
and
you
can
reach
through
the
signal,
the
slack
channel
and
ask,
and
and
also
I
can-
I
can
follow
up
this
one
with
derek
directly
after
this
meeting.
So
then
we
so
we
are
share
more
about
why
we
close
that
one.
But
I
remember
we
talked
about
this
one
because
we
think
about
there's
the
other
initial
region.
So
I'm
not
sure
your
use
cases
you
you
want
this
one,
so
maybe
the
we
already
covered
what
you
want
and
or
maybe
we
didn't
so
can
you
share
more.
G
Yeah, I'm basically looking at how to run, like, a DPDK-based application within a container, and most DPDK applications need huge pages. So, when you fire up the application, you need to tell it basically how many huge pages it's allowed to use.
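For reference, requesting huge pages for such a pod already works through the resources stanza; a sketch using the Go types from k8s.io/api/core/v1 and k8s.io/apimachinery (the sizes are illustrative). The gap described next is reading that number back from inside the container.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Huge pages are requested like any other resource; the container,
	// however, has no portable way to read this number back at run time.
	res := corev1.ResourceRequirements{
		Limits: corev1.ResourceList{
			corev1.ResourceName("hugepages-2Mi"): resource.MustParse("256Mi"),
			corev1.ResourceMemory:                resource.MustParse("512Mi"),
			corev1.ResourceCPU:                   resource.MustParse("2"),
		},
	}
	fmt.Printf("%+v\n", res.Limits)
}
```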
G
So, as of now, I've just been hard-coding it and just kind of, like, making sure that I've given it enough, but I would like to be able to pass that value into the pod and allow it to read it, and, as far as I know, there's no way today within the pod to read how many huge pages have been allocated for it.
G
Okay, all right. I mean, if that's a workaround or the preferred way to do it, maybe I should look at that. I just know that this one was out there when I first started looking into it, and so I was just following up on whether there was a reason this one was closed or not.
A
So
billy
it
is
worse
to
brought
this
up
back
to
the
dark
because
the
just
like
the
alexandra,
it
is
the
way
you
can
get,
but
it's
not
the
best
way
easy
way.
So
so
we
give
up
is
also
try
to
avoid
the
unless
a
certificate
over
over
complicated
or
code-based.
A
So,
but
if
it's
needed
it's
open
to
discussing
if
customers
so
so
restart
question
to
the
signal,
the
channel
but
just
yeah.
So
there's
the
work
wrong,
obviously,
and
but
it's
open
to
discussing,
if
we
have
more
demand
like
this,
so.
E
Thanks, Dawn. Actually, related to that, it might be interesting to have a discussion about a more generic topic, like having some downward API which will expose the overall container resource requests and limits in a generic way: not only huge pages but also, for example, extended resources.
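For comparison, the downward API can already inject plain cpu and memory requests and limits via resourceFieldRef; a sketch with the k8s.io/api/core/v1 types (huge pages and extended resources are the cases it does not cover, which is the gap being proposed here; the container name is illustrative):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Expose the container's memory limit to the process as $MEM_LIMIT.
	env := corev1.EnvVar{
		Name: "MEM_LIMIT",
		ValueFrom: &corev1.EnvVarSource{
			ResourceFieldRef: &corev1.ResourceFieldSelector{
				ContainerName: "app",           // which container's resources
				Resource:      "limits.memory", // cpu/memory work; hugepages-* do not
			},
		},
	}
	fmt.Printf("%+v\n", env)
}
```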
A
That's,
that's!
That's
so
right!
Yes!
Yes!
Actually
we
talked
about
this
because
the
this
is
one
of
the
features
in
the
book
people
ask
for
and
and
but
we
never
really
in
the
kubernetes.
We
never
really
may
move
forward
on
that
one.
This
is
a
good
topic.
We
can
discuss
yeah,
and
maybe
you
want
to
propose
this
one
next
week
to
talk
about
this
negative
generic
of
the
downward
api
to
nato
application
notes.
There
are
resources
limit
and
a
request.
G
I have one more question, just while I'm here: does anyone know the code-freeze deadline for 1.20? I've been googling around and I don't see it out there. I would like to try to be able to do this in the next release, and so I was curious what my time frame was for trying to get this in.
H
Usually the schedule is posted in the sig-release repo within the kubernetes organization. I'm googling right now, but the deadlines should be more or less similar to the ones for the release that was around this time last year, which was 1.17. I'll post it in Slack as soon as I find it in the release notes.
A
Okay, looks like that's all for today. And we potentially have two topics for next week. One topic is Andrew coming back to talk about all the use cases on the probes, and the other topic is maybe Alexandra and Michael proposing the generic downward API for the resource requests and limits for each application container.