From YouTube: Kubernetes SIG Apps 20210726
A
So at most the only thing that can happen is SIG Release will ask you to help with solving any particular issues with tests, but when I was looking last time it looked pretty okay, so we are slowly ramping up 1.23. And so the discussion items that you're gonna cover in a bit will be touching 1.23 in a lot of the topics, unless Janet or Ken wanna add something with regards to announcements.
A
Cool, so there's one quick topic, which is the delete-all-terminated-pods flag for KCM.
A
I believe, not sure if we have the author, but basically the idea is: currently we have a flag that is terminated-pod-gc-threshold, which Aldo pointed out as well in the PR, but someone is proposing to have a boolean flag which would say all terminated pods are removed. Which, as I mentioned Aldo pointed out, will break jobs as of this point, and we are, or specifically Aldo is, working on changing how jobs are working. And secondly, there's a question: why, or what's wrong with, using the value one, which is literally having just one terminated pod and would almost mean terminate all, instead of introducing a new flag?
A
I'm not sure if the author of the PR is there. Does anyone have any other thoughts on this PR?
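For reference, the knob being discussed is an existing kube-controller-manager flag; a minimal illustration of the point made above (the boolean flag is only a proposal from the PR and does not exist; its name below is hypothetical):

```shell
# Existing behavior: the pod garbage collector in kube-controller-manager
# starts deleting terminated pods once their count exceeds the threshold
# (default 12500). Setting it very low already approximates "delete all
# terminated pods" without any new flag:
kube-controller-manager --terminated-pod-gc-threshold=1 ...

# The PR proposes a new boolean flag instead; hypothetical name, not merged:
# kube-controller-manager --delete-all-terminated-pods=true ...
```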
B
I mean, it doesn't have to be a huge KEP, but I'd like to, I think, SIG Apps should be involved in this. I think probably Cluster Lifecycle would also want to be involved in this, so I think at least both of our SIGs should be there. And I think the author of the PR doesn't understand that this would be a user-facing change from a behavioral standpoint, because they list it as none on the PR. So I'd like to see a KEP, if nobody minds.
A
And especially how this will affect the current controllers and everything else, and what's the use case: why isn't using the current flag sufficient to achieve removal of terminated pods? Yeah, okay, thanks again. Okay, that moves us to the second topic. Aldo, you had a topic about usage patterns for the Job API.
C
So with the introduction of Indexed Jobs, we kind of have the possibility of enabling training tasks, where every pod has an identifiable name you can use to communicate between them. So I brought this up to the Kubeflow community and they were interested in it.
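For context, an Indexed Job gives each pod a stable completion index; a minimal sketch (the metadata name and command are illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: training-workers    # illustrative name
spec:
  completionMode: Indexed   # each pod gets a stable index 0..completions-1
  completions: 4
  parallelism: 4
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox
        # the index is exposed via the JOB_COMPLETION_INDEX env var and the
        # batch.kubernetes.io/job-completion-index annotation
        command: ["sh", "-c", "echo worker $JOB_COMPLETION_INDEX"]
```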
C
So, well, back to the point: what are the missing features in the Job API that we don't have today? I did a quick survey of where those features are. So the first one is being able to resize jobs, so basically changing the completions field.
C
I think in general this is a breaking change, but I talked to some API reviewers and basically, if the controller today can handle it (if the job controller can handle this sudden change), we should be able to do it, if it doesn't break anything. If not, we have to do it across two versions: first fix the job controller to be able to do it, and then only in the next version can we have the API permit it.
A
But yeah, parallelism should be supported even in the old controller, even before your changes. If I remember... I misspoke, yeah, completions; I'm not sure about that one. Theoretically it should work. My only worry would be that people are expecting this to be immutable. The only place where this could affect how you are running is if someone is expecting that it will never ever change, and suddenly halfway through your job you're changing the field. But that will be only on the consumer side of things, not in the controller.
C
Right, yeah, I guess this change would need... okay, but yes, we could discuss through that anyway. So that's the first change, probably the easiest one, to be honest. Then the next one is a little bit more about job completion, or job retrying. Basically the idea is that, if a pod fails, there are some operations that could be retried, and then there are some cases where we cannot retry, and we can base that on the exit code. They do that in the TensorFlow operator.
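At the time of this meeting the Job API had no such field; the idea discussed here later took shape as the Job podFailurePolicy field, added in subsequent releases. A sketch of that eventual shape, for context (names and exit codes are illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: retry-by-exit-code   # illustrative name
spec:
  backoffLimit: 6
  podFailurePolicy:
    rules:
    # a non-retriable application error: fail the whole job immediately
    - action: FailJob
      onExitCodes:
        containerName: main
        operator: In
        values: [42]
    # infrastructure-caused disruptions don't count against backoffLimit
    - action: Ignore
      onPodConditions:
      - type: DisruptionTarget
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox
        command: ["sh", "-c", "exit 0"]
```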
B
Wait, a question about that. The way all the controllers are dealt with today is that restarts are pretty much handled on the kubelet's side, right? So you have restart Always and restart Never, and there's nothing in between. We're not talking about changing the potential modes for the kubelet to try to restart; we would be actually relaunching a pod from the job controller?
C
Yeah, which we actually do today: even if you have the restart policy as Never, the job controller still recreates the pod if it failed, yeah.
A
Yeah, that would be handled by the job controller. And I actually dug up the RFE that I was discussing with Eric Tune six years ago, where we were talking through an ability to have a set of... in that particular case it was about when a job should be marked as failed because of the exit codes, but that would touch basically on your topic.
A
Yes, I'll link that RFE into the agenda notes so that you can have a look at it. Brian just commented that they have something in Google, for Borg, which does something similar.
A
Although I think that we initially talked about a single exit code, I'm pretty sure that there's nothing stopping us from adding a set or a range of codes or whatever.
A
So, not quite: there is an option where you do not set a completions field, if I remember correctly, which basically means the first completed pod will mark the job completed. It doesn't matter how many pods we run, and we will run as many as the parallelism says, but the first one that gets completed marks the entire job as completed. That's one of the settings that Eric was adding some time after we initially wrote the job controller. I see, would that work for you, or do you need to have...?
B
So we touched on that a little bit. I mean, the Job is actually counting the number of successful completions. So if you want the job to stop after a number of successful... like, that's why I don't understand what the ask is. Are they trying to say what the termination, what the modification in termination criteria would be? I guess I just don't grok it.
C
Yeah, so I'm not fully aware of this need, I don't know all the details, but from my understanding it's like: okay, I hit 80% of the pods completing successfully, then I can declare my job successful and I can cancel the rest of the pods.
A
Right, but you can achieve that through having, I don't know, parallelism 100 and completions at 80.
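The suggestion above maps directly onto the two Job fields; a minimal sketch (the 100/80 numbers come from the discussion, the name is illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: best-effort-batch   # illustrative name
spec:
  parallelism: 100   # run up to 100 pods at once
  completions: 80    # the Job is complete once 80 pods succeed
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: task
        image: busybox
        command: ["sh", "-c", "exit 0"]
```

Leaving completions unset instead gives the other semantics mentioned earlier: the first pod to succeed marks the whole Job complete.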
C
Okay, sounds good, yeah. I can follow up on that one. I don't know all the details, and also I'm not sure how the controller behaves when you have a bigger number for parallelism than for completions.
A
It is to ensure that we are not overwhelming the system with creating, suddenly, I don't know, up to this many pods in parallel. So, for example, if the parallelism is maybe 1000, we're not gonna fire up 1000 pods, but rather we will do it gradually. I can't remember off the top of my head what the batch size is, but it will be done, I don't know, 50 or 100 at a time, yeah, and...
A
There may be a delay between those batches; I barely remember what the mechanism behind the batching is, whether there's some backoff or it's just... yeah, I would have to dig it up, but there is some mechanism like that.
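The gradual ramp-up described above can be sketched as a slow-start loop that doubles the batch size each round. This is an illustrative model only, not the controller's actual code; the real batch constants and error handling live in the job controller:

```python
def slow_start_batches(total_pods, initial_batch=1):
    """Yield pod-creation batch sizes that start small and double,
    so a parallelism of 1000 is not launched all at once (a sketch;
    the real controller also stops doubling when creations fail)."""
    remaining = total_pods
    batch = initial_batch
    while remaining > 0:
        size = min(batch, remaining)
        yield size
        remaining -= size
        batch *= 2

# For 25 pods this yields batches of 1, 2, 4, 8, 10.
print(list(slow_start_batches(25)))
```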
C
Sounds good, thank you. So yeah, to sum up, there are all these. So the big question is: do you think it's advisable to make the Job API fill these gaps, so that the Job API becomes the de facto underlying pod management for any kind of job? Do you share this view?
C
Do you think we can progress towards this goal through the releases? It's probably going to take a bunch of releases.
A
I'm pretty confident about the first two; like I mentioned before, these are in no way breaking changes nor controversial. For the third one, I'm missing the picture, or a particular use case, for the percentage of the number of completions, so I would need to know a few more details about that.
A
For that one, maybe create an issue and discuss, and we can follow up there on this idea, and then eventually we can decide what to do with the third one. But for the first two, I'm fully supporting the idea of changing completions and of exit codes.
B
But if I remember, I think the issue that Brian had was that Borg does this with an exit code and it's kind of silent, and that's kind of opaque to end users.
C
Okay, all right, I'll follow up with all these three separately. I'll try to create three separate issues so we can do this, or reuse the one that is already there.
A
Right. Basically, I would imagine that each of those will be a separate KEP, since each of them is pretty broad. Maybe not the one that just changes completions, but just by the fact that this is changing the API, we'll have to go through a separate KEP. For the other two: the second one, as was said, needs more; it has to be properly described how this works. And for the third one, we don't know the full details yet.
A
Okay, does anyone have any questions for Aldo before we jump to another topic?
A
Okay, thanks very much, Aldo. Ravi, you're up next with consolidating workload controller status.
F
As part of this KEP, what I would like to do is have the status that we already have available for the Deployment controller be available for StatefulSets and DaemonSets. The definitions are perhaps going to change a little bit, but I have put most of the information in the KEP, so if you find time, please have a look at it and give me feedback.
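For reference, the Deployment status being used as the model already reports conditions like these (an abridged example of what the Deployment controller writes today):

```yaml
status:
  replicas: 3
  readyReplicas: 3
  availableReplicas: 3
  updatedReplicas: 3
  conditions:
  - type: Available        # minimum availability is satisfied
    status: "True"
    reason: MinimumReplicasAvailable
  - type: Progressing      # the rollout has settled
    status: "True"
    reason: NewReplicaSetAvailable
```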
A
My one piece of feedback, which I already have from what we talked about before and from quickly skimming through the KEP, is that the end goal, and what I would want to see in the KEP, is that this will apply to all the controllers.
A
The fact that you will be starting with DaemonSets and StatefulSets is an implementation detail that we can outline in the implementation details further down in the KEP, but the end goal, and the overall description of the KEP, I would propose to phrase such that it applies to all the controllers. The fact that Deployment already has it doesn't change the overall goal of the KEP.
B
It may not even be possible to get that information, but the idea that controllers have... like, I've seen this request: to modify the state of the environment in some way, "I'm making progress on it", then "I've actually changed something", and then "the change has propagated and it's settled to a nominal good state", like the declarative intent has been reconciled. That is kind of a common thing that's probably applicable across all controllers, but I don't know if workload controllers are kind of a specific one, right? Like, you know, it really is about rolling out pods.
F
When I started out, I was initially thinking of DaemonSet and StatefulSet, because some of the controllers are not going to be consistent with whatever we have for Deployment. So when I started with the KEP, I had only DaemonSet and Deployment in my head.
A
I mean, I was initially thinking all of the workload controllers, since that's what we own, and ideally I would like to be able to write higher-level logic reusing Kube primitives, but without the necessity to add conditions. Because if I'm creating a DaemonSet, I need to know how to deal with a DaemonSet; when I'm creating a Deployment, there's a different set of expectations that I need to look after when I'm dealing with it.
A
I would like to be able to write a single observer, controller, whatever you want to call it, that can read any of the underlying Kube controllers and figure out whether it's created, whether it's in progress, or whether it completed its desired action. So I was initially thinking at least all the workload controllers, including both batch ones. But now that you're talking about it, I wonder how hard it would be to actually try to prescribe this as a general pattern across the board.
B
One thing I could do is resurface the stuff that Brian and Eric were talking about in terms of trying to do a status v2 that was probably applicable across controllers. But if we're going to do something like that, it would really be expanding the scope of what's proposed here, and it really would be something that we should probably take to SIG Architecture, because it would have to go through them.
A
I mean, I would probably pursue both: there's nothing stopping us from pursuing this inside the workload controllers, and then in parallel we can try to push this further down and across all of the controllers.
F
Yeah, to be clear, I think what you're proposing is to sort of have a status controller for the workloads, and it is responsible for setting just the status of all the workloads: depending on how the underlying pods (technically, whatever that workload is being run as) are doing, depending on the state of those pods, the status of the higher-level controller should be reflected. Is that what you're saying?
A
I mean, maybe not necessarily a status controller, but basically a status v2, or whatever you're going to call it: something that will allow me to have a generic way of figuring out whether any kind of controller is in progress, is created, or has completed its action. It's basically, I think you point it out in your KEP, those three states: it's created, it's in progress, and then it completed.
F
Yeah, I mean, those three states can be achieved even today, in the sense that they can be made consistent across all the controllers, yeah. But the definition perhaps is going to change when we are updating the status for those individual controllers.
F
But if you are just talking about the states as such, yes, we can have them for all the workload controllers, although I did not think about the batch ones earlier.
A
Well, modulo suspension for CronJob, maybe, but then Job will always fit nicely within those three states equally.
F
All right, so I did not think about Job, but I can update the KEP with the Job statuses as well.
A
I mean, basically, I would at least start this KEP as status for all the SIG Apps-owned controllers, and then in parallel we can bring this topic over to SIG Architecture and see what they think, whether this is something we could pursue or recommend, because there are other options when we go to SIG Architecture.
A
There are three possible paths: SIG Architecture says no, and we'll just do it on our own; or SIG Architecture says yes, this is the requirement for all controllers; or yes, this is the recommended way for controllers.
F
Yeah, I think we can pursue both of them in parallel, but the first thing that we need to get within SIG Apps is consensus on the states: are we fine with the states for all the workload controllers, and are the definitions clear enough to be used for all the controllers? Once we get some clarity on that, perhaps we can take this to SIG Architecture.
F
I have a question. I know, Michael, you mentioned that these three states might or might not apply to the CronJob controller, depending on how we view it. So if we are doing this for all controllers, we might want to work out those details of how exactly we would fit the state of a CronJob-like controller, which is doing one thing at sporadic occasions, or in time. So that would be...
F
My only question is: first, do we need to work out those details for CronJobs so that we can fit all SIG Apps controllers, and if yes, what are some of the options we have on the table to include?
A
Yeah, I would imagine the KEP outlines how the unified statuses map to the current statuses of those resources, because it'll take some time both for the implementation to go through all of them and, secondly, for people that are used to the old statuses to translate them to the new ones. And if we write it down, there shouldn't be any doubt about how these are translated.
A
Okey-doke. The next topic: Adam, with the timezone for CronJob.
G
Right, a very exciting topic; I'm sure everybody really wants to think about time zones. So this has been around for a little while; I think we've gone through a few iterations. There's also an external controller, called cronjobber, that implements this outside of the Kubernetes control plane.
G
The idea behind it is obviously pretty simple. It does come with some other complications, I think, for users of the API. As far as I know, it's not a breaking change, because it's a new optional field; for cluster maintainers, though, there is a bit more of a requirement.
G
I guess you could not have a time zone database, and it would just fail to create jobs in time zones, but the expectation would be that cluster maintainers would keep the time zone database up to date, as it does change over time, and that would impose some burden on those maintainers. And yeah, I think that's the gist of it. I want to figure out how to keep moving it forward.
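A sketch of the optional field being proposed (at the time of this meeting it did not exist in the API; a field of this shape was added to CronJob in later releases; the name and schedule below are illustrative):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report   # illustrative name
spec:
  schedule: "0 2 * * *"
  timeZone: "America/New_York"   # IANA name, resolved via the tz database
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: report
            image: busybox
            command: ["sh", "-c", "date"]
```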
G
I think it was decided that we wouldn't do it, primarily because, my understanding at the time is, we wanted to see if there would be any community consensus around doing this, and the advisement was to build an external controller to do the same thing. So that external controller exists now and is in use by a number of folks, and it's largely unchanged; I think it's really the old implementation of the cron controller that was just lifted out and run as a separate process.
A
Where is the time zone database coming from?
G
It would be a system package. I know Linux, so it's going to be tzdata, if you installed that package; on Windows I'm less familiar with how they keep track of that, but I'm sure there's some facility we lean on. In CronJob we lean on an external package, I think it's the robfig cron Go package, and then there's, of course, the time zone API from Go itself. So as long as wherever Go is running has access to that stuff, we should be okay.
G
As far as using the timezone data.
G
So if you then specify one and it tried to look it up, I think the current behavior is to just not schedule that job and report a warning saying: hey, you specified a time zone that doesn't exist or that I don't know about. So it would be that same basic case.
Of course, as a user you need to know that: hey, I created this CronJob and it didn't get scheduled because of this reason. But it should report it through events, I think, so the failure case there should be roughly the same.
G
I mean, conceivably we should be able to make it such that when people define any invalid time zone, whether it's because it doesn't exist or because the time zone database isn't there, it fails in the same way for the user. But that makes it complicated to diagnose, I guess.
A
My only worry is: what happens if the kube-apiserver, for some reason, is running on one system which has a time zone database, but the actual controller logic is running in the kube-controller-manager, which sits on a node that does not have one? Because the API server would gladly accept the CronJob as perfectly valid, but the controller would not be able to properly respond to it, because it wouldn't know how to translate the time zone. These are basically the issues that were mentioned back then, right?
G
Yeah, I mean, there are weird edge cases with time zones independent of even that: you could have a situation where even the controller manager could shift between nodes, the databases could be different between those, and it would also fail in that way. Yeah, I think there's a lot of edge cases where that database needs to be present and at least somewhat consistent. Luckily it doesn't change that often. So, as far as introducing the burden of keeping this up to date...
G
Maintainers of these systems should be keeping system packages up to date anyway, for security reasons, so I don't think there's too much burden. If somebody tells me that my underlying system is way out of date, I'm more concerned about the security implications than I am about the tz data being out of date. So I buy the argument, I understand the edge cases, but for me there are more important things than that.
F
It would be that, yes, there would be edge cases, but when somebody does upgrade their time zone information, there's no way for us to figure out that it has been updated, right? When the time zone data is missing, it's much more obvious that the data is missing and we are failing. But when it is updated, or in other cases where it is outdated...
F
We have no way to figure out that it is outdated, and then the controller is behaving in a wrong way because of this outdated time zone data. That would be much more worrisome, in my opinion.
G
Right, I mean, my argument there would be the same: if the cluster maintainer is not updating tzdata, are they updating other security packages? It seems fairly simple for them to be doing an apt-get update, or whatever it is on their system, in order to keep that up to date, and they should be doing it regularly.
G
Yeah, of course, the edge cases exist, even within... like, if you had just an hour between scheduling something, right, you run into these possible weird edge cases, I think.
G
Even now, in the CronJob controller, time-zone-related edge cases are not handled. You could specify a cron job that runs at 2 a.m. in Eastern time, and it's going to run twice if you have your Kubernetes system set to that time zone, right? So we already don't handle time-zone-related issues, and that's not called out anywhere, as far as I know. I think it would be important that we do call out these issues as we go.
A
All right, but the main idea would be that the time zone is only user-specified; it's only used by users to specify when they expect this to be run. It's not a time zone...
A
I would want to see their input, especially since CronJobs, after GA, are part of conformance, and we need to ensure that any Kube system or any distribution of Kube is still conformant.
A
If we introduce the requirement for time zones and they are okay with this, we can fully push this forward as well, but I remember they had those issues with regards to the time zones and databases. So I would like to hear their opinion on that one.
A
That would be like 11 a.m. Pacific and 2 p.m. Eastern, I think. Yeah, that's about right.
A
Cool, thanks a lot. Does anyone have any questions for Adam?
H
Yeah, that's right, yeah. So basically this is an old KEP, and the feature went beta in 1.21.
H
So now it's going to be two releases in 1.23, and that kind of satisfies the second bullet, which is to stabilize the feature for two releases. The first bullet is where I want your thoughts: whether the feature needs to be extended to pods or not. If we decide to keep it specific to jobs, then there is, you know, kind of no implementation change needed per se. So that's basically what I wanted to hear.
H
I tried to look up the meeting notes for when this feature went beta in January, and there the plan was to push out the pod-specific changes to a later point in time, but that's basically the history I got out of it.
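The timeline above (beta in 1.21, GA targeted for 1.23, jobs now and pods possibly later) matches the TTL-after-finished feature; assuming that is the feature under discussion, the job-side field looks like this (name illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: short-lived-job   # illustrative name
spec:
  ttlSecondsAfterFinished: 300   # delete the Job and its pods 5 minutes after it finishes
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: task
        image: busybox
        command: ["sh", "-c", "exit 0"]
```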
A
As I remember, but I'll refer to Janet to confirm, there was a discussion about pods, and eventually, down the road, we said that we're not gonna pursue it, or eventually, if we decide to, it will be a separate effort.
D
Oh right, let me chime in: I graduated this feature to beta. The idea is that we could pursue this feature for pods independently. We didn't really need to tie the job with pods, because at the end of the day they are two different fields, two different API fields.
D
If there's enough interest for pods, then this could, you know, progress at its own pace, separately from jobs. Again, the fact that these are two different API fields allows us to basically have this separation.
A
Well, there's also the setting on the kube-controller-manager about pod GC, which we talked about in the first point earlier today, which, well, maybe not necessarily, is tied to a particular time when a pod is removed.
A
Yeah, I mean, I think, and I'm reading from the comments, that basically I would graduate to GA as is, and if there are requests for pods for this particular feature, we can always revisit that one and expand to add the pod functionality into it. But as it is currently, my only question would be: did you check whether there were any complaints?
A
The beta-to-GA criteria mentioned two releases without complaints, so that would be my only question: whether there were any complaints or open issues. If there are, it would be good to make sure that these are addressed.
A
If there's nothing, then yeah, I'm okay with pushing this to GA.
H
Yeah, so as of today there are no open issues. Again, 1.22 hasn't gone out officially, so I don't know if somebody will come in with a late issue, so I'm okay to do it closer to, you know, feature freeze for 1.23, just to keep enough time in hand; or I do it now and then I back it out if we see a bunch of issues come in at the last moment.
H
I think the consensus that I'm seeing here is that right now there is no ask for pods, so we keep it separate, and if an ask comes in tomorrow, then, I mean, we can look at the history of this KEP and, you know, we know what to do when making it a separate effort.
E
I just want to give Ken our summary. So we were discussing that, and we can graduate with jobs today, and then, if in the future we decide that we want to extend to pods, we can add a new API field and have the controller understand that API field and handle pods as well. So it can be independent.
H
Just to clarify, sorry, so is...
A
Thank you. Okay, thank you very much, and that will be all for today. Enjoy the rest of your day and see you next time. Thank you all, bye bye.