From YouTube: Kubernetes SIG Apps 20230220
A: Good morning, good evening, or good afternoon, depending on where you are. Today we have February 20th, and this is another of our bi-weekly SIG Apps calls. My name is Maciej and I'll be your host. I don't have any particular announcements today, maybe just a quick reminder that the freeze is less than a month away; I remember it's something around the 14th. So please put your PRs up for review as soon as possible. Okay, I was about to say... cool, I'm glad to hear that somebody can hear me. So that's pretty much all the announcements that I had, so we can move right away into the main discussion topics. All right, we have a first topic, about creating pod replacements in a Job.
B: Due to the holiday, but I can speak on that. So I would like to, yes, give you a heads up and potentially discuss it now, or you can drop comments on the issue. The issue is that in a Job, in indexed Jobs, the replacement pod, in case a pod is deleted, is created as soon as the pod is deleted. This means that, for a given index, if the graceful termination period of the pod that is deleted is, say, one minute, then for this one minute there are two pods running at the same time that correspond to the same index. This is problematic for some frameworks such as TensorFlow; in particular, it fails with the exception that is shown here. We are thinking about the proper solution, because this is very old behavior of the Job controller, and one can say that maybe it's an issue with indexed Jobs, but indexed Jobs are already GA.
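For context, a minimal sketch of the kind of indexed Job being discussed, written against the k8s.io/api Go types (the name, image, and counts here are made up for illustration). With a one-minute grace period, deleting the pod for an index today causes a replacement pod for the same index to be created while the old pod is still terminating:

```go
package main

import (
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	indexed := batchv1.IndexedCompletion
	completions := int32(4)
	parallelism := int32(4)
	grace := int64(60) // the one-minute graceful termination period from the example

	job := batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "tf-workers"}, // illustrative name
		Spec: batchv1.JobSpec{
			CompletionMode: &indexed,
			Completions:    &completions,
			Parallelism:    &parallelism,
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					TerminationGracePeriodSeconds: &grace,
					RestartPolicy:                 corev1.RestartPolicyNever,
					Containers: []corev1.Container{{
						Name:  "worker",
						Image: "tensorflow/tensorflow", // placeholder image
					}},
				},
			},
		},
	}
	fmt.Printf("%+v\n", job.Spec)
}
```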
B: So it's tricky to modify. One solution that we proposed is to piggyback on the pod failure policy that is currently in beta. The idea is that what we do for pod failure policy is sort of related, because we want to match a failed pod against the configured pod failure policy only once it has actually failed, so that we have the exit codes and the complete status of the pod. So what we do is we wait with counting the pod as failed until it's verified to be in a terminal phase. For pod failure policy we currently still create the replacement pod as soon as the old one is deleted, but the proposal was that we could also wait with creating the replacement pod.

B: So this would work. However, it's up for discussion whether this is the proper solution, or whether we could claim that it's an issue of indexed Jobs and, even though indexed Jobs are GA, we still fix it for indexed Jobs. But then it's unclear whether there are currently users who depend on the behavior of fast pod recreation. So, again, it's up for discussion; if there are some comments here, I will try to answer.
A: I have a question: how would this new mode work, assuming that we only allow replacing fully terminated pods, not running ones? How would that work with a regular Job, where you basically don't know whether you're replacing any particular pod? We're basically ensuring that the number is correct at any given point in time while we're running a Job. I can see how this fits the indexed Job, just like you said, because you have a particular index assigned. In the example that you put in the description, it was about TensorFlow, where it knows: oh, this is index four, and we are replacing index four. So in that particular case it does make sense. I'm having a hard time, and maybe I'm missing something, seeing how that would apply to a regular Job, where basically pod XYZ is replaced with pod ABC and there's no clear information that one is replacing the other.
B: That's a good question. Certainly we have a problem with indexed Jobs, and that's confirmed; we don't know if we have problems with regular Jobs, maybe we don't. How this could work is, you know, the Job controller tries to maintain the number of active pods. When a pod is deleted, we currently stop counting it as active as soon as it has the deletion timestamp. So we could delay that and still count it as active for as long as it's not in a terminal phase, like Succeeded or Failed.
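A rough sketch of the distinction being described, using the k8s.io/api/core/v1 types (the helper names are illustrative only, not the actual job controller code):

```go
package jobsketch

import corev1 "k8s.io/api/core/v1"

// markedForDeletion reports whether the pod carries a deletion timestamp, which
// is the point at which the job controller today stops counting it as active
// and, for an indexed Job, creates the replacement pod for its index.
func markedForDeletion(pod *corev1.Pod) bool {
	return pod.DeletionTimestamp != nil
}

// isTerminal reports whether the pod has reached a terminal phase. The idea
// discussed here is to keep counting a deleted pod as active, and to delay the
// replacement, until this condition holds.
func isTerminal(pod *corev1.Pod) bool {
	return pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed
}
```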
A: The expected number of active pods... I mean, the implementation details and the reasoning behind the request are totally reasonable. I'm just trying to figure out whether it would affect other types of Job or just indexed Jobs, and what the use cases for one or the other would be. It would be like trying to hide a lot of functionality inside of a different feature, because that wasn't the goal of the failure policy: to be able to control how the controller reacts when recreating pods in a Job, whether it happens immediately or only after a pod is fully removed from the cluster and gets replaced. This is kind of similar to what we've been doing with PDBs recently, where the PDB got a policy introduced which allows it to count terminating pods and non-terminating pods differently for the disruptions, so I'm seeing this as something similar. I would probably think about a little bit more exploration, whether we want to do it just for indexed Jobs or for all types, and I would probably try to keep it away from the pod failure policy. Maybe.
C: Great. So I think in the case of non-indexed Jobs, you can see that this is a behavior that has been there from the beginning of the Job API, and I got this comment from Eric Tune that you can see there. In the model that non-indexed Jobs serve, which is kind of like a worker pulling tasks from a queue, it doesn't really make sense to not restart a task as soon as possible, and I think that is probably something some users might depend on by now. I don't think we can say the same about indexed Jobs, but, on the other hand, it's also behavior that is already there as GA. So that's my hesitation about making this global for indexed Jobs. On the other hand, the pod failure policy is about pod failure.
C: The only field today is the rules, but we made it a struct with the potential that it could expand in the future, and this is one possible expansion, because it is a failure mode. It's about deciding what to do when a pod is finished, which is a failure mode in most scenarios, right?
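A sketch of what that expansion could look like, in Go against the k8s.io/api/batch/v1 types. The ReplacementPolicy field and its values are purely hypothetical, made up here only to illustrate how the existing struct, which today only has the Rules field, could grow such a knob:

```go
package sketch

import batchv1 "k8s.io/api/batch/v1"

// PodReplacementPolicy is a hypothetical enum, not part of the Kubernetes API.
type PodReplacementPolicy string

const (
	// ReplaceOnPodDeletion mirrors today's behavior: the replacement pod is
	// created as soon as the old pod gets a deletion timestamp.
	ReplaceOnPodDeletion PodReplacementPolicy = "OnPodDeletion"
	// ReplaceWhenPodTerminal is the discussed alternative: create the
	// replacement only once the old pod reaches the Succeeded or Failed phase.
	ReplaceWhenPodTerminal PodReplacementPolicy = "WhenPodTerminal"
)

// extendedPodFailurePolicy embeds the real batch/v1 struct and adds the
// hypothetical knob next to the existing Rules field.
type extendedPodFailurePolicy struct {
	batchv1.PodFailurePolicy
	ReplacementPolicy *PodReplacementPolicy
}
```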
C: In most scenarios, whatever is deleting the pod is a controller that is issuing some kind of maintenance or preemption or whatnot, so it does fit in there. It also fits in the sense that we depend on the pods finishing in order to take decisions, such as whether there is an exit code or whether there is a condition; in any case, we need the pod to finish before actually taking action. So for me it's rather clear that it fits within the pod failure policy, but my hesitation is about indexed Jobs in general. I think if we had made the decision back then, when we implemented indexed Jobs, to wait for pods to finish before replacing them, it would have made total sense in retrospect. But since we are already at GA behavior, I'm a little bit hesitant. So yeah, any other thoughts on this?
A: We strongly define what it means that a pod failed, and in the majority of cases people expect that "failed" means something like: it doesn't matter how long it's waiting for deletion, it is marked as deleted, or marked for deletion; that's how it is in pretty much all the controllers. It is treated as non-existent, and that's when we're creating the replacement. For the majority of controllers changing that doesn't make sense, and for this one I'm hesitant, because it would basically be changing the definition of a failure, unless you're saying that you would be introducing two modes within the failure policy. Yeah, so one would be for the current behavior, where it matches what happens today, and then there would be the, I don't know, "deleted" or "permanently removed" one, or something that would express that the pod is gone and only then would you take an action. So you would have two failure modes: one would be the soft failure and the other would be the hard failure, or whatever name you come up with.
C: Which is, yeah, that's problematic for some users, but yeah, I mean, that's the process. So if that's the pathway we have to go, then, well, that will be it for the discussion. I think it's a pretty simple thing to add. So I guess the point of bringing this topic to the meeting today is rather whether to treat it as a bug, and that's what we wanted to bring up.
A: As a bug, and not as a knob in the pod failure policy?
B: That's how I understood it, because, I mean, if it's a bug, maybe we don't need a new mode. But as far as I understood, the need for the new mode is that it's hard to attribute this behavior to any of the features discussed. So we would introduce a new API field that introduces this behavior independently. Maybe we can embed it in the pod failure policy, but maybe it requires its own KEP, as in it's an enhancement. That's at least how I took the comment: that we don't make it a bug fix but rather an enhancement, and we use the pod failure policy to embed the new failure mode.
C: Yeah, I think that's the obvious path we can take: we create a new feature, and there's not much argument there; I suppose people need it, so it can be easily added as a new feature. But the question is whether we can treat it as a bug, and I think among the existing controllers there is one example where we actually wait, right, which is StatefulSet, and that's the same case here, right? We're talking about an indexed Job, so it kind of fits in that model. But again, it's GA functionality. If we just talk about indexed Jobs, it's GA functionality, and we don't know if people are already relying on it, so we cannot change it, right? Whereas pod failure policy is an ongoing feature. It's a tricky one, because we could change the behavior in beta and then wait for the next release to add a field to, you know, make the behavior configurable, but again, based on feedback. So I guess those are our three options, right: one, wait for the next release and add the field; two, change the behavior in this release and add a field in the next release if people ask for it; and three, just fix it for indexed Jobs, which is probably the most controversial option.
A: Introducing that would require an entirely new field, or you would be able to set some kind of policy. I'm trying to read the first statement, about marking the pod as failed, where we're clearly stating that we will be acting on failed pods only when they actually reach the terminal phase. I mean, it's definitely a reasonable argument. My biggest worry is that we would basically be hiding the ability to affect how a pod's replacement will be treated inside of the pod failure policy, and not at the Job spec level or something along those lines. I'm not saying that it's good, but I'm also not saying that it's bad; it's just that in some way it feels awkward.
B: And also, if we go this way — I don't know yet how much we are committed to this, but for now, for pod failure policy, we plan at GA plus one to actually unconditionally wait until the pods have really failed before counting them as failed in the Job controller. Then, if we want to be symmetric, this would mean that at pod failure policy GA plus one we also remove the condition. If the argument is symmetry between when we count a pod as failed and when we create the replacement pod, then probably we should also, at GA plus one, eliminate the condition for the replacement.

B: If you go to the graduation criteria for GA in the KEP, you can see that this is what we planned for; at least this is what is currently planned, it's not set in stone. We actually wanted to remove this "if" so as to simplify the Job controller and always wait for pods to be in the terminal phase, regardless of whether a pod failure policy is specified or not. Maybe that's not a good idea, but this is what we currently have planned, and again, if we go with the symmetry argument, this would mean that we also eliminate the "if" for creating the replacement, if I understand correctly what you mean. So that's my thought.
A: ...the Job, as long as it's fine for the terminating ones, because at most you're changing when they are actually counted as failed or succeeded during the replacement. It's a little bit more tricky, because in some cases people might be relying on the fact that we replace failed pods immediately in order to fulfill the current parallelism.
C: I guess, could a potential solution be that another criterion for graduation to GA is that we have the knob as part of the policy? But then the question is which one is the default behavior.
C: One more proposal, just to gain us some time: maybe we can make this new field a criterion for this KEP. There would be a new field to control when to replace the pod, and this would be part of the pod failure policy struct. We can still add it in this release, but since it's a new field it still has to be gated, it still has to be false by default, and only in the next release can we make it the default.
A: Where would you put the field, inside of the pod failure policy? Yes.

A: Yeah, you would basically be breaking the current indexed Job users, because suddenly, after a particular upgrade, the different defaults for indexed and non-indexed Jobs would cause previously working indexed Jobs to have a different behavior.
C: Okay. So, if SIG Apps is happy with the possibility of adding this field, still disabled by default, in 1.27 — am I right, yes, 1.27 — I suppose we still need the buy-in from the release managers and API reviewers.
A: Most likely, yeah. If the API reviewers are okay with it, I could probably support this being part of the pod failure policy. We'll probably need to change the struct description a little bit, because right now it talks about how it influences the backoff limit; let's say the pod failure policy will be a little bit more than that, it will gain extra powers.
A: Okay. Do you want to dig into the next topic, decoupling taint-based eviction from the node lifecycle controller?
E: Can you hear me? Not clear? Okay. Yeah, so hi everyone, I'm new to this group, but I think some of you may know me. I'm a Kubernetes member, and in the past I have mainly been working with SIG Scheduling, made a few contributions there, and this is the first time I've attended this meeting. Okay. So this is a proposal we presented last week at the SIG Node meeting, and then discussed further afterwards, and actually it's probably more related to this group, SIG Apps. Okay.
E: So the idea, hopefully, is quite straightforward. We are proposing to refactor node lifecycle management, in particular two functions. One is the node lifecycle manager that taints a node when the node is not healthy. The second one is called the taint manager, and it acts on the taints, in particular on the NoExecute taints.
E: Why do we want to do this? The main motivation is, you know, use cases, and we believe it's general enough for some complex workloads. The default taint manager just deletes all the running pods when the node is marked with a NoExecute taint, and that's not flexible enough. Of course you can use tolerations, but there are limitations there as well.
E: You have to change the tolerations on the pods, and also the toleration period, the time, for example. In particular for stateful workloads that have local storage on the nodes, it depends on a lot of different conditions, right: what type of NoExecute taint it is, the workload properties, and other factors. So the customer really wants to have a more customizable and flexible taint manager, and to make the decision of when, or whether or not, to evict the pods.
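For reference, a minimal sketch of the static knob the speaker is contrasting this with, written against the k8s.io/api/core/v1 Go types (the 300-second value is just an example): today a pod can only tolerate a NoExecute taint forever, or for a fixed tolerationSeconds chosen up front.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Tolerate the "unreachable" NoExecute taint for five minutes; after that,
	// the built-in taint manager evicts the pod. The decision is fixed in the
	// pod spec; it cannot depend on the workload's state or the cluster's
	// condition at the time the taint appears.
	seconds := int64(300)
	tol := corev1.Toleration{
		Key:               corev1.TaintNodeUnreachable, // "node.kubernetes.io/unreachable"
		Operator:          corev1.TolerationOpExists,
		Effect:            corev1.TaintEffectNoExecute,
		TolerationSeconds: &seconds,
	}
	fmt.Printf("%+v\n", tol)
}
```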
E: So the key idea is that users need to be able to customize the taint manager. The second reason is that, if you look at the implementation, the lifecycle manager actually only applies a certain subset of the NoExecute taints, like not-ready and unreachable, to the node, but users can taint the nodes with any other NoExecute taints, and the taint manager acts on all of the NoExecute taints. So, logically, we also think it would be nice to decouple these two functions.
E: Today you can disable or turn off the default taint manager, but from 1.27, yeah, it will become on by default. So with that change, with the existing implementation, there will be no way to disable the default taint manager and apply your custom one. Okay. So basically that's the idea. We think the implementation should be quite straightforward, and we expect minimal impact, if any, on performance or backward compatibility.
E: So yes, that's pretty much it. So, do we have anything to add, or any comments?

A: I have a question for Aldo: what was the reason behind removing the enable-taint-manager flag?
C: One second, I need one second. Right, so the taint-based evictions feature graduated to GA, and before this, evictions... oh okay, I have it now. So evictions used to happen because of things like node pressure, things like that, and they were based on node conditions.
C: Yeah, the node conditions in the node status. So the scheduler was reacting, or acting, based on those, and taint-based evictions moved that feature towards taints. So there is a separate controller that taints nodes based on the conditions, and then the system simply relies on the taint feature, on taints and tolerations, to delete pods. The reasoning was that, you know, with conditions a pod cannot opt out, whereas with taints and tolerations you can opt out from being evicted. So that's the rationale for why we transitioned to taints.
C: It's the default behavior and the scheduler relies on it, so you will always need the taint manager, and that's why we are removing this flag, or rather we marked it as deprecated, expecting this behavior to always be active, because if it's not, some things that the scheduler does don't work.
E: Yeah, so, Aldo, I understand this. Since you are also familiar with the scheduler, what we are proposing, I think, is basically to make it more like the scheduler: we decouple it and have a separate, manageable controller, the taint manager. Of course, by default it's still the original behavior. So that way I don't think there is any impact on the scheduling side, only on the taint manager side.
E: If you decide to customize it, then you can replace this entire taint manager with a custom one. So far, if we cannot disable this one completely in some way, I think it's very difficult, if not impossible, to customize the taint manager.
C: This flag, enable-taint-manager, actually has two effects, and that's the problem. One effect, of course, is that the taint manager doesn't run, but the other effect is that what actually runs instead is the old behavior, which is the one based on conditions. So basically, what I'm hearing is that you want to disable the taint manager, but you probably also want to disable the behavior that the taint manager was supposed to replace. So in that case you don't want this flag; you want a new flag that...
E: Yeah, yeah, I agree, whichever flag, whatever the new flag will be. So, as for the proposal, of course I want to get everyone's opinion on the new flag, particularly from those of you who are familiar with this. What we are proposing is to refactor the current node lifecycle manager into two separate components. One is the current node lifecycle manager, which watches the nodes and adds the taints to them. The second is to move the taint manager functionality, the action on the node taints, the NoExecute taints, into a separate controller or manager. Then we can have a new flag to control whether or not to use the default one; if customers want to replace it with their own custom one, they can just disable the default one. Yeah, conceptually we think this is quite similar and close to, yeah, the scheduler approach, right.
C: So yes, I think I see no concern, assuming the existing flag still goes away and the old behavior doesn't come back.
A: I mean, the goal would be to have two separate controllers. Currently, I'm guessing, under the hood the taint manager is being run as part of the node lifecycle controller.
E: Yeah. So, for example, with the default one, if the node has any of the NoExecute taints, the default behavior is to evict the pods. For stateful pods in particular, which have local persistent volumes with data on that node, we don't want to do that. On the other hand, it's not that simple in other cases either; it depends on what type of stateful workload it is, different databases and storage.
E: Another thing is that it may check which NoExecute taint it is, and also, depending on the workload, whether it can tolerate losing the data, or the overall cluster condition; it really depends on the application. In our case we have a custom controller and we want to have full control of this: whether or not, and when, to evict the stateful pods. Like I said, in a simple scenario we don't want to just evict a stateful pod that has local storage on the, yeah, bad or tainted nodes. Yeah.
E: Okay, so, to recap: the first point is that these two functions are currently combined, right, while one of them only adds a subset of the taints; and the other is that a custom manager could apply any logic there, so it would be nice to refactor. As for the question of why not just rely on tolerations: could you scroll down to the other section? I put some comments there, some high-level comments in response.
E: Like I said, in our cases the decision is somehow very dynamic and complicated, or complex. Tolerations, like I said, are still not enough, because in our case the taints can be added dynamically, even by an administrator or whoever, for whatever reason. And second, for the decision we have to look at a lot of factors, which can change over time, to make the call. So it's not a static thing where you just add tolerations and always evict on this type of taint, or never evict on that type of taint. Because for stateful workloads we have all different kinds of stateful workloads. So that's why we see the need; that's the additional information from an implementation perspective.
E: If we wanted to just add or modify the taints and tolerations more dynamically, that would require, yeah, a lot of changes, a mutating webhook or something like that; I don't know if that makes sense. Unfortunately I'm not sure I'm able to share the details of the workloads, but we did see this. Basically the eviction logic is dependent on the situation, and we see that there are some real needs for it.
C: This reminds me of another discussion we recently had about preemption, about disabling preemption for pods with low priority. They could have done it based on the node's problems... so I'm still not seeing why there is a reason. But, on the other hand, if it's a flag that you want to just disable for your clusters, I have no say on that, I guess, as long as it's not an option for the pods; I mean, we cannot stop you.
D: I think one of the things that we want to do is to have much more control when it comes to the eviction or deletion of pods. That is what I think he is trying to say, without diving into the details. We can have systems that are much more knowledgeable than Kubernetes; what I mean by that is: before the eviction happens, should some other system be consulted? Those are the types of things that we are looking at. Instead of Kubernetes taking an action and then evicting or deleting right away, we wanted to have some sort of extension mechanism, so that it can talk to an external system that has knowledge of the entire workload, which may be spread across Kubernetes and non-Kubernetes clusters, and we wanted to have a bit more control rather than Kubernetes taking those decisions for us. Does that make sense?
D: That path would be more complicated, when you need to have a webhook which actually does the toleration updates on the pods. And there could also be a workload that is actually distributed across a Kubernetes cluster and a non-Kubernetes cluster, where you would have much less control, and the system may not understand the Kubernetes API as such.
E: The question really is: okay, in how many cases do we really need a custom one, does a customer really want to replace the default, the vanilla taint manager? So yeah, we'll try to argue that if we separate these two, at least it provides this flexibility to the customer. Depending on the use cases, one could argue that, yeah, 80% of the customers may not need to customize or replace this default one.
E: But at least in our cases we do see this need, like we mentioned, for stateful workloads, maybe running on a different platform. We do have a controller to manage these stateful workloads; right now it manipulates the taints and tolerations, yeah, like Aldo mentioned, but we haven't found that satisfactory, you know. So if we separate them, at least, I would argue, it benefits teams, and then the user can replace it, maybe with just a single flag to disable the current one.
E: Okay, we have five more minutes, so are there any further comments or recommendations? Can we just go ahead and submit the KEP, then collect more feedback and see how we're going to proceed?
A: Per se, I don't have any objections to this proposal, although I'd be curious to know, and maybe we can evaluate the options or alternatives in the KEP itself: what would be the value if you, for example, implemented a mutating webhook instead?
A: ...once we start dividing the current controller, the taint manager, then I'm guessing that maybe there's some additional discussion required with SIG Scheduling. Maybe there's something missing in that mechanism which would allow you to inject the additional decisions that you're taking, and I know that the scheduler had a couple of those extension points. I don't know; it would be good to double check with SIG Scheduling about their input before eventually proceeding with that.
E: Okay, great, yeah. The comments are taken, so we'll add more detail about the use cases to the KEP, and also explain why we want this approach over, all right, the taints-and-tolerations route, and also cover the scheduling side of things. We will reach out to the co-chair of SIG Scheduling and collect some feedback, and we'll keep him in the loop, also Aldo and all the other people in the loop. So, one last question: should this KEP be reviewed mainly by this group, SIG Apps? And of course we will...
A: I wouldn't be so strict about who owns it. It looks like this is one of those controllers which is shared between those three groups, so I would want to see input from all of them. SIG Node gave a clear indication that they are pushing towards SIG Apps; I don't want to be the one pushing towards SIG Scheduling, but I do want to hear their input and all the objections before we make the final call to go one way or the other.
A: ...this approach, or whatever other options to consider. Maybe the question is: should that change happen on the scheduler side of things, or are we actually better off with this split? Those would be the questions that I'd be looking at before starting the KEP.
A: Thank you very much; that brings us to the top of the hour. Thank you very much for today, and see you in two weeks. Bye, all right. Thanks.