From YouTube: Kubernetes SIG Apps 20230320
C
Again on my screen, but I have some different networks.
D
You want to share your screen? Because I can share this screen for you as well.
D
Loud and clear. You're muted.
C
Can you see my screen now? Okay, I apologize for the delay; I had a networking issue. Hopefully you can hear me well now. Okay, should I start? I can give a quick overview of this proposal on decoupling taint-based pod eviction from the node lifecycle controller.
C
How do I remove this? It's a little bit weird why it's here. Okay, anyway. So we presented the initial idea a few weeks ago in this meeting, and since then we have put together a KEP and submitted a PR. So today we'd like to give a more detailed presentation of the idea, collect feedback and input, of course, and we'd like to get a review and hopefully approval of this proposal.
C
So, okay, let me start with some background and the motivation. Currently there is a component called the node lifecycle controller, which we believe is owned by SIG Apps. It has basically two basic functionalities. One is to add a node taint if a node is detected unhealthy, for example if you cannot reach the kubelet on the node. The second is another component inside it, called the taint manager.
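[Note: for readers following along, this is a minimal sketch of the taints and the split being described, using the well-known taint constants from k8s.io/api/core/v1; the actual controller wiring is elided.]

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// The node lifecycle controller adds NoExecute taints like these when
	// it detects an unhealthy node (e.g. the kubelet stops reporting).
	notReady := v1.Taint{
		Key:    v1.TaintNodeNotReady, // "node.kubernetes.io/not-ready"
		Effect: v1.TaintEffectNoExecute,
	}
	unreachable := v1.Taint{
		Key:    v1.TaintNodeUnreachable, // "node.kubernetes.io/unreachable"
		Effect: v1.TaintEffectNoExecute,
	}
	// The taint manager is the second piece: it reacts to NoExecute taints
	// by evicting (deleting) pods that do not tolerate them.
	fmt.Println(notReady.ToString(), unreachable.ToString())
}
```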
C
So in this proposal we are proposing to move the taint manager, basically the action part, out of the node lifecycle controller, making them two separate controllers. One controller adds the taint when it detects a node is not healthy, just like what it does today. The second one performs pod deletion for those pods on nodes tainted with the NoExecute effect.
C
The second benefit is, of course, that by doing this the user is allowed to opt out of the default behavior, today's default taint manager, and replace it with any custom node or taint manager of their own. So that's some background, and I'm going to describe the motivation and use cases in more detail. Any questions or comments so far?
C
Okay, if not, I'm moving on. The key motivation, as I mentioned, is that we feel that in a lot of use cases, more advanced use cases for workload management, the current standard default taint manager is not enough.
C
Basically, you can turn the taint manager on or off; by default the taint manager functionality is turned on, but you can turn it off. Now that flag is going to be removed, so basically, from 1.27 you can no longer replace this default behavior with a custom behavior. And we also want to mention, from the functionality perspective, that we think these are two separate functions. One is adding the taints, and it basically focuses on just a subset of the node health things.
C
If a node is not healthy, its networking connection lost, and so on, we add the taints. The other piece is the taint manager, which acts on these taints by deleting the pods, removing the pods; it can act on any kind of node taint, and we think that would also be very useful. Keeping them as two separate functions of course provides more flexibility to manage these two separate functions. So, one of the key comments when we presented at both SIG Node and SIG Apps...
C
...was: why don't you just use tolerations to do this? You can set the toleration to infinity and then prevent the taint manager from removing or deleting the running pods, or customize it with whatever value. So here are some of the arguments we put together for why the toleration mechanism is not flexible or good enough to handle these cases. The first thing is that a toleration is a single, somewhat static value, so it's very hard to set.
C
For example, say I want to make sure my data is checkpointed, saved, or migrated to a safe place before I remove the pods and the associated persistent volumes. It's very hard to set a toleration value, say 10 seconds or 20 seconds, that makes this work. And if I want to dynamically update the toleration value, it has to go through a mutating admission webhook and other machinery, and it would have to be added to every single pod in the cluster.
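[Note: as a concrete illustration of the static value being discussed, this is roughly what such a per-pod toleration looks like when built with the k8s.io/api/core/v1 types; the 20-second figure is only an example of the kind of up-front guess the speaker means.]

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// tolerationSeconds is a single fixed number: the pod survives the
	// NoExecute taint for this long and is then evicted. Choosing a value
	// that is always long enough to checkpoint data is the hard part, and
	// the toleration has to be present on every pod that needs it.
	grace := int64(20)
	spec := v1.PodSpec{
		Tolerations: []v1.Toleration{{
			Key:               v1.TaintNodeUnreachable,
			Operator:          v1.TolerationOpExists,
			Effect:            v1.TaintEffectNoExecute,
			TolerationSeconds: &grace,
		}},
	}
	fmt.Printf("%+v\n", spec.Tolerations)
}
```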
C
Specifically, for these custom tolerations we would have to inject them into all pods using a mutating webhook. This can cause other issues. One is that it could potentially interfere with the admission plugins that apply the default tolerations. The second is that if there are older versions and then upgrades and other things, toleration updates through the webhook can also cause conflicts. And also, who is allowed to do that? An RBAC mechanism would have to be put in place as well.
C
Another reason we found is that with tolerations you cannot leverage that kind of flexibility. Say I want to get some additional information from an external API; basically, it's hard to interact with another controller or a workload or application management system to get additional information. So in our practice we noticed that the toleration itself is, in one way, a somewhat static concept that's hard to set.
C
The
second
is,
if
you
upgrade
update
this
and
the
corrections
values-
and
it
has
to
interact
with
not
this
and
admission
web
hook-
can
have
the
conflicting
and
create
conflicts
with
the
existing
and
ambition,
plugin,
also
the
rbac
mechanism
and
other
web
Hook
and
the
interaction
dependency
thing
and
can
be
in
the
very
and
complex.
So
by
seeing
all
these.
So
then,
this
proposal
is
again
yeah.
We
want
to
separate
these
two
functionality
and
provide
the
responsibility.
So
by
default
the
behavior
will
be
still
the
same
as
the
current
one.
C
The only differences from the current one are these. One is that the taint manager runs as a separate controller. The second is that we can just leverage the kube-controller-manager; it's not a new flag, we leverage the existing kube-controller-manager controllers flag. You would have a controller called taint-manager, and you can turn it on or off; by default it's on and you just use the default taint manager. If you turn it off, then you can run whatever custom taint manager you like.
C
Yeah, okay, so maybe let me move here; it may help to clarify this from the proposal's perspective. If you are not familiar with it, this is the current implementation: there is a single node lifecycle controller with two key functions. On the left side are the functions that add the node taints; the second one is what we call the taint manager. They are internal to one controller, toggled by enable-taint-manager.
C
So this is the proposal: we have two separate controllers, both managed by the kube-controller-manager. The node lifecycle controller, after moving out the taint manager, will work as it is today: watch the node conditions and add the taints. The other, separate one is basically what we moved out; by default it does the similar thing, watching the nodes, and then it can act. And we need no API change for this one, right.
C
In the kube-controller-manager you can enable or disable a controller by specifying the controllers flag. For example, in the future we'd have a taint manager controller there; by default it's on, so the taint manager runs. If you want to turn it off, you just specify the controllers flag with a dash, a minus, before the taint manager name, basically disabling the default one.
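[Note: for concreteness, the existing mechanism referred to here is the kube-controller-manager `--controllers` flag, where `*` enables the default set and a leading minus disables a named controller. Assuming the split-out controller were registered under a name like `taint-manager` (the final name was still open at this point in the discussion), opting out would look roughly like `kube-controller-manager --controllers=*,-taint-manager`.]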
E
No, I mean my question is: did you explore the option of enhancing the tolerations API so that all these cases are served? So that's one question. Then, as a follow-up: the design that you just showed actually looks odd to me, because there is the component that adds the taint and, if you go, can you go to the diagrams?
E
Yeah, so the node lifecycle controller is adding a taint, and that's not being disabled; you don't want to disable that. But then you're disabling the NoExecute manager, which basically means that there is an API field that is not being respected at all.
C
Yeah, so, because of that, I hope it's clear enough here. The question you ask is: can we leverage or enhance the taint-based API? I probably need clarification on what the enhancement proposal would be. But here, I hope the idea is clear: the current one does these two things.
C
So
the
proposal
is
quite
the
same,
but
we
just
said
we
don't
want
it
and
the
cardboard
together
as
a
single
and
a
controller
right,
so
these
pieces
with
steel
and
remains
inside
the
node
cycle
controller.
So
we
assume
the
behavior
exactly
the
same
as
today
and
it
will
write
and
the
the
keep
watching
the
node
healthy
and
if
it's
not
healthy
and
yeah
the
node
and
not
ready
other
thing,
you
have
to
be
exactly
the
same
thing
and
add
the
node
text
next
and
then
nothing
changes
here.
C
The
second
piece
is
the
default
behavior
of
the
tent
manager
and
just
the
deleting
and
all
this
running
ports
on
not
healthy
or
this
with
this
and
no
execute
tent.
Will
also
see
just
these
pieces.
The
only
difference
here
is:
we
want
to
move
it
out
of
this
and
make
it
separate
or
independent
controller
and
manage
by
the
cube
controller
manager,
because
this
way
and
the
week
much
easier
and
the
opt
out
and
this
default
behavior
and
replace
with
customer
one.
C
But the default behavior will be exactly the same as the current one. If a customer does not care about creating some custom behavior, I would guess this should even be transparent to them, and there are no API or other changes here. Even the flag, the current one, is a flag that will be removed in 1.27, right. So then...
F
I think what Aldo is asking is: we have a taint that the node lifecycle controller is adding, and we are not acting upon it in the new model. So why are we doing that? Is that the question you're asking, Aldo?
E
Yes, so that's part of the question. The other part is: if you're disabling all of the NoExecute taint behavior, then basically users won't be able to use the NoExecute taints for a different...
F
...purpose. No, no, that is correct. So the way it works is: it would still go ahead and apply the NoExecute taint. There is a default toleration mechanism which applies the toleration seconds to the pod, and if that is not honored we will go ahead and delete the pod, the pod that is supposed to get deleted if it cannot tolerate the taint. But the problem is the tolerations on the pod; they are the problem.
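[Note: the default described here matches the DefaultTolerationSeconds admission plugin, which injects NoExecute tolerations for the not-ready and unreachable taints into pods that do not set them; a rough sketch of the injected values, where 300 seconds is the plugin's default and clusters can override it.]

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// defaultTolerations sketches what the DefaultTolerationSeconds admission
// plugin adds to pods that do not declare these tolerations themselves.
func defaultTolerations() []v1.Toleration {
	seconds := int64(300)
	return []v1.Toleration{
		{Key: v1.TaintNodeNotReady, Operator: v1.TolerationOpExists,
			Effect: v1.TaintEffectNoExecute, TolerationSeconds: &seconds},
		{Key: v1.TaintNodeUnreachable, Operator: v1.TolerationOpExists,
			Effect: v1.TaintEffectNoExecute, TolerationSeconds: &seconds},
	}
}

func main() { fmt.Println(defaultTolerations()) }
```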
F
For example, we do not know how much time a pod has to tolerate a taint. One of the examples we want to give is: say a workload needs 10 minutes or 20 minutes to be correctly checkpointed, and only after that can we have the toleration expire and the pod be evicted. We are not able to do that now, because we do not know that number yet.
F
What I mean is the number of seconds that needs to be there. So we are not saying that the taint API is not viable, but that the toleration API is not flexible enough; that is what we are trying to say. And to answer the second part of the question, which is what changes would need to be made: I think toleration seconds are good enough, but only if they are backed by some sort of RBAC, because today anyone can add a toleration to the pod, right.
F
So
if
we
have
some
mechanism,
which
actually
limits
who
can
actually
apply
those
tolerations
to
a
part,
I
think
that
would
be
a
good
starting
point
for
us
and
dynamically
changing
the
Toleration
seconds.
That
is
something
that
we
need
to
think
through.
C
Yeah, thanks everybody for the clarification. Again, I want to clarify that the current eviction mechanism will still be there. We are not talking about replacing it; we just said it's not flexible or good enough for the use cases we have here. So of course, even after this change, you can still add the tolerations. But when you ask whether we have considered enhancing the toleration mechanism:
C
If you are referring to dynamic changes, there are the two challenges we mentioned here. One is that it's sometimes very hard to determine good values for the toleration; the other is that dynamically updating toleration values will involve RBAC for some things, and also potential conflicts with other webhooks and other pieces. So that's why. And another thing that you remind me of is: so far you can see these two functions.
C
The taint-adding side covers just a subset of the node taints, the node-lifecycle-controller ones: not ready, networking not reachable. But the taint manager can actually act on any kind of NoExecute taint, any taint. We don't want to change that default behavior, but if we create a custom one, of course, this custom NoExecute taint manager could potentially act on any type of taint if it wants. So let me use one of the use cases we mentioned here; I want to cover it a little bit.
C
The first one is that for a stateful workload like this, we can replace the default taint manager with a custom controller and, depending on whether our data is still required, or whether the data is already safely migrated or saved, we can then evict or delete the pods on the tainted nodes, or not. Just last week I chatted with, I don't know if you have heard of this company, they provide a data platform.
C
It's like a middle layer for cached data, among other things. They also mentioned they are running this data platform on Kubernetes, and one of the key challenges is that some of their data is cached: they get data from the original source, whether it's on S3 or HDFS. So if a node is not healthy, they could easily just restart, deploy on other nodes, and discard all this data.
C
But there is some intermediate data they created, derived from the original data, so they want to safely save this data before they delete the pods on the nodes. I think that's also a good use case.
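[Note: a minimal sketch of the custom eviction gate this use case implies. The dataSafe callback is hypothetical, standing in for a query against the workload's own API; none of these names come from the proposal.]

```go
package sketch

import v1 "k8s.io/api/core/v1"

// hasNoExecuteTaint reports whether any taint on the node carries the
// NoExecute effect, the condition the default taint manager acts on.
func hasNoExecuteTaint(node *v1.Node) bool {
	for _, t := range node.Spec.Taints {
		if t.Effect == v1.TaintEffectNoExecute {
			return true
		}
	}
	return false
}

// shouldEvict is the policy a custom taint manager could apply: unlike a
// fixed tolerationSeconds, it can consult external state and simply keep
// the pod until its data has been checkpointed or migrated.
func shouldEvict(pod *v1.Pod, node *v1.Node, dataSafe func(*v1.Pod) bool) bool {
	if !hasNoExecuteTaint(node) {
		return false
	}
	return dataSafe(pod)
}
```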
So another thing, the second use case, is, as I just mentioned: for all these cases, we can even create a proper centralized pod taint manager that can handle all kinds of pod eviction cases, not just the standard node-lifecycle-controller ones.
C
One thing we noticed is that sometimes pods can get stuck in the Terminating state, and if a pod doesn't terminate cleanly, that can even prevent new pods from starting. For example, if there is a PVC bound to a pod, then unless that pod is deleted, you cannot start a new pod using that PVC. But whether or not you can force this eviction also depends on the desired termination behavior.
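[Note: if a custom taint manager did decide to force the issue, the call could look roughly like this client-go sketch; whether forcing is safe is exactly the policy judgment being described.]

```go
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// forceDelete removes a pod stuck in Terminating so that, for example, a
// replacement pod can re-attach its PVC. A zero grace period skips waiting
// for the kubelet; a custom taint manager must decide when that is safe.
func forceDelete(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	zero := int64(0)
	return cs.CoreV1().Pods(ns).Delete(ctx, name, metav1.DeleteOptions{
		GracePeriodSeconds: &zero,
	})
}
```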
C
So that's another custom taint manager use case where a new mechanism, just replacing the old one, can be useful to make this decision; this is like user story three, also mentioned here. And finally, I think that basically covers the use cases. Since we have another proposal after this and I don't know how much time I have: any further questions or comments so far?
C
Oh, okay. So I talked to one of the SIG Scheduling co-chairs about this component, and I believe it was owned by SIG Scheduling before, but I don't know the exact history; it was then transferred to SIG Apps, so it's now, I believe, owned by SIG Apps. You can correct me if I'm wrong. But the feedback we got is...
C
We also discussed this offline with SIG Scheduling already, so I think we'll get feedback and input from them as well, of course. So far I would say the feedback is positive; they support this proposal, but of course we will ping them and make sure.
F
Yeah, I think the thing there was Klaus, who used to lead SIG Scheduling. I think he wanted to work on this, and at that point in time, if I remember correctly, it had already been moved to SIG Apps, but Klaus had proposed it, so we continued with that. This controller is actually started by the KCM, just from a code standpoint.
F
People wanted it, or Klaus wanted it, to be owned by SIG Apps, but he was leading SIG Scheduling at that point in time, and he authored the code and submitted it for SIG Apps to own it later.
A
Okay. SIG Apps has not been active in developing or maintaining the node lifecycle code base. So I guess here's the thing: all you're asking is, okay...
A
...we want to try to refactor out the NoExecute taint manager to run it in a separate controller with a flag, so I can turn it off and replace it with my own functionality. From an API server perspective, when the manager is doing its passes, the taints should be the same, right; it should show up the same to the end user. But I don't personally fully understand the implications, in terms of semantics, of what it would be like to remove that manager and run it in a separate controller.
F
Yeah, so I'm just trying to understand your question. What you are saying is: say you had a taint, say we disable the NoExecute taint manager, which actually does the deletion of the pods, or...
A
Yeah, I'm saying I don't quite understand the implications of running it in a separate process that will be running concurrently with the node lifecycle controller, because right now it's basically embedded functionality imported from the scheduler package; it runs inside of the controller. So I'm not opposed to moving it; architecturally it sounds like not a bad decision. Packaging it separately so that it can be replaced seems like a reasonable thing to do.
F
I see. So you're saying that the node lifecycle controller then isn't managing the entire lifecycle, because it's not actually deleting the pods.
A
It seems like a reasonable request in terms of: I would like to be able to replace this component, turn it off, and then run my own. I just think it's reasonable, certainly from the perspective of people who are turning up their own custom Kubernetes. I don't know how useful it would be across the major cloud providers, because they're probably not going to disable that in the API machinery by default; they may allow you to turn up clusters like that. So getting this feature into a PaaS might not be a thing, but I'm not opposed to it anyway.
C
Yeah, I think it's a good question. I just want to clarify two things here. One is that even today, before 1.27, you can use the enable-taint-manager flag, set it to false, and disable this behavior; then you can run any custom controller of yours here. The other point is, we also mentioned that even inside Kubernetes there is some custom behavior.
C
If all your nodes are not healthy, the taint manager will not delete the running pods, because that would mean all the workloads get shut down. So with the proposal here, of course, you may just want to stick with one implementation, and hopefully it can be managed by the kube-controller-manager; but I still assume most customers will just want to use the default one.
C
Okay, then the behavior will be the same. But one interesting question I didn't put into this proposal is the naming: after the proposal, the names may need revisiting, because these are not really a node lifecycle controller and a taint manager. I would argue it's probably the opposite: the first is more like a taint manager, it adds or removes the taints on the nodes, and takes no other action.
C
The other one is probably more like an eviction manager, pod eviction management. I didn't put that in the proposal, but it's a good question: should we change the names to make more sense? On the left side, this one is more like a taint manager, adding or removing the taints; the other one is pod eviction management, which is based on the taints: it basically acts on the taints and performs the eviction, deleting the pods. So that's the thing.
D
I wonder, theoretically: from what I saw when I was looking at the code, if I remember correctly, I was helping to remove the enable-taint-manager flag entirely from the code base after it had been deprecated for a long time. Theoretically the code seems separate, and I would be okay with the change. The one thing, as you mentioned, that I'm not sure about is the implications.
D
What will happen if you disable that? That would probably have to be explicitly called out in the enhancement. First, the alternatives that Aldo mentioned: currently in the alternatives section there's only a short mention that going with webhooks is the only other thing you can do. So there should probably also be an option about expanding the API surface, about the ability to modify the tolerations.
D
And secondly, what happens if you disable the taint manager? Are there any implications; what will happen if you run without a taint manager? That's the question.
F
Yeah, Aldo, do you remember the taint-based evictions code? It has been a long time since I looked at it, but I think enable-taint-manager is actually tied to taint-based evictions. Without that, the node lifecycle controller would not use taints, but it would still look at the node conditions and, depending on those node conditions, it would still go ahead and delete pods. That is most likely why it was previously named the node lifecycle controller. I could be wrong, but if I remember correctly, that's how the code path is.
F
To answer your question: I think even if you disable the taint manager, the node lifecycle controller code path would still go ahead, look at the node conditions, and then delete the pods; but it could not use the tainting mechanism.
D
If you look at it, at the end of the day we will still run, at least by default, the same controllers, and the memory footprint will be similar, because the only difference would be that the taint manager goroutine, instead of being run in-process, would be started separately.
C
Of course, if you want to opt out of this default one and replace it by running your own, then it won't run this functionality, and it's the customer's responsibility what implementation they have here. But in most cases, we think, people will most likely keep this functionality plus their custom behavior.
G
Hey, it's me; I work with Yuan and Ravi. I just wanted to bring up another point about the implications of separating it out. One thing I'm not sure was pointed out is that the other effect in the changes that the node lifecycle controller applies is NoSchedule, right.
G
So it's NoExecute and NoSchedule, and I just wanted to point out that for NoSchedule it's a completely separate controller, the scheduler, that acts on the NoSchedule taint; and the NoSchedule taint can be applied by any component: the node lifecycle controller can apply it, or any other component.
G
So this proposal kind of aligns with the way NoSchedule works, in that the node lifecycle controller, based on a certain set of node conditions, applies the NoSchedule taint, and then the scheduler is the controller that's really acting on that NoSchedule aspect.
D
With this proposal, and I haven't read your proposal yet: what we have been talking about is what I would like to see plainly laid out in the proposal, such that anyone who reads it can really see why we would be doing it this way or the other way around. And in similar fashion, the alternatives section should be expanded with the additional alternatives that were requested as well. I think that should be sufficient for us, too, given they're targeting this for 1.28.
C
Okay, so I assume you are going to review it, right; we are going to address the feedback, and hopefully we can move it properly forward. Okay, thank you very much. Any questions, let us know; we are going to discuss it on GitHub, or we can schedule another session to discuss it if needed.
E
Just to finish on this topic: Kenneth, you're right, the ownership of this controller is not well established; there are no tags or anything like that. So I'll send a separate email to, I guess, SIG Apps and also SIG Node, because I know they have a stance on this as well.
A
I wouldn't put Apps at the top of the list here; I would think it would be Node, if not Scheduling, and then Apps is concerned, right. Because anything you do with taints and tolerations is impactful to the workload controllers, for sure; and then we do own PDB and eviction. So yeah, we've got to go figure that one out too.
E
Okay, I'll open an issue and involve a few leads. Okay.
E
So I guess we can move on to the next topic. Is Kevin here as well? No.
E
So I brought up this topic before, but we decided to defer it to 1.28.
E
So basically, this is about Jobs specifically, but it can be extended to any other workload API. The behavior we have today is that when a pod fails, or sorry, when you delete a pod, it is marked as terminating, of course, but pretty much every controller treats it as inactive, as something that needs to be replaced, and the Job controller specifically creates a new pod immediately for that terminating pod. There are some scenarios where this fails, such as TensorFlow, for example.
E
If you create a replacement pod while the other one is still closing up its connection, then you get a failure from TensorFlow. So that's the main issue there.
E
If we add it outside the pod failure policy, the main benefit is that it's semantically transferable to other APIs, such as Deployment. As you can see in the thread, we've heard from some users that they also want to control replacement in other APIs such as Deployment.
E
So that's one big motivation to put it there. And of course, if we do it inside the pod failure policy, then it's going to be very specific to Job. I think that option was considered while we were hoping to do it in 1.27, but since we're doing it in 1.28 or later, I don't think we can consider this option.
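[Note: a purely hypothetical sketch of the standalone-field option, to make the two placements concrete; as noted later in this discussion there was no agreed design at this point, so the name and values here are illustrative only.]

```go
package sketch

// PodReplacementPolicy sketches the standalone-field option: a value on the
// workload spec itself, which could later carry over to other APIs.
// Both the type name and the constants are hypothetical.
type PodReplacementPolicy string

const (
	// TerminatingOrFailed mirrors today's Job behavior: replace a pod as
	// soon as it is terminating or has failed.
	TerminatingOrFailed PodReplacementPolicy = "TerminatingOrFailed"
	// Failed waits until the old pod has fully terminated before creating
	// a replacement (the TensorFlow-style case above). The alternative
	// placement would fold an equivalent knob into Job's pod failure
	// policy instead, keeping it specific to the Job API.
	Failed PodReplacementPolicy = "Failed"
)
```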
E
So really we have these two options for a design, and I wanted to bring it to the attention of the SIG, basically to take feedback, mostly from users that are not batch, to know whether it would be useful for the Deployment API or the DaemonSet API to also have this feature in the future.
A
Well, this is one thing I didn't understand, right. Deployment, by its very nature, doesn't declare that: you get n fungible pods, and there's a variation on n where it can be plus or minus what the user declared and intended, so I don't think it really applies there. StatefulSet already has a mechanism for guaranteeing uniqueness, primarily implemented by the storage layer in reality, but it already exists there. And then DaemonSet is really based on the node, right; DaemonSet isn't based on a scaling metric or a parallelism.
A
It's based on: a node comes in, and if the selectors match up, you get a pod. Now, there's been some desire to be able to run more than one pod belonging to a DaemonSet, basically launching a new pod before the old one goes away, so kind of the reverse of what's happening here. But I don't know. Yeah, go ahead.
D
I mean, if you look at the strategies that both DaemonSet and Deployment implement, they give you pretty sophisticated control over how the rollout happens, whether that's a full recreate or a slow rollout of an application; they already have those capabilities. And also, to what Ken was saying: basically it's hard for me to imagine the applicability of this functionality to the typical workload controllers other than Job; and StatefulSet, as was mentioned, already has it.
D
It also has the ability to control how the rollout happens, and you have the ability to affect it via the appropriate update strategies. Yeah, go ahead.
A
So I don't see how you would port this or a similar field into Deployment or DaemonSet or StatefulSet and make it useful. But I can see putting it into, well, not the pod spec, but putting it into the... what do we call that field?
E
Okay, so going back to your question: yeah, I agree, StatefulSet doesn't have this problem; rather, that's the only way it can work. You can only wait for the pod to finish, because it reuses the same PVC. DaemonSet, I thought, would also replace immediately; does it not? I actually don't know.
A
In my mind it does replace immediately, and then, I think, Clayton added maxSurge to DaemonSet, which allowed you to surge ahead, so you can also now do this thing. The reason we didn't do that originally in DaemonSet is because DaemonSets tend to use scarce resources: think about a load balancer that's using the host network, or something like a CSI driver.
A
So, I don't know, I can't see what the utility for DaemonSet would be either. There are already a bunch of features that we've added for this specific behavior of the workloads that were meant to be managed by those resources, so I don't see a convergence between this feature and those; that was my assessment. But I do see the usefulness for batch workloads, so I would end up putting it into the pod failure policy and probably do it under the existing KEP. But...
E
There's no reason to; I mean, it would save a few paragraphs, but other than that it's pretty much a new feature. So, it's interesting.
A
I'm just thinking a bit more from the perspective of having a holistic design captured in one location: if we're updating this to incorporate new functionality, it might be nice to have them together. But there's not a huge detriment to running it as a separate KEP either, to be honest.
E
Right, the current KEP already has two feature gates. Okay, so I'll consider that, but...
E
Yeah, I mean, right. So the only use case here for Deployment seems to be where you actually want just one pod, but I don't know if there's a way to, like...
H
Yeah, I just want to say that some time ago I created an issue for Deployments which collects a couple of issues that would benefit from this behavior of specifying what to do with the terminating pods. I can explain the issues, but I think we don't have that much time; we could take this into consideration, though.
E
Could you tag it here, in this issue, at least, if you...
E
Yeah, we can use it; it looks like Kevin is taking it over.
E
Okay, yeah, Kevin is taking it over, but we are collaborating closely, so we should be able to communicate that. But yeah, at this point we don't have a design, so I'm pretty much just circulating the idea. In any case, we're going to start with Job, but it sounds like maybe doing it inside the pod failure policy is good enough, or rather the best option.
E
Okay, I think...