From YouTube: Kubernetes SIG Node 20230214
Description
SIG Node weekly meeting. Agenda and notes: https://docs.google.com/document/d/1Ne57gvidMEWXR70OxxnRkYquAoMpt56o75oZtg-OeBg/edit#heading=h.adoto8roitwq
GMT20230214-180522_Recording_1920x1050.mp4
A
Hey folks, welcome to the Feb 14, 2023 SIG Node weekly meeting. We have a few topics on the agenda. Let's get started.
A
So the first one is from Sergey. Basically, it's an announcement that we are tracking 23 KEPs this release. That's a lot, and I hope people start working on the implementations so we can get them in soon. We missed a few KEPs because we were not able to get either the reviews or the updates in time, so we can continue working on those KEPs for the next release.
A
Okay, let's move to Peter's topic, because Sergey is going to join late. Peter, do you want to talk about the automatic cgroup driver matching proposal?
B
Hi, yeah. So this issue came up a little while ago and I wanted to talk about it. For some background on the state of things: when the kubelet starts up, you specify a cgroup driver, and then you do the same thing with either containerd or CRI-O. If they mismatch, some weird behavior can happen; for example, CRI-O fails to create any containers, it just refuses to if it detects the mismatch.
B
So I wanted to talk about potentially updating that. It seems like it would be possible to do it without any CRI changes, so I wanted to discuss the approach and see if people felt it was realistic. Basically, the kubelet right now has common patterns for how it specifies the pod cgroup, so theoretically, whenever a RunPodSandbox request comes in, the runtime knows what it's being asked for.
B
Basically,
what
it's
asking,
because
if
it's
a
system
DC
Group
driver,
then
it'll
try
to
put
that
pod
into
a
slice
and
then,
if
it's
C
group
FS,
then
it'll
have
a
path,
not
that
doesn't
specify
slicer
scope.
So
my
thought
was
having
the
CRI
just
automatically
detect.
If
there's
you
know
either
slicer
scope
in
the
name,
with
a
got
basically
to
make
sure
that
it's
like
a
path,
extension
and
then
decide
on
the
C
group
driver
from
there.
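The detection Peter sketches could look roughly like this. This is a minimal illustration, not the actual CRI-O or containerd code; the function name and example paths are hypothetical, assuming the kubelet's usual systemd unit naming:

```python
def detect_cgroup_driver(pod_cgroup_path: str) -> str:
    """Guess the kubelet's cgroup driver from the pod cgroup path
    passed in a RunPodSandbox request.

    With the systemd driver the kubelet names pod cgroups as systemd
    units ending in .slice (or .scope); with the cgroupfs driver it
    sends a plain path with no unit suffix.
    """
    # Look only at the final path component, so a "slice" buried
    # mid-name does not count; it must be a suffix, like a file
    # extension.
    leaf = pod_cgroup_path.rstrip("/").rsplit("/", 1)[-1]
    if leaf.endswith(".slice") or leaf.endswith(".scope"):
        return "systemd"
    return "cgroupfs"
```

A runtime doing this could then compare the guess against its own configured driver before creating the sandbox, instead of relying on the operator to keep two config files in sync.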
B
The overall goal, from CRI-O's perspective at least, is to reduce the number of configuration options that are duplicated with the kubelet. There are a number of them that we had in the past, and we've kind of been in the process of deprecating some of those. The cgroup driver is one that I think we could be auto-detecting, and then we could deprecate the configuration option. So I just wanted to run that by everyone and see if you can think of any potential pitfalls there.
B
I don't think we would need a KEP with that approach; CRI-O would just adopt it, and if containerd wanted to as well, that could be done. So, what do folks think about that?
C
I'm aligned on this overall goal, because I myself have debugged many cases where the cgroup driver mismatches, and it actually doesn't fail: it tries to work, and then the kubelet basically creates a cgroupfs cgroup, and then containerd, or whichever runtime, creates a systemd one. Then they're mismatched and all types of crazy stuff happens. So it's kind of a really bad scenario, and it wastes a lot of debugging time.
B
So in this case, and this actually will spawn a follow-up, but to answer that question directly: in this case the kubelet would be the source of truth. The kubelet would pass down the pod cgroup to the runtime, and the runtime would interpret which cgroup driver to use based on that.
A
Unless the runtime exposes that back as info? But that's the other way around.
B
Right, so there's a follow-up there. Sasha, Marcus, Christian, and I chatted about this before today, because part of this almost ties in with some future work on pulling cgroup management into the runtime, like the pod cgroup management, but it also ties in with the class resources work, because there's going to be some extension to the runtime status.
B
Call
that
the
Cuba
calls
into
the
runtime
asking
for
you
know
whether
the
network
is
up
and
stuff,
and
so
we
could.
We
could
extend
that
call
to
also
pass
up
the
secret
driver
and
I'm
open
to
that
as
a
as
a
way
to
do
it
more
in
a
first
class
way.
My
thought
is,
you
know
right
now
the
C
group
like
we
could
do
this
automatic
detection
and
we
wouldn't
necessarily
need
a
keper,
a
CRI
changes
to
do
that
and
that
you
know
in
the
future.
B
When
the
it's
extended
it
could
be.
You
know
the
responsibility
could
be
switched
or
we
could
just
get
started
on
the
you
know
more
legit
pass
now
so
yeah
that
that
is
a
consideration
is
you
know
who
should
actually
be
the
ultimate
owner
of
the
C
group
manager
field
like?
Should
it
actually
be
the
runtime?
Should
the
runtime
really
be
the
one
in
charge
of
the
c
groups,
and
then
the
cube?
Listen
to
that?
B
That
is
an
option
so
I
you
know
so
that
that
is
kind
of
part
of
the
conversation
as
well.
Yeah.
A
I
think
right
now,
both
the
entities
touch
it.
So
it's
kind
of
no
clear
owner
so
I
I
think
that
this
sound,
but
this
sounds
reasonable
to
me
this
plan,
do
it
doing
it
short
term
and
then
revisiting
it
with
the
runtime
info.
A
Derek
has
a
question
in
the
chat.
What
are
we
looking
for
in
the
path
that
distinguishes
V1
or
V2.
B
So we don't need to use the path to distinguish that, because of how the node's cgroup filesystem will be mounted. I assume we'll just use something like libcontainer's cgroup-version helper functions: if /sys/fs/cgroup is mounted as cgroupfs, then we would just use that.
B
Oh
sorry,
as
as
what
do
they
call
it
C
group
two
FS
or
something.
So
we
would
just
use
that
as
the
determining
factor
for
whether
it's
V2
and
then
otherwise
use
B1,
so
that
that
isn't
necessarily
an
option.
We
pretty
much
Auto
detect
that
now,
because
the
Cuba
doesn't
tell
you
know
the
run
time
to
use
V2
path.
It
just
lets
the
runtime
make
that
determination
and
it
itself
uses
that
makes
that
determination
when
mounting
the
pot
c
groups.
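The v1/v2 auto-detection being described can be approximated by checking what is mounted at /sys/fs/cgroup. The sketch below is hypothetical: it checks for the cgroup2 marker file rather than doing a real statfs(2) magic-number comparison the way libcontainer-style helpers do, but the decision it makes is the same:

```python
import os


def cgroup_version(root: str = "/sys/fs/cgroup") -> int:
    """Return 2 if the unified cgroup2 hierarchy is mounted at `root`,
    otherwise 1.

    A cgroup2 mount exposes a top-level `cgroup.controllers` file;
    a v1 setup mounts a tmpfs of per-controller hierarchies and has
    no such file at the root.
    """
    marker = os.path.join(root, "cgroup.controllers")
    return 2 if os.path.exists(marker) else 1
```

Because both the kubelet and the runtime can run this check independently against the same node state, neither needs to tell the other which cgroup version to use.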
C
I think that makes sense. My only slight concern here is the runtimes: at least, I don't know about the CRI-O side, but on the containerd side they already have configuration for the cgroup driver, right? So would that just be ignored then, if they don't match? For example, say you configure your containerd to be cgroupfs, and then you configure the kubelet to be systemd. Would we just ignore the runtime configuration setting in that case?
B
Yeah,
so
that
is,
that
does
bring
up
a
question,
especially
if
we
we
you,
we
end
up
using
the
runtime
status
and
have
the
runtime
bees
source
of
Truth
for
the
field.
B
Then
it
would
actually
be
kind
of
like
a
horseshoe
thing
where
we'd
first,
we
currently
use
the
field
in
the
runtime
and
then
we
would
go
to
not
using
the
field
in
the
runtime
and
be
based
on
the
cubelets
field,
and
then
we
would
force
you
back
if
we
had
the
runtime
be
the
source
of
two,
so
that
might
be
a
little
bit
awkward.
B
So
if,
if
the
eventual
goal
is
to
have
the
runtime
be
the
one
responsible
for
the
C
group
settings,
then
it
might
actually
make
more
sense
just
to
keep
the
field
in
the
runtime
and
instead
go
the
legit
path
of
setting
up
the
cap
and
actually
having
the
runtime
status,
inform
the
C
group
driver
of
the
cubelet.
That
is
an
option
as
well.
E
My understanding is that this is a little bit of a startup-ordering problem, right? Sometimes the runtime could start first, and both the runtime and the kubelet can configure the path. So who should be the initiator that configures the source and passes it down to the other: the kubelet to the container runtime, with the kubelet starting first and passing it down, or the runtime first, with the kubelet reading from its config? I think maybe there's a question there too.
B
Yeah, well, in kubelet execution now, it waits for the runtime status before it starts doing things, because it's pinging, waiting for the network to be ready, for instance, which is solely the responsibility of the runtime. So there is already that dependency, which is the opposite of the way Docker was, where the kubelet would start Docker with the dockershim. But now it's the other way around, where the kubelet is waiting on the CRI.
B
To
that
degree
it
it
does
make
sense
for
how
to
have
the
Cuba
also
be
waiting
on
the
runtime
for
the
C
group
driver
to
be
ready,
for
instance,
so
that
it
then
can
start
initializing.
C
Yeah, that makes sense. One other comment, and honestly I just want to bring this up: on containerd, the systemd driver is configured, I think, at the runtime class level. You can have a different runtime class, I believe, with potentially different drivers. So I don't know, maybe some people do something like that; I'm not aware of anyone using it like that, but it's something to be aware of.
B
Yeah
and
then
to
that
degree
it
makes
further
sense
to
have
the
cubelet
well
I
guess
so,
do
you
think,
given
that
capability?
Do
you
think
it
makes
sense
to
the
cubelet
to
be
extra
flexible
with
a
c
group
driver
where
like
if,
if
so,
if
we're
living
in
a
world
where
the
run
time
is
the
one
that
tells
the
Cuba?
B
What's
the
group
driver
to
use,
but
the
containerdy
has
the
option
to
use
multiple
PC
group
drivers
should
the
cubelet
instead
of
having
it
like,
be
a
runtime
status
call,
should
it
be
able
to
Multiplex
like
per
runtime
class
or
a
runtime
status
call
versus,
like
you
know,
in
the
runtime
class,
or
should
we
just
not
you
know
is?
Is
that
a
is?
Is
that
a
feature
that
we
feel
like
supporting
in
the
cubelet
crowd?
Can't
do
that
now
it
just
uses
one
like
to
keep
it
does.
H
I,
don't
I,
don't
even
know
how
that
would
work.
I
guess
if
understood
the
comment
on
a
runtime
class
setting,
you
could
say:
use
secret
profess
versus
system
D
pass,
but
the
qubit
is
the
one
creating
the
Pod
C
group,
so
you'd
be
putting
you'd
be
I.
Don't
even
think
that
would
work,
I'm,
I'm
confused.
B
Well,
you
have
to
ask
each
time
basically
or
you
know
the
Cuba
would
have
to
ask
or
remember
the
the
specific
driver.
C
But yeah, that makes sense. It is kind of an edge use case, but I just wanted to call it out as maybe something some folks do; I'm not aware of anything like that. My scenario is, for example, we use gVisor, right? And gVisor has its own systemd cgroup setting that you need to apply. That's done under a different runtime class, and for us, we just match it.
C
If
we
enable
system
D
in
in
kubload
will
also
enable
it
for
all
the
runtimes,
but
maybe
other
folks,
don't
do
that.
I,
don't
know
it's
this
one,
bringing
it
up.
B
Yeah
I'm
fine
without
that,
so
so
it
sounds
like
we
are
leaning
towards
having
the
runtime
be
the
one
to
report
up
and
instead
of
having
runtime
Auto
detect.
Maybe
we
should
just
go
the
legit
path
of,
like
you
know,
having
the
runtime
report
up
through
the
runtime
status,
what
the
C
group
driver
is
going
to
be
and
then
have
the
cubelet
auto
detect
and
just
choose
one
not
have
you
know
multiple
instead
of
yeah,
so
it
then
we
can.
B
You
know,
maybe
pursue
something
like
that
in
a
cap
for
1
28.
At
this
point.
B
Yeah,
so
that
yeah
it
unless
there's
any
other
thoughts
on
that,
that
kind
of
give
me
information
to
move
forward.
So
we
won't
do
this
Auto
detection,
because
it
would
just
involve
deprecating
a
flag
that
they'll
then
immediately
be
introduce,
so
instead
we'll
pursue
this
in
the
future
in
a
more
legit
way
in
the
you
know,
having
the
runtime
report
up
to
the
Cuba.
What's
the
group
to
use.
F
I would like to comment also about another thing, which is also a common misconfiguration between those two. While we're at it: it's not cgroups as such, but we have a CPU set for infrastructure cores. So we have two independent options, one on the runtime side and one on the kubelet side, and we probably could also put that in the same handshake protocol.
A
Right, thanks for the discussion. We can move on to the next topic. So next, Deep and a co-presenter will discuss decoupling the NoExecute taint manager from the node lifecycle controller.
J
You may be familiar with the containerd folks, Michael and the others. I've been working in Kubernetes for more than four years; I used to contribute to the scheduler and the scalability group, and now I'm also interested in contributing to the scheduler and the node side, and hopefully I can get more involved and contribute to this group. Let me see, since I still don't have the... let me share. Is that okay? So.
J
The idea: Deep and I discussed this a lot, based on some use cases and our observations. The current node lifecycle controller basically combines two separate functions into one, right? One is applying and adding node taints to the nodes, but this is only a subset of the taints. The second one is the taint manager: it acts on the NoExecute taints and evictions, but this can be applied to any arbitrary taints.
J
So
this
property
is
not
an
ideal
way
to
implement
it
right.
Why
is
apply
or
add
the
notes?
Another
is
act
on
any
understand
the
tense.
So
what
and
another
thing
I
want
to
also
mention
deep
Lotus
is,
of
course
so
far
you
can
email
table
on
a
default
and
the
tent
manager
right
apply
and
yeah
whatever
and
the
customer
want,
but
from
the
kubernetes
1.27
yeah
and
this
and
the
flag
and
will
be
so
off
so
there
are
no
way
to
disable
it
to
work,
and
so
any
questions
on
this
go
deep.
J
Okay,
so
yeah,
so
that's
the
current
stats
in
the
background,
so
what
we
are
proposing
is
basically
and
the
cardboard
reflect
the
current
node
and
life
cycle
and
the
controller
implementation
right
decouple.
This
function
so
basically
decouple
this
and
the
tint
manager.
This
is,
and
the
evicted
reports
based
on
the
tense
and
from
the
the
current
under
the
knife
cycle
and
the
controller
and
also
make
them
two
separate
functions.
We
can
manage
them
separately.
Of
course,
one
way
and
the
benefits
we
see
here
is
we
can
have
more
structure
flexible,
consistent.
J
We
manage
these
two
in
the
separate
function
right.
Why
is
that
a
certain
set
of
the
node
tense
to
node
and
another
one?
Is
the
act
on
this
tense
to
decide
and
how
to
evict
reports.
So
another
thing
we
want
to
mention
is
yeah
the
use
cases
here
we
have
seen
and
from
the
yeah
some.
The
real
workloads
introduction
is
for
the
complex,
the
workloads
right
and
in
particular,
there's
some
stateful
workloads
and
I
have
the
local
storage
and
the
deputy.
J
They
want
to
have
some
customer
and
more
flexible
policy
to
determine
and
whether
or
not
when
to
evict
ports
based
on
the
tent,
and
currently
we
can
use
the
candidate
or
reach.
We
don't
want
to
just
evict
right
some
state
reports
and
which
have
the
local
storage
on
a
node
when
a
node
and
is
unhealthy
or
networking
of
the
issues,
and
so
far
we
just
rely
on
the
10th
Corporation
to
do
that.
But
region
are
not
flexible
enough
right.
J
The application controller may want to look at the pod conditions or the workload characteristics and their different requirements, and more dynamically decide whether or not to evict pods. From that perspective, in all these cases the users or the application owners want to implement a custom pod eviction controller, a sort of custom taint manager.
J
So
if
we
decouple
these
two
and
we
have
more
on
the
flexibility
and
we
can
disable
the
right,
the
default
of
data
or
eviction
manager
and
apply
and
use
any
of
the
customer
and
controllers,
another
reason
I
mentioned
is
since
1.27
yeah
the
flag
will
be
removed.
So
if
there
are
no
changes-
and
it
should
be
very
hard
to
support
this
Advanced
or
custom
motivation
policies-
okay,
so
that's
basically
the
idea
and
okay,
any
question
comments
or
deep.
You
have
anything
to
add
or.
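The kind of custom policy being described, where the eviction decision considers workload state rather than only the taint, might look roughly like this once the taint manager is replaceable. Everything below, the field names included, is a hypothetical sketch and not an actual Kubernetes API:

```python
NO_EXECUTE = "NoExecute"


def should_evict(pod: dict, node_taints: list) -> bool:
    """Decide whether a custom taint manager evicts `pod`.

    The built-in taint manager evicts on any untolerated NoExecute
    taint; this sketch additionally spares stateful pods with local
    storage, leaving them to an operator or application controller.
    """
    untolerated = [
        t for t in node_taints
        if t["effect"] == NO_EXECUTE
        and t["key"] not in pod.get("tolerations", [])
    ]
    if not untolerated:
        return False  # nothing on the node forces eviction
    if pod.get("uses_local_storage"):
        # Custom policy: keep local-storage pods pinned even when the
        # node is tainted unreachable or not-ready.
        return False
    return True
```

A real controller would also honor tolerationSeconds and watch pod conditions, but the point of the decoupling is exactly that this decision logic becomes swappable.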
J
Okay, so by decoupling, and Deep, correct me if I miss anything, what I'm describing is: so far, in the code base, the implementation is a single controller; the node lifecycle controller manages both parts, including the taint manager. By decoupling, we mean having two completely separate controllers or managers in the code base, and then you can enable or disable them separately.
K
Yeah,
like
exactly
the
the
overall
idea
was
we
do
want
the
node
lifecycle
controllers,
ability
to
react
on
node,
State
and
like
apply
these
things,
but
not
necessarily
enable
the
taint
manager
to
basically
take
action
on
any
arbitrary,
no
executing.
That's
like
beyond
the
job
of
you
know,
life
cycle
manager
today,
so
it's
kind
of
like
aligning
this
model
more
to
like
the
scheduler
model,
where
the
scheduler
acts
as
the
controller
that
takes
care
of
treating
the
no
schedule.
L
And
make
sure
I
understood
the
response,
so
you
would
want
to
have
these
as
the
strength
controllers
that
are
both
enable
in
the
existing
Cube
controller
manager
binary.
But
for
your
use
case
or
folks
who
have
similar
use
cases,
you
would
want
to
be
able
to
disable
one
of
those
controllers
in
the
existing
key
controller
binary
and
then
presumably
have
a
way
of
fulfilling
that
use
case
in
your
environment.
With
your
own
custom
solution,
correct.
J
Yeah
I
hope
to
see
yeah.
The
first
thing
I
could
point
out
since
the
the
current
one
combine
these
two
right
and
you
add
the
node
adjust
this
and
the
unreachable
and
the
amount
and
the
ready
but
functionality
wise
right
and
the
action
is
separate.
So
even
from
the
implementation
or
a
finger
maintenance
perspective,
it
would
be
nice
right
and
have
them
separate.
Like
yeah
I've
been
working
on
a
schedule
on
the
schedule
side
of
scheduler.
Look
at
the
no
schedule,
the
tent
right.
J
It
doesn't
matter
to
wear
an
attention
to
edit
or
when
to
edit
and
it
act
on
those.
So
it
would
be
nice
yeah.
But
of
course
our
use
cases
is
yeah.
We
really
want
to
support
and
more
advanced
customer
and
the
support
the
tent
and
the
eviction
policy.
We
believe,
that's
probably
yeah.
There
will
be
a
general
and
requirements
from
another
use
cases.
E
I'm
not
necessarily
against
this
idea,
I
think
of
but
I
just
want
to
mention
one
thing
so
originally
why
those
tender
it
is
in
the
northern
life
cycle
management
controller.
It
is
just
because
it
is
just
those
pens,
Also
Serve,
not
the
life
cycle,
and
so
so
now
you
want
to
expanded
that
to
have
other
tent.
So
you
want
a
central
place.
Central
controller,
to
apply
the
10th
Beyond
know
the
life
cycle,
which
is
the
good.
E
If
that's
the
cases
then
separate
that,
maybe
it's
the
good
idea,
but
there's
also
I,
want
to
see
that
that
imply
potential
a
little
bit.
Maybe
people
don't
care
some
people,
but
it
may
a
potential,
have
the
slow
down
some
of
the,
because
if
you
have
to
kill
in
a
lot
of
those
tents
and
apply-
and
you
end
up-
have
the
more
like
the
reconsolidation
loop
and
that
potential
could
have
some
a
little
bit
performance
things
so
I'm
not
sure
what
kind
of
pretend
that
you
want
to
apply
be
on
node
life
cycle.
E
On
top
of
this,
because
even
like
the
new
no
execute
to
me,
it
is
not
a
step
life
cycle
at
geosamic
standard.
So
that's
why
I'm
not
sure
what
the
kind
of
tent
can
you
introduce
more
like
what
kind
of
use
cases
we
can
justify
this
one
and
another
thing
is
I,
don't
think
about
the
there's,
the
cheetah
of
always
so
I'm,
not
sure,
necessarily
you
break
more
more
controller
and
more
things,
and
the
more
will
be
good
design
yeah.
E
If
you
have
like
the
really
good
reasons,
because
there's
the
like,
for
example,
you
have
the
more
modules
you
have
the
separate
binary
more
boundaries,
then
you
definitely
increase
of
the
resource.
The
usage
right.
So
that's
why
I
want
to
know
more
because
I
think
about
the
the
all
the
node
tense
you
are
missing
here
still,
it
is.
We
need
to
know
the
state
which
is
not
life.
Sex,
State
and
I
didn't
see
any
other
use
cases
here.
So
can
we
expand
it
that
way
more?
J
Yeah, so I can take that first, and then Deep can add more comments or ideas. Firstly, I agree with the performance concern, of course. Theoretically speaking, if you put everything together, it will potentially have better performance.
J
Better than if you decouple them, that is, but I think it's a trade-off, and that's something we can probably investigate more. And in Kubernetes, I would guess, the coupling between controllers is somewhat weak anyway. So, going back to the use cases: do we want to add additional node taints and then act on them? The first thing I want to say is that even for the use cases where we want a more custom, taint-based eviction.
J
It doesn't necessarily mean we want to add more taints. For example, our stateful controller cannot simply say, okay, evict these pods, even though the node is unreachable or not ready, or even if you add the taint with a toleration time. It may need to check the workload status, depending on which stage the workload has reached so far, the state or the pod conditions, and other things, and then look at the taint, combining all of this.
J
So we want to do that, but if we have both functions mixed together, is there any better way to do this? In particular, since in 1.27 we are going to remove this flag, how are we going to support this type of feature? To do that, we think the decoupling can provide much more advanced extensibility and flexibility. So that's my answer to the question.
J
Okay
and
of
course,
and
the
potential
Fusion
we
could
add
more
and
tense,
and
we
could
argue
these
things
can
be
added
to
this,
though
the
life
cycle,
controller,
right
and,
let's
just
add,
additional
more
but
maybe
I
manually,
right
I-
could
just
use
command
and
I
and
more
and
then
my
node
tent
manager
can
also
act
on
that.
So
we
discover
that
and
I
think,
probably
from
design
or
practice
protect
with
the
yeah
work
pad.
E
So,
basically,
what
you
thought
is
not
some
additional
tent
most
likely.
You
want
to
replace
today's
logical
how
we
apply
those
exist,
intent
to
management,
know
the
life
cycle,
so
you
have
the
customers,
the
basically
things:
okay,
yeah,
so.
J
I
can
see
yeah
I
think
this
adding
one
tense,
probably
yeah,
we
I
don't
deep,
can
add.
Do
we
have
that
kind
of
use
cases?
If
we
have
this,
of
course
we
can
support
that,
but
at
this
moment
yeah
we
think
it's
even
without
adding
additional
tense.
We
just
won't
have
a
more
advanced
than
the
contract
exponents
yeah
to.
K
Besides, the NoExecute taints are pretty unbounded: it could be anything from kubectl that you apply on a node, with the effect of either NoSchedule or NoExecute, but the taint name can be anything, right? Not necessarily unreachable or not-ready. So I think, as far as the code is concerned, they are two completely separate reconciler loops that are completely independent of each other today and act based on their own logic. So the proposal is just that.
E
I hope you file an issue with what you find, because in the past, regarding node state, I don't know how much customization you want, but in the past we did discuss a lot that the state transitions should be more intelligent. I think, if I remember correctly, Vishnu even proposed something many years ago about how to define those transitions from one state to another, but we just never really finished that work. So please file the issue, and then we can look at those issues.
E
So I don't know how much flexibility you want, but we've discussed this before, and if it's not full flexibility, maybe there's some way we can absorb those problems, I don't know. Let's just start by filing the issue and the proposal next. I can't promise that flexibility, but I will also say that today our transitions of the node state from one state to another are not good.
A
Thanks, Steven, for bringing that to the SIG. So next up, Mike Brown has a reminder about the probe granularity announcement. Mike?
M
Yeah, just a reminder: I need somebody to take a look at it. It's been around for a couple of releases. The idea is that you can have more granular probes: instead of saying start at one second and check every second, you could say start at 1.5 seconds or 1.2 seconds, which would help; maybe 500 milliseconds is when it needs to start, and right now the defaults just don't allow for enough granular configuration. We've got some code already written on it.
A
Right, so we had one topic from Sergey. You're back; do you want to talk about the annual report tasks?
D
Hi, sorry for being late today. Yeah, we'll kick off the annual report; I think we have a couple of weeks to fill it out.
D
I
started
looking
into
this,
and
most
of
the
topics
are
quite
straightforward,
and
this
year
it
like
a
steering
committee,
even
helped
us
to
fill
out
some
portions
of
it
already.
So
if
you
have
interest
to
participate
in
that,
please
let
me
know
otherwise.
I
will
be
running
some
cleanup
and
filling
up
the
section
of
the
document,
so
yeah.
A
Great
thanks
Sergey,
so
that
brings
us
to
the
end
of
the
agenda.
Do
folks
have
any
other
topics
they
want
to
discuss.
A
All
right,
thanks
for
joining
that,
see
you
all
next
week,
bye
now.