From YouTube: Kubernetes SIG Node 20230919
Description
SIG Node weekly meeting. Agenda and notes: https://docs.google.com/document/d/1Ne57gvidMEWXR70OxxnRkYquAoMpt56o75oZtg-OeBg/edit#heading=h.adoto8roitwq
GMT20230919-170603_Recording_2560x1440.mp4
A
Hello, hello. Today is September 19, 2023, and this is a SIG Node weekly meeting. Welcome, everybody.
A
We have some agenda today. Let's start with Peter.
B
Hey, can you hear me? Yeah? Cool. So I have a couple of different topics. For the first one I don't have anything to track this yet, but basically I wanted to talk with the forum about the scheme of metrics gathering for CRI stats.
B
First of all, for cgroups like the system slice or, you know, kubepods burstable, whether it's cgroupfs or whatever: I wanted to talk about what everyone's thoughts were on the way to address that gap. Basically, if we move strictly to the CRI for the metrics gathering, then there will be this gap where we won't be reporting metrics on the system slice, where we were previously doing so.
B
My thought on this was to have a mixed mode, basically: have cAdvisor continue to collect the stats for the system slice and, basically, all of the cgroups that the CRI implementations aren't gathering for. But the one question that I had was: do we want to make that customizable? And the second question is: what should we do for the intermediary cgroups within the kubepods slice?
B
So the reason I ask about customizable is that I can imagine a world in which a user cares about, like, the kubepods burstable usage; there used to be metrics being reported on it. If we just ignored the entirety of the kubepods slice, then those metrics would no longer be reported. If we made it customizable — like a flag in the kubelet that says "cAdvisor, collect the metrics for these cgroups" — then that would give the user control over, you know, ignoring the whole kubepods slice if they only care about the pod and container stats, but continuing to collect for, like, the system slice or any additional set of cgroups that they want to define. Right now it's just static; it's just like a static value.
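(A minimal sketch of what such a configurable list could look like — the kubelet configuration field and helper below are hypothetical, not an agreed API:)

    // Hypothetical kubelet config field: cgroup paths (or regexes) that
    // cAdvisor should keep collecting stats for after the CRI migration.
    type KubeletConfigurationSketch struct {
        CAdvisorCgroupAllowlist []string // e.g. ["/system.slice", "/kubepods.slice"]
    }

    // shouldCollect reports whether cAdvisor should gather stats for a
    // given cgroup path under this hypothetical allowlist.
    func shouldCollect(cfg KubeletConfigurationSketch, cgroupPath string) bool {
        for _, allowed := range cfg.CAdvisorCgroupAllowlist {
            if cgroupPath == allowed {
                return true
            }
        }
        return false
    }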
A
I know that metrics are not part of a GA API, but I would assume that many people depend on them, so just removing things is definitely not an option. I like the idea that you're sharing, to conditionally switch, like, a subset of metrics from cAdvisor to CRI stats.
B
Yeah. Basically, throughout this whole process we got some early feedback from Clayton, like two years ago, basically declaring that the metrics that we report are a stable API — even though we don't promise any sort of stability for them — because they've been depended upon for the last 20, you know, 26 releases or something like that. So we basically have to treat them as such; we can't really change them. Which is why it feels like, you know, even though I personally don't find the utility in, like, reporting the usage of the top-level kubepods slice, or the kubepods burstable slice or something, I think that we probably have to maintain that. And I think a compromise between my position of, like, "why would you need that?" and someone else's potential position of, like, "hey, I want that" is having it be customizable. So have a kubelet flag that's like: these are the slices — or, like, these are the regexes of the slices — that cAdvisor should collect, and then a user can be like, "okay, I actually don't care about the system slice at all", or "I don't care about the kubepods burstable slice collection", up and down, as long as, you know, we actually collect the pod and containers. So those are my thoughts.
A
So if we switch some metrics to use one mechanism and others to use a different mechanism, is there any issue with metrics inconsistency? Like, let's say the usage of a high-level cgroup — the burstable one or whatever — may turn out not to be greater than the sum of all the pods inside this slice, right? Because we just collected them at different times. Are there any concerns about that?
B
I think it kind of ends up being inevitable. I mean, again, we never have made any sort of guarantees about the constitution of the metrics. So, theoretically speaking, there's nothing that says we couldn't have the metrics be reported at different frequencies, so that the values don't match up between the different metrics. Like, we don't promise anything about frequency; we don't promise anything about, sort of — yeah.
B
So, how we're even calculating the metrics: we say, like, this is the, you know, CPU usage in seconds or something like that, but we don't really specifically say, like, we promise that this will be the total CPU usage over, you know, the time between when cAdvisor does the collection loop. So for things like that, you know, we haven't really made any promises, so I'm not sure if we should.
B
You know, work really hard to maintain that. I don't know.
A
Seems we need to document it and make sure that the collection intervals match, and then we can get some consistency — it is, like, close to consistent, right?
B
So it sounds like there isn't — at least for me, and again, I haven't heard anyone chiming in — there isn't any opposition to having a configurable list of cgroups that cAdvisor collects from, so that a user can continue having the metrics that they want. Is that correct?
A
Yeah, I think the transition should be handled like that; a configurable transition is fine with me. Like, we need to move forward — we cannot just get stuck on one API — but we need to provide some way to migrate. And once we've migrated everything to CRI stats, maybe we can start thinking of deprecating the old metrics and signaling that if you need those metrics, you may need to run your own cAdvisor, or do something similar to that. Yeah.
A
I mean, I don't want any of that to be blocking, like, migrating to CRI stats — that is a good thing overall. Let's concentrate on making it safe for customers, but move forward. Yeah. All right.
A
Can I interject with another metrics question? Okay, so yeah, I'm here to ask advice, and maybe somebody knows some history about it. So, we recently observed some working set bytes to be zero, and this working set bytes is collected by cAdvisor. It is collected as — I can show exactly how it's calculated. Oh, I'd need to share my screen. Anyway.
A
Okay, so the issue we observed is that working set bytes is zero, and the way we collect working set bytes in cAdvisor is by taking memory usage and subtracting the inactive file pages.
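(For reference, a minimal sketch of that cAdvisor derivation — paraphrased from cAdvisor's cgroup stats handling; the names here are illustrative:)

    // workingSet is memory usage minus inactive file-backed pages,
    // clamped at zero. The clamp is one way a live container can
    // legitimately report a working set of zero.
    func workingSet(usageBytes, totalInactiveFileBytes uint64) uint64 {
        if usageBytes < totalInactiveFileBytes {
            return 0
        }
        return usageBytes - totalInactiveFileBytes
    }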
A
I don't know where this logic is coming from, and I don't know enough about the Linux kernel to understand how it even can be possible. That's why I'm here: to ask if anybody knows and if anybody can give advice. But what it results in is this: metrics-server has some logic that calculates pod usage, and that logic is saying that if the CPU usage of an individual container, or the memory usage of an individual container, is zero, then it will disregard the metrics for the entire pod.
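(A rough sketch of the metrics-server behavior being described — the type and function names here are illustrative, not metrics-server's actual API:)

    // Hypothetical shape of per-container usage as scraped by metrics-server.
    type ContainerUsage struct {
        CPUUsageNanoCores     uint64
        MemoryWorkingSetBytes uint64
    }

    // skipPod reports whether the whole pod's metrics would be dropped
    // because a single container reports zero usage.
    func skipPod(containers []ContainerUsage) bool {
        for _, c := range containers {
            if c.CPUUsageNanoCores == 0 || c.MemoryWorkingSetBytes == 0 {
                return true // one zero-valued container invalidates the pod
            }
        }
        return false
    }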
A
So it's assuming that the container is in some weird state and has inconsistent metrics. I'm challenging this logic in metrics-server, but to challenge the logic completely, I need to have proof that a working set of zero is a real possibility and not an error in how we collect data. So if anybody has any opinions about it, or knows how it can be zero, or why this if-statement was introduced in this cAdvisor logic, please comment on the issue. And it's yet another example where a dependency was taken on how we collect metrics and on what exactly metric consistency is supposed to be.
A
Yeah, metrics-server thinks that zero working set bytes means that the process is terminated, but we observed that it can be zero on some process that sleeps a lot. We just don't have enough access to this environment, so we cannot prove it one way or another. So it still may be some logic in the collection that's breaking things.
A
Thank you. Yeah, if anybody knows something, please let me know. And this is another example where other tools take a dependency on how we collect metrics; that's why we need to be very careful with how we change it.
A
Okay, Peter, sorry for interrupting you. You wanted to talk about the kubelet drop-in config directory?
B
Yeah, no problem. So the other piece that I had was — yeah, Sergey, you had given feedback, potentially as a beta follow-up, that it should be easy to view the effective configuration of the kubelet. I was wondering what you were kind of imagining for that, you know, as we look forward towards beta. As far as I understand, there is, like, a kubelet configz option which would do kind of that.
B
It doesn't necessarily say the source, though. Adding the source would kind of involve, like, updating the kubelet configuration options to all have, like, the source location — to have, like, a duplicated sort of... well, actually, we could use reflect and have a map, basically, for each of the fields to say which file it came from. But yeah, I'm just wondering what you were kind of imagining with this.
A
Yeah, mostly for configz to be reflective of what the kubelet is using; I wasn't sure where configz is taking its data from. If it's taking the resulting configuration, and it's, like, completely a reflection of what the kubelet is working against, then it's totally fine with me, because once you have configz, you can implement anything else on top of it by analyzing it and exposing it. Yeah.
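(For context, the kubelet's /configz debug endpoint can be read through the API server's node proxy; a minimal client-go sketch, assuming a kubeconfig in the default location and a hypothetical node name:)

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Equivalent to: kubectl get --raw "/api/v1/nodes/<node>/proxy/configz"
        raw, err := cs.CoreV1().RESTClient().Get().
            Resource("nodes").Name("my-node"). // hypothetical node name
            SubResource("proxy").Suffix("configz").
            DoRaw(context.TODO())
        if err != nil {
            panic(err)
        }
        fmt.Println(string(raw)) // the kubelet's effective configuration, as JSON
    }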
B
I'll make sure that we have some sort of test for that to double-check it. Given how the implementation is, I would be shocked if that wasn't how it works, but it's a good thing to check, and I'll add that as beta criteria: to have an e2e test to double-check that.
A
There are feature gate checks before, so I don't know what else it doesn't have. So, all right.
G
So, I was just investigating the eviction-hard settings, and I reached this eviction-hard proposal, which mentioned that, when it was proposed, the existing flags — like the image GC high threshold and image GC low threshold — would be deprecated. But until now they have not been deprecated, so I want to confirm this for my customer: are they being deprecated soon, or is there any plan ongoing on the community side for this? I want to hear from the community.
G
Yeah, so last year also I confirmed this — it was answered that this was to be deprecated, but due to some things it was not deprecated. So I just want to confirm: is there any work ongoing, or is there any plan in the near future for this?
A
We have two related KEPs in this space. So the first is — what is it called — like, enhanced garbage collection policies: we will be doing a TTL on images, like, once an image has been around long enough and not used for that period of time, it will be removed.
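(A minimal sketch of that TTL idea — hypothetical names; the eventual proposal exposed a maximum image GC age in the kubelet configuration:)

    import "time"

    // imageEligibleForGC reports whether an image has gone unused longer
    // than the configured TTL and so becomes a garbage-collection candidate.
    func imageEligibleForGC(lastUsed time.Time, maxAge time.Duration, now time.Time) bool {
        return now.Sub(lastUsed) > maxAge
    }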
A
It's not exactly this one, but it's close. And the other one is about the separation of the image filesystem and the node filesystem. You can search for these KEPs and ask either of them whether they want to take this deprecation into scope. I think if the fields weren't deprecated, it means that there is additional work needed to deprecate them. So if you want to dig up what this work was and initiate it — I think we should do it soon, but I don't think anybody is actually working on that right now.
G
Okay, okay — then thank you. I think that's it.
A
Yeah, thank you for bringing it up. We have so much tech debt — I'll take that — and some things that we said we would be doing but never did. So yeah, it's always surprising.
A
Dixie, do you want to talk about PSI metrics?
H
I wanted to talk about the KEP for PSI-based actions on the node, so I'll quickly summarize it. There is a blocker here, though: this KEP is dependent on runc 1.2.0. There is a change that needs to go in, and, you know, help me see if we can get unblocked on this. Right now the timeline for this is open-ended, and it might not align with the implementation in k8s 1.29.
H
So the goal is: there are going to be two phases in this. One would be to have the PSI metrics integrated in the kubelet, and they would be exposed in the metrics endpoint. The second phase would be to utilize these PSI metrics to set node conditions and take actions based on them. I'll quickly go over the design. So today, this is how the kernel stores the PSI metrics.
H
So to integrate this, we would have to add two new data structures — one for the entire raw PSI data, and then PSI stats — and we will expose this in the metrics API. Sergey mentioned that today, to be able to read the PSI metrics from runc, we are using cAdvisor, but if the cAdvisor-less change is implemented before it, we wouldn't need that.
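(A sketch of what those two structures might look like, mirroring the kernel's /proc/pressure file format; these exact Go names are illustrative, not the KEP's final API:)

    // PSIData mirrors one line of a kernel pressure file, e.g.
    // /proc/pressure/cpu:
    //   some avg10=0.12 avg60=0.08 avg300=0.01 total=123456
    type PSIData struct {
        Avg10  float64 // pressure averaged over the last 10 seconds
        Avg60  float64 // pressure averaged over the last 60 seconds
        Avg300 float64 // pressure averaged over the last 300 seconds
        Total  uint64  // cumulative stall time, in microseconds
    }

    // PSIStats groups the "some" and "full" lines for one resource
    // (cpu, memory, or io).
    type PSIStats struct {
        Some PSIData
        Full PSIData
    }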
H
I needed some feedback, mainly for phase two. So, once the PSI metrics are integrated, we want to have some actions based on the PSI, and for that I have recommended that we introduce a new config parameter in the kubelet.
H
It would allow users to set the pressure threshold beyond which they would want to take actions on their nodes. And for that, I wanted to see if we could add two new node — three new node conditions, one for each resource: node CPU contention pressure, memory pressure, and disk pressure. The kernel collects the PSI data, like here, for three different time frames — 10 seconds, 60 seconds, and 300 seconds — so we could use these, specifically the 10-second and 60-second ones, to see...
H
...if we want to take an action. So say, if the 60-second PSI is above the threshold, then maybe we could record an event that would indicate there is high resource pressure. And if the PSI is above the threshold and is still trending higher — by trending higher, we mean we check whether the last 10 seconds' pressure is also higher than the threshold.
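(A sketch of that phase-two decision logic, reusing the PSIData sketch above; the threshold here stands in for the proposed, still-hypothetical kubelet config parameter:)

    // evaluatePressure applies the proposed two-step check: an event when
    // the 60-second average crosses the threshold, and a node condition
    // (plus taint) when the 10-second average shows it still trending high.
    func evaluatePressure(psi PSIData, threshold float64) (recordEvent, setCondition bool) {
        if psi.Avg60 > threshold {
            recordEvent = true
            if psi.Avg10 > threshold {
                setCondition = true
            }
        }
        return recordEvent, setCondition
    }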
H
In that case, we could set the node condition for high resource contention pressure and also taint the node, by making a change in the controller code, so that no new pods are scheduled on this particular node, because the node is under high pressure. And phase two could be guarded behind a feature flag so that we can get enough feedback and perform enough testing to understand what the default threshold should be; then, on the basis of that, we can decide whether we want to take this feature to beta or not. So I plan to send the PR for this KEP today and just wanted to make sure that people around are aware.
H
So we don't want to — that's why it's better to have it in phase two. Phase one would just be integrating the PSI metrics, because that's needed — there are a lot of requests around it — and then, for phase two, we can first have the POC in 1.29, just a POC, and then do enough testing and think about launching it in the next release. Okay, sounds good.
J
Awesome. Yeah, I just want to say I think it's really awesome to start with adding them in the kubelet and maybe just reporting them as conditions, and then later, once we have more information and usage, you know, we can come up with follow-up KEPs to do things like oomd or the other things we've talked about in the past. So I think it builds a good foundation for adding that later.
A
I wonder, how will we be collecting feedback on that? Are there any ideas?
D
Maybe something like descheduler could be a way to play around with it initially, where you can start evicting pods based off of these values before it actually makes its way into the kubelet — basically a place to play around with it. I'm not sure if they would want that as a feature, but you could hack it to make it work and see how it behaves.
K
Hi everyone. I did share this item in the sidecar container working group as well.
K
So people who attended that meeting would have some context, but the general idea is that we want to allocate CPUs at a container level — in the sense that currently, from the pod quality-of-service point of view, only if a pod belongs to the guaranteed quality-of-service class, and a container within it has a request for CPUs as an integer, will the CPU manager allocate exclusive CPUs. But there were use cases highlighted as part of the sidecar group...
K
...conversations that apply to containers in general, and I wanted to kind of discuss that with the group here. What I'm thinking is that maybe we can pursue an explicit way of indicating that a container requires exclusive CPUs, and then, based on a CPU manager scope kubelet flag that we can add to the kubelet, you would have the ability to observe a container independently — kind of independent of its quality of service.
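(One hypothetical way such an explicit per-container indication could look — purely illustrative, since the proposal had not settled on an API shape:)

    import (
        v1 "k8s.io/api/core/v1"
    )

    // wantsExclusiveCPUs decides, per container, whether to grant exclusive
    // CPUs independent of the pod's QoS class. Today the CPU manager only
    // does this for integer CPU requests in Guaranteed pods; the explicit
    // opt-in flag below (a new spec field or annotation) is the part this
    // proposal would add.
    func wantsExclusiveCPUs(c v1.Container, explicitOptIn bool) bool {
        cpu := c.Resources.Requests[v1.ResourceCPU]
        limit := c.Resources.Limits[v1.ResourceCPU]
        integral := cpu.Value()*1000 == cpu.MilliValue() // whole CPUs only
        return explicitOptIn && integral && cpu.Cmp(limit) == 0
    }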
A
Just to echo the sidecar working group: this feature would be very useful for sidecar customers, but it's not necessarily limited to sidecars. Another consideration is that we have more and more plugins in NRI that will do some CPU allocation; but typically, as was also pointed out, the NRI plugins would not use the CPU manager, so it's unlikely there will be any conflict with whatever we implement.
K
The only thing that we need to be careful about is ensuring that the container gets the guarantees that it would expect with exclusive allocation. Because if the pod is no longer a guaranteed pod, you would still require, you know, the container that is requesting exclusive CPUs to have kind of similar guarantees, and in order to do that, we would probably have to make changes to the quality-of-service evaluation logic and take into consideration pods that have this requirement explicitly indicated in the spec. So that's kind of the key...
K
...rather, the summary of this proposal in general. And I think we have been having some discussions trying to figure out, from a use-case perspective as well, whether this makes sense. And that's essentially it.
L
I haven't seen your proposal, so I will take a look at it after my meeting, offline. But I think we can combine it with what Marcus was working on — the explicit declaration of the quality of service per container — and then we can hook the CPU manager to it, if you want.
K
Good, yeah, sure, sounds good. I'll just take a look at, you know, the general idea you have. And, you know, I'm kind of open on how we want to express this resource requirement — I just provided one way of doing it — but, like, hooking up with the quality-of-service class proposal, or any other way, I think I'm open to it, as long as we have a way.
K
And the time frame — whether we think that we should kind of track this fully, or just in the background. But if you want to officially, you know, make sure that this becomes part of the 1.29 cycle, I think we need explicit reviewers and approvers.
A
And since there is an API change, try to find somebody to look at the API early, or even, like, stop by the SIG Architecture meeting.
A
Okay, next topic is the PR review.
I
Hey, it's me, right? Yeah, oh yeah. I just need the review for the PR. So I think I answered the concern — there was a concern there — and yeah... right.
K
For this PR, I think the main concern that I had is that we are changing a very major design decision that was made when the CPU manager was introduced, and that was that we don't want the shared pool to get exhausted.
K
With this proposal, what we are doing is kind of removing the reserved CPUs from what is schedulable, which kind of makes sense; but as a side effect of that, we can end up in a scenario where we have all guaranteed pods running on a node and we are not able to support, you know, best-effort or burstable pods that should be running on the shared pool. So that is kind of the main concern that I had. Yeah.
I
Understood, yeah. So I'll reply in a comment, I think. So the issue you're raising — your concern — is that the shared pool should not be exhausted, right? We need to at least have some CPU for non-guaranteed containers, right? Correct. I think we have a check in the CPU assignment that's kind of blocking that; we can change the "more than" to "more than or equal to", so yeah.
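(A sketch of the kind of guard being discussed — illustrative only; the real check lives in the CPU manager's static policy allocation path:)

    // canTakeExclusive reports whether granting numRequested exclusive
    // CPUs still leaves at least minShared CPUs in the shared pool. The
    // exchange above is about tightening a ">" comparison of this shape
    // to ">=" so the shared pool can never be fully drained.
    func canTakeExclusive(sharedPoolSize, numRequested, minShared int) bool {
        return sharedPoolSize-numRequested >= minShared
    }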
K
But, see, I think the problem with that is that here, in this case, we're assuming that a pod or a container is requesting CPUs. We could have containers that belong to the best-effort quality-of-service class and that are not explicitly requesting CPUs, and we are saying with this proposal that we are okay with those kinds of pods being evicted.
K
In addition to that, there could be pods that previously, you know, based on the existing behavior — or the resource allocation of the node — were admitted: initially, you know, burstable pods were admitted because there were enough resources, but down the line guaranteed pods came, and now those burstable pods are going to be evicted because the resources are no longer available. So that means there's going to be a lot of churn on the node — or on the cluster — and that's a major change of behavior.
I
Okay, so what's our position here? Are we going to try to merge your PR, or what should we do? Like, after merging, we try to fix that? Yeah.
I
Right, right. So how — can we proceed to the next step, to finalize which direction to go? Yeah.
K
I think, in general, from my point of view, we'd have to look at some alternative approaches to solving this problem. At this stage, this proposal seems to have the shortcomings that I've highlighted, but right now, off the top of my head, I don't have a solution that would address the issues itself. I'm not saying that this is an issue that we shouldn't solve, though. So I guess the answer is: we keep thinking about better solutions, making sure that, you know, we solve the problem and don't cause regressions.
I
Okay — no regressions, I see. Yeah. Who can I follow up with to discuss the solution? Is it you, or Sergey, or...?
K
You know, I really appreciate it — I think you've put a lot of work towards this — but I think we just need to make sure that, you know, we are doing the right thing and not causing confusion and problems for the users. Because not everyone cares about exclusive CPU allocation and these kinds of scenarios, but this impacts all the users, and a lot of them use best-effort and burstable pods; impacting them is a very major change. So we need to be careful around that.
A
Okay, with that, we get to the end of the agenda. Yeah, nothing new. Thank you, everybody. If there's anything else, please speak out now. Otherwise — no, I see somebody typing in the document — no, nothing new. Okay, thank you. Thank you, everybody. Have a good rest of your day. Bye.