From YouTube: Kubernetes SIG Scheduling Weekly Meeting for 20210701
C: So, the code review. This is the updated code review; it's based on the previous iteration of this.
C: The design changed a little bit. Just to give a brief overview, let me go over it here. So, the in-place resize... this is a KEP, KEP number 1287, and I think the main goal with this KEP is to... oh, sorry, this is not the KEP. The KEP is...
C: Yeah, this is the KEP. Just to give a summary of what's going on, the main thing that we are doing here is we make the resources mutable.
C: Previously the resources used to be immutable. When you create a pod, it is what it is; the requested resources never change for the lifetime of the pod. The proposal here is to make the resources mutable, and then add a couple of fields: status resourcesAllocated, which is what the kubelet agrees it can fit on the node after you change resources.requests, and status resources, which is the actual resources that have been applied by the container runtime to the pod's containers.
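To make those two fields concrete, here is a rough Go sketch, not the actual Kubernetes source: toy structs mirroring the fields as described in KEP 1287 at the time. Names such as resourcesAllocated follow the proposal and may differ in the merged API.

```go
// Toy mirror of the KEP 1287 status fields discussed above; illustrative only.
package main

import "fmt"

// ResourceList maps a resource name ("cpu", "memory") to a quantity string.
type ResourceList map[string]string

// ContainerStatus sketches the two new status fields.
type ContainerStatus struct {
	// ResourcesAllocated: what the kubelet agreed it can fit on the node
	// after resources.requests was mutated in the spec.
	ResourcesAllocated ResourceList
	// Resources: what the container runtime has actually applied.
	Resources ResourceList
}

func main() {
	st := ContainerStatus{
		ResourcesAllocated: ResourceList{"cpu": "600m"}, // kubelet accepted the resize
		Resources:          ResourceList{"cpu": "500m"}, // runtime has not applied it yet
	}
	fmt.Printf("allocated=%v applied=%v\n", st.ResourcesAllocated, st.Resources)
}
```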
C: And then there is the resize field, which gives you a summary status of what's going on with your resize request. It is empty if no resize has been requested.
C: But if a resize has been requested, then it has four different values. InProgress is what we most commonly see: the kubelet accepts the resize and then updates the pod status. Say, for example, you set resources.requests to 500 milli-CPU when you create the pod and then change it to 600; this will go from 500 to 600...
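As an illustration of the mutation being described, here is a hedged client-go sketch that patches a running pod's CPU request from 500m to 600m. The namespace, pod name, and container name ("default", "my-pod", "app") are placeholders, and the API server only accepts such a patch once this feature makes resources mutable.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Strategic-merge patch bumping the named container's CPU request 500m -> 600m.
	patch := []byte(`{"spec":{"containers":[{"name":"app","resources":{"requests":{"cpu":"600m"}}}]}}`)
	if _, err := cs.CoreV1().Pods("default").Patch(context.TODO(), "my-pod",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err) // rejected while pod resources are still immutable
	}
	fmt.Println("resize requested")
}
```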
C: ...if the kubelet has the node allocatable resources to give that to you. And eventually, when the CRI applies that to the running container, the resources in the status get updated. So that's a short overview of what this feature really does. The main thing is that all of this happens without restarting the container, which helps applications that are not very tolerant of restarts continue to work as their resource needs grow or shrink.
D: Yeah, I have a question. These containers specify the desired state, right? It can support both CPU and memory? (C: Yes, yes.) And then this is what gets allocated eventually. How about the actual resource usage? It could be much less at different times. Even if you say you need to change from 500 to 600, the actual usage could be 100. So does this resources field show the actual resource usage?
C: No, it does not. The actual resource usage is gathered through statistics, usually the metrics API; tools like cAdvisor and Prometheus are used to collect that. The driver, the primary consumer for this feature, is VPA: the Vertical Pod Autoscaler, which is maintained by a team based out of Poland.
C: The Vertical Pod Autoscaler is a tool that you can use to monitor your pods' actual resource usage, and based on that actual usage it makes recommendations to Kubernetes, saying: okay, you have requested maybe two CPUs, but you're only using 500 milli-CPU; how about we resize you to 600 or 700?
C: They make the recommendations, and today, when they make a recommendation, you have no choice but to restart the pod. After this feature, they will be able to call the API to update resources.requests (and limits in the future; today they're only updating the requests) and say: okay, let's resize you to 600, or increase or decrease your CPU based on what your actual usage is.
C: So in the ideal use case, what you are given should never fall below what is actually being used. If it does, then for CPU you will run suboptimally, and for memory, if you're using more than the new limit, the UpdateContainerResources CRI API call will fail.
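For reference, the CRI call mentioned here looks roughly like the following minimal sketch (not kubelet code): it invokes UpdateContainerResources over a containerd socket using the v1alpha2 CRI API of that era. The socket path and container ID are placeholders.

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	// Ask the runtime to resize the running container to 600m CPU and a
	// 256Mi memory limit without restarting it.
	_, err = client.UpdateContainerResources(context.Background(),
		&runtimeapi.UpdateContainerResourcesRequest{
			ContainerId: "abc123", // placeholder container ID
			Linux: &runtimeapi.LinuxContainerResources{
				CpuPeriod:          100000, // 100ms scheduling period
				CpuQuota:           60000,  // 600m CPU
				MemoryLimitInBytes: 256 << 20,
			},
		})
	if err != nil {
		log.Fatal(err) // e.g. a memory limit below current usage fails here
	}
}
```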
C: Resize is just a summary status. Now, these fields we are familiar with, right: you have requests and limits here. Your request is what you're asking the kubelet, or the node, to reserve for you, and the limit is the cap if you exceed it. For Guaranteed pods the two are the same, but for Burstable pods you can say: okay, I need a minimum of 100m of CPU but a maximum of 200.
C: And then, if you exceed 200m of CPU, you're capped; if you exceed 200 MB of memory, you get killed; and your guaranteed minimum is 100 MB of memory. That is requests and limits. resourcesAllocated tracks the resource requests that the kubelet is able to give you. So if the node has, say, four CPUs and you're requesting 500 milli-CPU, the kubelet says: yes, I can give you that. And initially the scheduler does that, right; it sees what the node has, and based on that...
C: ...it ranks the nodes based on the fit and then picks one node. So it doesn't pick a node that does not have the capability to give it 500 milli-CPU; it will pick one that has the capacity. But once you resize, the scheduler is not in the picture in this case; it's an observer. It's a transaction purely between the API server and the kubelet, though the scheduler sees it through the updates.
C: There is a race condition. The initial solution that we came up with had the scheduler approving the resize first, and there was a lot of disagreement with that in SIG Node, I think from Derek. They wanted this to be purely an API server to kubelet transaction rather than one involving the scheduler, and the justification was: we do support multiple schedulers today, and the node is the final entity that decides what is admissible.
C: So we already have the potential for that race condition. This should be a rare event, so let's not make it complicated by having the scheduler approve first; let's just deal between the API server and the kubelet. And if, on some occasion, the scheduler schedules a pod before it sees that the resize has taken effect, and the pod reaches the kubelet after the resize has taken effect, the pod will just get rejected, and then it can be rescheduled to another node; the controller will create a new one.
C: Yeah, that was the design that we settled on. Initially we did go through the scheduler, but that seemed fairly complex, in the sense that there was an appearance of state being kept across the scheduler and the kubelet, and the state was being maintained in the status field. That was not okay, because status is supposed to be something that you can regenerate by looking at observations.
C: Pod phase is an example of state that has crept into a status field, and it has been causing problems, so the API conventions now require that you don't maintain any kind of state there, and this was turning into a state machine. So they did not want that, and this is simpler. And the scheduler in this case, with the change that I have right now, plays more of an assisting role.
C: So we can look at the code.
D: We can follow up offline too, yeah. So my question: if the scheduler is not aware of this resize, then later, when the scheduler gets another pod request, it doesn't know that the previous pod's size has been changed, so it could fail to schedule the next pod, right? That could happen if a lot of resizing is going on.
C: The scheduler knows, yeah. The scheduler receives all pod updates; it is watching for pods, and whenever a pod is updated it updates its pod cache. So when, let's say, resources.requests goes from 500 milli-CPU to 600, and the kubelet says yes, I have the capability, I'll update it, then the kubelet updates resourcesAllocated in the pod status.
A: Okay, I have a couple of questions, especially about a resize that makes the requested size bigger. Is there any timing problem during the resizing? For example, you request a resize from one gigabyte to two gigabytes, and during the resizing, because the scheduler is not yet aware of the resize and the new resourcesAllocated, the scheduler puts another pod on the same node. Can that conflict with the resizing decision? Can that happen?
C: Yes. Two schedulers can assign nodes in parallel, and if this race condition occurs, the kubelet resolves it by rejecting the new pod that it has no room for. This is the compromise we made. Our initial design was that the scheduler approves the resize, so the kubelet doesn't have to worry about it; the kubelet should have the capacity. But even in that case, if you have multiple schedulers, the race condition exists, and it is there today as well.
C: So they decided it's simpler to have the node just reject the new pod that's coming in, and this should be a rare event. If it does occur, then that pod is rejected, and what happens to that pod is of course not in the scope of this KEP, but typically pods are created by some controller. The controller will see that it doesn't have the requisite number of pods running, it will create a new instance, and hopefully that gets scheduled to a different node.
A: Yeah, this sounds like a concern to me, because this seems to bring a problem that was unique to multi-scheduler setups into the current single-scheduler scenario. And, a little apart from this topic, the worst thing is that when you schedule a pod, the scheduler assigns a node name, right? So for the new pod, as you mentioned here, if it conflicts with the resource allocation, the kubelet rejects it, but that doesn't mean the kubelet resets its spec.nodeName.
A: So that means this pod will be pending forever, because it's assumed to be allocated to that node; its nodeName has been set, right? The kubelet rejected it, but it's still pending there, waiting for more resources to be allocated for it, because in this scenario the scheduler is out of the picture.
A: The pod will very likely stay pending there, because the resource utilization is already pretty high, and the node may or may not get the chance to release real resources in the near future. So, a little bit off topic, but what I want to pursue, whether for multiple-scheduler support or for in-place scaling support:
A: Maybe the idea is for the kubelet, when it rejects the pod, to also reset the nodeName to empty, so that the pod can go back to the scheduling cycle and find another node, instead of being bypassed because its nodeName has been set. But that's a little bit off topic. My concern is that now both the single default scheduler and multi-scheduler setups have this potential racing issue.
C: Yes, I think that's a good suggestion. We could potentially look at that as an enhancement in the future, because today, resetting the node name essentially means a binding sub-resource that clears the nodeName, which is what the scheduler uses. I tried to do something like that initially; I proposed it, but the security folks were not happy with it at all. What I actually had in mind was: okay, we could do this.
C: We could reset that pod to take care of the race condition. And at that time, we had the kubelet writing resourcesAllocated to the spec as well.
C: Instead of checkpointing the resources in the status, it was part of the spec, because that way, if the kubelet dies and comes back up, or the node goes bad, the state is still there in the source of truth, which is the API server. But in the end it came down to: this would be nice to have, it's not a must.
A: Before, we didn't recommend multiple schedulers precisely because of this problem: multiple schedulers can compete, racing for the limited resources on a particular node. And if a conflict happens, it's a big problem; in practice it sometimes takes the DevOps team to manually fix this kind of problem. So that is pretty... yeah, yeah.
C: That is an idea, yeah. I think in the SIG Node discussions we came up with potential enhancements where the scheduler can become smarter about this by looking at not just the resources allocated but the actual usage, and weighing that in when steering pods towards nodes. That was one of the things that came up. The other thing that came up was having the scheduler see when a pod gets rejected.
C: Then the scheduler can, instead of, you know, letting the pod die, reset the binding and then reschedule it. So, yeah.
C: Just CPU, yeah. So in the scheduler, I think we can see that it does a max. These are just debug prints that I have put in, just to show that the scheduler is aware.
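A toy illustration of the max being referred to, not the real scheduler code: while a resize is in flight, the scheduler accounts for the larger of the possibly-just-mutated spec requests and the checkpointed allocated value, so it never under-counts a pod's footprint on the node.

```go
package main

import "fmt"

// maxMilli returns the larger of two milli-CPU quantities.
func maxMilli(a, b int64) int64 {
	if a > b {
		return a
	}
	return b
}

func main() {
	specRequest := int64(600) // resources.requests after the resize
	allocated := int64(500)   // resourcesAllocated, kubelet has not accepted yet
	fmt.Printf("scheduler reserves %dm on the node\n", maxMilli(specRequest, allocated))
}
```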
C: Pretty much, pretty much, yeah. So there are two parts to it. One is adjusting the pod: cgroups are used for all the tasks that are done. What we do is...
C: There is a pod-level cgroup, which is the sum of all the containers' requests and limits for CPU and memory. First, if the resources are increasing, we increase the pod-level cgroup requests and limits, and then call the UpdateContainerResources CRI API, where the runtime is responsible for resizing the container without restarting it. That's a CRI API that's supposed to update the cgroup configuration without restarting the container, and it's not required; it's not a mandatory thing for runtimes.
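A sketch of the ordering just described, with illustrative stub functions rather than real kubelet helpers: on an increase, the pod-level cgroup (the sum over containers) is widened first and the container is resized second; a decrease would reverse the order so the pod-level limit never falls below the sum of its containers.

```go
package main

import "fmt"

func setPodCgroup(milliCPU int64)          { fmt.Printf("pod cgroup -> %dm\n", milliCPU) }
func updateContainerViaCRI(milliCPU int64) { fmt.Printf("container -> %dm via CRI\n", milliCPU) }

// resize applies the ordering described above for a single-container pod.
func resize(current, desired int64) {
	if desired > current {
		setPodCgroup(desired)          // grow the outer limit first
		updateContainerViaCRI(desired) // then the container inside it
	} else {
		updateContainerViaCRI(desired) // shrink the container first
		setPodCgroup(desired)          // then tighten the outer limit
	}
}

func main() {
	resize(500, 600) // the 500m -> 600m example from earlier
}
```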
C: I haven't tried them. I think gVisor should be able to do it without restarting, because they also map cgroups, if I recall correctly. Firecracker, I have no clue; I think so. When it comes to memory increases, we've been experimenting internally: there's a project we're working on called Arktos, where we support both native VMs and pod scheduling in the same control plane with Kubernetes, and in our experiments we found that we are able to increase and decrease the CPU.
C: It's supported for most VM platforms through the libvirt APIs. Increasing memory works, but decreasing memory is a problem, because once you give memory to a system, taking it away is not easy. We were mucking around with some techniques like ballooning to see if we can, you know, leverage VMs that are under-utilizing their memory. So there is a problem with decreasing memory, but increasing CPU, decreasing CPU, and increasing memory all work.
C: In my experiments, yes, it has worked. Whether it works in practical usage, I don't know, because Kata does the same thing: it maps the cgroup settings into the container runtime. They're running a miniature VM based on Clear Linux, the Intel solution.
C: It's a combination of that and, I think, Hyper, the company, I forget. The two of them collaborated and came up with Kata, and they're able to map the cgroups into the Kata containers' cgroup file system. So I found that, yes, when you run some application that exceeds its memory limits, it gets killed; you increase the memory limit, and it runs.
C: And of course, if the application is actually using that memory and you try to decrease the limit, then the cgroup write fails; that's just how cgroups are designed, so the utilization has to fall before you can decrease it. The easiest way to see this actually working in practice is to use emptyDir memory-backed file systems, where the file system is backed by actual memory.
C: So if you create a file there, it's charged to the container's memory, and if you then try to decrease the cgroup settings, it fails, because that usage is more than what you want to decrease it to; you cannot decrease the limits below that.
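A small sketch of that failure mode, assuming cgroup v1 paths and root privileges (the cgroup path is a placeholder): writing a memory limit below current usage is rejected by the kernel, typically with EBUSY, until utilization falls.

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Placeholder pod cgroup path; a memory-backed emptyDir file keeps usage high.
	limitFile := "/sys/fs/cgroup/memory/kubepods/pod-example/memory.limit_in_bytes"
	if err := os.WriteFile(limitFile, []byte("104857600"), 0644); err != nil { // 100Mi
		fmt.Println("decrease rejected:", err) // current usage exceeds the new limit
	}
}
```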
D: The performance... very good. How about the performance, like, you know, when updating the cgroup? Have you measured what the performance impact of this is?
C: We have not looked at the performance impact of this feature yet. I think there are places where we might need to do some optimization, depending on how often this gets used on the node. There are places where I have currently had to do some copying.
C: Copying the pod, which I think we'll have to slim down somehow, to use just references or something, or optimize, or write a plugin that can, you know, do something a little bit differently, as opposed to passing the whole pod to the scheduler plugins. But so far, that will be one of the things that we'll look at as we go into beta and see the performance impact.
C: For computation purposes, when we check for fit, we use the pod's requested resources. If the kubelet is coming up after a restart and the pod already exists, then we don't want to add it as a new pod, so we check for fit using the existing checkpointed resources, and in that context we copy it.
C: So when the kubelet restarts, we have to do this additional trickery, but I think that can be optimized away. Besides that, that's the one that occurred to me as a potential concern, but that should only happen if the kubelet, you know, restarts, and that should be a rare event.
C: Yeah, we're trying to get it in for this release. I think the biggest part of the code change depends on what Lantao thinks; that's the kubelet change, and then Tim Hockin can see about the scheduler change. If you look at the code itself, there are the commits that we have in this; I've broken it down into six major changes: the API changes in this commit, and the CRI change.
C: I think this is mostly okay; both the API changes and the CRI changes are well understood and agreed upon. The kubelet core implementation has not changed much since the last iteration, which was very close to getting approved, and I think Abdullah also looked at the scheduler changes. That's the one with the scheduler resource changes; go to the last commit that's there.
C: This is all kubelet changes, so the scheduler part would be just this. And I tested this by intentionally delaying the pod's evaluation of whether it can fit; during that time, the scheduler will not schedule a pod to that node, because of the max, using the max here.
A: Yeah, yeah, I'll do a review. I'm also thinking of some other areas that need to be updated, because in a recent release we introduced a new mechanism where each plugin can register some events, so that upon those events the scheduler can decide which pods to move back to the active queue and which not. In that work we made the assumption that a pod's resource requests are not mutable, so I need to also check that part. Oh, okay.
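A toy model of the mechanism just mentioned, not the real scheduler framework types: each plugin registers the cluster events that could make one of its rejected pods schedulable again, and with in-place resize, a pod update event starts to matter for resource fit.

```go
package main

import "fmt"

// ClusterEvent is a toy stand-in for the framework's event registration.
type ClusterEvent struct {
	Resource string // "Pod", "Node", ...
	Action   string // "Add", "Update", "Delete"
}

// eventsToRegister mimics a resource-fit plugin's registration. Before mutable
// requests, only pod deletions and node changes could free up capacity; with
// in-place resize, a pod update (a downsize) can too.
func eventsToRegister() []ClusterEvent {
	return []ClusterEvent{
		{Resource: "Pod", Action: "Delete"},
		{Resource: "Pod", Action: "Update"}, // new: resizes change the fit
		{Resource: "Node", Action: "Add"},
		{Resource: "Node", Action: "Update"},
	}
}

func main() {
	fmt.Println(eventsToRegister())
}
```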
C: Okay, yeah, please let me know about that, because that would mean some more work in the scheduler to check and test, if it is significant. This change has been tested in the last release as well, so I have high confidence in it.
C: If the changes that you're mentioning are significant, or if they require a fair amount of testing, then we probably might have to slip it. It's going to be alpha, it's going to be disabled by default, but let's see.
C: Yeah, I tried to raise this with release management. They didn't agree, and even the API folks were not happy with it, because what it boils down to is this: let's say I put this in a PR and it gets approved, and the CRI change gets approved. The CRI change can stand alone by itself, but it's pretty much meaningless without the feature; it's a 'why are we doing it' kind of thing. If it gets approved and goes in and the core implementation doesn't come in, then it's meaningless.
A: So basically, yeah, basically it's not necessary. But maybe it needs some more effort from you, because in your PR 2 you have to keep one commit based on PR 1. So, for example, you land the API change first, and then, up to the last PR, you have to rebase each PR onto its prerequisite. So yeah, it sometimes needs more effort, more work from the developer, from the author, but it's up to you; just a personal suggestion, yeah.
C: ...quickly, so that I can do that change. I mean, I'm available over the long weekend to work on this if needed, because this feature has been dragging for a long time. Exactly, yes, yeah. And we have a very comprehensive test; I think we have Chen from IBM who helped with this, yeah.
C: The test that we have is very comprehensive, so it covers a lot of cases, and this gives me a lot of confidence in the feature itself. What remains to be addressed is the performance implications: measuring them and addressing them. I think there will be some potential changes there, mostly.
A: It's more... I think my question is: does this end-to-end test cover more of the kubelet side, or the scheduler?
C: It exercises everything, because it's doing end-to-end, and it's measuring the scheduler, which for the most part is playing the role of a bystander in this case, assisting with this change. The way I've structured it, a resize has four states, including infeasible and deferred, where it's not possible. Infeasible looks like: if you have a node that has four CPUs and you're asking for five, then it'll never happen, so that is infeasible. Deferred is where, okay, the node has a capacity of four CPUs...
C: ...you are asking for three, but there's another pod that's consuming two, and you are at two. So the node has four; you can get three if the other pod exits. That would be deferred. So, in the case of deferred, we would be steering pods away from this node, and essentially, okay, when that node clears up, you will get resized. The scheduler is sort of assisting with that in this case. That's how I've chosen to do this.
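A toy classification of the resize outcomes described here, simplified to a single resource in milli-CPU. State names follow the KEP; the fourth value, before the kubelet acts, is omitted from this sketch.

```go
package main

import "fmt"

// classify decides the resize state; nodeFree is the capacity not held by
// other pods (so it includes what this pod already has).
func classify(nodeCapacity, nodeFree, requested int64) string {
	switch {
	case requested > nodeCapacity:
		return "Infeasible" // can never fit, e.g. asking 5 CPUs of a 4-CPU node
	case requested > nodeFree:
		return "Deferred" // fits the node, but only after other pods free capacity
	default:
		return "InProgress" // the kubelet can accept it now
	}
}

func main() {
	fmt.Println(classify(4000, 4000, 5000)) // Infeasible
	fmt.Println(classify(4000, 2000, 3000)) // Deferred: another pod holds 2 CPUs
	fmt.Println(classify(4000, 2000, 1500)) // InProgress
}
```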
C: So it's not exactly assisting by, like, getting rid of low-priority pods or rescheduling them or something like that. But that's again a future item, to keep the scope of this down. You can see how many files are in this change, right? It's huge. So we wanted to scope it; we have to do this in baby steps, and the first baby step itself is this big.
A: Yeah, all right. I think we just used 40 minutes to cover the items, and fortunately there are no other items on the agenda, so yeah. One thing I want to call out is that the code freeze date for 1.22 is July 8th, which is next Thursday. So if you have any items and PRs still pending, let me know, and we'll try our best.
C: Yeah, yeah, please do your best. I know this came in late. I was planning to have this done quite a bit sooner, but I was stuck with...
C: ...assisting with a family situation in India, with COVID and all, and my own schedule got delayed a little bit. So at this point, I'm hopeful that it can get into this release.
A: Basically, if it's an idea that is unfamiliar or unknown to all of us, I would suggest we start with a Google Doc, so you don't need to care about formatting and that kind of stuff.
A
You
just
list
all
the
ideas
and
the
motivations
and
background
there
and
protect
the
draft
design,
and
so
the
reviewers
and
the
approvals
can
take
a
look
and
see
whether
it
makes
sense
well
and
if
it
makes
sense,
we
will
okay,
okay,
so
and
we
encourage
you
to
create
a
formal
cap
and
then
we
put
more
formal
reviews
on
the
cap.
So
this
is
the
yeah
standard
way
to
raise
up
a
new
new
feature.