From YouTube: Kubernetes SIG Apps 20220124
A: Good morning, good evening, good afternoon, depending on where you are. Today we have January 24th, and this is the first instantiation of SIG Apps in 2022. My name is Mate and I'll be your host. Today we have quite a packed agenda, so let me quickly go through the announcements.
A: There is an enhancement freeze coming up in the latter half of next week, but there is a soft freeze for getting your production readiness reviews, if I remember correctly, by the end of this week.
A: So if you are working on an enhancement for inclusion in 1.24, I do hope that you have an initial round of reviews from your reviewers and that you will be closing out the proposals in the upcoming days.
A: If you have not, make sure to reach out to your reviewers and try to get closure on the enhancements by this date. If there are any particular enhancements that you know should be in the enhancement tracking sheet but weren't added yet, feel free to ping me directly on Slack and I'll make sure that your enhancement proposal is included in that tracking sheet.
A: I think that's all when it comes to the announcements. Moving on to the main topics, we have James from the 1.24 release team. Take it away from here, James.
B: Thank you. I'll be fast for Matthew's sake. Hi, my name is James, I'm the 1.24 release team lead. I am going around to every SIG and just kind of saying hello at this stage. I just wanted to remind everyone of the upcoming dates; you already have a little reminder there. Just as a point of clarification, the PRR soft freeze is this Thursday, the 27th, and enhancements freeze itself is the week after, next week.
B: It is February 4th at 2 a.m. UTC, so if you're in Pacific time it's actually 6 p.m. in the afternoon of the third; it depends on your time zone. Please do check the full schedule to figure out exactly when that is, although there'll be more announcements leading up to it. Other than that, I just wanted to ask if there's anything I can do to help you: any questions, queries, anything you need?
A: I hear no questions. So, if I remember correctly, James and the entire release team are available for any questions you might have on the sig-release Slack channel, so feel free to jump in there and ask whatever questions, or ping them; I'm pretty sure James doesn't mind being pinged directly if you have any questions that you are afraid to ask in a more public venue. So yeah, absolutely. Cool, thank you very much, James. We can probably let you go now.
A: You don't have to stick around till the end of this call, but you're more than welcome to stay with us. On to the next topic. Matthew, you requested to go as early as possible because you want to catch up on the rest of your night before hitting the next day, so take it away from here.
C: Hey everyone, you can hear me, right?
C: So just for context, it is 4 a.m. my time. So if I speak like I'm drunk or something, it's because I might as well be; I literally just woke up so I could present this. Hopefully everyone else has read what I'm proposing and can kind of discuss it without me answering too many questions. The context here is that for a very, very long time... so, for context about me, maybe to give some more context there as well, because I'm obviously a new face and this is quite a large proposal.
C: [audio gap] ...big data, as in they're big, not that they process big data, and one of the big problems for a lot of them is around running data work. Data tools often have this master-worker paradigm, and that's where this kind of started from: when you want to be able to scale down the number of workers, you don't want to randomly kill the workers.
C: Like, that's kind of the concept: you don't want to randomly kill the workers, because sometimes there are more opportunistic workers to kill than others. That's kind of the idea, but that led me to realizing there's a whole bunch of different community proposals around the scaling behavior of replica sets, and so I thought it would be more sensible to effectively have a customizable situation where anyone can implement various APIs, and as we go forward we'll probably end up with a couple of these. So, effectively:
C: The idea here is that right now the replica set scale-down behavior is hard-coded with regard to which pod is chosen when a scale-down happens. You know, you scale from three replicas to two: it randomly picks one of the pods. No, it doesn't randomly pick, sorry; it's hard-coded in the sense that it starts off by killing any inactive pods, like any non-ready, unschedulable or pending pods, whatever, and then it goes to the newest pods by creation date.
C: I think actually, as of 1.23, it's logarithmically binned, which is kind of crazy, and I can't be bothered explaining it here if you don't know about it, but it's a very non-intuitive behavior, that one. And then the next behavior is node density, so the node with the most pods on it: the pods from that node are killed first. And then, after that, it's truly random. Now, anyway, the proposal here is that we have a new field added to the replica set.
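(For reference, a minimal sketch of the victim ordering described above. This illustrates the behavior as summarized in the discussion, not the actual ReplicaSet controller code; the helpers isActive and podsOnNode are hypothetical.)

```go
package sketch

import (
	"math"
	"math/rand"
	"sort"
	"time"

	corev1 "k8s.io/api/core/v1"
)

// ageBucket buckets a pod's age on a log2 scale, mirroring the
// logarithmic comparison mentioned as landing in 1.23 (sketch only).
func ageBucket(p *corev1.Pod, now time.Time) int {
	age := now.Sub(p.CreationTimestamp.Time).Seconds()
	if age < 1 {
		return 0
	}
	return int(math.Floor(math.Log2(age)))
}

// rankForDeletion orders scale-down candidates: pods are shuffled first
// so that all remaining ties resolve randomly, then stable-sorted by the
// rules described above. Pods earlier in the order are deleted first.
func rankForDeletion(pods []*corev1.Pod, podsOnNode map[string]int, isActive func(*corev1.Pod) bool) {
	now := time.Now()
	rand.Shuffle(len(pods), func(i, j int) { pods[i], pods[j] = pods[j], pods[i] })
	sort.SliceStable(pods, func(i, j int) bool {
		a, b := pods[i], pods[j]
		if isActive(a) != isActive(b) {
			return !isActive(a) // inactive (pending, not-ready) pods go first
		}
		if podsOnNode[a.Spec.NodeName] != podsOnNode[b.Spec.NodeName] {
			return podsOnNode[a.Spec.NodeName] > podsOnNode[b.Spec.NodeName] // denser nodes lose pods first
		}
		return ageBucket(a, now) < ageBucket(b, now) // younger age buckets go first
	})
}
```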
C: So again, I am not super familiar with the process of contributing to Kubernetes; not since like 2015. So please, everyone, someone else is going to have to help me with that. I probably also won't have enough time to contribute a lot of code to this. I've done a lot of design work here that can actually be translated straight into Go code, like pseudocode, but that's just context. So yeah, I'm just going to let everyone else talk now. If anyone has any questions about this, please shoot.
A: I remember commenting, I can't remember whether it was in this proposal or on the mailing list, asking about the recent addition that Mike did around randomization of the scaling down, because that partially addresses your second example; the majority of your description is probably best expressed through the examples.
A: As you said yourself in the issue, your random one is similar to what we already have implemented. I can't remember what the state of that particular one is.
C: Sort of. I think you might be misunderstanding the examples, but in my mind, the random one here, if you read what it says, is kind of like an implied random anyway. So effectively the API that we're adding, or that I'm proposing to add, is a list, wherever it ends up living, whether it ends up living directly on the root of the spec in the replica set.
C: Who knows. But the idea is that it's a list element, and when that list is empty there's kind of an implied random choice, because it needs to be there for safety; otherwise we could end up in a situation where the scheduler doesn't know which pod to kill. A more advanced random, like what's described in the one that just got merged with the logarithmic binning, in my mind would actually be one of the APIs.
C: An element... it would be a choosable element of the list, because it's such an advanced piece of behavior. It's not always safe to do, because it can take time, but more than that, it may not be the desired behavior of the user. The whole point of this feature is to say: we're going to stop making assumptions about what the user actually wants, we're going to ask them directly, and if they don't specify, we can default to the current behavior.
C: If they explicitly specify an empty list, it will always be truly random. You know what I mean: that's the concept, that's the proposal at least. Again, anything is up for debate; that is the whole point of discussing it here before we raise any sort of proposal.
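(A minimal sketch of the unset-versus-empty semantics just described; the field and type names, scaleConfigs and ScaleConfig, are hypothetical and taken only from this discussion.)

```go
// Hypothetical addition to the ReplicaSet spec, per the discussion: a
// nil list keeps today's hard-coded behavior, an explicitly empty list
// means truly random, and a populated list is evaluated in order.
type ScaleConfig struct{} // variants are sketched later in the discussion

type ScaleBehavior struct {
	// ScaleConfigs is an ordered chain of scale-down strategies.
	// +optional
	ScaleConfigs []ScaleConfig `json:"scaleConfigs,omitempty"`
}

func victimStrategy(cfgs []ScaleConfig) string {
	switch {
	case cfgs == nil:
		return "legacy" // field unset: keep the current behavior
	case len(cfgs) == 0:
		return "random" // explicitly empty: implied truly-random choice
	default:
		return "chain" // evaluate each element in order
	}
}
```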
D: They require querying some other component that is running in the user's namespace or on the user's nodes, and that's potentially a concern for reliability, right? So that would be my only concern.
A: Yeah, I think that would have to be sorted out at the proposal level.
A: If we decide to implement this kind of functionality, we need to ensure that the controller always behaves and responds to a particular action within the current SLO, which is, I don't know, a couple of seconds, if I remember correctly. So we need to make sure that if the backend that is configured to be responsible for providing that information is not responding within some given time, we just fall back to the default behavior. But yeah, I... I...
C: Told you so; that's already fully worked out. So if you go down... the only one where you're actually probing anything is the heuristic cost probe. If you go to that one, in its actual API definition, and I'll explain what happens there, because I think it's... oh, actually, before we do that: if you just go to the very top, there's two places this could happen; in a more general sense, there's the kind of overall place it could happen.
C: If you go to the very top, above the example, there we go: it's under the sort of scale-down procedure. So, the scale-down procedure: we would continue to kill pending and unschedulable pods first; it'd be stupid if we didn't do that. Then we've got "using the scale configs", which is the new field we're talking about, right? So then, after that point, we do exactly what the new config says. If it's empty... I didn't write this here.
C: We default back to the current default, which is effectively applying the scale configs that would have resulted in the current behavior. That's the idea of that, from an internal implementation perspective, so that it's backwards compatible with current replica set specs. So, anyway.
C: The thing that I've said here is that, if you look under 2.iv.a, there is a global timeout for each scale config before you skip on to the next scale config. Now, this is an interesting question around the SLO, right, which is that, at least for some of the scale configs, the user can effectively request...
[crosstalk]

A: ...together, for a single controller to process it, we have a generic SLO for the majority of controllers of 15 seconds.
A: Yeah, but that's pretty big. I would have to double-check whether we have a measured SLO for replica sets; it might be significantly lower. But the generic one for the majority of controllers is 15 seconds.
C: I mean, killing pods can take an incredible amount of time anyway, for various other reasons. And also, this is one of the behaviors where killing a pod is such a destructive action that you really don't want to take it unless you know you're taking the right action. And as we kept talking about this feature specifically, I think the most controversial one is going to be the probe one. So if we roll down to that one, let's just discuss that one.
C: I think that's really the one that may or may not even end up in the proposal in the end, and I'm happy if it doesn't. So I'll try to explain what it is first and then I'll show you how it actually works, if that's all right. [A: Go ahead.] I think this is the most controversial one.
C: So I think this is a good one to discuss. The heuristic cost probe is effectively the thing for everyone who's building these kinds of worker-master apps, whatever the heck you want to call them, or even other types of apps that I can't even imagine: anything in which the app itself knows what is best. There's two kinds of possibilities. The first is that there's a central system which knows the cost of killing every pod, in which case that's super easy.
C: We can just ask that central system, and that's actually not this example, that's example three. But if we forget about that one for a second: now let's say we're really lazy, and we either don't have a central system, or we don't want to make one, or our app is so simple and never going to be scaled big enough for it to be a problem. So instead we just write a thing which probes the pod before it kills it.
C: Now, in this case, you can probe it with a REST command or an exec command, just like any normal probe. The probes are slightly different, though, because we read the last line of standard out as an integer from zero to infinity, with zero having a special meaning...
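(A minimal sketch of the probe contract being described: the last stdout line of a hypothetical per-pod cost probe is parsed as the deletion cost. No such probe exists in Kubernetes today; this is illustration only.)

```go
import (
	"fmt"
	"strconv"
	"strings"
)

// parseDeletionCost implements the contract described above: the last
// non-empty line of the probe's stdout is the pod's deletion cost, a
// non-negative integer, with 0 reserved as a special value.
func parseDeletionCost(stdout string) (int64, error) {
	lines := strings.Split(strings.TrimSpace(stdout), "\n")
	last := strings.TrimSpace(lines[len(lines)-1])
	cost, err := strconv.ParseInt(last, 10, 64)
	if err != nil || cost < 0 {
		return 0, fmt.Errorf("probe output %q is not a non-negative integer", last)
	}
	return cost, nil
}
```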
A: So there is a major problem with this particular approach, because the controller does not have access to the containers.
C: That's fine, but I'm imagining that it is possible, probably. I mean, if it's not possible, we can make it happen, because it's not as dangerous as... the place where... you're talking about the scheduler side; honestly, my brain is too tired.
C: The controller can probably patch either a status or an annotation field on the pod which marks it to be probed, with a specific probe, by the kubelet, which then updates a status field with the resulting cost from it, you know what I mean, and then we consume that again. And in this case, because it's heuristic, it's actually a sample. So the flow of this would be: the scheduler decides the sample.
C: It then marks those pods for probing. It sounds very alien: marks those pods for probing. The kubelet then probes all those pods for cost, and then sets the status field on all those pods saying "the current deletion cost for this particular request", and we give it maybe a request ID or something. And then the scheduler reads that status field and makes the decision as to which one to kill from the sample.
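(A sketch of the mark-and-report flow just described. Everything here is hypothetical: the annotation key, the status shape, and the split of work between controller and kubelet are proposals under discussion, not existing Kubernetes API.)

```go
// Hypothetical annotation the controller would set on sampled pods to
// ask the kubelet to run the pod's cost probe for a given request.
const probeRequestAnnotation = "example.k8s.io/deletion-cost-probe-request"

// Hypothetical status the kubelet would write back after probing.
type DeletionCostProbeResult struct {
	RequestID string `json:"requestID"` // echoes the controller's request ID
	Cost      int64  `json:"cost"`      // parsed from the probe's last stdout line
}

// Flow, as described above:
//  1. the controller samples N candidates and annotates each with a request ID;
//  2. the kubelet sees the mark, runs the cost probe, and records the result in status;
//  3. the controller reads the results and picks victims from the sample.
```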
A: I know it seems simple to do the communication you're talking about in an ideal world, but I'm worried that it won't be that simple, because the kubelet does not expose information from the probes into status, other than the events, if I remember correctly. So if you embed a particular probe on a container, that means the container will be working or not depending on the success or the failure of the probe; the probes are treated as "yes, it worked".
A: So I'm worried that, for your particular case, the probes cannot be easily reused for driving the scaling. The part of your proposal where we would have an annotation embedded in a pod, which would then be...
A: ...read by the replica set controller, is perfectly reasonable. But you would have to have a separate mechanism, outside or within... well, not necessarily the controller; within the app you're running... that would be responsible for setting the appropriate annotation.
A: What I'm thinking, and I remember that we had a discussion (Aldo might tell me if I'm wrong), but I remember at some point in time, when we were discussing the randomization proposal for scaling down, one of the discussed proposals was to have an annotation on a pod which would say the...

C: [inaudible]
A: Yeah. So how many of the current examples that you have listed in this proposal are possible to implement with this mechanism? I'm a little bit worried about too much of the logic being pushed onto the user, who would hold the entire configuration for scaling down, especially as there are a lot of users complaining about the complexity of the current controllers, and adding something like that would additionally complicate it, whereas keeping it separate would be preferable, for starters, in my opinion.
C: So, the reason why, and I've said it at the top here... I think maybe let's even just go through the background of this thing at the top, so that we can frame it properly, because I wasn't aware that you guys weren't aware of that. So it says here, pretty much... that's right. The idea... I'm just going to read it. I'll let everyone read it; everyone else can read it on the screen, right?
C: Maybe I'll read it out for people who might be listening. So, effectively: the idea of customizing the way deployments and replica sets remove pods when replicas are decreased has been around for a long time. That's pretty much what I'm saying; I linked some issues from like seven years ago.
C: So I describe it here, which is: I said it [the existing pod-deletion-cost annotation] is a good start, but its usefulness is limited because it must be updated before the replica count is decreased, which means that you can't use it with existing Kubernetes resources like the Horizontal Pod Autoscaler or, for that matter, really anything that would change the replicas outside of your own system, because your own system has to be responsible for patching this annotation. And the reason for that is that it would never be safe to automatically update this annotation, because it would completely overwhelm your API... your kubelet, yeah, your kubelet.
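(For reference, the annotation under discussion is controller.kubernetes.io/pod-deletion-cost, a real pod annotation, beta since Kubernetes 1.22. The snippet below is only a sketch of the kind of out-of-band patching that, as described above, your own system has to perform before a scale-down.)

```go
import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// setDeletionCost lowers or raises a pod's deletion cost so that the
// ReplicaSet controller prefers cheaper pods on the next scale-down.
// The annotation key is real; the surrounding helper is a sketch.
func setDeletionCost(ctx context.Context, c kubernetes.Interface, ns, name string, cost int) error {
	patch := []byte(fmt.Sprintf(
		`{"metadata":{"annotations":{"controller.kubernetes.io/pod-deletion-cost":"%d"}}}`, cost))
	_, err := c.CoreV1().Pods(ns).Patch(
		ctx, name, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	return err
}
```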
D: [inaudible]

C: Yeah, that's obviously going to completely overwhelm the system very quickly. So, effectively, this means that this feature is almost completely inaccessible to almost all Kubernetes users who aren't building their own scaling systems, or it means that people are going to completely avoid all the existing Kubernetes scaling logic so that they can use this feature, which is what people have already done, by the way.
C: So all this feature is going to do is push people away from all the core Kubernetes APIs and into these really weird shadow APIs that people are using, which is just terrible for the project as a whole, because it means that whole classes of application workloads, again these master-worker ones, aren't going to be run using the replica set controller; they're going to be run by some custom controller that people build, and...
A: Yeah; in the first place, that's not a problem, that we have external controllers. It's not an anti-pattern or something bad for the ecosystem if we have external controllers. If you look at the majority of the Kubernetes direction, a lot of the changes that are happening are about being able to extend Kubernetes to fit your own particular use case.
A: We said that we don't want to do it for many different reasons, and it's actually something that we will be working on in the next couple of months to add in the core, because some of those reasons just resolved on their own, and in the meantime we've managed to grow the controller a little bit further.
C: Actually, it's worse than people making their own controller here; I misspoke. People still have to use the replica set controller, but what they have to do is... they can't use any of the existing logic around the replica set controller to do scaling, so they have to write their own scaler, that's a better way to say it, which also has to be able to patch a bunch of pods itself as well, which could still overwhelm their API server.
A: I know what you mean with the patching of the pods during, or even before, the downscaling.
C: And also it's sub-optimal in the context of running these kinds of apps anyway, because the data will be out of date immediately, because the actual loop of scaling down is not able to early-exit. For example, a pod can't say "I'm free to kill, there's no cost, I'm not doing anything." So, effectively, you might spend all this work to annotate a bunch of pods and it might not be necessary, or the data might be out of date by the time... you know. Anyway.
C: The point is that I still think there's a benefit to allowing, or at least discussing allowing, just a generic, customizable API for how the downscale behavior works, and I think you'll see, even from my random post here, two people have randomly stumbled across it; users and other people have already started discussing this proposal in other issue trackers. So I've now opened Pandora's box, and I think a lot of people want this feature.
A: I mean, I'm always a fan of starting simple, and at least two or three of your proposals would be hard to implement. But maybe we could start with just the API bits, where...
A: Because you would basically have an external endpoint that the controller reaches to get the information about the costs, and I'm slightly worried about the unnecessary complication of introducing yet another API for expressing how to scale the pods.
C: The concept here would be that all other APIs get superseded by this; all the others become deprecated once this becomes a thing, because using annotations is ugly. This actually doesn't remove those annotations, by the way; as I said, the default behavior is just to use the annotation that's currently there. So again, it's a list, right, and if you scroll down to the next section, there's five things that can be in that list. The first one...
C: This is the object type, so it's kind of like a volume type; that's why it looks like this. You've got a named field per type, and only one of them can be filled, you know what I mean; that's kind of the concept. And you have a list of these objects, the first one being the cost annotation, which is the one we just talked about, so that one will still be there.
C: The idea is, rather than probing each of the individual pods, you probe an external system with an HTTP POST request, and effectively you implement the API which I described above: you request a number of pods and you send it a list of pods. Because the way this list works is that, as you go through the list, you're actually removing candidates; so you send all of the currently running pods to the first element of the list.
C: If the first element of the list is able to decide which of the pods it should kill, based off its own internal metrics, it just returns you the list of pods you should kill. If you've asked for five pods to kill and you've sent it ten pods, and it knows which five to kill, it will send you back five pod IDs. But if it can't decide on some of them... say it can decide three of them, but not two of them.
C: It might send you back three pods to kill and, like, four undecided pods, where it's like: "I can't distinguish these in my metrics, so I'm just going to send them to the next one." Then the next API receives those undecided pods with a request of two pods to kill. You know what I mean; that's the actual flow. So this API is actually just that, implemented as a REST API.
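(A sketch of what that request/response contract could look like on the wire; every name here is hypothetical, inferred only from the flow just described.)

```go
// Hypothetical wire types for the external scale-down endpoint. The
// controller POSTs the remaining candidates plus how many victims it
// still needs; the endpoint returns definite victims and any pods it
// could not rank, which flow on to the next element of the list.
type ScaleDownRequest struct {
	NeededVictims int      `json:"neededVictims"` // pods still to be killed
	Candidates    []string `json:"candidates"`    // namespace/name of each candidate
}

type ScaleDownResponse struct {
	Kill      []string `json:"kill"`      // pods this endpoint chose to kill
	Undecided []string `json:"undecided"` // ties it could not break; passed on
}
```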
C: We also need to implement the other three as well, so that the implied default can be the default that we currently have. We still need to implement the cost annotation, the probe, node replica set scale and pod age scale, because those combined are the current behavior. So...
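(Putting the named elements together: a fuller sketch of the list element from the earlier snippet, modeled as a volume-style union as described; all of these type and field names are hypothetical.)

```go
// Hypothetical union element for the proposed scale-config list,
// modeled like a volume source: exactly one variant may be set.
type ScaleConfig struct {
	// TimeoutSeconds bounds this element before the chain moves on.
	TimeoutSeconds *int32 `json:"timeoutSeconds,omitempty"`

	CostAnnotation      *struct{} `json:"costAnnotation,omitempty"`      // existing pod-deletion-cost annotation
	HeuristicCostProbe  *struct{} `json:"heuristicCostProbe,omitempty"`  // per-pod cost probe
	APIScaleConfig      *struct{} `json:"apiScaleConfig,omitempty"`      // external REST endpoint
	NodeReplicaSetScale *struct{} `json:"nodeReplicaSetScale,omitempty"` // node-density ordering
	PodAgeScale         *struct{} `json:"podAgeScale,omitempty"`         // newest-first / log-binned age
}
```

(Variant payloads are elided as empty structs here; each would carry its own options in a real proposal.)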
A: That would basically mean that if you don't have the API scale config set, you would always fall back to the current behavior.
C: Well, we could do that as well. What I was thinking, more, is that users can actually... my thought... you're completely correct, that's one of the options: we effectively just have an if statement which says "is it set?", and if it's set, then use this new API, which only has the API scale config. Is that what you're proposing?
A: Yeah, basically that, because I'm not imagining folks who have their own workloads already running rewriting their configuration bits just to align with this new API. That's not how we usually do things. We add additional fields which are optional; if you specify them, then we react, and if it's not there, then we just maintain the current behavior.
C: I get that. I'm just wondering if it makes sense for people, because people might still like... again, per the concept of what I was saying, where it's used as the tie-breaker, almost, or in the case of a timeout; potentially it's used as the only option as well. So if the REST API times out, you might still want to not just default to random straight away, but go to pod age.
C: You know what I mean. And I can't imagine it's that difficult to just modularize the code which is doing the pod age right now and then just chuck it into an API. I mean, maybe I'm wrong. But I think the feature is more complete that way. I agree that getting it in is more important than having it perfect, but yeah.
C: I don't know if it's worth spending those extra cycles to get the more generic ones, but I think that's something for the rest of the people to decide, as to what's actually important. But yeah.
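(A sketch of the fallback chain being debated: each element gets a bounded slice of time, unresolved candidates flow onward, and truly-random choice is the final safety net. The control flow is an assumption, not settled design; evaluate and timeoutOf are hypothetical helpers, and ScaleConfig is the type from the earlier sketches.)

```go
// chooseVictims walks the configured chain. Every element is given its
// timeout; whatever it cannot decide is handed to the next element, and
// anything still undecided at the very end is picked at random.
func chooseVictims(ctx context.Context, chain []ScaleConfig, candidates []string, needed int) []string {
	var victims []string
	for _, cfg := range chain {
		if len(victims) >= needed {
			break
		}
		elemCtx, cancel := context.WithTimeout(ctx, timeoutOf(cfg))
		kill, undecided, err := evaluate(elemCtx, cfg, candidates, needed-len(victims))
		cancel()
		if err != nil {
			continue // timeout or failure: fall through to the next element
		}
		victims = append(victims, kill...)
		candidates = undecided
	}
	for len(victims) < needed && len(candidates) > 0 {
		i := rand.Intn(len(candidates)) // final tie-break: truly random
		victims = append(victims, candidates[i])
		candidates = append(candidates[:i], candidates[i+1:]...)
	}
	return victims
}
```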
G: I mean that if you ask multiple controllers or endpoints what the pods to be deleted should be, then they can behave like... you fall back to another one, and in case these endpoints are not behaving properly, you will get different behavior each time.
C: I think that's dependent on the user's implementation, right? I didn't really consider the case where you have two APIs chained with each other, and I was considering not allowing it at all. But honestly, I don't see any reason why we should not allow it. So, again, there's only two possibilities for the list.
C: The first is that all of the pods you need to kill are chosen by the first element of the list, or whichever element you're up to. And the second option is that some aren't, or all aren't, returned; it can't decide on all of them. So if it can't decide on some of them, all those undecided pods are the only ones that are sent to the next element of the list. So the behavior is purely defined. I'm a little bit confused.
G: Yeah, sorry, but I mean that the user probably has one preferred way to do it, and I think one item... like, if you want, you will just set the one item you want, and it can fall back to, for example, the default or something.
C: I think users can make informed decisions if they're setting this. The default can be the default, but if they're making an informed decision to set something like an API probe, they can say: I understand what the next best option is, either under a timeout condition for my API or under an "I can't decide" answer from the API.
C: I appreciate the idea, but I just really like the fact that, if we do it like this... I'm up for debate around the API, but I really think the whole proposal is not about that. That's why I haven't called the proposal "API scale config", because I could have done that and it would have solved my use case. It's a proposal around letting you, as the user, configure the downscaling behavior.
E: Matthew, I have a question around the behavior. I was wondering if this has been thought about: we could let the replica set controller not do the scaling, but have another controller which actually does that, instead of putting the logic in the core controller. Because, for the most part, as was mentioned, the existing config itself has lots of parameters that can be tuned, and people are actually getting confused, so adding one more perhaps may be a bit more confusing. And again...
E: This is not me saying that this is the use case that most of the users of Kubernetes are behind; I'm not saying that, I do not know at this point in time. But if this is something that has to go into the core controller, would it be beneficial to have this as an external controller first, and make sure that the existing replica set controller does not come into the picture, and let that other controller deal with the scale-down or scale-up?
Yes,
this
is
something
that
has
been
beneficial,
because
this
is
what
we
used
to
do
with
with
the
scheduler
as
well
I'll
go
correct
me
if
I'm
wrong
there,
like
for
the
new
api
that
we
are
introducing
for
batch,
I
think
we
had
something
as
a
crd
existing
for
some
time
and
then
we
realized
this
can
be
brought
into
core
kubernetes,
because
we
noticed
that
this
is
a
use
case
that
most
of
the
people
are
interested
in.
E
C
E: No... I don't know if you're familiar with something called Kruise, OpenKruise; they actually enhance the existing controllers, the core controllers, with some functionality that does not exist within the core.
E: So have you thought about using something like that, or having a couple...
E: That part is fine, but how to, like... we also want...
C: Like, I did... I don't know, it seems poorly formed, I mean. Obviously we could add an annotation on the replica set which says the scaling doesn't happen, but that seems like it would violate the SLO, right? If the replicas field is no longer... like, if you're saying, oh, I'm going to make a system which will automatically... which will update... because, effectively...
A: Okay, folks, in the interest of time, I still want to get to Philip's topic. Matthew, for you: my current proposal is to try to write a Kubernetes Enhancement Proposal. You would open a PR against kubernetes/enhancements, write down your proposal, and get reviews from there, and then we can figure out how to include that in the next releases, and the shape of it.
C: One thing we have found is: let's not put the heuristic cost probe in there. I think that, if we can get the API in, someone will probably think of a better approach than I have anyway, you know what I mean. And once this list element exists, it's going to draw people into it, even if the only element of the list is the API scale config.
A: Yeah, I would not start with a list. I would start with an API scale config as an optional element within the replica set, and start simple. If we find that people want more configuration, we can start discussing eventual additional options that we need, because out of the discussion earlier it was pretty clear that we would lean towards having an external API you would have to talk to before scaling down, and that other options are less preferable.
C: I mean, from an implementation perspective, what that will mean is that users will have to store all the information the replica set controller would currently know in their own external system anyway. But that's not a discussion for now. So my question for you, just about that, because I think it's important for writing this KEP: this is adding a root field, and there are currently only two root fields on the replica set...
C: I assume it can't live under template, because that doesn't make any sense, and it can't live under metadata, because that doesn't make any sense. Oh, what's the other field? No, no...
A: Replicas. You also need to make sure that this is properly stored at the deployment level, because when people are creating deployments, those are then translated into replica sets and further down. So you need to take into account the Deployment and ReplicaSet APIs at the same time.
C: [inaudible]

A: Yeah, but that would be... so, I would still start simple and gather opinions, gather feedback, and then we can further decide on the shape of the API. And honestly...
A: Okay, so basically I would start writing down what we just discussed here in this proposal and try to reach some consensus. You can again poke folks on the mailing list, maybe even send it out to kubernetes-dev, and gather some feedback.
A: If you have some Twitter reach or some external folks, you can even ask folks in the Kubeflow and Airflow communities about their thoughts on this API, and then we can turn the discussion into the enhancement and slowly start working on the implementation in the next releases.
C: I think... thank you for your guys' time. I'm a bit wary of just blasting this sort of feature out to the world, because there are a number of aspects of this which will be very desirable to a lot of people, and so I think we need to think about what's possible to implement sensibly first, and then blast that out, you know what I mean, and say: is this enough for you?
C: I think we should start with the constrained version, which I think is exactly what you're saying: adding one root field to the replica set spec called apiScaleConfig (even calling it that seems like a good name), and then we can discuss that, obviously, and then implementing it almost exactly like I've said here, except that when you specify it, it turns off all the current scaling behavior. You know what I mean.
A: Thanks a lot for waking up this early, or, like, in the middle of the night, basically.
C: It is the middle of the night. Awesome, thanks guys, I'm going to drop off, but thank you so much, everyone. If anyone has any questions about either the proposal in its previous state or the proposal in its new state, please just comment on that issue that's here, if you think of anything. Thank you, guys.
A: Okay, the next topic is from Philip. Philip, want to discuss your proposal?
G: [audio gap] The main problem is that there is higher resource usage, because you need a couple of additional pods running at the same time, and this [inaudible], mostly when you roll out a lot of deployments at the same time or when you have generous termination periods.
G: So I would like to know what your opinion on this is, and if I could... yeah.
A: I started scrolling through the apps API, and I wonder if something like the maxSurge that we have for daemon sets... okay, instead:
A: Let me start over. You're basically proposing to add a field with a flag which would say: treat terminating pods as terminated ones, and proceed with adding extra pods in the meantime. That is something we already have in deployments and daemon sets under the name of maxSurge, which basically says, during a rollout... and in those two cases...
A: It currently only applies during rolling out new versions of daemon sets or a deployment, but maybe we could expand the meaning of that field within the replica set and be able to say, oh well...
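(For reference, the existing knob being referred to: maxSurge and maxUnavailable are real fields on a Deployment's rolling-update strategy in apps/v1. The sketch below only shows how they are set today, not the extension being floated for ReplicaSets.)

```go
// Today's surge/availability knobs on a Deployment (real apps/v1 API).
// The discussion above is about whether a similar field could also
// account for terminating pods on ReplicaSets.
maxSurge := intstr.FromString("25%") // allow 25% extra pods during a rollout
maxUnavailable := intstr.FromInt(1)  // at most one pod unavailable at a time

strategy := appsv1.DeploymentStrategy{
	Type: appsv1.RollingUpdateDeploymentStrategyType,
	RollingUpdate: &appsv1.RollingUpdateDeployment{
		MaxSurge:       &maxSurge,
		MaxUnavailable: &maxUnavailable,
	},
}
```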
G: [inaudible]

A: Right, but that's basically what you would be able to achieve through maxSurge. Maybe it wouldn't be called maxSurge per se, or we could... but that's a separate thing. You basically want to have an optional field for your user to decide: oh, I will allow running a little bit more.
A: I don't know. Does anyone else have any opinions, thoughts, questions?
I: My read of this proposal is that it's about scaling down, and about the fact that terminating pods might still be actively doing things in the cluster. This is looking for a way to wait for them to fully terminate during the scaling-down process.
G: Yeah, I think, like, the state of affairs we want to avoid can be reached by a number of different operations. For example, if you scale down and then you scale up, then you have a number of terminating pods and...
I: Like, even if there was something about surge, the controller wouldn't consider terminating pods, right? Terminating is not a form of running, so it wouldn't be counted as part of the surge.
E: The autoscaler may scale up for the new pod because there are not enough resources on the existing nodes within the cluster, for example because of anti-affinity or anything.
E: Yeah, like, this is perhaps going to be a corner case, right? And are the nodes going back down, or is that not happening in the autoscaler? Like, say, for example, a new node has been spun up to accommodate this particular pod.
E: Would the node be automatically scaled back once the terminating pod has reached the termination stage?
G: Yeah, I don't know; this occurred in one issue and I don't know about their setup. But there was also this other issue where, like, a person has a big maxSurge and the pods take a really long time until they terminate, so the cluster...
E: So I think I'm okay with that, unless the rest of the folks think it's the wrong way. But as far as this particular use case goes, where the node is being created by the node autoscaler and it's not scaling back, I think that is a corner case that's perhaps to be addressed by the node autoscaler, or by running the scheduler after an autoscaling happens, so that that particular node would not have any resources on it and the node autoscaler can take it back. So, yeah.
G: [inaudible]

E: [inaudible]

A: I don't know; a delay, some delay in the autoscaling. But whatever approach we take, whether by introducing this functionality you have or not, there will still be a particular limit; at some point in time you will have to face that. Oh, it just so happens that, because of my current quota or the limit of my cluster, the cluster autoscaling will kick in and will unnecessarily create those extra resources.
A: I don't know. I would probably start with, if this is not clear, documenting that by tweaking maxSurge and maxUnavailable you can, I don't know, delay your rollout one way or the other, or speed it up, but at the risk of, I don't know, hitting your quota. But then, at the same time, you always have to be aware of how much quota you have, how much room you have, to be able to work with your application, and what to do with it.
I: And that's what I'm hearing, right: terminating pods are, you know, not in phase Running, but their applications have not finished terminating. They've been given a signal, but they're still there; they're not ready, they're not running, they're terminating, but they're alive, you know, and certainly consuming resources. And correct me if I'm wrong, but it sounds like this proposal is pointing out that maxUnavailable ignores terminating pods, maxSurge ignores terminating pods, and it might be nice to have them not ignore them.
E: I think one of the things that we need to be careful about here is that even if it's terminating, that particular pod is supposed to take some resources, because it has to satisfy the existing requests; technically, the application needs to have those resources until the graceful shutdown period ends.
E: It perhaps has got nothing to do with the way maxSurge or maxUnavailable is configured; it's just that, yes, we know that it's a terminating pod, and perhaps we can actually quota even the terminating pods. If I remember correctly, quota previously did not take those pods into consideration, but now we do, so having proper quota and...
A: Okay, folks, we're four minutes past the top of the hour. Thank you all very much for a very lively discussion. Yeah, go ahead, Phil.
A: I'll probably start with having an issue opened and, at the same time, write down that we initially decided to better document the situation. If we have more proof that solving this issue as you proposed would be beneficial for more folks, probably the best option would be for them to vote it up on the issue.
A: Then we can consider implementing that one in the near future. In all other cases I would fall back to the docs, because that usually solves most of the cases. Does that make sense?
A: Yep. Okay, thank you very much, all. Sorry for the extra five minutes. Thank you very much for sticking with us; enjoy the rest of your day. Thank you. Bye, all.