From YouTube: Kubernetes SIG Apps 20220502
A
Good morning, good evening, good afternoon, depending on where you are. Today is May the 2nd and this is another of our bi-weekly SIG Apps calls. My name is Maciej and I'll be your host. Today we have a couple of topics for discussion, but let's quickly go through the announcements first: KubeCon EU 2022 is happening in roughly two weeks.
A
So if you want to say hi, I'm more than happy to hear from anyone that will be at KubeCon. As I said, KubeCon will be happening in two weeks; that also means that our next call, on May 16th, will be cancelled. So we will see each other in a month. I hadn't checked the date, so quickly doing that now: we will meet on the 30th of May. I don't recall there being any holiday or anything like that around then.
A
Okay, so with that we can jump over to the main topic of discussion: the ReplicaSet controller continuously creating pods that fail due to "SysctlForbidden". Ravi and Philip, can you take it over from here?
C
Yeah, so some of the things that at this point in time are not clearly thought out are: it seems every component is doing whatever it is supposed to do. For example, the kubelet says "I'm rejecting the pod, because this sysctl does not make sense to me", and the scheduler is not aware of that. So the scheduler cannot do anything, and the ReplicaSet controller is supposed to create pods.
C
Now, the other option, which is slightly related to this, is to have backoff within the controllers; and I think, Ken, you have mentioned in the past that you are concerned that controllers should not have any knowledge of the conditions based on which they would back off, or something like that.
C
Yeah, that is what I'm thinking of, and the second condition would be something like what I explained earlier: the kubelet would specify that there is an unsafe sysctl and the container can cross the boundary, the node lifecycle controller could then taint the node, and we would not have to touch anything in the scheduler.
C
Sure. So what I'm thinking of now is: as of today, this allowed-unsafe-sysctls setting is something that is specific to the kubelet, and only the kubelet has knowledge of the pods that are failing; that information is not getting passed on to the scheduler or the controller.
B
Do you want to add to it? Well, okay, there are two things. The controller doesn't know anything to back off on; as far as it's concerned it's doing valid pod creation, and it's not concerned with placement, right? Yeah, so it can't be preempted; it can't say "okay, I'm going to examine all the nodes and actually look for this information", because, like you just said, it's not surfaced via a taint anyway, so we wouldn't even know that the taint was available. And from the other side, the scheduler, again because it's not surfaced as a taint, doesn't know not to place it there.
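For reference, the taint-and-toleration framework invoked in the next few exchanges could look roughly like the following sketch, written with the real core/v1 Go types; the taint key is invented purely for illustration and no such taint exists in Kubernetes today.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Hypothetical taint that some controller could place on nodes whose
	// kubelet allows unsafe sysctls; the key is made up for this sketch.
	taint := corev1.Taint{
		Key:    "example.k8s.io/unsafe-sysctls-allowed",
		Effect: corev1.TaintEffectNoSchedule,
	}

	// The toleration a workload author would then have to declare so the
	// scheduler places their pods onto such nodes.
	toleration := corev1.Toleration{
		Key:      taint.Key,
		Operator: corev1.TolerationOpExists,
		Effect:   corev1.TaintEffectNoSchedule,
	}

	fmt.Printf("taint %q tolerated: %v\n", taint.Key, toleration.ToleratesTaint(&taint))
}
```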
B
For this particular instance, what would make sense to me would be the same thing as if you were dealing with a scarce resource: you would put a particular taint on the nodes and have a toleration, so that the scheduler knows how to do the right thing, using kind of the existing framework for doing that. Right, but that would kind of push it to the end user, because the user would have to, if they wanted to use...
B
...this feature, declare a toleration for the "sysctls available" taint. And maybe that's not an awful thing. But to do it at a controller level, actually detecting every use case, is going to make it super hard for anyone to write controllers at that point, aside from just the built-in ones. If someone wants to write a custom controller, that's not...
B
As
a
library
and
kind
of
make
it
generally
available
again
I
mean
that
might
be
something
that's
tractable
like
from
from
a
controller
perspective,
and
I
really
like
not
to
from
the
perspective
of
controllers,
I
wanted
to
make
it
as
easy
to
write
a
custom
controller
and
maintain
that
controller
as
possible
right
so
like
any
any
other
requirement
that
we
place
on
like
okay.
Now
you
have
to
be
concerned
with
backing
off.
B
I
will,
if
we're
gonna,
do
that.
I
want
the
on-ramp
for
that
to
be
smooth
for
anyone.
Who's
gonna
write
a
controller,
and
I
want
the
maintenance
of
that
to
be
super
easy
right
and
like,
and
I
want
it
to
be
a
clear
win
so
like
if
we're
going
to
align
that
generically
doing
back
off
detection
on
failed
pod
creation
is
something
that
we
want
all
controller
authors
to.
Do
I
think
that's
something
we
need
to
make
easy.
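To ground that: the closest thing that exists today is client-go's rate-limited workqueue, where a controller requeues a key with exponential backoff after a failure and forgets it on success. A minimal sketch; the sync function and key are invented for illustration:

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/workqueue"
)

func main() {
	// Exponential per-item backoff: 5ms on the first retry, capped at 1000s.
	q := workqueue.NewRateLimitingQueue(
		workqueue.NewItemExponentialFailureRateLimiter(5*time.Millisecond, 1000*time.Second))
	defer q.ShutDown()

	syncReplicaSet := func(key string) error {
		// Hypothetical sync that fails, e.g. because pod creation keeps failing.
		return fmt.Errorf("pod creation for %s rejected", key)
	}

	q.Add("default/my-replicaset")
	key, _ := q.Get()
	if err := syncReplicaSet(key.(string)); err != nil {
		// Requeue with backoff instead of retrying immediately.
		q.AddRateLimited(key)
	} else {
		// Success: reset the backoff for this key.
		q.Forget(key)
	}
	q.Done(key)
}
```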
B
There are no more places to put the pod at that point, right? So it's not a trivial thing to say "we're going to detect cases in which the scheduler is failing on placement, or the kubelet is rejecting admission to the node, and then back off there". I just don't see a very straightforward implementation that we could make generic across all the controllers and push out. And I think, like Matt said, there are cases where we do this, for DaemonSet in particular.
B
But
I
know
jordan,
I'm
sorry,
but
in
those
cases
it
is
very
like
you
know,
you
have
one
pod
per
node
and
it's
very
clear
about
why
the
rejection
happens.
It's
a
special
case,
but
it's
a
special
case
problem.
Okay,
I'm
not
I'm
not
opposed
in
general
to
the
approach
of
implementing
back
off.
I
just
think
we
have
to
be
careful
about
how
we
do
it,
especially
because
the
controllers
are
b1
and
we
don't
want
to
implement
unexpected
behaviors
so
like.
B
If there's a proposal there, a general proposal about how we think we can do backoff in a way that's backward compatible, that's reasonable for a good set of use cases, and that's approachable for developers both inside of core Kubernetes and in the larger ecosystem, I wouldn't be opposed to that. It's just that I haven't seen that proposal yet.
A
In the case of an API call, whether that's an update, a delete or a creation: if that fails, yes, we do back off, and we will, because that's direct information that we, as in the controller, can actually work with. Then we can respond to the current situation and eventually figure out "oh, the API server is either overwhelmed, or is not responding, or something else is going on", and we can slow down our requests.
A
The problem with this one is that the actual failure is outside of our controller. Although, at the same time, as Philip mentioned, he observed 1000 creations of the same pod; it's hard to say that it is exactly the same, but I wonder if it's possible to somehow reasonably identify "oh, this ReplicaSet is having another pod created again within a very short time span, which might seem somewhat suspicious", and based on that try to create the backoff.
A
But I guess, like you said, probably some kind of generic mechanism would be the best option for us to have, because it would basically apply equally across the board for most of the controllers. Although, well, not necessarily for the ones that are directly creating pods, where they have closer visibility into what's going on. And I guess...
B
Like you just said, right: if we're going to back off because of API failures on pod creation, that's a clear signal. But what are the signals that we would use for scheduling and placement failures, or node rejection? What signal could we possibly use to back off reasonably there? Because it's fairly opaque from a controller's perspective.
A
Yeah, the part that worries me the most, not to mention the fact that this issue is from 2019, is that it's hard for us to figure out how to properly respond; but at the same time this poses a great security risk, well, kind of, because you can quickly overwhelm your cluster by just creating a simple ReplicaSet with a forbidden sysctl. Which is, yeah, well, kind of...
A
We
will
prevent
your
pot
from
running,
but
still
the
controller
will
create
1000
pot
within
a
minute
and
well
it's
hard
to
react
in
in
this
busy
cluster.
Yes,
of
course
you
can
have
monitoring
and
all
that,
but
for
that
short
period
of
time,
when
you
will
actually
notice
this,
this
craziness
it'll
take
some
time.
B
I think the issue keeps sticking around because there's a little bit of "well, you should do it; no, you should do it; no, you should do it". It shouldn't be a scheduling problem, it shouldn't, right? But my concern about making it a controller problem is that now the people in Argo, say, have to go...
B
Do
it
right,
like
everybody
who
writes
a
controller,
that
deploys
workloads
or
deploys
pies
now
has
to
worry
about
this
this
problem
and
it
seems
to
kind
of
break
the
unix
philosophy
to
me
right
like
solve
the
problem
once
solve
it
well
make
it
generally
available
if
we
feel
like
the
right
way
to
do.
That
is
be
a
library
that
you
know
we
distribute
to
the
community
at
large
and
then
leverage
inside
of
the
built-in
controllers.
That's
that's
a
path
forward
to
me.
It's
just
like.
B
If
that's
what
we're
going
to
do,
I
go
back
to
again.
What
is
what
are
these
signals
that
we're
going
to
base
this
library
on
to
achieve
a
reasonable
behavior?
That's
both
backward,
compatible
and
generally
useful
and
looking
at
the
problem,
none
of
it
still
seems
clear
to
me
like
nobody
like,
I
don't
see
like
well.
How
do
I
detect
that?
This
is
a
case
where
I
should
back
off.
B
The
api
machinery
is
telling
me
go,
go
go
if
you're
asking
me
to
throttle
based
on
successful
requests,
don't
know
like
that's,
we
could
put
in
generic
timers.
That
say,
like
the
the
max
threshold
for
total
creates
coming
out
of
a
controller.
Is
this
and
we'll
back
ourselves
off,
but
that
introduces
behavior
that
might
be
undesirable
from,
like
you
know,
blocking
rollouts
and
making
rollouts
more
slow
for
large,
like
for
people
who
have
very
large
deployments
or
who
are
very,
very
many
deployments
globally.
B
Throttling
and
globally
rate
limiting
the
replica
set,
controller
or
deployment
controller
is
going
to
be
a
problem
for
them.
A
Well,
that's
actually
something
that
is
already
available
because
the
pnf-
and
I
can't
remember
what
pnf
stands
for,
which
is
basically
the
mechanism
present
on
the
api
server,
where
you
can
limit
the
amount
of
requests
per
client
prior
to
unfairness.
A
That's
the
full
name
like
so
we
have
that
mechanism
and
the
cluster
administrator
can
theoretically
create
global
cues
and
somehow
limit
the
built-in
controllers.
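A sketch of what such a limit could look like with API Priority and Fairness objects; the flowcontrol/v1beta1 Go types are real, but the object names, the precedence value, and the referenced PriorityLevelConfiguration ("throttled-controllers") are assumptions for illustration and would have to be defined by the administrator.

```go
package main

import (
	flowcontrolv1beta1 "k8s.io/api/flowcontrol/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Routes pod-create requests from the ReplicaSet controller's service
// account to a dedicated (hypothetical) priority level with its own queues.
var flowSchema = flowcontrolv1beta1.FlowSchema{
	ObjectMeta: metav1.ObjectMeta{Name: "limit-replicaset-controller"},
	Spec: flowcontrolv1beta1.FlowSchemaSpec{
		PriorityLevelConfiguration: flowcontrolv1beta1.PriorityLevelConfigurationReference{
			Name: "throttled-controllers", // assumed to exist separately
		},
		MatchingPrecedence: 1000,
		Rules: []flowcontrolv1beta1.PolicyRulesWithSubjects{{
			Subjects: []flowcontrolv1beta1.Subject{{
				Kind: flowcontrolv1beta1.SubjectKindServiceAccount,
				ServiceAccount: &flowcontrolv1beta1.ServiceAccountSubject{
					Namespace: "kube-system",
					Name:      "replicaset-controller",
				},
			}},
			ResourceRules: []flowcontrolv1beta1.ResourcePolicyRule{{
				Verbs:      []string{"create"},
				APIGroups:  []string{""},
				Resources:  []string{"pods"},
				Namespaces: []string{"*"},
			}},
		}},
	},
}

func main() { _ = flowSchema }
```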
A
So
there
are
options,
I'm
more
concerned
about
yeah
that
the
default
approach
and
the
default.
What
we
have
because
yeah
like
philip
and
robbie
mentioned
there
are
options
because
you
can
quota,
although
it
doesn't
have
it
can't
be
a
it
has
to
be
a
specific
quota
type
to
be
able
to
cap
at
all.
A
One that counts the Failed and Succeeded pods too, so that the quota counts all pods, not just the running ones. Because in the normal case, where you specify "pods", if I remember correctly, and I think that case is described there, Jordan put it there, it will only count the running pods; but if you specify it with the pods object count, if I remember correctly...
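If I have the distinction right, it is the difference between the compute quota "pods", which tracks only non-terminal pods, and the object-count quota "count/pods", which caps every pod object in the namespace. A minimal sketch of the latter with the real core/v1 types; the names and the limit are illustrative:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Caps every pod object in the namespace, including Failed and Succeeded
// ones, unlike the plain "pods" quota which tracks only non-terminal pods.
var quota = corev1.ResourceQuota{
	ObjectMeta: metav1.ObjectMeta{Name: "cap-all-pods", Namespace: "default"},
	Spec: corev1.ResourceQuotaSpec{
		Hard: corev1.ResourceList{
			corev1.ResourceName("count/pods"): resource.MustParse("100"),
		},
	},
}

func main() { _ = quota }
```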
D
Yeah, and as a relevant signal for this one: once you create the pod, the phase goes to Failed. And I think, as Ken said, the problem is that the scope of this is very generic. So even if you have some ReplicaSets which are failing in some percentage of cases, the ReplicaSet will now get much slower because of the backoff, even when it is behaving properly from the customer's standpoint.
C
Yeah, I think what I want to mention is: there are two different sets of problems, right? The first problem that we have is having a generic backoff and how to implement it; that is the first problem. The next problem is: would backoff solve our problem, say, if it is coming from the node? How do we detect and special-case certain things, which we do not want to do in the controller?
C
The
problem
here
seems
to
be
that
node
does
not
have
a
condition
which
either
scheduler
or
not
the
replica
set
controller.
By
that,
what
I
mean
is
any
workload
controller,
it
should
not
have
any
say
in
the
the
recreation
rate
based
on
the
failure
state
of
the
pod
on
the
node.
Rather,
it
should
be
something
like
once
the
cubelet
fails.
C
...those pods, the kubelet should set a condition on the node object which says the node, or rather the pod, is actually crossing the boundary that we have specified for the node, and the node lifecycle controller can tell that this particular condition exists on the node. It would be similar to memory pressure or disk pressure on the node: the node lifecycle controller would taint that particular node, and we would not have to touch anything in the core workload controllers apart from having backoff. And for the workload...
C
If
the
user
want
to
tolerate
that
particular
change,
they
are
responsible
for
having
a
toleration
in
the
workload
saying
that
I
can
tolerate
it.
I
would
like
to
go
ahead
and
get
scheduled
on
the
node
which
or
where
the
workload
can
cross
the
cis
cutters
that
we
have
specified
or
the
boundaries
that
we
have
specified.
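A sketch of the shape being proposed, by analogy with the existing MemoryPressure condition and its node.kubernetes.io/memory-pressure taint; both the condition type and the taint key below are invented for illustration and exist nowhere in Kubernetes.

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Hypothetical condition the kubelet could report on its Node object,
// analogous to the real MemoryPressure / DiskPressure conditions.
var condition = corev1.NodeCondition{
	Type:               corev1.NodeConditionType("SysctlBoundaryCrossed"), // invented type
	Status:             corev1.ConditionTrue,
	Reason:             "UnsafeSysctlRequested",
	Message:            "a pod requested sysctls outside the node's allowed set",
	LastTransitionTime: metav1.Now(),
}

// Hypothetical taint the node lifecycle controller could derive from that
// condition, as it does for the pressure conditions today.
var taint = corev1.Taint{
	Key:    "example.k8s.io/sysctl-boundary", // invented key
	Effect: corev1.TaintEffectNoSchedule,
}

func main() { _, _ = condition, taint }
```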
C
Rather,
the
node
should
specify
something
and
the
node
lifecycle,
controller
or
some
other
entity
should
say
that
the
node
has
certain
problems
and
I'm
going
to
taint
it,
and
if
you
want
to
use
it,
go
ahead
and
use
it
does
that
make
sense.
I
think
I
should
have
been
clear
earlier.
That's
what
I
was
trying
to
say.
E
So yeah, I guess the only tool we have right now is node labels and selectors: the pods using this feature should be pointing to the node label that means "this node has these sysctls allowed". But then it's pretty much on the user to configure all of these things.
E
So I'm kind of falling into the idea that, yeah, maybe we should just have the backoff, and we just need to detect the reason why a pod was not admitted. Because we already have backoff for Jobs, and it's based on... it also pays attention to the exit codes of the containers.
E
So I guess we will have to do something similar, just to protect the control plane, right? It's not about whether the pod should run or not on this node. I think that, unfortunately, that's how it has to be: it's on the user to configure it properly, because things that don't provide a...
C
Number
two
is
we
have
this
default
tolerations
plugin
in
kubernetes,
where
we
can
specify
what
are
the
default
tolerations
and
what
is
the
time
that
we
can
specify,
which
is
an
admission
plugin.
So
that
is
why
I
thought.
Okay.
If
we
want
to
have
something
like
that,
either
user
can
explicitly
specify
or
we
can
have
a
default
toleration
like,
for
example,
in
demon
sets.
We
provide
the
default
tolerations
for
certain
teams,
so
that
is
one
of
the
reasons
that
I
have
gone
with
this
approach.
C
I'm
not
saying
that
pains
are
like
the
best
ones,
but
my
main
goal
here
is
not
to
touch
anything
in
the
scheduler
and
not
to
touch
in
the
controller,
because
if
we
touch
something
in
the
controller,
we
may
have
to
keep
track
of
all
the
conditions
which
can
actually
lead
to
back
off.
So
that
is
doing
that
generically
as
hard
as
what
I'm
thinking.
C
Yeah,
so
I
think
derek
has
mentioned
the
like.
I
think
he
has
responded
to
one
of
the
questions
like
one
existing
mechanism
on
the
cubelet
side.
To
have
this
kind
of
condition.
Is
they
forgot
the
exact
name,
but
they
have.
C
They
have
one
mechanism
for
which
or
through
which
they
can
say
that
the
pod
is
not
going
to
be
in
a
hard
failure
stage,
meaning
it
will
be
in
a
blocked
stage
or
some
other
stage,
but
not
a
hard
failure
and
the
pod
will
be
in
pending
state
and
the
parts
would
not
be.
The
further
parts
would
not
be
created,
but
derek
is
not
in
favor
of
having
a
soft
failure,
but
rather
he
wants
a
hard
failure
to
happen.
On
the
cube
left
side.
B
So I guess the thing I'm still trying to think about, at a higher level, is what the customer is seeing here, right? Okay: I deployed my cluster and I configured all of my kubelets to not allow unsafe sysctls, and then either the previous deployments that I have somewhere else, which I'm going to apply to this new cluster, or the new deployments I'm making, are going to try to use these sysctls and they're going to get rejected.
B
So
like
one
of
two
things
is
happening
there
like
either
I
misconfigured
my
cluster
when
I
turned
it
up
or
I
don't
really
understand
the
nature
of
my
cluster
configuration
when
I'm
trying
to
deploy
my
workload-
and
you
know
it's
because
this
is
a
parameter
that
you're
passing
the
kubelet
on
node
turn
up.
It
may
not
be
clear
to
the
user
what
the
operator
has
actually
configured
right.
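For context, the mismatch is between these two pieces of configuration; the kubeletconfig and core/v1 types are real, and net.core.somaxconn is the usual documentation example of a sysctl that is unsafe unless explicitly allowed:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	kubeletconfig "k8s.io/kubelet/config/v1beta1"
)

// Node side: the operator-controlled kubelet setting (also exposed as the
// --allowed-unsafe-sysctls flag). Pods asking for sysctls outside this list
// are rejected at admission on the node.
var kubeletCfg = kubeletconfig.KubeletConfiguration{
	AllowedUnsafeSysctls: []string{"net.core.somaxconn"},
}

// Workload side: what a pod author writes, with no visibility into which
// nodes, if any, allow it.
var podSecurity = corev1.PodSecurityContext{
	Sysctls: []corev1.Sysctl{{Name: "net.core.somaxconn", Value: "1024"}},
}

func main() { _, _ = kubeletCfg, podSecurity }
```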
B
So there's a mismatch, and then, because there's no node in the cluster that you can actually run this pod on, the scheduler is just going to keep trying to throw it around, while honoring the other taints and scheduling predicates that may be attached to the deployment, but it's never going to be able to schedule it.
B
If we add backoff here, what is the impact that that feature would have versus the cost of implementation? That's kind of my thing, right? Because it does seem like there's no broad consensus that there's some great generic signal we could use to implement a backoff that would solve this case; and even if we did it, the workload is still misconfigured and the customer is never going to be able to run it.
B
Like
there's,
basically
like
to
me
that
the
root
problem
seems
like
we
have
this
feature
in
kubelet
that
it
doesn't
communicate
its
expectations
or
surface
it
in
a
way
that
the
consumer
of
kubernetes,
like
the
developer,
who's,
deploying
it
can
reason
about,
and
whatever
we
do
here,
isn't
going
to
fix.
That
problem.
C
So
if
we
just
implement
back
off,
can
I
think
it
does
not
solve
the
problem,
but
it
actually
reduces
the
time
by
which
the
customer
is
going
to
hit
this
particular
problem.
So
that's
why
I
told
back
office
is
not
a
complete
solution
to
it
like
eventually,
the
cluster
state
may
reach
that
stage
where
it
has
got
thousand
parts
or
n
number
of
pods,
which
can
break
the
cluster,
but
having
back
off
would
increase
the
time
by
which
the
cluster
admin
is
going
to
hit.
That
state.
A
I
don't
think
we
we
will
be
able
to
close
the
discussion
on
this
topic
here.
There
are
a
couple
of
follow-ups
that
need
to
happen
before
we
can
revisit
this
topic.
So
probably,
let's
wait
for
the
discussion
with
the
signal
to
happen.
B
Yeah, I also personally would like to better understand the intended use of this feature, because one thing that's not clear to me, just to clarify: does SIG Node intend for there to be mixed-mode clusters, where some nodes are configured with a kubelet that has this enabled and some nodes are not? That would be one question.
B
What
is
the
intention
from
the
sig
node
side
of
interaction
with
higher
level
controllers
and
I'd
like
to
get
their
their
opinion
on
how
they
see
this
feature,
interacting
with
scheduler
and
with
controller,
especially
before
committing
anything
on
our
on
the
sig
side?.
C
Yeah, I think it makes sense. One thing I have heard from SIG Node, and I have pasted whatever Derek was mentioning last time in the chat, is that he wants the failure to happen immediately and it should be a hard failure. At the same time, it's a flag on the kubelet side, so some of the nodes are going to have this particular feature enabled and some of them are not. So how can we solve this problem at the controller level using backoff?
B
To me, if that's the intended use case, right, if the idea is that I'll have a mixed-mode cluster with a set of nodes, basically somewhat unsafe nodes, where I'm going to allow code to run with sysctls that I consider unsafe, based on a security context, and then a whole other set of nodes where that's completely not allowed, then it is a scheduling problem, right? The scheduler should be...
B
If
that's
the
intended
use
case,
I
would
argue
that
the
scheduler
should
be
aware
and
solve
it
up
front
right,
like
you
should
schedules,
okay.
Well,
this
has
unsafe
systems.
I
have
this
amount
of
capacity
where
I
can
run
that
I'm
gonna
put
it
there
or
I'm
gonna,
keep
it
impending,
because
I
can't
even
launch
it
because
I
don't
have
enough
resources
to
actually
schedule
it
there
based
on
the
resource
limits
and
requests,
and
if
it's
not
like
that
is
a
scheduling
problem
to
me
right.
E
So we need, maybe, even a new API to signal it. I guess we have tolerations, but if we go the tolerations path, then we need an admission controller to add things to everything, or...
C
Dirk
told
that,
yes,
we
can
actually
surface
it
as
an
api
in
the
status
and
whether
scheduler
would
actually
look
at
it
and
then
place
the
parts
or
if
the
node
lifecycle,
controller
or
some
other
entity
can
use
the
existing
scheduling
mechanisms
like
teens
and
tolerations
or
affinity
or
something
we
can.
We
can
discuss
about
that
later.
E
Yeah,
maybe
I
guess
we
can
bring
this
discussion
to
six
scheduling,
but
just
thinking
quickly,
I
think
maybe
the
admission
should
set
an
old
affinity
on
under
the
livestream.
No
dry
cycle
would
add
this
label
to
the
node.
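Sketched out, the idea might look like this: an admission plugin injects a required node affinity into pods that request unsafe sysctls, keyed on a label that the node lifecycle controller would maintain on qualifying nodes. The affinity types are real core/v1 API; the label key is invented.

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
)

// Hypothetical affinity an admission plugin could inject into a pod that
// requests unsafe sysctls, steering it onto labeled nodes only.
var affinity = corev1.Affinity{
	NodeAffinity: &corev1.NodeAffinity{
		RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{
			NodeSelectorTerms: []corev1.NodeSelectorTerm{{
				MatchExpressions: []corev1.NodeSelectorRequirement{{
					Key:      "example.k8s.io/unsafe-sysctls-allowed", // invented label
					Operator: corev1.NodeSelectorOpIn,
					Values:   []string{"true"},
				}},
			}},
		},
	},
}

func main() { _ = affinity }
```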
E
That's the mechanism we have today, but I'm not aware of any existing admission controllers that are adding affinities. I might be wrong, but...
C
Yeah, so I think one thing I can get started on is coming up with a design document in 1.25, having the discussion across SIG Node, SIG Scheduling and SIG Apps, and then I think we can go from there. Part of what I wanted to bring up here is: would the generic backoff mechanism be helpful, irrespective of whatever we choose for solving this problem? I think that is the thing that I wanted to bring up.
B
There are mechanisms that the API machinery has for doing that, right? So is the right thing to do to try to build, effectively, an SDK or some type of client-side library that we distribute across the community, or can we leverage the existing features around queuing and fairness in the API machinery to ensure that we don't overwhelm it from the door? Is there...
A
Maybe
that
will
be
also
another
reasonable
value
to
discuss
this
issue
with
since
api
machinery
is,
is
usually
affecting
all
the
areas
that
they
could
be
also,
the
topic
could
be
brought
up
for
one
of
their
next
calls,
and-
and
we
can
hear
their
general
opinion-
that
I
think
would
be
the
original
approach.
A
Okay, I think we have sufficient follow-ups on this topic. I'm not sure we have the person behind the next topic here.
E
If we still have time, I can comment a little bit on this issue.
A
Yeah, go ahead.
E
I thought this could have been a bug, but I wasn't aware of users hitting it.
E
Basically, when we use what's called AddAfter, it's not necessarily going to queue after that delay. Let's say I call AddAfter with five minutes, and then another add comes that says "add now", or "add in a minute": that one-minute add is the one that's going to happen, and the five-minute one is going to be ignored; it basically disappears. And that's the bug here: we are only calling AddAfter for the TTL or the active deadline once...
E
Sorry
at
the
job,
the
first
time
we
see
the
job,
so
I
think
that
the
easy
solution
is
just
a
queue
after
every
time
we
sync
the
job,
because
every
time
we
sing
the
job,
we
already
got
rid
of
the
older
cue
after
so
I
think
that
would
be
my
my
proposal
and
I
think
this
this
vr
is
way
too
complicated
for
the
reasoning.
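A minimal reproduction of the delaying-queue behavior described above, using the real client-go workqueue API (the key name is illustrative). As I understand the semantics from the discussion: for an item already waiting, an earlier ready-time wins and the later one is dropped, so once the earlier add fires, the five-minute entry is gone.

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/workqueue"
)

func main() {
	q := workqueue.NewDelayingQueue()
	defer q.ShutDown()

	// Schedule the job key five minutes out (e.g. for activeDeadlineSeconds)...
	q.AddAfter("default/my-job", 5*time.Minute)
	// ...then something else schedules the same key much sooner. The waiting
	// entry is moved up to the earlier time; the five-minute deadline is lost.
	q.AddAfter("default/my-job", 1*time.Second)

	key, _ := q.Get() // fires after roughly one second, and only once
	fmt.Println("got", key)
	q.Done(key)
}
```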
A
Oh yeah, if this is about the activeDeadlineSeconds and us only adding it once... I can't remember if I looked at it earlier or not, but I remember very well that AddAfter, or generally the queue, works such that it will pick the entry that is closest to being executed.
A
Because we are heavily using that mechanism for the schedules of CronJobs. So yeah, I definitely agree that whenever the deadline seconds is involved...
A
...we should be calculating the delay between now and whatever is left of the activeDeadlineSeconds, and always ensuring at the end that we will re-queue this job after some time. Because I assume that the problem is basically coming from the fact that somebody created a job with activeDeadlineSeconds, then, I don't know, performed a modification on it or something like that, and that caused the activeDeadlineSeconds to not be calculated properly, more or less.
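A sketch of the fix being described: on every sync of a job that carries an active deadline, compute the time remaining and unconditionally requeue. The function, key, and how startTime and activeDeadlineSeconds are obtained from the Job object are all illustrative; the workqueue calls are real.

```go
package main

import (
	"time"

	"k8s.io/client-go/util/workqueue"
)

// Re-added on every sync, so an earlier AddAfter having consumed the
// waiting entry no longer loses the deadline check.
func requeueForDeadline(q workqueue.DelayingInterface, key string,
	startTime time.Time, activeDeadlineSeconds int64) {

	deadline := startTime.Add(time.Duration(activeDeadlineSeconds) * time.Second)
	if remaining := time.Until(deadline); remaining > 0 {
		q.AddAfter(key, remaining)
	}
}

func main() {
	q := workqueue.NewDelayingQueue()
	defer q.ShutDown()
	requeueForDeadline(q, "default/my-job", time.Now(), 30)
}
```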
A
There are some edge cases where this can happen, and I can't remember what they were, but the solution was to use AddAfter, which will always ensure that we will re-check the job at whatever time is left to complete the overall execution.
A
Okay. Although, I guess you'll comment on the PR; I'll have a second look at it later this week as well.
F
Hey guys, just a quick question about the status of the SIG Apps and CLI reviewer mentoring club: is that still happening?
A
I need to double-check with Paris; I haven't heard from her. I remember she was supposed to create a list for us, and then we would follow up with the two groups: one for people that are not yet Kubernetes...
A
...contributors, or Kubernetes org members, because those people will require a different set of preparation and a different set of guidance; and then a separate list of folks who have already passed that requirement and will require more guidance around the actual reviewing and preparation towards that.
A
I remember that somebody recently pinged me on Slack with regards to this. I'm hoping to ping Paris later this week, re-sync with her and with Eddie on where we are, and bring it back into motion.
A
Given that we will be having KubeCon in roughly two weeks, it might slow down a little bit, since a lot of folks will be preparing for that. In the worst case, I'll follow up with Paris during KubeCon, and then we can return to this sometime in early June.
A
Okay, if there are no other questions, I think we can close the call at this point in time.
A
Okay, thank you very much all, and enjoy the rest of your day. Take care.