From YouTube: Kubernetes SIG Apps 20211004
A: Good morning, good evening, good afternoon, depending on where you are. Today is October 4th, and this is another of our bi-weekly SIG Apps calls. My name is Maciej and I'll be your host. Today we have two announcements. Next week, during KubeCon North America, we have a SIG Apps update; I'll link that one in the chat. So if you're interested, feel free to listen, join, and ask questions.

A: So with that, I can pass the voice over to Jordan, who wants to talk about PDBs.
C: I wanted to bring up a topic that came up around PDBs. It's a little unclear to me, at least, which SIG owns this area; it feels sort of halfway in between SIG Node and SIG Apps.
C: I think PDB was originally designed to be app-facing, because it makes use of things like selectors, interacts with controllers, and dovetails nicely with something like running a Deployment with so many replicas and then not wanting all of the replicas backing your service to be taken down at the same time.
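To make that app-facing shape concrete, here is a minimal client-go sketch of creating such a PodDisruptionBudget; the "app: web" label, the names, and the minAvailable value of 2 are hypothetical, chosen only to mirror the "replicas backing a service" example above.

```go
package main

import (
	"context"

	policyv1 "k8s.io/api/policy/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig (assumption: running outside the cluster).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// Keep at least 2 of the pods selected by app=web available at all times.
	minAvailable := intstr.FromInt(2)
	pdb := &policyv1.PodDisruptionBudget{
		ObjectMeta: metav1.ObjectMeta{Name: "web-pdb", Namespace: "default"},
		Spec: policyv1.PodDisruptionBudgetSpec{
			MinAvailable: &minAvailable,
			// The selector is what ties the PDB to app-level objects such as
			// a Deployment's pods, which is why PDBs feel SIG Apps-facing.
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"app": "web"}},
		},
	}
	if _, err := clientset.PolicyV1().PodDisruptionBudgets("default").Create(
		context.TODO(), pdb, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```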
C: But then the control point where PDB actually accomplishes something is the eviction endpoint, which is primarily used for draining nodes. Other people can use it as well, but draining nodes is the main use case I'm aware of, and the one that most Kubernetes deployments make use of for managing their node pools. Anyway, a question came up about how to treat not-ready pods right now.
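For reference, the eviction endpoint mentioned here is the pods/eviction subresource; below is a minimal sketch of calling it through client-go (the pod name and namespace are hypothetical, and the clientset is assumed to be built as in the previous sketch). When a PDB would be violated, the API server rejects the request with a "too many requests" error rather than deleting the pod.

```go
package evictsketch

import (
	"context"

	policyv1 "k8s.io/api/policy/v1"
	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// evictPod posts to the pods/eviction subresource, the control point where
// the PDB is actually enforced. kubectl drain goes through the same path.
func evictPod(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	err := cs.PolicyV1().Evictions(ns).Evict(ctx, &policyv1.Eviction{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: ns},
	})
	if errors.IsTooManyRequests(err) {
		// The PDB disallowed the disruption; drain-style callers back off
		// and retry rather than force-deleting the pod.
		return err
	}
	return err
}
```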
C: Under some circumstances, a not-ready pod can be evicted regardless of the PDB, and then in other circumstances the PDB is consulted and written to as part of evicting that not-ready pod. So it seems like there are two consumers who are expecting different things from this. One consumer is very app focused and saying: these pods already aren't ready.
C: Why are you blocking me from deleting them if they're not ready? They're already not being included in service endpoints; they're already disrupted. Why is the PDB even involved? This is deadlocking me. I linked to an open issue around that. Another consumer is explicitly using not-readiness as a way to sort of get PDBs into the mix for preventing additional deletion of pods, and so in the SIG Apps thread there have been some people chiming in saying, oh yeah...
C: There's discussion in the thread about how safe the use of PDBs is for that going-not-ready-and-preventing-removal use case. I would argue that it is not safe at all, because it's racy, depending on whether the controller has observed those pods going not ready. So again, it seems like there are two use cases: one is sort of service- and app-oriented, and one is more backend data-safety oriented.
C: So anyway, that's the context. I guess I wanted to raise awareness of it for SIG Apps folks who hadn't seen the thread or weren't really paying attention to the implications. In my mind, there are two things we have to resolve.
C: We need to resolve the deadlocks of PDBs for not-ready pods, and we also have to resolve the unsafe use by people who are depending on it for data safety. There was a big, long discussion in SIG Node last week, which you can go look at the recording of, but the feedback there was... Yeah, I'll stop talking now. Any other thoughts or feedback or context that I missed?
A: Specifically, the problem of how you treat the not-ready pods isn't even handled in unison between controllers. Our StatefulSet controller has a clearly pointed-out limitation that if, during a rollout, a pod gets stuck for whatever reason, we will not proceed with a further rollout. That is kind of touching this area, because you might end up with a half-broken StatefulSet that you need to manually intervene in to unblock. So that would be somewhat similar to the camp of...
A: If, for whatever reason, the old one somehow gets back, the new one will actually get deleted, not the old one, although that's with the recent changes to the algorithm for how we scale down ReplicaSets.
A: I guess the conclusion you pointed out, that this has to be configurable, is probably valid, because we already have both cases available in the current controllers. There is even a request for StatefulSets to also do a similar thing and not care about the data: if a pod is not ready, delete it and proceed. A lot of folks have opened issues about that.
D: So it's been a little while since I looked at this stuff, but I remember some of the issues here. I think one of the challenges has been that the disruption controller and the eviction logic have to have separate thresholds for how to compute, or how to consider, whether a pod is healthy or not. So you can have pods that are too unhealthy to be included when the disruption controller counts the number of allowed disruptions, but also, sort of, too healthy to be affected by the eviction.
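The two thresholds described here surface in the PodDisruptionBudget status that the disruption controller writes and the eviction endpoint consults. A minimal sketch of reading those fields follows, assuming the clientset and imports from the first sketch plus fmt; "web-pdb" is the hypothetical PDB from that example.

```go
// Sketch: inspect the counters maintained by the disruption controller.
pdb, err := clientset.PolicyV1().PodDisruptionBudgets("default").Get(
	context.TODO(), "web-pdb", metav1.GetOptions{})
if err != nil {
	panic(err)
}
// CurrentHealthy counts pods the controller considers healthy; pods it has
// observed as not ready are excluded from this number, which is one side
// of the threshold mismatch discussed above.
fmt.Printf("healthy: %d/%d (expected %d), disruptions allowed: %d\n",
	pdb.Status.CurrentHealthy,
	pdb.Status.DesiredHealthy,
	pdb.Status.ExpectedPods,
	pdb.Status.DisruptionsAllowed)
```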
C: Yeah, so I think there's agreement that we can't just fix the bug and let not-ready stuff be deleted in 1.23. It requires more than that: it would take the current risk for the people who are depending on PDBs unsafely and make things very unsafe for them. So that's not a good idea.
C: I think we also can't just fix the bug in a way that makes PDBs more deadlock-prone for people who are currently using them successfully in cases where it's clearing out not-ready pods.
C: What that seems to imply is that there's work to be done to make this behavior configurable, and it's not clear to me who is motivated to do that work, or who is planning to do that work. This kind of gets back to it not being super clear whether SIG Node or SIG Apps is responsible for this component.
B: Jordan, to be clear, this is not going to be a temporary solution? Because I read Clayton's response on the thread, and I think he's suggesting making this work for some time so that both parties are okay, and then moving on to the first solution that you are proposing. Am I right, or...?
C: I don't think so. I think his expectation was that the PDB consumer should be able to indicate which behavior they want.
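One way to read that expectation is a per-PDB field selecting between the two behaviors. The sketch below is purely hypothetical: the field name, type, and values are illustrative of the shape such an API could take, not anything agreed on in this meeting.

```go
// Hypothetical API sketch, not an actual Kubernetes type change: let the
// PDB author choose how pods that are running but not ready are treated.

// UnhealthyPodEvictionPolicy selects the eviction behavior for not-ready pods.
type UnhealthyPodEvictionPolicy string

const (
	// IfHealthyBudget: not-ready pods count against the budget, so they
	// can block eviction (the data-safety oriented use case).
	IfHealthyBudget UnhealthyPodEvictionPolicy = "IfHealthyBudget"
	// AlwaysAllow: not-ready pods are always evictable, avoiding the
	// deadlock described above (the service/app oriented use case).
	AlwaysAllow UnhealthyPodEvictionPolicy = "AlwaysAllow"
)

// The field would hang off PodDisruptionBudgetSpec, defaulting to the
// current behavior when unset.
type podDisruptionBudgetSpecSketch struct {
	UnhealthyPodEvictionPolicy *UnhealthyPodEvictionPolicy
}
```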
C: So, yeah, I wanted to raise it here and see if people here had additional options I hadn't seen, or where SIG Apps sees this compared to other SIG Apps efforts.
C: From my perspective, this is a GA feature and API which has a couple of really sharp edges, and then there's all the conversation in the threads, with people saying "oh no, we're using PDBs to prevent our cluster from getting toasted, or to prevent this data from getting lost." That makes me really anxious: hearing people depending on this while knowing that it's not fit for that use currently.
B: So, if I remember correctly, I think this feature was written by David Oppenheimer when he was in the SIG Scheduling space. So at some point in time, Scheduling was also owning this particular piece, if I remember correctly. I could be wrong.
C: Possibly. I haven't seen anyone from Scheduling involved in it, and I don't know if it was actually Scheduling owning it. I mean, node eviction is the main consumer of it, and the selection mechanisms seem to integrate with how ReplicaSets and Services select active pods, which is more apps oriented. Yes, yeah.
D: Am I interested? Yes. Can I commit to it? I can't answer that right now.
A: I mean, given where we are in the release cycle, that will be something targeted for 1.24 at the soonest, because we would have to have a KEP, even though there is overall agreement. I guess the only question is what the API will look like for this, although Jordan briefly described what it could potentially look like. It would be an alpha feature, I guess, and it will naturally have to go through the usual feature gate progression: alpha, beta, all the way to GA.
C: Yeah, just to clarify priority: it would be helpful to know, from SIG Apps' perspective, what else is going on that is considered higher priority than this, if we're looking for someone to staff this. To my mind, a GA feature that people are using to protect against data loss, which we know has gaps or race issues that can allow that data loss, seems like it should preempt other new, greenfield, alpha-type work.
C: So if Morten is interested, that's great, but I would like to see a couple of people working on this, ideally from a couple of companies, so we don't miss 1.24 for the first part of a solution to this.
C: Okay, I will add an action item to the meeting notes and tag, I guess, Maciej, to track some of that down; it's important to find out whether or not that's a possibility. Okay, I think timing-wise we're in good shape. If we start thinking about it now, we should be able to get something resolved so that we can hit the ground running in 1.24.
A: It's definitely worth starting to write this down as soon as possible, and then we can fiddle with the API bits and then the implementation. I guess the PR linked there is pretty much a good solution, but instead of doing this always, it has to be conditional. So I guess the only major work is wrapping this thing in a feature gate, and then you should be mostly ready. So it's not hard from the code standpoint.
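For readers unfamiliar with the mechanism, "wrapping this in a feature gate" usually means registering a gate with component-base and branching on it. A minimal sketch follows; the gate name PDBNotReadyPodEviction is purely hypothetical, since no name was decided in this meeting.

```go
package main

import (
	"fmt"

	"k8s.io/component-base/featuregate"
)

// Hypothetical gate name; the real one would be decided in the KEP.
const PDBNotReadyPodEviction featuregate.Feature = "PDBNotReadyPodEviction"

func main() {
	gate := featuregate.NewFeatureGate()
	// New behavior ships off by default in alpha, per the usual
	// alpha -> beta -> GA graduation mentioned above.
	if err := gate.Add(map[featuregate.Feature]featuregate.FeatureSpec{
		PDBNotReadyPodEviction: {Default: false, PreRelease: featuregate.Alpha},
	}); err != nil {
		panic(err)
	}

	// The conditional wrap: only take the new eviction path when enabled.
	if gate.Enabled(PDBNotReadyPodEviction) {
		fmt.Println("new, configurable not-ready eviction behavior")
	} else {
		fmt.Println("current behavior")
	}
}
```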
C: Yeah, I think it'll mostly be agreement on: are these the two use cases? I think the discussion on the current issue has shown there are a lot of people with a lot of different expectations. So the sooner we can say "here are the two use cases we see, we want to make these safe, and here's how we're going to do it," the better.
B: Yeah, we just want to give an update on this KEP that we started a couple of months ago. This is related to having statuses, or consistent statuses, for all the workload controllers.
B: Yeah, so we initially got some feedback from Maciej. We have tried including some of the batch-related workloads, which should help especially for Job and CronJob; CronJob at this point does not have a status. That's what we have mentioned in the KEP, but I believe we have tried addressing your comments, Maciej, and this is up for one more round of reviews.
A: Okay. Does anyone have any questions for Ravi or Philip, or does everyone just need to have a look at this work, which will be happening over the next couple of releases, most likely?
A: Okay, hearing none: does anyone else have any last-minute topic that they want to bring up with the group?
A: Okay, hearing none, in that case I'm going to close the call with this one. Thank you very much, all, and see you at KubeCon next week and on SIG Apps in two weeks. Thank you all, bye-bye.