From YouTube: Kubernetes SIG Apps 20230724
A: The first one is about the PR for changing the StatefulSet pod management policy.
B: Okay, so it was about the pod management policy. I will open the change. The goal was to see how to make progress on it; I will show again the use case that we have and why we think it's a good feature, and then we can see how we can move forward with it.
So the goal is to be able to change the pod management policy of an existing StatefulSet; at the moment it's forbidden in the validation code.
B: So basically the feature is about the pod management policy of a StatefulSet, especially because OrderedReady is one behavior and Parallel is, how to say, one where we can implement more behaviors on top of it.
B: Basically, the ability to change it is my main use case for databases. OrderedReady is very good for bootstrapping and scale-up operations, because of split brains: it limits node leave and join operations, because you might not want to have 10 nodes joining or leaving at the same time. Parallel is very good because you can quickly recover if your pods are deleted.
B: So basically, if you start pods in Parallel, they all come up at the same time, whereas with OrderedReady you have to wait for all of them to start in order.
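For context, the policy being discussed is a single field on the StatefulSet spec. A minimal sketch, with placeholder names (`db`, the image):

```yaml
# Hypothetical StatefulSet fragment; "db" and the image are placeholders.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db
  replicas: 3
  # OrderedReady (the default): pods are created and terminated one at a
  # time, in order. Parallel: pods are created and terminated all at once.
  # Changing this field on an existing StatefulSet is currently rejected by
  # validation, which is what the PR under discussion wants to relax.
  podManagementPolicy: OrderedReady
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: example/db:1.0
```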
So the main use case is for the most common operations to be OrderedReady, and if you have an issue you can switch to Parallel, or you can switch to Parallel and then implement some features with a controller.
B: I think if you go to the issue, there is somebody who did a pull request, but basically the only thing preventing you from doing that is, I think, this validation code that doesn't let you do it. There is a whitelist of the changes that you can make, and this field is just not whitelisted. The question, I think, was whether it is safe to allow this or not, and basically the issue stopped there.
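The whitelist style referred to here can be sketched as follows. This is not the actual Kubernetes validation code (the real logic lives in the apps API validation package); `Spec`, its fields, and the error text are simplified stand-ins to show the pattern: copy the new spec, overwrite the mutable fields with the old values, and require the result to equal the old spec.

```go
package main

import (
	"fmt"
	"reflect"
)

// Spec is a hypothetical, trimmed-down stand-in for the StatefulSet spec.
type Spec struct {
	Replicas            int
	Image               string
	PodManagementPolicy string
}

// validateUpdate mimics the whitelist style of StatefulSet update validation:
// any field not explicitly copied over here is effectively immutable.
func validateUpdate(oldS, newS Spec) error {
	restored := newS
	// Whitelisted (mutable) fields:
	restored.Replicas = oldS.Replicas
	restored.Image = oldS.Image
	// PodManagementPolicy is NOT whitelisted, so changing it fails validation.
	if !reflect.DeepEqual(restored, oldS) {
		return fmt.Errorf("updates to fields other than 'replicas' and 'image' are forbidden")
	}
	return nil
}

func main() {
	oldS := Spec{Replicas: 3, Image: "db:v1", PodManagementPolicy: "OrderedReady"}

	scaled := oldS
	scaled.Replicas = 5
	fmt.Println(validateUpdate(oldS, scaled)) // nil: replicas change is allowed

	switched := oldS
	switched.PodManagementPolicy = "Parallel"
	fmt.Println(validateUpdate(oldS, switched)) // error: policy change is forbidden
}
```

Relaxing the restriction discussed here would amount to adding the policy field to the whitelisted set.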
A: You could potentially have the user orphan the pods from the StatefulSet, then delete the StatefulSet and create another one with a different pod management policy; that should work. And in terms of changing the policy, I think it's usually more likely that you can loosen the policy rather than restrict it, because restricting a policy would break a lot of users, whereas loosening it is not going to break users. But that, I think, is the part that might be worse.
A: Have you checked why the validation was added in the beginning?
B: I'm not sure, but I think it's more that it was restricted by default. I think the code is written so that you are only allowed to do certain kinds of changes to these objects.
B: I think it has to be whitelisted one by one. By default nothing is whitelisted, you have to add the fields that you want to be able to modify, and by default changing something is restricted.
B
It's
like
it
sounds
like
nobody,
so
they
need
to
do
it,
so
it
doesn't
have
a
bit
none.
But
it's
a
this.
B: I think we can discuss how to fix this.
B: All right, so here I'm mostly looking for feedback, to see where the community is heading and to just discuss it.
B: So what I want to discuss is two things: service-level health checks for applications, and the protection of PVCs. Both are intertwined. The main use case that we have is that we want to do zero-downtime maintenances, so no disruption on the clusters where we are running stateful workloads.
B: Basically, my team is operating like five different databases, and we use local storage, so the data is on local volumes, running on several thousands of nodes. We have another team that is responsible for operating the Kubernetes clusters, and we have implemented an interface between our two teams to be able to do maintenances. Right now it's relying on PDBs, and we have a few limits with it. One is the ability to expose a service-level or workload-level health check, because the PDB only considers readiness probes.
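As a reference point, the PDB mechanism being described looks like this (names are placeholders). The limitation raised here is that whether a pod counts against the budget is derived purely from its Ready condition:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: db-pdb
spec:
  # Voluntary disruptions (evictions) are only allowed while at least this
  # many matching pods remain available.
  minAvailable: 2
  selector:
    matchLabels:
      app: db
  # Note there is no field here for a service-level health check: "available"
  # is computed from pod readiness, which is the limitation being discussed.
```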
B: That is global in a way and cannot easily be reflected per pod. I will go deeper on the strengths and weaknesses; stop me if you have any questions or remarks. The second part is that stateful workloads typically have both compute and storage, and we have some cases where we want to avoid a maintenance on a node.
B: To go deeper on service-level health checks: as I was saying, basically, when you are operating a cluster with databases, you have state that can be per node and state that is global to the database. For example, what can be global is the data redundancy: it can be degraded, like some data could be missing somewhere, while each pod is running perfectly well.
B
You
can
have
like
issue
with
the
cache
or
even
degraded
performance
like
his
service
level,
and
you
want
to
find
a
way
to
expose
this
information
to
the
part
description
budget,
to
prevent
eviction
and
to
prevent
the
maintenance
from
a
building.
So
a
voluntary
disruption
to
happen
on
your
service
on
your
stateful
set.
B
It
means
that
the
database
might
change
the
tiny
text
policy
for
the
controller
to
see
that
the
database
is
NFC
is
rather
slow,
like
it
takes
10
30
seconds
and
the
pdb
can
be
much
faster
than
that
for
for
the
time
elapsed
between
the
time
where
the
database
is
then
C
and
we
see
it,
we
can
dive
evicted,
multiple
Parts
in
this
case,
and
also
if
the
controller
is
not
running,
you
are
left
without
any
protection
at
all
and
there
is
no
way
to
block
the
system.
B: If it's done through the readiness probe, the pod will be unready and it will block the PDB if you have only one disruption allowed. But you can have the same issue as the other approach: between the moment where the database is unhealthy and the moment where the pod switches to unhealthy, that can be up to a minute, and that can be a window where the StatefulSet is unprotected.
B
Also
because
you
there
is,
you
can
only
protect
with
a
pdb
but
can
only
be
seen
by
one
pdb.
It
can
be
only
protected
by
one
pdb
and
some
cases.
Some
implementation,
where
you
have
midi
pdb
like,
for
example,
one
for
each
available.
You
can
have
some
kind
of
conflict.
B
So
it's
not
ideal
and
the
last
thing
is
to
add
Evolution
validation,
webwork
on
the
pulse,
eviction,
server,
Source.
Basically,
every
time
something
will
try
to
evict.
There
will
be
a
synchronous
call
to
see
to
some
controller
somewhere
that
will
check
the
health
on
accessibility
eviction
on
it,
so
I
think
for
us
doing,
Second
Use
doing
it.
This
way
is
the
best
way,
because
first
we
don't
have
this
issue.
B
We're
on
the
state
is,
has
changed
why
it
takes
time
for
the
state
to
update
here
you
do
the
check
when
you
access
the
depiction
and
what
was
the
other
points
and
if
the
controller
is
not
running
the
webbook
we
fly
so
you
won't
process
an
eviction
and
I
think
we
are
going
toward
the
not
exactly
radiation
web.
Okay,
I
will
speak
about
it
later,
but
I
don't
know
if
there
is
a
movements
in
the
community
to
answer
this
problem
of
service
level,
heart
attack,
I,
don't
know.
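A minimal sketch of the decision logic of such an eviction webhook, assuming it is registered as a ValidatingWebhookConfiguration on the pods/eviction subresource with `failurePolicy: Fail`. The structs below are trimmed local mirrors of the AdmissionReview wire format (a real webhook would use `k8s.io/api/admission/v1`), and `degraded` is a placeholder for the synchronous service-level check:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Trimmed mirrors of the AdmissionReview request/response wire format.
type AdmissionRequest struct {
	UID       string `json:"uid"`
	Name      string `json:"name"`
	Namespace string `json:"namespace"`
}

type AdmissionResponse struct {
	UID     string `json:"uid"`
	Allowed bool   `json:"allowed"`
	Message string `json:"message,omitempty"`
}

type AdmissionReview struct {
	Request  *AdmissionRequest  `json:"request,omitempty"`
	Response *AdmissionResponse `json:"response,omitempty"`
}

// decide allows the eviction unless the service-level check reports the
// service as degraded. Because the check runs synchronously at eviction time,
// there is no window between the state changing and it being observed; and
// with failurePolicy: Fail, an unreachable webhook blocks evictions instead
// of leaving the workload unprotected.
func decide(review AdmissionReview, degraded func(namespace, pod string) bool) AdmissionReview {
	resp := &AdmissionResponse{Allowed: true}
	if review.Request != nil {
		resp.UID = review.Request.UID
		if degraded(review.Request.Namespace, review.Request.Name) {
			resp.Allowed = false
			resp.Message = "service-level health check failed; eviction denied"
		}
	}
	return AdmissionReview{Response: resp}
}

func main() {
	in := []byte(`{"request":{"uid":"123","name":"db-0","namespace":"prod"}}`)
	var review AdmissionReview
	if err := json.Unmarshal(in, &review); err != nil {
		panic(err)
	}
	// Pretend the database reports itself as degraded right now.
	out := decide(review, func(ns, pod string) bool { return true })
	b, _ := json.Marshal(out)
	fmt.Println(string(b))
}
```

In a real deployment this function would sit behind a TLS HTTP handler and the degraded check would query the database cluster's own health endpoint.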
D: We have a similar use case here at Apple, and one of the things that came up is: can we actually outsource the definition of what healthiness means to an operator which is managing the workload, instead of having the pod-level readiness gates be the only way to determine if the entire workload is ready or not? Can we have the workload operator decide and then tell whether the entire set is healthy?
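The pod-level mechanism mentioned here, readiness gates, lets an external controller feed a custom condition into pod readiness. A sketch, where the condition type name is a placeholder:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: db-0
spec:
  # The pod is only Ready when its containers pass their probes AND this
  # custom condition is set to "True" by some external controller/operator.
  readinessGates:
  - conditionType: example.com/workload-healthy
  containers:
  - name: db
    image: example/db:1.0
```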
B: I think, in our case, the thing is that we have two levels. One is operators, and typically we run two controllers; it's hard to have one thing that holds all the knowledge. The operator might know about the high-level state, but we might want to add other things to it, other parameters to check before doing anything. For example, I'm thinking about performance monitoring: if you breach the SLOs that you provide, the operator cannot really help you with that.
B
In
this
case,
you
cannot
really
do
the
oppression
and
I
think
there
is
also
the
part
where,
in
terms
of
Maintenance
like
it
depends,
I
guess
on
the
implementation
of
how
you,
under
the
maintenance,
but
typically
it's
a
bit
reactive,
where
it's
the
method
systems
that
will
ask
you
to
enter
this
node
in
maintenance
through
a
drain,
for
example,
and
then
the
I
guess
the
controller
the
operator
will
have
to
have
something
to
manage
these
parts.
I
think
it's
done
on
a
hook
where
I
don't
know
is
that
done.
It's
different.
D: Yeah, I think if you have other monitoring tools that you want to use, perhaps having an operator which understands the SLOs of the individual workload and, based on that, can update what healthiness means, I think that should be good. But even if you have multiple controllers doing the same thing, you can have different conditions, right? So one could be performance-related and the other could be what the actual liveness, healthiness, sorry, the readiness and all those things could mean.
D: The best course of action would be to at least get started with a GitHub issue, if it is not there, and then have a follow-up meeting once we discuss asynchronously on the GitHub issue. Does that make sense?
B: Yes, let's do it, we can do it.
B: Just also to add to the conversation: basically, we have cases where we are running a stateful application and the data is local on the node, and a PDB cannot really protect all the cases, because it only protects the data when the pod is running. When the pod is not running, you might have a persistent volume that is bound to the node, and it won't protect that case. And why it can be bad, especially in the context of maintenance, is that there are some cases where pods can be forcibly evicted from a node.
B
There
is
no
pressure
like
a
FML
disk
nut
pressure,
and,
if
you,
if
your
or
not
all
the
pads
are
running
mantons
can
be
performed
on
the
on
the
Node,
whereas
we
know
that
the
PVC
should
be
kept
like
the
PV
and
the
PVC
should
be
kept
because
there
is
a
degraded
States
above
them.
So
basically
the
eviction
mechanism.
You
cannot
evict
in
a
way
data
which
doesn't
really
make
sense,
but
there
are
cases
where
we
want
to
prevent
a
drain
from
being
successful.
B: But you might want to avoid this voluntary disruption as much as you can. Yeah, it's a rare case, but it's really worth protecting against. One of the solutions that we're working on to solve these issues is to have some kind of node-level disruption API, where we can create a disruption for a node and it will check for budgets. It works a bit like a PDB, except that beyond selecting pods you can also select PVCs, and you can also perform a service-level health check first.
B
So
that
suggestion
where
we're
heading
I,
don't
know
if
your
feedback,
if
you
ever
seen
people
with
the
same
kind
of
issues,
others
here.
B: So it's still a work in progress, it's experimental on our side. Basically, you create a node disruption on the nodes, so you're targeting nodes, and each application, each workload, can have their budgets, and there is a controller that will make the connection between pods running on nodes and PVCs being bound to nodes through the PV, the persistent volume.
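As I understand the design being described, it could look something like the following. To be clear, this is hypothetical, not an upstream Kubernetes API: the group, kinds, and field names are invented for illustration.

```yaml
# Hypothetical CRDs, invented for illustration; not an upstream Kubernetes API.
apiVersion: disruption.example.com/v1alpha1
kind: NodeDisruption
metadata:
  name: drain-node-42
spec:
  # The maintenance system targets a node, not individual pods.
  nodeSelector:
    kubernetes.io/hostname: node-42
---
apiVersion: disruption.example.com/v1alpha1
kind: ApplicationDisruptionBudget
metadata:
  name: db-budget
spec:
  # Unlike a PDB, the budget can select PVCs as well as pods; a controller
  # maps pods running on the node, and PVCs bound to it via the PV, back to
  # this budget before the node disruption is granted.
  podSelector:
    matchLabels:
      app: db
  pvcSelector:
    matchLabels:
      app: db
  maxDisruptedNodes: 1
  # Optional synchronous service-level health check before granting.
  healthCheck:
    url: https://db-operator.example.svc/healthz
```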
B: Basically, the issue with eviction, in a way, is that it's always at the pod level, and we needed a way to select more than pods.
D: Makes sense, and we have similar problems as well. The way we are trying to work around this, and again this is not open source: we have some internal API where we have extended the PDB to actually make a call to an external system which understands what the state of the node is. That is how we have been working around it, but I think having a generic extension mechanism for PDB would be great, where we can make either external calls or talk to another service, which may be within the bounds of Kubernetes or outside the bounds of Kubernetes.
That is how we are looking at it. Like, say, for example: can you have something in a pod spec which says, you have run the health checks and, if they are failing, the pod is not in a ready state; and can you also check with an external service which understands what the state of the node is, right? So that is how we are trying to work around the problems, but again, having something in the spec which can make either an HTTP call or something to another service which understands what the state of the node is, and then says:
D: yes, it is healthy, or no, it is not healthy, might be a good way to look at it. And I think we had this discussion in the past, where some of the leads were okay with having an extension mechanism, but they wanted us to discuss with SIG Architecture whether it is possible to do, like adding something to the spec which gives, sort of, an extension mechanism to PDB.
B: Okay, and in your use case, was it a use case of maintenance on nodes, or was it more generic than that?
B: Yeah, in this case, with the node-level disruption, it would be, in a way, these things attached to the nodes. So, basically, if you are doing maintenance, like a simple node restart, users won't create node-level disruptions; they will be created by the operators, the team responsible for the Kubernetes cluster, or maybe by a system like autoscalers or our maintenance system. They will be the ones creating a disruption, but not the users; the users only provide the budget, to advertise their constraints.
B: The budget says this application needs this amount of protection; it's really for stateful applications that are really critical.
B: I can show it, if anybody is interested, but that's where we are heading.
D: I think Kube has always been application-centric infra. If they want to bring in a node API, it may be hard to convince people, but if you can represent those node conditions via some health checks at the PDB level, I think it might make sense. It's just that we are working around the problem, no doubt about it, but to me, Kube has always been application-centric infra, so treating the nodes like that is something that people may not like. Does that make sense?
D: If you can represent those node conditions as a way of saying that there is some external entity which is saying that the eviction should be stopped at the PodDisruptionBudget level, I think it would be much more acceptable.
D: Yeah, to be clear, that actually falls on the app, right? So, if the required data is not present, there has to be a way for the PDB, or the PDB controller, to know that it cannot drain, that it cannot proceed with evicting the pod, because the required data which is needed by the app is not present.
D
Yes,
that
actually
ties
back
to
what
we
were
discussing
earlier,
but
it
has
to
be
something
more
natural
in
terms
of
how
the
part
and
the
workloads
and
the
data
needed
for
the
workload
is
act
is
actually
represented.
B
Yeah
I
agree
like
having
on
the
Node
level,
is
really
like
more
convenience
because
PV,
you
really
have
to
know
the
the
node
for
the
API,
should
be
aware
of
the
node
browser,
an
eviction
it's
already
on
the
pattern.
So
there
is
a
need
for
something
like
I.
Don't
think
it's
a
perfect
API,
but
it's
twice.
It's
the
only
way.
The
clean
way
we
found
to
really
expose
this
eviction
of
potential
data.
B
D
C
They
they
probably
are
missing
a
picture
like
a
eviction,
intent
that
you
want
to
know
that
you
should
leave
the
note
and
you
you
want
to
migrate
the
data
or
migrate.
The
workloads
like
eventually
so
that's
like
that
would
be
good
to
be
captured
in
the
API
as
well.
D
The
code
is
something
that
we
need
to
think
through
like
should
we
have
some
some
way
to
make
a
HTTP
or
grpc
call
just
like.
So
if
you
look
at
scheduler
like
we
have
list
of
core
scheduler
plugins
that
are
in
the
scheduler
binary,
and
you
can
also
make
some
Extinction
HTTP
calls
or
hdbs
calls
to
other
constraints
that
you
want
to
have
which
are
customized
and
they
will
be
honored
by
the
scheduler
say.
If
it
fails,
scheduler
says
no.
D
So
if
we
have
some
extension
mechanism,
which
says,
apart
from
checking
the
Pod
Readiness,
there
is
a
weight
in
the
spec
to
make
a
call
to
some
other
service,
which
says,
which
perhaps
is
going
to
do
node
level
checks,
the
data
level
checks
that
are
needed
and
then
come
back
and
say:
yes,
you
can
go
ahead
and
do
it
or
you
can
consider
the
pot
to
be
not
healthy
enough
to
make
the
eviction
I,
think
that
might
be
okay,
but.
D: So I can go ahead and create an issue for this, and we can discuss on GitHub, and then perhaps come back for the next meeting and see where it goes.