From YouTube: Kubernetes SIG Apps 20210920
A
Good morning, good evening, good afternoon, depending on where you are. Today we have September 20th, and this is another of our bi-weekly SIG Apps calls. My name is Mati and I'll be your host. Today, quick announcements: we are right after the 1.23 feature freeze date, or enhancement freeze date, and we have a couple of weeks for shipping 1.23 items. If you need support from myself, or Ken, or Janet, feel free to reach us on the sig-apps Slack channel.
A
But, most importantly, in a couple more weeks there will be a SIG Apps update during KubeCon North America in Los Angeles. Unfortunately, none of us will be there in person due to different circumstances, but we are very hopeful that the next one (I have no idea where it will be held, but it'll be Europe, somewhere in spring next year) will happen in person, with significantly more people present there.
A
I put the link to our presentation so that you can mark it and watch it. So I think that's the most important topic, and we can move on to the discussion. I haven't seen James popping up, nor did he respond to my pings on Slack, so we are skipping James this time and we can start with Sydney. Sydney, I assume this is your first time in SIG Apps, so feel free to introduce yourself and tell us a little bit about it. Oh, one simple question: are you planning to share your screen?
B
I would love to share my screen, if possible.

A
Okay, so let me quickly make you a co-host, because that's needed to be able to share your screen, and I'll stop sharing. You should be good now.
B
All right, let me just get set up to share my screen, but to give you a little bit of an intro: yes, this is my first time at a SIG meeting in general, and SIG Apps especially. I'm super excited to be here today. I have a custom Kubernetes operator that I wrote, and I wanted to see if there was any interest in it. I'm really struggling to present my screen.
A
You should be able to: there should be something called Share Screen, a green button. Are you using the application or the web one? When you click the share screen, it should allow you to pick which part of your screen you want to share, all the way down to a particular window. That's what I've been seeing.
A
No worries, that happens. Do you have your presentation published somewhere, so that I could share it?
B
It's just a Keynote, unfortunately. I can just do it verbally, if people don't mind; most of the content that was visual was kind of just an animation explaining the difference between the way that Kubernetes natively handles deployments and the thing that I built. But, to not have us delay any further, let me just tell you a little bit about the thing I wrote.

A
Yeah, go ahead.

B
I work on a custom controller that helps deploy applications that cannot use, well, do not necessarily fit, the same mold of deployments that can be blue-green, or implemented in a way that uses Services as a way to manage the dependencies in between deployments; they don't really fit the Kubernetes-native microservice architecture.
B
So, my controller does not replace blue-green deployments. If you use blue-green deployments, go ahead, keep doing that. It does not replace StatefulSets; StatefulSets are useful and have their own niche. Same with Deployments: we are not replacing Deployments.
B
What we are doing in our controller is effectively grouping sets of deployments, so that from version A of deployments first, second, and third, you can go to version B of deployments first, second, and third in a rolling manner, such that the deployments act effectively like pods, the same way that a ReplicaSet manages pods. And so then, instead of rolling within a single deployment with a ReplicaSet, you roll between two different deployments within what I call a deployment set. What this allows you to do is adhere to dependencies as you bring up your first, second, and third deployments.
B
So when you switch versions, you're able to deploy a whole new set of deployments, first, second, and third, but perhaps they're incompatible with the first version you deployed; and then, secondly, it allows you to maintain that version incompatibility. So we've got dependencies, and we have this somehow inherently: your microservices team perhaps is not very great about reverse version compatibility and making sure that you can work with all the other systems, all the other services, that need to deploy in unison. So this really manages it for you, and you don't have to necessarily worry about version incompatibility, because it's built into the structure of how you're deploying your deployments. Does anybody have any questions so far, or things you think are concerns, or why we are doing this?
B
Okay, so in essence that's what it does, and it does it through three different resource definitions. We have the deployment set, which is like the big bucket. We have the deployment set version group, which is effectively like a ReplicaSet, in the same way that ReplicaSets live in Deployments: you don't really touch them, and it just kind of manages the underlying objects. The underlying objects nested inside those version groups are deployments, and the deployments get kind of spawned off of this object called a deployments template.
B
So our three resources are the deployment set, the deployment set version group, and the deployments template. The way that you create this is you would post a deployment set object, and it would have your three templates listed underneath, just by name, so first, second, and third; and then you would post three templates named first, second, and third, and each one of those templates would have a dependency array. Basically: first doesn't depend on anything, because it's the first deployment to go; second depends on first, because first is the deployment that needs to be up; and then lastly, third would depend on both first and second. And so in this way you have the ability to describe dependency between your deployments. Then, when you want to update and roll to your new version, you would basically just re-post the deployment set, but instead of first, second, and third, you'd have first-version-two, second-version-two, and third-version-two, and then you post three new templates with that new deployment config, basically updated with the same depends-on array. That would bring up a whole new version and kind of independently roll between those two versions that you set, version one and version two. As far as the actual deployments template goes, it's basically a Deployment spec object with a little bit of extra data that lets you do that depends-on logic on top of it.
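A minimal sketch of what the manifests described above might look like. All group, kind, and field names here are hypothetical reconstructions from the verbal description; no schema was shown in the meeting:

```yaml
# Hypothetical DeploymentSet naming the templates it is composed of.
apiVersion: example.dev/v1alpha1       # assumed API group
kind: DeploymentSet
metadata:
  name: myapp
spec:
  templates: [first, second, third]    # templates listed by name
---
# Hypothetical deployments template: a Deployment spec plus a dependency array.
apiVersion: example.dev/v1alpha1
kind: DeploymentsTemplate
metadata:
  name: second
spec:
  dependsOn: [first]                   # "second" waits on "first" coming up
  template:                            # essentially a regular Deployment spec
    replicas: 10
    selector:
      matchLabels:
        app: second
    template:
      metadata:
        labels:
          app: second
      spec:
        containers:
          - name: second
            image: registry.example.com/second:v1   # placeholder image
```

Rolling to a new version would then mean re-posting the DeploymentSet naming first-v2, second-v2, and third-v2, plus posting the three new templates carrying the same dependsOn arrays, exactly as described.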
B
So there's obviously blue-green deployments, but I like the controller that we built a little bit better, because when you do blue-green, you have to basically do 200% of your resource allocation when you do versions, and with ours we're actually able to roll gradually in between, and we've also implemented maxUnavailable and maxSurge between these deployments.

B
So you can say maxUnavailable is 30%, and you drop 30%, and you're able to gradually roll and only consume about 100% of your resources between these two things, while also maintaining the ability to roll back, and other fun things. Another solution that exists today is the Kubernetes SIG Apps Application project, but I feel like that focuses mostly on the application abstraction, tying Deployments and StatefulSets and Services all together, and is not necessarily focused on versioning incompatibilities or rolling; at least that's my understanding of the project.
B
If I'm wrong, feel free to correct me. And then another project that exists is Argo Rollouts. I know that's very common and very popular, but again it kind of only focuses on blue-green or canary, and underneath it relies on the ReplicaSets of the Deployment, basically, to manage the version rolls, whereas what we want is to roll between two different deployments. And then, lastly, you could do this really cool thing of having each one of your services be its own actual Kubernetes Service, and use liveness probes and readiness probes and such to kind of manage the rolling in between these two services. However, that still requires that you have robust version compatibility between the servers as you roll, even if you roll each service independently, and it means that for some legacy applications you'll have to break out each one of the components of your monolithic service into a separate service, and not everybody wants to do that. So I can pause for questions here, but I can also keep talking if nothing's happening.
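For reference, the per-Service alternative being described leans on standard probe configuration to gate traffic during a roll; a minimal sketch using only stock Kubernetes fields (image and endpoint are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: component-a
spec:
  replicas: 3
  selector:
    matchLabels:
      app: component-a
  template:
    metadata:
      labels:
        app: component-a
    spec:
      containers:
        - name: component-a
          image: registry.example.com/component-a:v2   # placeholder
          readinessProbe:        # keeps the pod out of Service endpoints until ready
            httpGet:
              path: /healthz
              port: 8080
          livenessProbe:         # restarts the container if it stops responding
            httpGet:
              path: /healthz
              port: 8080
```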
B
All right. So, in essence, what this project is really aiming to solve is: allowing gradual rolling between incompatible versions; minimizing the resource footprint during a roll; and deploying monolithic/legacy applications that don't necessarily have the Kubernetes microservice architecture in mind, so that, you know, you can adopt this first and then work with your application team to break up your monolith. This way you're able to move to the cloud, and you're able to move to these Kubernetes service providers and move off of existing legacy hardware, if you want. So that's it; thanks for listening. I know that was a lot and I probably talked really fast, so if there's anything I can repeat or clarify, please let me know.
C
Thanks, that was awesome. I guess my first question is: okay, so from what you're saying, I have this application that's composed of multiple deployments. The deployments are like A, B, and C; the new version of A depends on the new version of B, and the new version of B depends on the new version of C, to roll out. So, under that constraint, you said you had a rollback. If you issue a rollback, how would rollback work in a way that actually kept the application alive when you roll back, or is that just a non-goal?
B
No, that's a great question. So, basically, if you have A, B, C, and then A-prime, B-prime, C-prime, what happens is A, B, C gets scaled down to maxUnavailable, so it'll be 70% available, and then A-prime, B-prime, C-prime comes up. Depending on your worst case, let's say it comes up a little bit, or doesn't come up at all: what happens is that old deployment doesn't actually get deleted, so it still exists at whatever capacity it reached. So, let's say you're able to bring up 20% over here.
C
So with this approach, what you're doing is: for A, B, and C, there's a Service in front of B and there's a Service in front of C. When you create A-prime, B-prime, and C-prime, you have a B-prime Service and a C-prime Service as well, which the pods of A-prime use to communicate with B-prime and C-prime?

B
Yes.
C
Yeah, okay. Cool, and then I guess my other question is: how is ingress handled at the front? So, if you're using Services to handle east-west traffic between your deployments, the north-south traffic (I mean, assuming it's not all back-end, but you're actually pushing some traffic into your capacity in order to serve user-facing requests), when you're doing this roll-out, are you just using an Ingress to route between them, or is that managed independently as well?
B
So the Ingress, I believe, just maps to both. Let's say you have front-end services: it just maps to both versions of the front-end service, because both of them should be able to take traffic if they're up. You might need to tune maxUnavailable if you don't want to drop anything, but yeah, we just kind of split across the two different versions.
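One way to read "maps to both versions" is a Service whose selector matches a label shared by the old and new front-end deployments, so endpoints from both receive traffic mid-roll. A hedged sketch; the label scheme is assumed, not taken from the talk:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: frontend    # shared by both versions; no version label is selected on
  ports:
    - port: 80
      targetPort: 8080
```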
C
Okay, and then I have two more questions. What about horizontal pod autoscaling? Traditionally, with most of the approaches that attempt to do resource-sensitive rolling updates, you would just use autoscaling and let traffic pressure create the scaling requirement for the individual subset. So it seems like the community is largely letting the ingress determine the shape of the workload, as opposed to scaling up the workload statically and then letting it just handle the traffic coming through.
B
Oh yeah, and we do that too. So, one of the slides that I actually did accidentally skip over is the features slide. We do rolling updates between versions; we are HPA-compatible; we've implemented maxUnavailable and maxSurge; we have VPA compatibility coming; and we're also working on KEDA ScaledObject compatibility. Basically, HPA looks at the template object that we've created, because it has a scale subresource, and so what happens is that the template object then tells the underlying deployment what its replicas should be, but HPA talks to the deployment set template object. We've also implemented a cool inheriting-replicas feature where, basically, if you have HPA enabled on A, B, C, then when you roll to A-prime, B-prime, C-prime, you can inherit the replica count that HPA had previously described for the old object, so even if you start with a very small one, you don't have to worry about the fact that you're creating a whole new deployment and trying to figure out what that magic number is for how many pods you need. That's quite nice.
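Since the template object exposes a scale subresource, a standard autoscaling/v2 HPA can target it directly, the same way it targets a Deployment. A sketch reusing the hypothetical kind and group from earlier:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: second
spec:
  scaleTargetRef:                      # points at the custom resource, not the Deployment
    apiVersion: example.dev/v1alpha1   # assumed group, as above
    kind: DeploymentsTemplate
    name: second
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```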
C
And then my last question: you gave out a couple of alternatives, so you talked about Argo as a potential solution. The other one that didn't actually come to mind, because it is very different than the approach that you're taking, is Flux 2.0. In prior versions of Flux you weren't able to really, in a very concise way, specify dependencies between resources, but in Flux 2.0 you can specify kind of arbitrary dependencies between resources, which would allow you, in theory, to achieve something similar. So I was wondering: did you guys pilot that and kick it out (because, basically, to make Flux work you have to be all-in on GitOps), or were you just not aware of that technology?
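For context, the Flux 2.0 mechanism being referred to is the dependsOn field on a Kustomization, which orders reconciliation between resources. A minimal example (names and path are illustrative):

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: second
  namespace: flux-system
spec:
  dependsOn:
    - name: first      # "second" is not applied until "first" is ready
  interval: 10m
  path: ./second
  prune: true
  sourceRef:
    kind: GitRepository
    name: app-repo
```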
A
Yeah. So, you mentioned explicitly three deployments in your controller. Is there any particular reason why you picked three, or can you have as many as you wish being managed by this particular controller?
B
Oh yes, I gave examples like A, B, and C, or first, second, and third, just to give you an idea of the three cases that you might have in your deployment: the deployment that doesn't depend on anything, the middle one that's kind of in between, and then the top one that depends on everything else in the stack. But it doesn't even need to be a linear tree. You can have some that are, you know, your top tier, your middle tier, and your bottom tier, and the lists are basically just lists of the things that a given deployment expects to be up before it can be fully available. And what I mean by that is not that deployment A has to be fully up before deployment B can be up; we actually do this on a pod level.
B
So if, let's say, each one of them has ten pods: if one pod of A is up, then one pod of B can become up, and then C can have one pod as well, and it kind of grows in this pyramid shape of "all right, as long as there's something beneath me that I depend on that exists, then we can do it." And it can be multi-dimensional: you can have two things that are your top tier, and a bunch of different bottoms.
A
So you're not looking at minAvailable specified at the deployment level; you basically always look under the covers and ensure that there's one running, which theoretically allows serving traffic. My next question was (I don't know, I'm just crazy when it comes to making stuff generic): is it only meant to manage Deployments, or are you thinking of, or maybe already started implementing, a broader scope of workloads, such as DaemonSet and StatefulSet, especially since at the beginning you did mention a couple of them? So does it allow all possible built-in Kubernetes workload controllers, or only a subset, or currently just Deployments?
B
Currently just Deployments. I think that is more of a resourcing issue, just because for a very long time there were very few of us, slash only me, working on this, so we were trying to, you know, hit the MVP and build out the feature set that we needed. But there's no reason that I can think of that would prevent us from generalizing this to the generic workloads that Kubernetes supports. There are difficulties, but yeah.
A
Yeah, I think the biggest issue, and that probably goes all the way back to the earlier days of Kubernetes, is that we don't have a good way of having any kind of template for a resource. You would have to have unstructured data, which would then be translated into a proper template for StatefulSet, DaemonSet, Deployment, whatever you want to support; or you would always have to have a full specification of all the possible controllers explicitly specified in your configuration file. Cool, that sounds pretty interesting.
C
If no one else has any, I'll go ahead. Do you have a dependency analysis in place to ensure that, if the user specifies cyclic dependencies, which would prevent it from actually making progress, you give some feedback and say "you've got a dependency cycle; we can't actually roll this out"?
B
That would be very smart. We do not have that at the moment. We kind of statically generate the dependency tree when we generate the deployments on our side, so we know that at the point of creation we don't have any cycles in our dependency structure. But yeah, we should probably check that: if people are just willingly adding in free-form YAML and saying "I want this deployment to depend on this one" and vice versa, then it's never gonna come up.
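To make the failure mode concrete, nothing described so far stops a user from posting templates like the following (hypothetical kinds as before), which can never come up because each deployment waits on the other:

```yaml
apiVersion: example.dev/v1alpha1
kind: DeploymentsTemplate
metadata:
  name: first
spec:
  dependsOn: [second]   # first waits for second...
---
apiVersion: example.dev/v1alpha1
kind: DeploymentsTemplate
metadata:
  name: second
spec:
  dependsOn: [first]    # ...and second waits for first: a cycle, so neither starts
```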
C
Yeah, if it's a simple dependency graph, I wouldn't see it as being particularly troublesome. I think two of the kind of founders of the project, Clayton and Brian, have both been very opposed to this; we always kind of advise users: try your very best not to have dependencies between individual deployments, because it's going to make your life hard in terms of releasing. Even if it works today at a very small scale, when you get to a larger scale, your life is going to be hard. So it's one of the problems you may want to get in front of, as opposed to waiting too long to address. I could imagine much more complicated, larger applications that take this approach would have dependency cycles that they would just not be aware of when they're releasing: somebody adds this new dep and, all of a sudden, oh well, it depends on dep A, and now you've got this cycle that's unresolvable.
B
I think, you know, you might want to face that problem before you start deploying. But I think what it does do is that, once you have that dependency structure described, it'll just enforce it for you. I know from experience, having seen our deployments, that when there are dependencies, lots of other things happen: you can't even have HPA scale up your deployments intuitively, because you can't just scale up your front end without scaling up your back end, and other fun things.
B
My intention is to donate it, or at the very least just make it available to people, because I think that there are probably other applications that could benefit from this as well. Two years ago I was looking and there was absolutely nothing; now we have a few other projects, but none of them quite fit the bill. As far as timeline goes, I'm most interested in this right now, so I think I'll be pushing this pretty hard on my own personal timeline. I don't have any date guarantees, though. In the near future this is what I'll be working on; I don't know how long these processes take, but I'm, you know, going to be doing this pretty much every day.
A
Cool, we will definitely be glad to see you join our future SIG Apps calls. Okay, hearing none: thank you very much for the interesting presentation. Does anyone else have any topics that they want to bring and talk about with the group?
A
Okay, hearing none, if that's the case, we have about 30 minutes left still. I would like to use this time so that we can go through issues. We have a bunch of issues that have not been looked at for a bit, so I would like to use this time for us to go through at least the recently created ones and try to find an owner. If you feel like you would like to give one a chance, feel free to volunteer, and we can work on those topics.
A
The first one, and I'll post the link in the chat as well, is "OnDelete for DaemonSets terminates running pods without waiting for the pod to exit." Anyone interested in digging through the DaemonSet controller? Looks like a valid issue to me; we probably need a volunteer who would be interested in looking into that one.
A
Okay, I'll have a look at it after this call and I'll try to respond. This one is similar, for the Job controller, so, since that will be my area at my new place, yeah, mine.
A
He said that he couldn't get it to run, even though there was a taint. It says... yeah, so.
A
Oh, so he was expecting that the pod would continue to run even though the node was excluded, which basically means, yeah, that's not how it will work. If you explicitly set an anti-affinity rule, the node controller will check, and the pods will not match what is specified in the spec of the controller, so it would obviously kill the pod; and he expects the pod to continue to run.
A
But, like I said, I would need to have a closer look and try to follow the steps. I'll make sure to give it a try, unless you want to do it?
A
He created the chart and then removed it, and at the moment when he was checking, the garbage collector had not completed its mission. Or, alternatively, there is a problem with how the garbage collector works: it has to have a complete view of the cluster at any point in time, meaning that if any of the API services, especially external API services, is not present, it cannot continue. I've been answering so many questions from our customers about this issue, that they have namespaces stuck in terminating, and most frequently something held up removing the resources. So this seems like a similar issue, where either the person decided to remove it... I'm with Tim on this: I don't think this is... yeah, but no.
A
"Make scheduling directives mutable for Jobs." So this is actually being worked on, because we have a KEP that was merged just recently, and if I remember correctly, I was even commenting on this one.
A
Whatever labels we are missing, let's just mark it triage/accepted and we'll assign it. I'm trying to remember who was working on it, whether that was Aldo or...
A
That's an interesting one. Ken, do you have any thoughts about this one?
C
The way I would do it would be to use several separate StatefulSets, and if you want to ingress traffic into all of them, have a selector that selects across them, and then have a separate pod disruption budget per zone. So, you know, basically intentionally manage your capacity if you're that sensitive to it.
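A sketch of the per-zone arrangement being suggested, with one StatefulSet per zone and a PodDisruptionBudget scoped to each zone's pods; the label names are assumed:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: db-zone-a
spec:
  minAvailable: 2        # keep at least two pods in this zone at all times
  selector:
    matchLabels:
      app: db
      zone: zone-a       # matches only the StatefulSet pinned to zone-a
```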
A
Do you want to comment on this? Cool. Remember that this is... oh, I can't assign you yet.
A
Yeah, I'll do that. I just need to figure out how to remove sig-apps, and what I want to...
A
The problem is with different validation for StatefulSets and for pods created by a StatefulSet.
A
Yes, so that's probably what Jordan is also saying. Someone else was trying to do conversions, but conversions are definitely not the way to go with this one.
A
Okay, we have five minutes left. Let's give it a try with one more.
A
I need to double-check that one, and I'll have a look at this one, because that's something that we need to double-check. Okay, I think, with three minutes left...