From YouTube: Kubernetes SIG Apps 20181001
A
And we're recording, so again, everybody be careful: anything you don't want recorded, you probably shouldn't say. So again, this is the October 1st meeting of SIG Apps, and I believe we had one announcement to start with, which was: if you've been given an invitation to vote for the steering committee, you should consider doing so soon. I believe voting closes in the very near future. It's an important thing. This is your community, and you should take ownership of, you know, who steers that community and who runs it.
A
So, for application status: what we've been discussing recently, and we'll open a proposal for it shortly, is kind of what Matt Butcher originally suggested when this first started. We could aggregate the status of all of the known workloads, the things that actually have status, into the status of the application itself. That would just mean doing things like inspecting the status of Deployments and inspecting the status of StatefulSets.
A
ReplicaSets probably not as much if they're managed by a Deployment, but if they're not, and your application selects them directly, then those as well, and ReplicationControllers, and then kind of ANDing them together. So if all of them are not green, then the application is not green; the status wouldn't be healthy. We're also considering, and this kind of runs into a workloads conversation, what we might do with the workloads API to make readiness more consistent across the surface, and not just with respect to the workloads API, but for the entire API.
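For reference, a rough sketch of what such aggregation might surface on an Application resource. The field names below are illustrative, not a settled API:

```yaml
# Hypothetical aggregated status on an Application object; fields are a sketch,
# not the final API. The idea: AND together the readiness of every selected workload.
apiVersion: app.k8s.io/v1beta1
kind: Application
metadata:
  name: wordpress
status:
  components:
  - kind: Deployment
    name: wordpress
    status: Ready        # readyReplicas == spec.replicas, observedGeneration current
  - kind: StatefulSet
    name: wordpress-mysql
    status: NotReady     # any under-replication keeps a StatefulSet from being green
  conditions:
  - type: Ready
    status: "False"      # healthy only if every component is healthy
```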
A
Are the replicas ready? We're also giving some thought to what that actually means, and whether that differs for Deployments and StatefulSets in any way. For a Deployment you can set maxUnavailable, which kind of indicates "I'm willing to tolerate this much." For a StatefulSet, it's almost like if it's under-replicated in any way, you're probably not 100% green: all those pods should be running or something's probably wrong.
A
DaemonSet has some other similar semantics, so we're trying to figure out exactly how to aggregate status there, and we want to open up a proposal against application status for that and implement it in a controller. Some of the other components of the system don't really have status; a ConfigMap doesn't have any status, basically any of the structured-data objects, ConfigMaps and Secrets. And then the status of Services is kind of weird.
A
It does have a status object, but that really only communicates, for a load-balanced Service, the IP address it's been assigned by the cloud provider or by the underlying networking implementation that does the load balancing. For the status of the Service itself, you really want to inspect the Endpoints subsets, I guess, and then look at whether all of those endpoint subsets are ready. So there is a notion of having ready endpoints, with ready addresses and unready addresses, if I have the terminology correct. So that might be something we're interested in doing as well.
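For reference, the ready/unready split lives on the Endpoints object; a minimal sketch with made-up names and IPs:

```yaml
# Endpoints for a Service: addresses are backing pods that passed their readiness
# checks; notReadyAddresses are pods that exist but are not (yet) ready.
apiVersion: v1
kind: Endpoints
metadata:
  name: my-service
subsets:
- addresses:
  - ip: 10.0.0.5
  notReadyAddresses:
  - ip: 10.0.0.6
  ports:
  - port: 8080
```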
A
The really interesting piece, I think, comes with CRDs. A couple of folks on the API machinery side are looking at what that means for CRDs, like: is there a way that we can have standard status conventions for CRDs?
B
I wasn't thinking about this, and I think you've thought about it a lot more than I have at this point, but I'm in the operator-pattern world right now, and I'm also thinking about CRDs that represent deployed software: what are the appropriate things to put in the status? How can it be useful? How do you aggregate status across multiple resources into something that's useful and generally valid? So I'm definitely interested in that topic. Okay.
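One convention that has emerged for CRD status, and that a standard could build on, is an observedGeneration plus a list of conditions. A minimal sketch, assuming an illustrative MySQLCluster kind:

```yaml
# Sketch of a conventional status block on a custom resource. The kind, group,
# and condition reason are made up; the conditions pattern mirrors built-in types.
apiVersion: example.com/v1alpha1
kind: MySQLCluster
metadata:
  name: prod-db
  generation: 7
status:
  observedGeneration: 7          # the spec the controller has actually acted on
  readyReplicas: 2
  conditions:
  - type: Ready
    status: "False"
    reason: ReplicationLagging   # e.g. surfaced from an external checker
    lastTransitionTime: "2018-10-01T17:05:00Z"
```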
A
So there's only a couple of people participating right now, but we'd like to get broader feedback. The intention is to open up an issue against the application repository; I believe there's one there already, but we wanted to kind of offer an initial proposal and get feedback generally from people: do they think this is useful, or would it be scope creep?
C
I've been working with a MySQL operator, and it's kind of weird, because even though the StatefulSet is up and running, the MySQL replicas might be lagging behind each other. So we use a separate thing, Orchestrator, to figure that out, and it would be nice if Kubernetes becomes the source for that.
A
So Kubernetes itself doesn't have insight into the individual replication topology. If you're using, say, a MySQL operator that's using Percona to stand up replicated MySQL, using, I guess, InnoDB, we're not going to have sight into that. We could maybe provide better hooks in order to try to stop the rollout, but generally the responsibility of the controller, of the operator itself, is to manage that rollout, detect when it's under-replicated or when the replication topology is broken, and fix it.
A
One of them is maxSurge and maxUnavailable, right? So in general, Deployment has the notion of maxSurge and maxUnavailable, because that's kind of your heuristic in terms of how it rolls out pods. It can burst ahead in order to say, "okay, I'm going to create three or four extra replicas while I'm turning down the other ReplicaSet," and it can say, "if you don't have this many available, then just stop."
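For reference, those Deployment knobs look like this (values illustrative):

```yaml
# A Deployment with the rollout heuristics described above: it may burst ahead
# by up to maxSurge extra pods, and halts if more than maxUnavailable are down.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 10
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 3        # "create three or four extra replicas"
      maxUnavailable: 2  # "if you don't have this many available, then just stop"
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.15
```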
A
We don't have those semantics with StatefulSet, and initially it really didn't make sense because of how StatefulSet works. Generally speaking, in the default operation it's going to go one at a time, wait for each pod that's a predecessor to become ready before starting the successor, and then block if one fails. For a lot of distributed systems that really doesn't matter so much; for ZooKeeper or etcd, for instance, when you turn them up, the pods are able to form consensus via leader election, and a lot of these systems work that way.
A
So I just wanted to bring this issue up, because there's not a lot of people who have said anything about it, and see if there was more interest in doing this, and just kind of make people aware that someone's thinking about it and that it might be something we could consider doing if people are really interested in it, or if there's value there. If there are any use cases people can think of that are explicit as to why they might need this feature, it might help motivate moving it forward a little bit more.
E
Okay, I can give a quick +1 to that. I definitely have some workloads where, you know, I don't care about the order in which the pods come up; I'd like for them all to spin up as quickly as possible, and the application knows how to handle the topology just by looking at its own pod name.
A
But you can do that now with burst mode, right? Burst says: go all in parallel. This would be to basically add a throttle on it, like: if you have X availability, stop tearing it down, or only do 3 in parallel, only do 5 in parallel. For Deployments it probably matters a lot more, because they tend to be quite a bit larger than the quorum formers you'd run inside of a StatefulSet.
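The burst mode referred to is podManagementPolicy: Parallel; a minimal sketch (image and sizes illustrative):

```yaml
# OrderedReady (the default) starts and stops pods one ordinal at a time;
# Parallel launches and terminates them all at once, with no throttle in between.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka
spec:
  serviceName: kafka
  replicas: 5
  podManagementPolicy: Parallel
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
      - name: broker
        image: confluentinc/cp-kafka:5.0.0
```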
A
For etcd, I think the max you'd run is maybe 11 before it starts degrading horribly; I forget exactly what the limits are. For ZooKeeper, after seven you start seeing performance degrade unless you start using observers. So a lot of strongly consistent systems, which generally are the thing this applies to, don't grow too huge, and even things that use one, like Kafka, for instance, leverage ZooKeeper for their own consensus protocol.
A
You can launch your Kafka brokers in parallel and be pretty safe, but how many do people generally want to launch in parallel, simultaneously? And the largest Kafka clusters out there that are usually managed individually are in the tens, not hundreds, let alone thousands. So if burst rollout works there, do we need to throttle it? I just don't have any use cases that I'm aware of where it makes a lot of sense.
A
You'd probably need to do it with custom-metric support, as opposed to just doing it off of duty cycle or memory utilization. Cassandra would be an example where, if you do it based on memory utilization or duty cycle, compaction, for instance, would trigger you to scale up, and you wouldn't want to do that; that's probably just going to make you worse off. I could see it working if you did it for MySQL and you were using uniformly replicated, not clustered, MySQL.
A
That could probably work if you were looking to increase read fan-out, but you'd actually have to make sure that you were scaling up based on read fan-out and not write pressure, right? If all of the traffic is write pressure going to the master, and you start spinning up more read replicas, then you have to replicate even further to push more data to each new read replica. You continuously scale up and you're not actually taking any pressure off of the cluster.
A
Yeah, but usually for scale-up, right, you're not going to decrease your availability as you're scaling up the StatefulSet; you would just add more nodes, and the max and min that you would scale up to and down from would probably be based just on the autoscaler configuration, not so much on maxSurge or maxUnavailable. But, you could say, maybe from a greediness perspective you don't want it to jump up instantaneously by, like, 10 pods.
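A sketch of the autoscaler configuration being described, scaling a StatefulSet on a custom read metric rather than CPU or memory; the metric name here is hypothetical:

```yaml
# HPA bounds (minReplicas/maxReplicas) govern how far scaling goes, independent
# of the workload's rollout settings. The read_qps metric is made up and would
# have to come from a custom metrics adapter.
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: mysql-read
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: mysql
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metricName: read_qps
      targetAverageValue: "500"
```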
A
Well, if anybody has thoughts, or wants to plus-one or minus-one or add comments, please check the issue out. Then another thing we had come up: we removed the reapers from kubectl, I believe in 1.10 going into 1.11. So this issue says "StatefulSet delete behavior changed," which is not entirely accurate; the delete behavior of StatefulSet itself didn't change, but what "delete" means for kubectl with respect to StatefulSet changed.
A
Previously, what the reapers would do is scale the StatefulSet down to zero for you prior to actually doing a delete. So regardless of whether you did foreground or background propagation, what would end up happening is: it would actually scale the StatefulSet down, you'd get the ordered, graceful termination that you get with a scale-down, and then it would delete. Now it's just issuing a delete, and what that ends up looking like is a full parallel delete. There are some comments here that we never intended ordered, graceful termination on delete, which is true.
A
We didn't implement that on the controller side, but it was a semantic that was provided by kubectl, and a lot of people use kubectl, scripting the interface of the system for their automation. So there's an expectation that's kind of changed there. The kind of feedback we're looking for here: we know what happened, and we have ideas around how we might fix it or change it, because it would be a change. What we want to know is: is it valuable to do? Were people depending on this?
A
Do people want this? Is it something we should go do? We could have the controller interact with finalizers on the pods in order to actually make this happen server-side, which is where, if we're going to do it, we should have done it in the first place. And if there's interest in it, we'd like feedback like: "okay, this is important to us as a community, you should go do this."
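The server-side mechanism being floated would hang a finalizer on each pod, which the controller would then remove in ordinal order during teardown. A sketch, with a hypothetical finalizer name, since no such finalizer exists today:

```yaml
# Hypothetical: a pod guarded by a teardown finalizer. Deletion of the pod would
# block until the StatefulSet controller, working highest ordinal first, removes
# the finalizer, recreating the old reaper semantics server-side.
apiVersion: v1
kind: Pod
metadata:
  name: web-2
  labels:
    app: web
  finalizers:
  - apps.example.io/ordered-teardown   # made-up name, for illustration only
spec:
  containers:
  - name: web
    image: nginx:1.15
```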
D
So I think one of the important things to note here is that kubectl taking this out was a non-backwards-compatible change, right? For somebody's experience it was a breaking change, and that catches me. I mean, it's one of those frustrating points that people complain about: when you're upgrading versions of Kubernetes, some of these kinds of things change in non-backwards-compatible ways. We definitely don't follow semantic versioning in releasing this stuff.
A
It was a very, very long conversation, over a period of years, and we never actually wanted to have the reapers. The reapers were in kubectl kind of as a necessity, because certain things weren't even available server-side when they initially launched, and the reapers were actually slightly broken in their own way.
A
So then, when we got garbage collection and various forms of delete semantics in, oh, I don't know, 1.6 or 1.7, we kind of carefully started thinking about removing the reapers, over a period of a couple of years. So it was a change that was a long time in the making, but I think when they ripped it out, they didn't fully consider that it wouldn't be completely transparent in the case of StatefulSet, with the scale-down behind it.
A
If you do it that way, if ordered termination is something that's valuable... One of the things was that most people who are deleting a StatefulSet aren't deleting it because they need it to go out gracefully; they're deleting it to get rid of it. Unless you're doing a cascading or non-cascading delete, which is a slightly different thing, but non-cascading delete is well supported and nothing's changed there. So I can see why people would say it's probably not a bug, based on the promises that are actually made for StatefulSet.
A
So the question to me from here is: do we want to make that easier again? Is it worth adding the finalizer integration in StatefulSet to enable this behavior in a single way? Because it would really, technically, be a feature on the workloads API side, as opposed to a bug fix. And again, it's something we're talking about, but if we're going to go do it, it would be something we'd want feedback on from end users first, in terms of "this is valuable to us, something you should go do."
A
It would be a finalizer in the controller, but a more general hook across the entire API surface might be valuable. That's another conversation with SIG Architecture, though, because they're still at a place where it's like: go figure out what lifecycle hooks are actually there across the API surface, and tell us a good story about that, before we let you add any more.
A
Agreed. I want to make people aware, and then, if people want to leave comments or feedback, or request it as a feature and say it's valuable, that would help drive decisions as to whether we do it or not. And then this last one, which is also StatefulSet: "can't roll back from a broken state." This is sort of true, but again, it's not a bug; it would be a feature. When we rolled that out with StatefulSet, we tried to do the safest thing possible.
A
If something is broken, the user has to intervene to unbreak themselves before we start automating anything again, because we don't want to automate things that make your life worse, right? Effectively, we took the "let's do what's absolutely safest" approach, which, if you're a student of robotics, is the same thing you'd do: the robot never moves. Same thing for a controller.
A
We don't have to do it that way. This is a feature that I could see a lot of people wanting: being able to automate rollbacks. If your StatefulSet is broken, we could totally implement detection that, okay, what you're really attempting to do is roll back to a previously known-good state; you're within the revision history of the StatefulSet, you just want to reapply it to get healthy again and not block on that rollback. That's something that's totally doable!
A
So again, there are only like four comments here, one from Anthony, basically saying: yeah, we probably just did this to be safe. But if we want to move forward with this, it would be good to get some signals and some feedback from the people in SIG Apps and our end users saying "yeah, this would be hugely valuable, can you go do this," like the interest we get on it.
A
And the last thing: we still have one area of the API surface that's in beta, which is CronJob. Job is GA, the rest of the workloads API is GA; CronJob is still beta. So I just kind of wanted to get a feeling on how important it is to people to move it to GA, and, for people using it, how happy are they with it?
D
I'll speak up real quick here. I use it, I've got stuff running in it all the time, and I haven't had any problems with it. Like many people who treat beta stuff as GA and run production on it, I think it's in use today. And so my question is: what's stopping it from being moved to GA?
A
Janet recently got a PR merged for TTL for jobs. One thing about Job is that people tend to create lots of them and not necessarily think about how they might clean those jobs up, and it causes lots of pressure on people's control planes, on Google's cloud and everywhere else that I've seen.
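The TTL mechanism just mentioned (alpha at the time) is ttlSecondsAfterFinished on the Job spec; a minimal sketch:

```yaml
# Once this Job finishes, the TTL controller deletes it (and its pods) after an
# hour, so completed Jobs stop piling up on the control plane.
apiVersion: batch/v1
kind: Job
metadata:
  name: one-off-report
spec:
  ttlSecondsAfterFinished: 3600
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: report
        image: busybox
        command: ["sh", "-c", "echo done"]
```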
A
But CronJob also does have its own garbage collection kind of built in, and the other thing is making sure the garbage collection and the cleanup that's in CronJob is robust as well. I haven't seen a lot of feedback like "CronJob is breaking my cluster," and there's been some active community contribution around it too, which is cool. So I guess what it comes down to is: we, as a community, kind of have to decide.
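For reference, the built-in CronJob cleanup mentioned here is the history limits on the spec (values illustrative):

```yaml
# CronJob keeps only a bounded history of finished Jobs, which is the built-in
# garbage collection referred to above. batch/v1beta1 was the API at the time.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: nightly-task
spec:
  schedule: "0 3 * * *"
  successfulJobsHistoryLimit: 3  # keep only the last 3 completed Jobs
  failedJobsHistoryLimit: 1      # and the last failed one
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: task
            image: busybox
            command: ["sh", "-c", "date"]
```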
D
Is it the kind of thing we could shoot for in the next release, or the following release? Like, say: okay, a lot of people are using CronJob today, let's go round it out and get it to GA. I know folks are busy in this next release cycle, some folks more than others. Is this the kind of thing we can get done in the next release cycle and actually shoot for?
D
Because a lot of people are using it and treating it as GA, as something you can just go use, I would suggest we do it. Otherwise we're in this situation where it's beta, lots of people are treating it as GA, and we're saying "well, there are a bunch of bugs and rough edges," but then lots of people are using it, right? It's that kind of thing of: let's...
D
And I've been using it and I haven't had issues with it, but I'm not using it excessively; I'm using it for tasks that happen, you know, once, twice, a few times a day. That's about it, nothing excessive, and I haven't had any issues, and the API has worked fine for me. But I'm just one data point, yeah.
D
I just dropped a link into the chat here, and if you search for "cron job" in there, it shows up a couple of times. You're going to see what they're depending on, and so you see some people are depending on this for production environments. They built a system on top of it to make it easier for developers to specify what they want, without having to know the whole CronJob spec, but they're depending heavily on it, it says here, and I'm sure others are too. So, because people are relying on it...
A
Let me see if I understand what you're saying. You're using a local volume provisioner that's basically using the node. Are you using local PVs and local PVCs, or are you using regular PVs with a local provisioner that's just taking a chunk of the node and storing data locally, where the volume effectively disappears if the node dies?
C
Correct. So it's one of the add-ons in Kubernetes, and what it does is, on a schedule, it keeps discovering the volumes that are configured on a node. Basically, you set a directory, and whatever mount points are present in that directory, it presents as available volumes. And once you make a claim, the volume provisioner acts first, and then the Kubernetes scheduler places the pod on the node where the volume is.
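For reference, the PersistentVolumes such a discovery loop creates are node-bound local volumes along these lines (paths and names illustrative):

```yaml
# A local PersistentVolume: it points at a mount on one specific node, so if
# that node dies, the volume (and anything bound to it) is effectively gone.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-ssd1
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node-1
```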
C
I can tell you what I'm doing today: it's basically just a cron job, which detects these things and then goes and deletes them in the morning.
A
So hey, if you use your cron job, that's fine, but that's what I was basically going to suggest. I don't even know if you have to delete the pod itself, but if you delete the PVC, you should get a new PVC created, you'll unbind from the volume claim, and you'll come back up. What I'll take as an action item is to go talk to Saad and Michelle from the SIG Storage side.
A
The answer may be that this is the way it was intended to work, and we expect a manual intervention in all cases. But there may be something either coming down the pipe, or that allows you to configure the system, in order to say: okay, this node has been down for X amount of time, I want you to unbind the volume from the PVC, because we don't expect it to ever come back.
A
The reason the name label is there, which you can get the ordinal from, is because there's a pattern where somebody creates one Service per pod in the StatefulSet, in order to expose a distributed system where you need to access the members of the StatefulSet individually, outside of the cluster, using a load-balanced Service. Which is why it's a label: an annotation is meant to be something that's only processed by machines, so if it's not meant to be selectable, then it should be an annotation.
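The pattern described, one Service per StatefulSet member, selects on the pod-name label the controller stamps onto each pod; a sketch for a StatefulSet named web:

```yaml
# Exposes exactly one member of the StatefulSet outside the cluster by selecting
# on the statefulset.kubernetes.io/pod-name label, which the controller sets.
apiVersion: v1
kind: Service
metadata:
  name: web-0
spec:
  type: LoadBalancer
  selector:
    statefulset.kubernetes.io/pod-name: web-0
  ports:
  - port: 80
    targetPort: 8080
```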
A
Does that make sense?

E
Yes, it does, and I did outline one use case. It doesn't make a lot of sense for everyday use, I don't think, but if you have multiple StatefulSets working together in some way, it may be nice. For example, if you're representing a sharded database, and each shard is its own StatefulSet, then you might want to select across each of those StatefulSets by the pod ordinal.