From YouTube: Kubernetes SIG Apps 20230501
B
Yeah, I can talk about it. This is mingloo, and the ticket was... so, there was a separate issue. To talk about that I sent a meeting request, a request for that issue to be added. I don't know, have you received that yet? Let me send it to the... so, this is the current issue. I'm looking at it; I just sent everyone the...
B
I'll send a link to everyone in the chat.
B
So the issue manifested itself as: if we create a stateful set, the... so, there's a limit on the number of characters allowed for the stateful set's pod name, and I realized not only that, but that the controller-revision-hash label enforces an even stricter rule, because the hash is longer. The hash is an unsigned int32, so that translates to about 10 characters.
B
So basically that takes away an extra 10 characters from the pod name budget, which is 63, minus 10, minus... you know, then there's a dash as a delimiter, so we only allow 52 characters. So actually I pushed a PR earlier, like a couple weeks ago.
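For reference, the arithmetic being described works out as follows; a minimal Go sketch, assuming the 63-character cap on label values, a decimal-printed uint32 hash, and a single dash delimiter (constant names here are illustrative, not actual Kubernetes code).

```go
package main

import "fmt"

func main() {
	const labelValueMax = 63          // label values are capped at 63 characters
	const hashMax = len("4294967295") // a uint32 printed in decimal: up to 10 digits
	const delimiter = 1               // the "-" between the name and the hash

	// controller-revision-hash value = "<statefulset name>-<hash>", so the
	// name itself only gets what is left of the 63-character budget.
	fmt.Println(labelValueMax - hashMax - delimiter) // 52
}
```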
B
Just to, you know, allow truncating the... just take the prefix.
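A hedged sketch of that "just take the prefix" approach: trim the name portion so the combined "<name>-<hash>" value fits the label limit. The function name and value layout are assumptions for illustration, not the actual patch.

```go
package main

import "fmt"

// truncatedHashLabel builds "<name>-<hash>" but trims the name so the whole
// label value fits the 63-character limit (an illustrative sketch, not the PR).
func truncatedHashLabel(name, hash string) string {
	const max = 63
	budget := max - len(hash) - 1 // reserve room for the hash and the "-" delimiter
	if len(name) > budget {
		name = name[:budget]
	}
	return name + "-" + hash
}

func main() {
	long := "a-very-long-stateful-set-name-that-would-otherwise-overflow-the-limit"
	fmt.Println(truncatedHashLabel(long, "4294967295"))
}
```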
B
Yeah, but the pod name limit is... I think the pod name limit is longer, right? Because the controller-revision-hash is the stateful set name plus the hash, and the hash takes up to 10 characters, since it's an unsigned int32. The pod name is just the stateful set name plus the replica ordinal, and I don't think the replica ordinal will, you know, take the full int32. So usually, what I've seen is, the problem will hit the controller revision limit first.
B
Somebody else commented, what's his name... Aldo. Aldo commented on the issue as well as the PR, right?
C
Yes, I'm here.
C
I can provide some thoughts. So, if I understand correctly, today, if you create a stateful set with a name that is too long, then the pods are never created, right? Is that correct?
B
Let me clarify. I shouldn't just say the Pod cannot be created; it's because of the revision, because the controller-revision-hash limit was violated, right?
C
I understand that, yes. Yeah, so... but the result is that you have a stateful set that has no pods. So essentially this is a broken stateful set, and we've had scenarios like this in API reviews, and this is one of those scenarios that qualifies for ratcheting validation.
C
So we could strengthen the validation of the stateful set name itself, so that you cannot create stateful sets that have a name longer than, what, 63.
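A sketch of what that tightened create-time validation could look like; the function name and the exact limit chosen are illustrative assumptions, since the thread has not settled on a number.

```go
package main

import (
	"fmt"
	"strings"
)

// validateStatefulSetName is a hypothetical tightened create-time check:
// reject names that could never produce valid pod/revision label values.
func validateStatefulSetName(name string) error {
	const maxLen = 52 // leave room for "-<uint32 hash>" inside a 63-char label value
	if len(name) > maxLen {
		return fmt.Errorf("name %q is %d characters, must be at most %d", name, len(name), maxLen)
	}
	return nil
}

func main() {
	fmt.Println(validateStatefulSetName(strings.Repeat("a", 60))) // rejected
	fmt.Println(validateStatefulSetName("web"))                   // <nil>
}
```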
E
The thing we should do is take a step back. So we have a couple of different issues with name lengths with respect to the workload controllers, right? There's one with ReplicaSet naming: you can have a deployment that generates a replica set that has an invalid name. There's this one, which I wasn't aware of, where you can generate a controller revision inside of a stateful set that has a name that's too long, but the same thing can happen with a persistent volume claim.
E
So there's some... and Jobs also don't have a... the Job doesn't have a check when it does job creation, so you could get the same error there. So I think we could point-solve these, but we should probably take a step back and look at general name validation constraints inside of the workloads APIs. Or does that seem controversial to this group?
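To keep that inventory straight, a small sketch enumerating the derived-name chains being listed; the patterns are approximations of how these controllers name their children, not exact upstream strings.

```go
package main

import "fmt"

func main() {
	// Parent kind -> children whose names embed the parent's name
	// (approximate patterns; the exact construction lives in each controller).
	derived := map[string][]string{
		"Deployment":  {"ReplicaSet: <deployment>-<pod-template-hash>"},
		"StatefulSet": {"Pod: <sts>-<ordinal>", "ControllerRevision: <sts>-<hash>", "PVC: <claim>-<sts>-<ordinal>"},
		"Job":         {"Pod: <job>-<random suffix>"},
	}
	for parent, children := range derived {
		fmt.Println(parent, "=>", children)
	}
}
```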
C
I agree. I think it's better to fail the workload API's creation rather than have a failed pod creation. And, as I was saying, I think this qualifies... it's not even... it's kind of breaking backwards compatibility, but at the same time you already have a broken stateful set. So users had to take action anyway if they were creating stateful sets with longer names; we're just moving the validation sooner, right? Or, yeah.
B
Yeah, I think what's interesting here is the revision, right? So the revision in the stateful set is part of the label, but the revision in the deployment is the annotation, right? So, like, if I create a deployment, I will never have this problem: as long as I can pass the deployment check, I will not hit the label one. So this is... if I can pass the... okay.
E
Yeah, they're coming, so: deployments create replica sets, and replica sets create pods. The replica set also acts, to some extent, as a revision history of the deployment itself. For stateful set and for daemon set it's different: there, the controller revision acts as a history for revisions to the user's declared intent, and they create pods directly. So when you get name problems with a deployment, it manifests as an inability to create a replica set of the correct size; basically, you can't create the replica set, generally speaking. Or there's...
E
We'd have to go look at what the exact issue is: either you couldn't create the replica set, or you create the replica set and the replica set couldn't create the Pod. For stateful set, it'll manifest as a pod creation error, or potentially a persistent volume claim creation error, depending on which naming constraint you hit. And then for daemon set, I don't think we actually have an issue there; I think that one works. I don't... at least.
E
Since it's an annotation, we could... so annotations, I think, you're allowed to change a little bit more freely, but labels... right, like, you can have automation that looks at an annotation, but it's not actually user-selectable, whereas labels are user-selectable, so modifying those tends to be not backward compatible. But in this case, so, I mean, the...
E
The first argument you made was, like, okay: for the stateful set, if we add stricter validation, in theory that isn't, strictly speaking, compatible. Loosening validation is always backward compatible; adding new validation that's more restrictive technically isn't. But because, in effect, you were creating a broken workload anyway, no one who passed through that validation was getting anything that worked, so you're not really breaking anyone from a workload perspective. But with label selectors...
C
So yeah, my point is that we could potentially go through the deprecation period of three or four releases, I think. So, yeah, I think we need to double-check if that's possible, but recently there have been some efforts, especially in the Job API, to move all the labels from, like, their bare name into well-known kubernetes.io/ prefixed labels, right? So I think we could take the opportunity to do this migration, to add the well-known prefix and, at the same time, remove the prefix, the name prefix, from this hash, because it's not really useful. So I think we could do those two things at the same time, and of course it's going to take a few releases, but this is the right thing to do.
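To make the migration concrete, a sketch of the two-label transition being proposed. batch.kubernetes.io/job-name is an existing key; the apps-prefixed revision key below is a hypothetical example of where this could land, not a decided name.

```go
package main

import "fmt"

const (
	// Existing, unprefixed key whose value embeds "<name>-<hash>".
	legacyRevisionLabel = "controller-revision-hash"
	// Hypothetical prefixed replacement whose value would be just the hash.
	prefixedRevisionLabel = "apps.kubernetes.io/controller-revision-hash"
	// An existing precedent for the prefixed style in the batch API.
	jobNameLabel = "batch.kubernetes.io/job-name"
)

func main() {
	// During a deprecation window, a controller could write both keys so
	// existing selectors keep working while clients migrate.
	labels := map[string]string{
		legacyRevisionLabel:   "web-1234567890",
		prefixedRevisionLabel: "1234567890",
	}
	fmt.Println(labels, jobNameLabel)
}
```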
C
I'm sure API Machinery had good reasons for it; I don't know what those are. It might be... I think we respect a few DNS rules; it might be related to that, but I'm not sure.
C
Yeah, the labels are a different RFC, I don't remember the number, but there might be some relationship to that. If you ask in the issue we can dig into it, but I don't think it would be possible to change that. I'm not really sure, but I don't think so.
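For reference, the apimachinery helpers that encode those DNS-derived label rules; a small sketch, assuming a module that depends on k8s.io/apimachinery.

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/validation"
)

func main() {
	// Label values: at most 63 characters, alphanumerics plus '-', '_', '.'.
	fmt.Println(validation.IsValidLabelValue("web-4294967295")) // [] (valid)

	// Label keys: optional DNS-subdomain prefix + '/' + name segment.
	fmt.Println(validation.IsQualifiedName("batch.kubernetes.io/job-name")) // []

	// Pod names must also work as DNS labels in some contexts (RFC 1123).
	fmt.Println(validation.IsDNS1123Label("web-0")) // []
}
```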
B
So what's the next step? Like, how should we proceed with this?
C
So I think my suggestion would be, instead of cutting the name, which I think is what your PR is doing, we proceed through this deprecation period: I mean, we have to add a new label and deprecate the existing one, where the new label only has the prefix... sorry, only has the hash. And additionally we can add validation such that the stateful set name is up to, what, 63 characters. So those would be my... that would be it.
B
Okay, I think for the second proposal you have, the check that the stateful set name is up to, you know, 63 characters: that has already been checked, right? It's been validated when the StatefulSet was created. I mean, correct me if I'm wrong; I mean, if I'm not wrong, I don't think that's necessary.
C
Yes, deprecate the old one, create a new one, and we can take the chance to add the prefix, which, I guess, could be just kubernetes.io, or will it be statefulset.kubernetes.io?
F
batch.kubernetes.io/job-name, I think, is that one, and then there's a similar one. Yeah, I think it's batch.kubernetes.io for the job one, because it's under the batch API scope.
F
Looking at it, at least... well, that batch one, I just took that one because I think we used it for the finalizers or the annotation, I can't remember which one. But it does look like there's a stateful set pod name label, statefulset.kubernetes.io, and I'm wondering if maybe they broke them out by which higher-level controller represents the label.
E
Stateful set, that one was a little bit different. We implemented that because people wanted to be able to select particular pods inside of a stateful set when they were turning up certain structured storage workloads. I, yeah... I don't know if that's applicable to the other controllers.
F
Yeah, so the controller-revision-hash, yeah, unguarded... yeah, I don't know. I guess, yeah, I chose the batch one only because there was already an existing label in the job controller that used that one. And, I'm sure, apps.kubernetes.io makes sense to me.
E
Some of the labeling and naming conventions predate the KEP for consistent labeling and naming, right, and were then left in place because, you know, don't break people unnecessarily.
E
There is... I've got to go find the ticket. I mean, Tim Hockin, I remember, was involved in the conversation; a bunch of people were involved in a conversation for specifying the correct things to use for built-in controllers, for the correct labeling schemes and naming conventions for the attributes. I have to go look it up; I can't remember what the exact outcome was offhand, or if it's generally applicable to this particular situation.
B
Yeah, thanks. So how does that work? This is my first meeting, and, like, should I continue my PR, or is your organization, your team, going to implement this?
C
I don't have any cycles to work on this, so you can go ahead. Okay? Normally you would kind of lay down these details in an issue or even a PR; I think that's fine. And we need to involve the API reviewers, and, yeah, we proceed there. Yeah.
C
First, once this meeting is done, we'll have a summary of what we discussed, and the next step will be to start a PR. There are a few guides around deprecation and validation tightening, things like that; depending on which one we need to do, I can send those links once... once.
F
Yeah, I'm here; let me look at what I put on the agenda. Yeah, so this is a KEP for... it's allowing the job controller to be a bit smarter about when you're terminating pods. If you want to include... sorry: when you have a terminating pod, we automatically mark it as failed and then we start new pods, and there are cases, which I mentioned in the KEP, where that causes problems.
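A sketch of the opt-in knob being described, with hypothetical type and value names; the eventual KEP may choose different spellings.

```go
package main

import "fmt"

// PodReplacementPolicy sketches the opt-in being discussed: when may the Job
// controller create a replacement pod? (Names here are placeholders.)
type PodReplacementPolicy string

const (
	// TerminatingOrFailed: today's behavior, replace as soon as a pod is
	// terminating (deletionTimestamp set) or has failed.
	TerminatingOrFailed PodReplacementPolicy = "TerminatingOrFailed"
	// Failed: only replace once the pod has fully gone away, so the same
	// slot is never running twice.
	Failed PodReplacementPolicy = "Failed"
)

func shouldReplace(policy PodReplacementPolicy, terminating, failed bool) bool {
	switch policy {
	case Failed:
		return failed
	default:
		return failed || terminating
	}
}

func main() {
	fmt.Println(shouldReplace(Failed, true, false))              // false: wait it out
	fmt.Println(shouldReplace(TerminatingOrFailed, true, false)) // true: replace now
}
```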
F
There was a general thought that this is also useful for deployments, but I'm a little kind of stuck on how to come to an agreement on the naming of these fields, and whether or not we should have both deployments/replica sets and the jobs in the same KEP, or kind of separate these into two separate KEPs, even though I do think the implementation of these is very similar. I think that was about the original thought of why we wanted to consolidate into a single KEP.
C
Some thoughts on... I did look at it, and I have the feeling that the deployment and replica set discussion is much longer, so I'm, you know... selfishly, selflessly, oh sorry: my priorities are around Job, so I would rather separate the KEPs so the details on deployment don't block the job progression. And I like the idea that you have a new field just to keep track of the number of pods that are terminating, and then we have this control over whether terminating pods block recreation, right?
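And a sketch of that companion status-side idea, a counted field for terminating pods; the field names below are placeholders for whatever the KEP settles on.

```go
package main

import "fmt"

// JobStatusSketch illustrates the proposed bookkeeping: alongside the usual
// active/succeeded/failed counts, track pods that are still terminating.
type JobStatusSketch struct {
	Active      int32
	Succeeded   int32
	Failed      int32
	Terminating *int32 // nil when the feature is off; counted, not inferred
}

func main() {
	t := int32(2)
	s := JobStatusSketch{Active: 3, Terminating: &t}
	fmt.Printf("active=%d terminating=%d\n", s.Active, *s.Terminating)
}
```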
F
Yeah, for the job, yeah. And then a lot of the discussion, you are right, is around the deployment and the name for that one, which, I will admit, I don't have strong feelings or knowledge on. So I do think the KEP could get easily blocked. The spot I found was, we could have two different feature toggles, but I do understand that does make it very confusing for the outcome of a KEP.
F
But yeah, so I guess I can see... I mean, I think I could get the two KEPs created, and then maybe we can get the reviews on the job part and keep the discussion open for the deployment. But I guess the question for you all, though, is: do you want to try and make the API for this be similar between deployments and jobs?
F
I think there are a few cases. For deployments I can't remember what the rationale was, but for the jobs it was... I know, although you pointed out, I think the original issue was with TensorFlow, and how, if you have a terminating pod with the same index, it should be fully terminated before you start a new one. Is that right?
F
And then for deployments, I think it was a pretty long-standing issue, so I don't wanna... I actually am interested in trying to get both of them done, but there are different cases for deployments. I remember there was a pretty long-running issue; that's why Philip kind of brought this up.
E
All right, so, I mean, the thing about, like, for deployment: the use case is that the pods are supposed to be fungible, right? Like, if you have many of them, yeah, they don't have, you know, unique network coordinates or unique names; the names are generated on the fly, right? So the idea is, probabilistically, if you have N replicas, you have N-ish pods, depending on what you're doing with rolling updates and maxSurge and maxUnavailable and all that, but those are the guarantees we're providing there.
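A quick sketch of that "N-ish" arithmetic, assuming integer maxSurge/maxUnavailable values for simplicity (the real fields also accept percentages).

```go
package main

import "fmt"

func main() {
	replicas, maxSurge, maxUnavailable := 10, 2, 1

	// During a rollout the pod count floats inside this band, which is why
	// deployment pods are treated as fungible rather than uniquely named.
	fmt.Println("max pods during rollout:", replicas+maxSurge)            // 12
	fmt.Println("min available during rollout:", replicas-maxUnavailable) // 9
}
```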
E
I don't know if it would be consistent to try to, like, say, well, we're going to keep some of them as active even when they're... like, I don't know if that would really benefit the use case. I get the use case for indexed job in particular, though; that makes it a little bit more compelling. So, I mean...
E
Maybe it does make sense to try to support this feature primarily for Job to begin with, because there's a clear use case for the support of indexed jobs, and we want to use this as a primitive in order, you know, to provide some guarantees around that. I guess I would have a larger concern around how, like... without fencing, how can you actually provide this guarantee?
E
Like, I mean, doing it this way probably provides better assurance than what you could give someone today. But, you know, under network partition you could still end up with two pods, one of which is, you know, marked as terminating but not really in any way available, because it's on a node that's basically gone. And until it times out and the node is removed, and we say, like, okay, it's gone forever, you still end up with some bad network coordinates there, or a bad process; it's not actually available.
F
Yeah, I think I posted an issue from last year where I think Philip first brought this stuff up. One case was autoscaling of nodes in tight environments driving up cloud costs, and the other is when you have, like, a scarce resource, and with the terminating one it might be that they'll have to wait for it to be fully reclaimed, or whatever, before the new one gets started.
F
Those are the two cases, I think, for the deployments that he called out in the issue, but yeah, for jobs I think it was, yeah, the indexed jobs; there's quite a...
G
Maybe to clarify: when we're talking about terminating pods, I would consider the state of the pod. In other words, I was assuming that the pod is alive and serving traffic, or working as intended, because it seems to be that if a pod has been stuck in terminating, it's not alive, and I think that's what Bruce stock was mentioning. I think Kenneth was saying before that it's actually beneficial to create the new pod as soon as possible, even with previous pods still in terminating, to avoid an outage.
G
I understand the use case in a scarce environment, where you're bound by the number of nodes or pods and you want to keep it to the minimum. But at the same time, if my pod is marked as terminating and it's not available anymore, even though it lingers due to the kubelet dying or whatever, I want a new pod as soon as possible to prevent an interruption in the service, even if that would result in multiple pods simultaneously running or appearing in the API.
F
Yeah, I think generally the thought was that we would not be changing the behavior out from underneath people, but allowing an opt-in behavior. I think that's why we're going to have it as an API field, to, like, only recreate pods once they're fully terminated, or something like that. But I do understand your... I can understand a case where you would want them to be recreated as soon as they get marked as terminating.
F
So I think, going forward, what I'm hearing is, maybe I will separate the KEPs and get the job one... ideally, I think, I would like to try and get this KEP in too, if possible. I have the PR open for the code, so it's mostly getting the KEP done and understanding the design of the API field, but I think I have a pretty good understanding of the implementation. I'm hoping to try and get at least the job one into this release.
F
Is there any... any... okay, I see all those thumbs up, so I'll assume that sounds like a good way forward.
F
And just curious: when do we decide what features go into 1.28?
F
Well, yeah, I guess I'll get them in a state to kind of separate the KEPs and get the discussion going on the deployment one, and then hopefully get the job one in a better state.
A
Oh, I was talking on mute. I was asking: does anyone have anything they want to discuss?