From YouTube: Kubernetes SIG Apps 20211101
A
And recording is started. Welcome to the November 1st Monday morning meeting of SIG Apps. I'm Kenneth Owens; I'll be hosting the meeting. Co-hosting will be Janet Kuo. Let's jump into the agenda.
A
Okay, as an announcement: you can see the SIG Apps updates from KubeCon; Maciej, Janet, and I all took turns presenting, which was pretty cool. Then for the discussion topics we had today, one was progressDeadlineSeconds and the semantics thereof, with respect to what we do with Deployments.
A
So, in looking at the documentation, we never really say exactly what the semantics are for progressDeadlineSeconds, and there are a couple of kind of different things there.
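(For context, a minimal sketch of where that field lives on a Deployment. The field names are the real apps/v1 API; the name, values, and image are illustrative.)

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example            # hypothetical name
spec:
  replicas: 3
  # If the Deployment makes no progress for this many seconds during a
  # rollout, the controller reports Progressing=False with reason
  # ProgressDeadlineExceeded (the field defaults to 600).
  progressDeadlineSeconds: 600
  selector:
    matchLabels:
      app: example         # hypothetical label
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: app
        image: registry.example.com/app:1.0   # hypothetical image
```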
C
I don't really have one; I think that would be the downsides question: who could possibly be consuming the way that it works today, who would feel impacted by the change that I'm suggesting? I don't have a clear story on that, so I'm happy to step back and wait and see if anybody else has a clearer downsides pitch.
C
Okay, so the upsides pitch is that, for people who create Deployments, it's useful to have a sign of when it's time to step in and see if they need to adjust anything to assist the deployment controller.
C
That is: you're not Available=False yet, but you're not making the expected amount of progress, so an admin, or whoever it is that created the Deployment, is being asked to get involved and see what's going on; maybe something needs to get kicked, or whatever, to help recover. And so the upside pitch is that now this behavior would apply to external disruption that happens outside of a rollout, where it's not bad enough yet to be Available=False.
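(A sketch of the status being described: a Deployment still within its availability threshold while a pod fails to come up. The replica counts are illustrative; the condition types and reasons are the ones the deployment controller actually sets.)

```yaml
status:
  replicas: 3
  updatedReplicas: 3
  readyReplicas: 2
  availableReplicas: 2
  conditions:
  - type: Available
    status: "True"            # still above minAvailable, so not Available=False
    reason: MinimumReplicasAvailable
  - type: Progressing
    status: "True"            # today this stays True indefinitely
    reason: NewReplicaSetAvailable
```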
C
But
the
deployment
controller
is
frustrated
from
its
lack
of
progress
and
something
is
happening
that
seems
to
be
worth
the
admin
or
somebody
like
a
robot
controlling
deployment.
Somebody
needs
to
step
in
and
give
the
deployment
controller
an
assist
because
it
feels
like
it's.
It's
not
making
the
progress
that
it
expected
to
make.
C
Yeah, if it would be helpful for me to phrase it in a less PR-centric way, I'm happy to do that, if we want to punt.
C
So, is it... I feel like a KEP seems a bit heavy, but I can do that. Is there a midpoint pitch? Like, I tried one on the OpenShift side of things in a bug.
A
Do we reasonably see a way where this would break clusters if we added it? Will people get unexpected behavior that deviates from what they depend on today, so they upgrade their cluster and all of a sudden the new behavior...? We've always said that for conditions you should not look at them as kind of a state machine, and this should also not be a state machine, right?
A
There are conditions that you look for, and you shouldn't tight-loop and expect them to come in a particular order; we do call that out in the documentation. But as far as I can find, the open source documentation never provides guidance on specifically this condition and Deployment progression in terms of how to interpret them, right? So we've never actually told people: this is what it does and what you should expect.
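(To make that concrete, the documented shape of a Deployment condition; consumers are expected to match on `type` rather than assume a position or ordering in the list. The timestamps and message are illustrative.)

```yaml
conditions:
- type: Progressing                 # match on this, not on list position
  status: "True"                    # "True" | "False" | "Unknown"
  reason: NewReplicaSetAvailable
  lastUpdateTime: "2021-11-01T17:00:00Z"
  lastTransitionTime: "2021-11-01T17:00:00Z"
  message: ReplicaSet "example-abc123" has successfully progressed.
```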
A
Writing up the issue, I think, might be helpful to make the discussion a little bit more inclusive, I guess is the way to say it, so that people don't have to look through the entire PR and have a deeper comprehension of the deployment controller code in order to chime in.
C
It's attempting to... like, the replica set controller is trying to add a new replica pod and not making progress, and currently the Deployment will just sit at Progressing=True forever. The change that I'm proposing is that, instead of sitting at Progressing=True forever, if the replica set controller is not actually getting a new pod out for progressDeadlineSeconds, it would transition to Progressing=False with ProgressDeadlineExceeded. So the people who would be impacted are people who are looking at Progressing; that's the only thing that is changing.
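(A sketch of the proposed transition; this is the change under discussion, not current behavior, and the message text is illustrative.)

```yaml
# Proposed: after progressDeadlineSeconds without a new pod coming up,
# even outside a rollout, the condition would flip:
conditions:
- type: Progressing
  status: "False"
  reason: ProgressDeadlineExceeded
  message: ReplicaSet "example-abc123" has timed out progressing.
```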
C
So there's just one replica set; they're watching the Progressing condition on the Deployment, and they're expecting it to stay True in the absence of actual progress. And, I don't know, on the OpenShift side of things it's not entirely clear what Progressing means; there are lots of corner cases around the edge.
A
That's the fiddly bit, right? I mean, progression really didn't have any meaning outside the context of a rollout. So, in thinking about it: do people look at Progressing? Do they look at that field and think, when this field changes, has a rollout been triggered? And is that a good way?
C
Another
option
that
came
up
was
just
to
avoid
confusion.
If
there
is
disruption
outside
of
a
rollout
should
progressing
even
go
true
at
all
right.
So
for
a
replica
set
it's
more
clear
because
replica
sets
don't
have
I
mean
I
guess
you
could
change
the
number
of
replicas
that
you're
asking
for,
but
other
other
parts
of
the
replica
set
are,
are
not
like
high
changing
things,
I'm
not
even
sure,
what's
allowed
to
change,
but
so
so
it
it
makes
sense
to
me
that
a
replacement
set
goes
progressing.
F
Hey, can I give a take on it? (Sure, yeah.) I think the disadvantage is that it's blurring the difference between the Progressing and Available conditions, since right now the Progressing condition is more constrained, and with this we are just saying: observe the rollout, and once that happens, until the progress deadline is reached, check Available, and from there you can go ProgressDeadlineExceeded. So I guess it could be useful to have this behavior for some, but it could make the condition more complicated.
B
Yeah, to answer your question, Ken: I think some people are actually going to use the status to decide what the current phase of the rollout is, or the current state the Deployment is in, so people can be looking at that status. That's something we need to balance with this change: is it the same behavior that we used to have earlier?
A
With Sourcegraph, at least, you can get a good hit across the open source community of who's depending on that code, right? And there may be private ones, but there are other repos you can look in that commonly interface with Deployments to do other things, and you can take a look and see how they're using it. So that's kind of the due diligence I wanted to at least take a look at, so I can get an idea more broadly across the ecosystem.
A
You know, how common would we think this is? Try to extrapolate there. You can't necessarily... we don't have insight into people's custom tooling that they've built inside of their own organizations, but if it's never referenced anywhere on GitHub, that's kind of a strong signal.
C
And what is this? This is, like, a Go package, like the consumer links from their package godoc rendering that you mentioned... sorry, you're referencing some kind of... (Sourcegraph.) Yeah, okay, Sourcegraph. What is that?
A
So, you can pay for it as a product for your own repositories, but they index open source repositories for free. So if you install it as, like, a Chrome plug-in or even an IDE plug-in, you can take a look at a line of code and it'll find other repositories in the open source ecosystem that reference that code.
A
So
you
can
use
it
to
look
at
like
the
kubernetes
client,
for
instance,
for
deployment
and
see,
like
you
know,
what's
being
referenced
there,
you
can
use
it
to
cross
even
repositories
inside
of
the
kubernetes
organization
as
well.
A
So
it's
it's
a
code
search
tool.
It
helps
you
kind
of
look
at
where
code
is
being
used
across
repositories.
C
I
can
try
to
figure
that
out.
How
is
that
so
there's
going
to
be
lots
of
like
if,
if
folks
are
consuming
this
via,
like
a
command
line?
Client,
that's
already
been
compiled
right,
the
the
the
it's
hard
to
bridge
the
multiple
hops
from
like.
Are
they
using
like
go
for
deployment
status
and
the
progressing
condition
is
one
hop
that
is
at
least
semi-structured,
but
then,
after
that,
there,
the.
If
you
hop
more
degrees
of
separation
away
from
that
go,
it's
gonna
presumably
become
harder
to
follow
that
trail
so
like.
C
If
people
are
using
coop
control
to
wait
on
progressing,
I
I
think
it's
probably
going
to
be
hard
to
catch
that
with
a
search
tool,
but
I
can
I
can
try
to
find
this
thing
and
poke
around
a
bit
and
see
if
anything
turns
up,
because
we
don't
have
to
have
an
exhaustive
search.
We
just
need
to
find
like
one
counter
example
to
sync.
This
idea.
D
Both the notes from the last meeting and the recording go into quite a bit of detail on what the issue is. So this is essentially trying to, instead of choosing one way or the other... it basically just says: let the users decide how they want to handle those pods. So yeah, unless there are any questions, I wasn't expecting a big discussion; it's more getting review of the KEP and seeing if...
A
Yeah, I get the context; I mean, it makes sense. I think even the KEP calls out that the only reason not to do this is if we think we'd have to carry the code and it's not going to be useful to the end user, but we already have significant evidence, via open issues and other conversations, that people do want this level of control. So it seems reasonable to me.
B
And you can tag us in this; we have also started working on this, but we did not put up the KEP yet. I think we can work together on this; me and Philip can help you.
A
All right, those were the only discussion items we had scheduled for today. Does anyone have anything else they'd like to discuss?
G
Hey guys, I'm Marianne, I'm from AWS, and I would like to discuss, since we have time... let me put it here on the agenda.
G
So
basically,
we
have
a
huge
cluster
with
hundreds
of
pods
and
our
application
is
architecture
in
a
way
that
all
requesters
are
shuffle
shadow
to
three
parts.
Each
party
yz-
and
if
you
one
of
those
spots
fail
for
some
reason
are
terminated.
G
We
are
okay
to
like
the
request
is
is
okay,
but
if
more
than
one
fails
we
return
500
to
the
customer,
and
we
are
seeing
during
load
upgrades
that,
like
those
updates
are
taking
days,
and
we
would
like
to
try
to
speed
up
those-
and
we
created
this
issue
suggesting
that
maybe
you
could
have
some
feature
on
the
pad
disruption,
but
just
to
allow
would
put
disruptions
in
my
easy
because
you're,
okay
during
all
the
opi
grades,
if
we
are
just
disruption,
multiple
pods
from
yz,
our
application
gonna
be
okay,
like
we
sparked
that,
but
we
can
guarantee
today,
because
we
don't
know
if
we
increase
this
number.
G
So the behavior would be: during node upgrades, I want to make sure that I'm okay with multiple pods being disrupted if they are in the same AZ or, for example, if they have the same label; but across different labels, I want to make sure that only one, for example, or some other number, is the max.
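(For reference, a pod disruption budget today is scoped only by a pod label selector, so the budget counts across all matching pods with no per-zone dimension; that is the gap being described. The name and label here are hypothetical.)

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: app-pdb            # hypothetical name
spec:
  maxUnavailable: 1        # counted across ALL selected pods,
                           # regardless of which zone they are in
  selector:
    matchLabels:
      app: example         # hypothetical label
```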
G
Because we use the same StatefulSet across... we have a single StatefulSet across multiple AZs and use the same pod label for the pod disruption budget, and we can't have multiple... We tried to see if you could have a selector that allows us, during upgrades, to only disrupt pods from one AZ, and we are, in fact, working on something to do that, but would that be...?
G
I
think
a
discussion
smart
if
it
is
something
that
would
be
nice
to
have
on
kubernetes
itself.
A
I
see
I
I
get
what
you're
saying
so
like
what
you
don't
want.
You
don't
want
to
like.
So
one
way
to
achieve
the
same
result
would
be
to
take
the
state
like,
instead
of
having
one
staple
set,
manage
your
capacity
explicitly
inside
of
each
zone
and
then
use
a
pi
disruption
budget
per
zone.
In
order
to
take
to
tolerate
the
disruptions
you
want
in
that
particular
intersection
of
failure
domains,
but
you
don't
want
to
do
that.
You'd
rather
just
have
one
staples.
A
So
how
are
you
guaranteeing
that
the
state
will
set
like
you're,
using
scheduling,
predicates,
guaranteed
scheduling,
predicates
to
ensure
that
the
distribution
is
spread
across
ohms?
Yes,.
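(Presumably that spread is something like a topology spread constraint; a sketch, with hypothetical names, labels, and image.)

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example            # hypothetical name
spec:
  replicas: 9
  serviceName: example
  selector:
    matchLabels:
      app: example         # hypothetical label
  template:
    metadata:
      labels:
        app: example
    spec:
      # Spread the pods evenly across availability zones.
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: example
      containers:
      - name: app
        image: registry.example.com/app:1.0   # hypothetical image
```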
A
It's something that's theoretically possible to do without breaking backward compatibility, but in terms of the logic inside of eviction and disruption... yeah, I can't offhand say whether this would be easy to implement in a reliable way or not.
D
I can also... I'm not sure how easy it is to implement, but would it be possible to achieve something like this by setting up one pod disruption budget per zone? You can't have overlapping ones, so they sort of have to be separate, but would that work?
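(A sketch of that suggestion, assuming the pods carry a per-zone label: PDB selectors match pod labels, not node topology, so the pods would need to be labeled by zone, and the selectors kept disjoint, since the eviction API refuses pods covered by multiple budgets. All names, labels, and numbers here are hypothetical.)

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: app-pdb-zone-a
spec:
  maxUnavailable: 3            # tolerate several disruptions within one zone
  selector:
    matchLabels:
      app: example
      zone: us-east-1a         # hypothetical per-zone pod label
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: app-pdb-zone-b
spec:
  maxUnavailable: 3
  selector:
    matchLabels:
      app: example
      zone: us-east-1b
```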
G
Yeah, this is something we're considering too: instead of one single StatefulSet, have one per AZ, and then you have specific pod disruption budgets for those and make sure that, during node upgrades, only one is upgraded at a time. But this is a lot of work, to migrate this huge cluster.
A
And we can take a look; we can't promise anything today, obviously, but it's something we could take a look at and see what the feasibility is. There are kind of two approaches. If you're asking SIG Apps to do it, it means one of the maintainers or contributors here would have to go in and take a look, do the feasibility analysis, write a KEP to propose the work to do, and then, you know, drive that work forward.
A
The
other
approach
is
if
it's
something
that,
if
you
were
looking
to
make
a
contribution
going
in
and
doing
the
due
diligence
to
see
like
how
feasible
is
it
to
implement
and
then
proposing
that
as
a
kepler
proposing
that
is
work
to
do
that
we
could
help.
You
drive
would
be
another
approach
right
about
that
like
if
you're
willing
to
implement
it
like
it's
a
little
bit
easier
for
us
to
like,
say:
okay.
Well,
you
know
we
can
help
shepherd
it
and
we
can
be
responsible
for
maintenance
of
the
code.
A
If
we
decide
to
accept
the
contribution
and
all
that
where
right
now
is
we'd
have
to
go,
take
a
look
and
see
if
we
can
drive
it.
So
it's
kind
of
time
permitting
based
on
the
contributors
in
the
sig.
But
it's
something
we
can
consider
for
sure.
G
Oh yeah, it would be nice, maybe, to have a contributor take a look, just to see if this is something feasible to be done, if it's going to be a lot of work, or whether there's interest in having this kind of feature; and then I can discuss internally if it's something that we want to contribute.
A
Okay, does anyone have any other topics you'd like to discuss?
A
Okay, if not, we'll give everyone back 24 minutes. Thanks for coming, thanks for bringing topics, and thanks for participating; see everyone in two weeks.