From YouTube: [SIG Apps] 20201102
A: I'm recording. Okay, welcome everyone to SIG Apps; today is the 2nd of November. My name is Adnan and I'll be leading this today. A quick announcement to start off: 1.20 beta 1 was released on Tuesday.
A: Jumping into the discussion here, I guess this is somewhat of an announcement as well. In two weeks we'll be canceling the next SIG Apps meeting, on November 16th, because it will be the week of KubeCon + CloudNativeCon North America.
A: And then, with Q4 coming up to the end of the year, we'll probably cancel the week of, or the week after, the U.S. Thanksgiving, unless folks here object.
A: All right. And then December 28th is probably going to be a break for many, so we'll most definitely cancel that one. It sounds like we'll probably go ahead and cancel Monday the 30th as well, sorry, the 30th of November as well, unless anyone would like to speak up with a strong opinion otherwise.
A: Okay, so the next item on the agenda is the notion of "failed" in controllers, which is tied to an issue where StatefulSet does not update the pod image.
B: Yeah, I added it to the agenda. The topic is not only about StatefulSets; I'm pretty sure each and every controller might stumble upon this issue, for some controllers a little more than for others.
B: It is not, but basically the idea is that a lot of people think that if an image pull fails, or there are some issues with running an image, the pod is actually in a failed state. And unfortunately, the information kubectl gives you is, well, not straightforward about whether it's failed or not failed.
B: A lot of people think that, for example, a failed image pull means a failed container, which it is not, because it is actually a pending container. I remember seeing this problem a lot for Jobs, and it was recently brought to my attention with regards to StatefulSets. The question is: until now we usually said that a failed container is one that has status Failed, explicitly Failed in the status, but a lot of people see a container that, even though kubectl prints it as failed, is actually not failed.
B: There were field selectors filtering on status.phase, and phase is a super old mechanism for saying whether a pod is running or not; oftentimes "Failed" there means the pod is actually running. There are a lot of inconsistencies around that, and people were complaining. So I was curious what others think our approach should be on these failed states.
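For reference, a minimal sketch of the filtering in question (pod and container names are assumptions): a field selector matches only on the coarse pod phase, not on per-container state.

    # e.g. `kubectl get pods --field-selector=status.phase=Running`
    # evaluates only the phase below; the unhealthy container is invisible to it.
    apiVersion: v1
    kind: Pod
    metadata:
      name: example-pod        # hypothetical name
    status:
      phase: Running           # the only field the selector sees
      containerStatuses:
      - name: app
        ready: false           # the pod can be unhealthy despite phase Running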
C: I don't think an image pull error is "failed", for what it's worth; it's just an intermediate state. The registry could have been down, you could have had the wrong permissions; it doesn't mean it won't be successful the next time we try. It's not failed, right? So I understand people not getting the correct message, but I don't think introducing a notion of...
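To illustrate that intermediate state (container name and image are assumptions): a pod whose image pull keeps failing stays in phase Pending, the error surfaces only as a waiting reason on the container, and the kubelet retries with backoff, so it may still succeed later.

    status:
      phase: Pending                     # not Failed
      containerStatuses:
      - name: app                        # hypothetical container name
        ready: false
        state:
          waiting:
            reason: ImagePullBackOff
            message: Back-off pulling image "registry.example.com/app:v1"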
B: The case that I linked in the agenda is one where they used a wrong pull spec, so they knew it was wrong. But for us, running the container, it's hard to justify whether there is a missing pull secret, whether it's a wrong pull spec per se, or whether there's, as you mentioned, a temporary problem with reaching the registry; it just happens that the registry is temporarily down, or one of the intermediate steps fails, yeah, you know.
E: Or would we have to make those types of changes in a v2 anyway, right? Like, for StatefulSet I could imagine introducing something where you have an option to allow a rollout to progress, even when you're using OrderedReady, in the event that some deadline has expired for a particular pod. And I've thought about that, and I understand how that would be very useful for users.
E: But I also think that if we did anything to enable that by default, or implemented a feature like that, it would fundamentally change the expectations and behavior of StatefulSet; we might break more people than we help, right? Because now you have this new, unexpected behavior that everyone has to adopt, and that's one concern. Another concern is that Deployment and ReplicaSet, for instance, in the way they batch and manage expectations with respect to replication for pods, do try to implement some level of sensitivity to the failure of downstream components during replication; but how much of that do we want to pull in and make universal for all of the workload controllers? Because not only does it add complexity across the entire code base...
E: And, you know, if we're trying to push toward a place where it's easy for people to implement custom workload controllers to meet their specific use cases, so everything doesn't have to be a built-in, and then we implement a lot of complicated logic inside the core controllers that we expect them to mirror, we increase the barrier to entry for them to come in and actually do that; we make it harder. So that would be my secondary concern.
C: We also have a high-level concept, right? We have progressDeadlineSeconds for Deployments, which tells users at a higher level that the Deployment is stuck: something is broken, the rollout has likely failed, but we still retry. We just don't have that concept in the other controllers. And it sets a condition, so they can see it, right? Yeah.
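A minimal sketch of that Deployment-level concept (name and image are assumptions): when no progress is observed for progressDeadlineSeconds, the controller keeps retrying but flips the Progressing condition so users and tooling can see the rollout is stuck.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web                          # hypothetical name
    spec:
      progressDeadlineSeconds: 600       # the default
      replicas: 3
      selector:
        matchLabels: {app: web}
      template:
        metadata:
          labels: {app: web}
        spec:
          containers:
          - name: app
            image: registry.example.com/app:v2   # hypothetical image
    # Once the deadline passes without progress, status gains:
    #   conditions:
    #   - type: Progressing
    #     status: "False"
    #     reason: ProgressDeadlineExceeded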
E: Not being schedulable, right: the controller tries to create a pod, and the scheduler says, I can't put you anywhere, because you want a scarce resource that's unavailable, or you're scheduled for the wrong operating system, for instance; like, your container image isn't compatible with the operating system of the underlying node I want to schedule you on. All of that is opaque to the controller, respectively.
E: One thing we could do, if we want to go down this route, is try to work with SIG Scheduling and SIG Node in particular, possibly SIG Network as well, and get them to propagate errors up in a more meaningful way. Then we could just implement it as a pass-through, which is a fairly low barrier to entry in terms of a paradigm of what the best practice is.
B: Just to send a clear signal for eventual issues or questions about such functionality: to have, I don't know, a pre-made document where we would point people, like, oh, this is the kind of trouble, this is the kind of point that we discussed several times, and this is the outcome, or this is what you should be expecting.
C: That's what I do personally: if I'm looking for which pods are not running in my cluster, readiness is a great indicator, because it covers the failed ones and the ones that are failing on, I don't know, external resources. It doesn't have to be an image pull, right; you could do a wget or something at the beginning of your container startup, and that could be failing as well, and millions of other things. So readiness is what they should be looking for, at least in my opinion, but I certainly understand how those questions arise.
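A minimal sketch of that approach (name, image, and health endpoint are assumptions): whatever the root cause, an image pull stuck in backoff or a failing startup dependency check, the pod uniformly shows up as not ready, e.g. in the READY column of kubectl get pods.

    apiVersion: v1
    kind: Pod
    metadata:
      name: web-check                    # hypothetical name
    spec:
      containers:
      - name: app
        image: registry.example.com/app:v1   # hypothetical image
        readinessProbe:
          httpGet:
            path: /healthz               # hypothetical health endpoint
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10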
B: Right, either that, or... the question is where we could put some kind of information around this.
E: We could add a section to the workloads concepts on the main page and have some specific FAQs, or, you know, common issues, maybe a concept page on operationalization or something like that; just common expectations for the end user when they're working with our APIs. That might not be a bad plan.
B: Yeah, I mean, a set of frequently asked questions with regards to controllers would definitely be handy, especially since some of those questions we've been answering over and over again.
E: The interesting thing is, for a lot of the questions that we answer over and over again, if you dig deep enough, most of the time there is something documented somewhere. It's just that maybe the organization of our documentation isn't what other people expect. Like, I can find it, but apparently other people can't, and the fact that I can find it isn't valuable.
E: I mean, discoverability and organization are, I think, maybe the problem we're facing with communication with the end user. We can put some thought into it and see what we could do better, and maybe even talk to one of the SIGs that's very UX-oriented, to see what their position is and how they would do it.
A: I was just going to say, what would be the way to get that done? Would we go and create an issue in the docs repo or something?
E: As a more concrete thing that this got me thinking of: for progressDeadlineSeconds, I believe it was already brought up, I think Tomas brought it up, that we have this for Deployment, but we don't have it universally.
E: I don't know if it makes as much sense for DaemonSet; DaemonSet is very sensitive to nodes entering and leaving the cluster, and it's sensitive to taints and tolerations, which may change and affect what gets rolled out where. So it's a little bit harder to reason about the user's expectations with respect to the throughput of pod launch and the throughput of replication for a DaemonSet, but it might make sense for StatefulSet.
E: So, do we think it's valuable enough? I do think it's something we could add to StatefulSet in a backward-compatible way, because ultimately it's not going to affect how the workloads are launched; it would affect the communication of an error state to the end user in the status object of the StatefulSet, and we've already made clear, even for v1 objects, that the status may be mutated in the future. Is it worth pursuing?
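A hypothetical sketch only; StatefulSet has no such field or condition today. The backward-compatible shape being discussed would leave pod launching untouched and only add a condition to status, mirroring the Deployment one.

    # Not an existing API: illustrative shape of a status-only addition.
    spec:
      progressDeadlineSeconds: 600           # hypothetical field
    status:
      conditions:
      - type: Progressing                    # hypothetical condition type
        status: "False"
        reason: ProgressDeadlineExceeded     # hypothetical reason
        message: pod web-1 has not become ready within the deadline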
E: Do we see that as kind of a point solution to having a better error message and communication when a StatefulSet is blocked because a single pod is down? It's not something I'd thought of, because, for me, if it happens, it's usually pretty straightforward to look at it and say: oh, it's blocked, it's not rolling out because this pod is broken and it's not coming up.
C: I think it could be useful for StatefulSet, and I do share the concerns about DaemonSet, because if your cluster gets scaled, your timeouts may be pretty different. And yeah, for StatefulSet I see it: say you have a controller managing a StatefulSet, and it just doesn't do what you would do manually, right? This makes it observable to operators or other controllers that try to work with StatefulSets, and those would get a clear signal.
A: Okay, well, anything else on that particular issue?
A: Cool, I guess we can move on; that was the last item on the agenda. Does anyone else have anything they'd like to bring up?
A: If not, we have about half an hour left, so we could also do a triage if folks are interested; otherwise we can end this one.
B: Yeah, but they mentioned that it's only sometimes, and the "sometimes" is perfectly valid, because it will happen that the controller is trying to override something that was overwritten by a different controller, and we just retry, right? So this "sometimes" is perfectly valid.
A: Let me say: AKS.
B: Someone just wants the Job to copy every pod, and I'm pretty sure that someone is running a single-pod Job. Send it my way; okay, I'll respond and I'll make sure to close this one.
C: A timeout for the condition; that's usually the pod not coming up or something, but I would have to pull down the logs and prove it's not us.
A: So this would be before it's even in the server, because kustomize uses a strategic merge patch to apply.
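A minimal sketch of that flow (file and resource names are assumptions): kustomize merges the patch into the base manifest client-side, by name, before anything is sent to the API server.

    # kustomization.yaml
    resources:
    - deployment.yaml
    patchesStrategicMerge:
    - patch.yaml

    # patch.yaml: merged into the base Deployment of the same name
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web                          # hypothetical name
    spec:
      template:
        spec:
          containers:
          - name: app                    # matched by name, then merged
            image: registry.example.com/app:v2   # hypothetical image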
B: That's not surprising, and I'm pretty sure there's already an open issue about it. What's the number? Eight, eight, oh...
B: Context... oh, delete options.
B: Yeah, I'll have a look and I'll eventually comment on what's going on, or what can happen with this.
B: Yeah, there are people that will continue saying: well, you are wrong, and we need it...
A: Okay: support and require explicit time zones for recurring jobs.
B: I think I was looking at something like that, and something like that is already available, I think, but I'd send it over to the team that owns that one.
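For reference, explicit time zones for CronJobs did land later, as the spec.timeZone field (stable in Kubernetes 1.27); a minimal sketch with assumed names.

    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: nightly-report               # hypothetical name
    spec:
      schedule: "0 2 * * *"              # 02:00 in the zone below, not UTC
      timeZone: "America/New_York"       # IANA time zone name
      jobTemplate:
        spec:
          template:
            spec:
              restartPolicy: OnFailure
              containers:
              - name: report
                image: registry.example.com/report:v1   # hypothetical image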
E: They want some hybrid of Burstable QoS with only two burst modes; like, they want reserved QoS, but in two different categories: a startup category where you get a guaranteed one gigabyte and one CPU with the same limit, and then dropping to 200 after that, which you can't do, but...
E: Either way, it doesn't make sense from a scheduling perspective. They're just going to have to go with Burstable QoS, with 200 meg and 200m CPU as their low, and then one gig and one CPU as their high.
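A minimal sketch of that Burstable shape (name and image are assumptions): requests are the guaranteed floor, the "low", and limits are the burst ceiling, the "high".

    apiVersion: v1
    kind: Pod
    metadata:
      name: bursty                       # hypothetical name
    spec:
      containers:
      - name: app
        image: registry.example.com/app:v1   # hypothetical image
        resources:
          requests:
            cpu: 200m                    # the low: 200 milliCPU
            memory: 200Mi                # and 200 meg
          limits:
            cpu: "1"                     # the high: one CPU
            memory: 1Gi                  # and one gig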
E: Yeah, because, I mean, that's the honest truth; like you just said, if it restarts, you converge to that, and that's the expected behavior. So that's really what your workload shape looks like, and there's not anything more intelligent we're going to do with the additional information from the...
B: It's one of the possible directions. I think it's already under my name... but no, it's not; you can send it my way, yeah. They just did it wrong. Okay, yeah, we can consider that, but that will be only after we do the tenure.
B: So I'm not sure what they're talking about, because they meant that these should be included by default; if you want to get rid of them, that's a separate topic. Send it my way; I'll respond, I'll try to figure out what they mean and read through this, yeah.
C: DaemonSets ignore some taints on the nodes, right? So they could possibly come back. Yes, yes, but say you have a DaemonSet for, I don't know, SDN or some networking; there you actually want to drain the pods, not the DaemonSet itself, I suppose. So there would be different use cases, I suppose.
A: Bye. And just a reminder, we won't be meeting next week because it's KubeCon, so we'll see you all in four weeks, right? Thank you.