From YouTube: SIG Apps' Zoom Meeting 20200727
A
Hi, welcome to the July 27th SIG Apps meeting. I'm Janet Kuo, I'm your host today, and together with me are Ken Owens and Matt Farina; they're going to be our co-hosts. We don't have any other discussion topic on the agenda other than the bug scrub.
A
Okay, I think that's it. That means we are going to do the bug scrub. Ken, do you want to...?
B
Okay, so this first one is to update the volume claims template. I'll assign myself; there's ongoing work on a KEP for updating persistent volume claims in StatefulSets between SIG Apps and SIG Storage. So I can link this to that work and hopefully just close it.
B
I wasn't able to figure out precisely what they were looking for from us; it seems like this is potentially a scheduling issue, and there's not really enough information for me to reproduce it.
B
Right, so maybe this needs discussion. To me, I'm not sure that there's a bug; it just looks like they created a pod and it failed to schedule. But I didn't look at it too in-depth, and there's not a whole bunch of logging from the API machinery to do an in-depth review of what's going on.
B
Well, you can, I mean, depending on the maxSurge and minAvailable configuration, you could make it so that, to some degree of probability, you would never share it. But if you did it with ReadWriteMany, that would make sense for sure.
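For reference, a minimal sketch (not from the meeting) of the knobs being referred to, using the Go API types as they existed around this time: maxSurge and maxUnavailable live on the Deployment's rolling-update strategy, while minAvailable is a PodDisruptionBudget field. The "app: web" selector and the concrete values are made up.

```go
// Sketch only: Deployment rollout knobs plus a PodDisruptionBudget.
// Values and the "app: web" selector are illustrative; PDBs were in
// policy/v1beta1 at the time of this meeting.
package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	policyv1beta1 "k8s.io/api/policy/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func rolloutKnobs() (appsv1.DeploymentStrategy, policyv1beta1.PodDisruptionBudgetSpec) {
	maxSurge := intstr.FromInt(1)       // at most one extra pod during a rollout
	maxUnavailable := intstr.FromInt(0) // never drop below the desired replica count
	minAvailable := intstr.FromInt(2)   // PDB: keep at least two pods up during voluntary disruptions

	strategy := appsv1.DeploymentStrategy{
		Type: appsv1.RollingUpdateDeploymentStrategyType,
		RollingUpdate: &appsv1.RollingUpdateDeployment{
			MaxSurge:       &maxSurge,
			MaxUnavailable: &maxUnavailable,
		},
	}
	pdb := policyv1beta1.PodDisruptionBudgetSpec{
		MinAvailable: &minAvailable,
		Selector:     &metav1.LabelSelector{MatchLabels: map[string]string{"app": "web"}},
	}
	return strategy, pdb
}
```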
B
So yeah, it would look hairy in terms of any type of implementation. My gut would say no, but I don't want to be unfriendly, so if the person really wanted to think deeply about it and offer a KEP for how it would work, we could take a look. But just trivially, it's not something that I would understand how we could do easily, or how we could do in a way that would be cogent and good for users.
B
So this one, do we just want to ask this person to go to Slack?
B
Correct, and yeah, this basically works as intended. Do we want to leave it open for continued comment, or close it?
B
So based on the current documentation for what we do today, this works as intended, because we don't clean the jobs up, and that's by design with the garbage collection for jobs. It actually sounds like it would probably address this person's issue a bit more, but if they're using a CronJob as well, CronJob does have a max history, so it should clean those jobs up eventually anyway.
B
Yeah, the only thing I could think is, for this person, we could at least link them to the job controller, how the garbage collector for jobs works, say that that's an alpha feature that's coming, and then maybe give them more documentation pertaining to how CronJob and Job actually work, because I think they're probably a bit confused about what the expected behavior of the system is.
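For reference, a hedged sketch of the two cleanup mechanisms mentioned here, using the Go API types: the CronJob history limits that eventually remove old Jobs, and the TTL-after-finished field on the Job spec, which was the alpha feature being referred to. The schedule and numeric values are illustrative only.

```go
// Sketch only: the two cleanup mechanisms discussed above, with made-up values.
// History limits live on the CronJob spec (batch/v1beta1 at the time);
// TTLSecondsAfterFinished on the Job spec was the alpha TTL-after-finished feature.
package sketch

import (
	batchv1 "k8s.io/api/batch/v1"
	batchv1beta1 "k8s.io/api/batch/v1beta1"
)

func cleanupKnobs() (batchv1beta1.CronJobSpec, batchv1.JobSpec) {
	keepSucceeded := int32(3) // finished Jobs the CronJob controller keeps around
	keepFailed := int32(1)
	ttl := int32(600) // delete a finished Job ~10 minutes after it completes

	cronSpec := batchv1beta1.CronJobSpec{
		Schedule:                   "*/10 * * * *",
		SuccessfulJobsHistoryLimit: &keepSucceeded,
		FailedJobsHistoryLimit:     &keepFailed,
	}
	jobSpec := batchv1.JobSpec{
		TTLSecondsAfterFinished: &ttl,
	}
	return cronSpec, jobSpec
}
```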
B
This one we're already talking about. Oh, I kind of doubt this, but...
B
Right, right, but from the API machinery perspective, the correct result seems to be returned, and it's implemented in the REST controller endpoint for apps/v1. But it looks like when they're using kubectl to apply it...
B
It doesn't error out, right? So that seems like an error inside of the CLI, because it definitely works if you hit the endpoint directly at the API machinery level, and inside of the controller there's really nothing that's actually looked at there; we blocked this in the API machinery. So I would actually move this to SIG CLI, because it really looks like it's something to do with apply, and if...
B
Yeah, I mean, I could add us back on if we want to support it, if we're interested in taking a further look. It's just that the code doesn't even exist in the controllers; it exists inside of the API machinery, and the code inside of the API machinery clearly functions and has for many, many releases now. So by process of elimination, it's definitely an apply bug. It's not server-side apply; it's definitely client-side apply. So it's got to be kubectl.
B
Yeah, at a higher level, conceptually to me, the pod Ready condition is about the status of the pod. You have node Ready conditions in order to give you the status of the node; if a node's not ready, the pod might not even schedule there. There are a lot of different things that can happen.
D
Yeah, if a node isn't ready, there should be eviction happening; that's how we would recover from it, and those statuses are distinct. Node Ready is kubelet state: if the kubelet can't connect to the API, it transitions to NotReady, and, as you said, pod readiness means you can hit the ready endpoint.
B
I mean, I can see kind of a point that if the node isn't ready, the pod is not ready to receive traffic, but we already have a control mechanism for this.
F
It's Elasticsearch, Fluentd, Kibana, okay, at least that's what I think they mean by it. Which, if it's not running, that's a support question. Yeah, it definitely is.
B
It depends on which solution it is; a lot of them are partner-supported. Usually they're actually quite helpful, I'll give them that. There are people on the marketplace team that were my colleagues at Google that I could maybe point them to, but I'm not going to do that on GitHub.
B
Unless somebody... like, who else is on today?
H
Mike Spreitzer. Okay, I haven't been around here before, but I have some interest developing in this SIG.
B
I think in this case they're basically dependent on pod GC; I mean, there's not much we can do for them there.
B
I mean, with a large number of evictions across the entire cluster, I could see how you could enter a selector to look for things and it could get annoying, and you might want those to go away faster than they actually do, but...
B
One thing you can actually do with the API server: there are examples of how you can use the API machinery to set up, effectively, a lock for leader election, and the same thing can be done to make sure that only one cron job is actually going to start executing its code at a time. So you could use that for serialization.
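A rough sketch of that idea, assuming client-go's leader-election helpers and a Lease object as the lock; the lock name, namespace, and POD_NAME identity are made up, and the durations would need tuning.

```go
// Sketch only: a job pod takes a cluster-wide lock (a Lease) before doing its
// work, so concurrent copies of the cron job serialize. Lock name, namespace,
// and the POD_NAME identity are illustrative.
package main

import (
	"context"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{Name: "repo-sync-lock", Namespace: "default"},
		Client:    client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{
			Identity: os.Getenv("POD_NAME"), // each job pod identifies itself
		},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:            lock,
		LeaseDuration:   30 * time.Second,
		RenewDeadline:   20 * time.Second,
		RetryPeriod:     5 * time.Second,
		ReleaseOnCancel: true,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				doSync(ctx) // only the holder of the lock runs the real work
				os.Exit(0)
			},
			OnStoppedLeading: func() {},
		},
	})
}

func doSync(ctx context.Context) { /* the job's actual work */ }
```

Whichever job pod acquires the Lease runs the work; the others block until the lock expires or is released, which gives the serialization discussed above.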
B
They periodically have a bunch of cron jobs that they're launching across their clusters simultaneously in order to sync repositories, all running, let's say, every 10 minutes. What they want to do is start them all at 10 minutes, but when the jobs launch, have some mechanism to stagger execution so that they don't all actually run at the same time. So trivially, you could set the cron schedules slightly differently, but because it's a deadline scheduler, you could still get large amounts of overlap.
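The simplest client-side workaround for that overlap is for each job to sleep a random amount before doing its work; a minimal sketch, where maxJitter is a made-up knob rather than an existing CronJob field:

```go
// Sketch only: each job pod sleeps a random amount before starting, so jobs
// that all fire at the same wall-clock minute don't all run at once.
package main

import (
	"math/rand"
	"time"
)

func main() {
	const maxJitter = 2 * time.Minute
	r := rand.New(rand.NewSource(time.Now().UnixNano())) // per-pod seed, so pods pick different offsets
	time.Sleep(time.Duration(r.Int63n(int64(maxJitter))))
	syncRepositories()
}

func syncRepositories() { /* the actual periodic work */ }
```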
I
Hey, oh, hey guys, I'm Alay. I was working with Maciej the past two to three weeks on reworking the CronJob controller per the informer KEP. Yeah, you're the guy who's taking it to GA.
I
Hopefully it works out. So I had a few questions regarding this feature. The way I know CronJob works right now is that we have a deadline, after which the CronJob will list all the jobs it has started or is supposed to start, and then, if it is not past the deadline, it will start those jobs. I'm not able to figure out right off the bat how the jitter period and the deadline would work with one another in a coherent manner, because one makes it hard for the other in terms of engineering.
H
Yeah, I gotta say, if you want this done accurately, this sounds a lot like the queueing that we added in the API server. I don't have a quick answer now, but I think we need to think about whether there's something common that could be broken out here.
I
Yeah, Ken, with regards to the previous question, I would leave it with Matt for now. I'll keep it in the back of my mind as I work on the new controller, and if something pops up, I'll add a comment there and then take it forward, if that helps.
B
Yeah, I'm gonna leave this one alone. If it pops back up or gets further attention, we'll go back to it, but I'm fine; I think we can let it go stale if nobody else sees it. I would assume that if Deployment was not deleting pods at a global scale, this would have gotten a lot more attention by now.
B
But if that was really broken, even if it was on 1.15, I would go back and take a look.
B
I still don't understand what this would look like. Okay, so if you scale across two AZs, what this person is saying is: you have two pods in one AZ, and then you have one pod in another, and then, when you select the victim, you select the victim that's in the separate AZ.
B
What feature could we implement that would make that better? If you really want better availability, go with four replicas. I mean, you could still get into a scenario where you did take down both, but it's more unlikely. Deployment is...
B
To me, that's the major concern if we tried to do anything with it: it wouldn't just be a particular workload, we'd have to do something across the entire workloads API surface. And then, moreover, anyone writing a custom controller would have to be encouraged to do the same in order to be consistent. Basically, we'd be taking the position that workload controllers, be they custom or built-in, should be scheduler-predicate-aware, which is the opposite of the position we've taken.
H
It still seems a little bit bad to say we're going to have replicas that kill the wrong guy and the scheduler fix it up. Could we instead have a way of having ReplicaSet, you know, delegate or defer to the scheduler to pick the one to kill?
B
Well, yeah, but the way it would work is, like Thomas said: the workload controllers create and delete pods. Balancing those pods across capacity is not what the workload controllers do; that's the scheduler's job, and the descheduler's. SIG Scheduling is responsible for balancing capacity, how your workloads pack across your actual capacity.
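For context, the scheduler-side mechanism for spreading a workload's pods across zones is a pod topology spread constraint, which governs where new pods land (it does not change which pod a scale-down deletes). A minimal sketch with an illustrative "app: web" selector:

```go
// Sketch only: a topology spread constraint on the pod template asks the
// scheduler to keep pods balanced across zones. Label and topology key are
// illustrative.
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func zoneSpread() corev1.TopologySpreadConstraint {
	return corev1.TopologySpreadConstraint{
		MaxSkew:           1,                             // zones may differ by at most one pod
		TopologyKey:       "topology.kubernetes.io/zone", // spread over zones
		WhenUnsatisfiable: corev1.DoNotSchedule,          // treat it as a hard constraint
		LabelSelector: &metav1.LabelSelector{
			MatchLabels: map[string]string{"app": "web"},
		},
	}
}
```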
B
Yeah, okay. We got through a bunch of issues today; I think we'll start with PRs next time. We will try to actually make sure we dedicate the time to do our scrub on a weekly basis, because we've missed the past two meetings and not actually gotten it done. Not that that was a bad thing; we did have other important matters to discuss, particularly two KEPs and a proposal.
H
I have a quick question. So I'm developing a need to have something that's like a DaemonSet, but instead of a pod per node, it's a pod for some other sort of object. Is this something other people have worked on or considered?
H
All right, thank you. It seems like a kind of generic idea; I'm thinking I'll offer it upstream, since it is pretty generic.
B
So upstream in the sense that you'd do a CRD and...?
H
Okay, yeah. The use case is, it's actually kind of like Node, but we're making a control plane for things that run on machines that are not members of the cluster.
B
The original way was ThirdPartyResources; then both aggregated API servers and custom resource definitions evolved at the same time. What the community preferred, and what people have been pushed to, is to use CRDs, and CRDs are kind of the thing that SIG Architecture is really encouraging in terms of how you do custom extensions to the API machinery. There are some use cases that you can only use an aggregated API server for, and they're...
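As a rough illustration of the CRD route being suggested, a minimal cluster-scoped CustomResourceDefinition built with the Go API types; the group, kind, and "ExternalMachine" naming are entirely hypothetical.

```go
// Sketch only: a minimal cluster-scoped CRD for the hypothetical
// "ExternalMachine" idea; group and names are made up.
package sketch

import (
	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func externalMachineCRD() *apiextensionsv1.CustomResourceDefinition {
	preserve := true
	return &apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "externalmachines.example.com"}, // must be <plural>.<group>
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextensionsv1.ClusterScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural:   "externalmachines",
				Singular: "externalmachine",
				Kind:     "ExternalMachine",
				ListKind: "ExternalMachineList",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
				Name:    "v1alpha1",
				Served:  true,
				Storage: true,
				Schema: &apiextensionsv1.CustomResourceValidation{
					// permissive schema; a real controller would define proper fields
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
						Type:                   "object",
						XPreserveUnknownFields: &preserve,
					},
				},
			}},
		},
	}
}
```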
A
Okay, I think that's it. Thanks everyone for coming, see you next time. Thanks. Thank you.