From YouTube: SIG Apps Weekly Meeting 20200824
A: Okay, everybody, welcome to the Monday, August 24th meeting of SIG Apps. We have new demos, a few discussion items, and then bug scrub. So, discussion item number one is failed pod handling.
B: But there are cases where kubelet can still fail the pod and it is still left in the API, because that's a different type of eviction. And there was the issue that I linked there, from one of the users who was complaining that we don't clean those pods at all, right? And those pods are intentionally left there so the user can find out why kubelet evicted that pod. But maybe we could clean them, like, after a week or some reasonable time.
A: We could sync with kubelet, but honestly, we're not asking Node to change its behavior, because really what we're saying is: okay, kubelet, you can keep doing like you've been doing, and we will handle any of these for you. And really, kubelet shouldn't be deciding when to; I don't know where you would put that in kubelet that would make sense, right? Like, that would just be a bunch of machines pounding the API machinery.
A: I was kind of thinking of the scheduler interaction, but there really isn't one there either, right? Like, once it's evicted, the scheduler is never really going to look at it again.
A: So, you can go first. Once it said this is going to be evicted, and it has a node name set, it's not going to be a consideration for the scheduler, so it's not in any path for their code. It's not really in any path of kubelet code either, but we wouldn't want to put it there. So we'd be adding another controller to controller-manager that would manage the garbage collection.
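
A minimal sketch of what such a cleanup controller could look like, assuming client-go; the standalone-binary shape, the one-week TTL, and all of the names below are illustrative, not a design agreed in this meeting:

    // Hypothetical cleanup loop for pods that kubelet has failed/evicted
    // and that are intentionally left in the API with phase=Failed.
    package main

    import (
        "context"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    const failedPodTTL = 7 * 24 * time.Hour // "after a week or some reasonable time"

    func cleanFailedPods(ctx context.Context, client kubernetes.Interface) error {
        // Select failed pods server-side to keep the list cheap.
        pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(ctx, metav1.ListOptions{
            FieldSelector: "status.phase=" + string(corev1.PodFailed),
        })
        if err != nil {
            return err
        }
        for i := range pods.Items {
            pod := &pods.Items[i]
            age := time.Since(pod.CreationTimestamp.Time)
            if pod.Status.StartTime != nil {
                age = time.Since(pod.Status.StartTime.Time)
            }
            if age < failedPodTTL {
                continue // leave it around so users can still see why it failed
            }
            err := client.CoreV1().Pods(pod.Namespace).Delete(ctx, pod.Name, metav1.DeleteOptions{})
            if err != nil {
                log.Printf("failed to delete pod %s/%s: %v", pod.Namespace, pod.Name, err)
            }
        }
        return nil
    }

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            log.Fatal(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        // A real controller would use informers; a resync loop keeps the sketch short.
        for {
            if err := cleanFailedPods(context.Background(), client); err != nil {
                log.Printf("cleanup pass failed: %v", err)
            }
            time.Sleep(10 * time.Minute)
        }
    }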
A: Currently, API machinery pretty much owns GC outright, so we would probably have to sync with them.
A: So that's not clear, because when Janet was proposing modifications to the Job API GC, they were very involved with it, and I think they claimed ownership at the time. But I also think at this juncture they'd be willing to cede it if we want to take it over. So yeah, I guess we can sync with them. Does anyone else have thoughts?
A: Antoine's not here; we'll put the bug scrub last, after we get through the discussion points. He wanted to talk about the Application CRD, and we'll see if he comes on; if he pops on before we start the bug scrub, we can see what he was thinking. This one was kind of interesting: cost-based scheduling, or scaling down of pods.
D: Yeah, I'm here, can you hear me? Yeah? Cool, great. So this is a, like, at least five-year-old issue, which I found in the scheduling issues. In short, when you scale down, you do not always pick the right victim, and when you pick a wrong victim, you can diverge from your scheduling plan or constraints, and, like, eventually your pods are distributed in a way that doesn't correspond with the scheduling predicates, scores, and so on.
D: So this is about approaching this. If you take a look at the issue closely, there's, like, a lot of discussion, other ideas how to approach this, and I basically just summarized what's there. I'm suggesting that we use an annotation at the beginning: we annotate each pod, we rank the pod, like, assign a score which will describe how important the pod is, and then every controller will read the annotation and take it into account when deciding which pod should be the right victim.
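
For context: a per-pod ranking annotation along these lines later shipped in Kubernetes as controller.kubernetes.io/pod-deletion-cost (beta since v1.22). Below is a rough sketch, not actual ReplicaSet controller code, of how a workload controller could rank scale-down victims by such an annotation; the helper names are invented for illustration:

    package controllerutil

    import (
        "sort"
        "strconv"

        corev1 "k8s.io/api/core/v1"
    )

    // The annotation key this idea eventually shipped under.
    const PodDeletionCost = "controller.kubernetes.io/pod-deletion-cost"

    func deletionCost(pod *corev1.Pod) int {
        v, ok := pod.Annotations[PodDeletionCost]
        if !ok {
            return 0 // absent annotation means "no opinion", implicit cost 0
        }
        cost, err := strconv.Atoi(v)
        if err != nil {
            return 0 // malformed values are ignored rather than blocking scale-down
        }
        return cost
    }

    // PickVictims returns the n cheapest pods to delete: lower cost first,
    // so "more important" pods (higher cost) survive the scale-down.
    func PickVictims(pods []*corev1.Pod, n int) []*corev1.Pod {
        sorted := append([]*corev1.Pod(nil), pods...)
        sort.SliceStable(sorted, func(i, j int) bool {
            return deletionCost(sorted[i]) < deletionCost(sorted[j])
        })
        if n > len(sorted) {
            n = len(sorted)
        }
        return sorted[:n]
    }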
D: So right now the process of deciding which pod is the victim is quite simple, and in general the controller doesn't see the entire cluster. So I also suggest that we run another component which will watch the entire cluster and rank the pods. So there are actually going to be two parts: the first one, where we will need to update every controller to take the annotation into account, and the other part, where we...
D: Basically, we provide a default component, which might be the descheduler, or, like, an updated version of it, which will do the ranking. And if the default component won't be sufficient, basically any customer or user of Kubernetes can implement their own component that will do the ranking, for example in case there are some, like, proprietary solutions, or for security reasons where the code just can't be shared.
D: So, if you take a look into the Google doc, there's a link to the KEP, where I describe the approach and the important details. And the reason why I'm talking about this today is just to talk about whether you think or feel that updating controllers to read annotations is the right way, or an approachable way, or if you think that this can't fly.
D: It's going to be the new component, right? So right now, yeah, the new component will try to use the scheduling framework and make the same decisions. Or, like, that's the idea: that the component will import the scheduling framework and try to be as much aligned with the scheduler as possible.
A: Okay, so I'm trying to understand how this works. Okay, so let's say I have a Deployment, right? The Deployment creates a ReplicaSet; the ReplicaSet is scaled up. Each of these pods is created from the pod template spec in a uniform way, right? At some point, prior to being persisted in storage, the pod's annotations are modified. Where does that happen?
D: So, if I understand the question correctly, we run, like, some form of the scheduler that will write it after a pod is created.
D: Well, it doesn't matter when it lands on the node, because it's about scaling down, so...
A: Right, but I don't understand how it works. Because, okay, if you're using hard spreading, like, if you're using required scheduling predicates, then it just won't schedule in the event that, you know, you can't meet the scheduling predicate as defined. But if you're using preferred scheduling, if you're using soft spreading predicates...
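
For reference, the hard versus soft distinction being drawn here maps to pod topology spread constraints. A small sketch using the standard k8s.io/api types; the selector and the zone key are just examples:

    package example

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    var selector = &metav1.LabelSelector{MatchLabels: map[string]string{"app": "web"}}

    var spread = []corev1.TopologySpreadConstraint{
        {
            // Hard spreading: pods stay Pending rather than violate the skew.
            MaxSkew:           1,
            TopologyKey:       "topology.kubernetes.io/zone",
            WhenUnsatisfiable: corev1.DoNotSchedule,
            LabelSelector:     selector,
        },
        {
            // Soft spreading: the scheduler only prefers balance; nothing
            // re-checks it when a controller later scales the workload down.
            MaxSkew:           1,
            TopologyKey:       "topology.kubernetes.io/zone",
            WhenUnsatisfiable: corev1.ScheduleAnyway,
            LabelSelector:     selector,
        },
    }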
A: So then, like, if it's not based on an actual topology of the capacity of the workload as it's spread across the cluster, I mean, it's not clear to me how that would work from a controller standpoint. It would basically be opt-in, right? So, like, even if, let's say, SIG Apps agrees to modify the controllers to take the annotation into account when we're selecting victims...
A: In the replica controller, aside from the risk inherent with bugs introduced in adding it, it probably isn't that bad on our side. And then there are people creating their own controllers, which would be my primary concern: like, if you look at OpenShift operators, for instance, right? If they're creating their own custom workload controllers that already have knowledge about the topology of the workload that's being orchestrated, and they don't want to opt into this...
A: They can basically ignore the annotation and, you know, say, okay, whatever, right? And for us, we can wait until it materializes as an annotation that's actually introduced in a Kubernetes version before we start work at all, right? So we can get confidence. There are some nice properties there, but my kind of question, like, with soft spreading is:
A: If that adds value... if, like, we're making bad guesses, and we believe that this cost-based accounting, weighted based on the scheduling topology, will allow us to make better guesses, I can buy that. But it's not clear to me exactly how that plays out in all the cases.
D: Yeah, like, the issue... I did not think about this case, but we can ignore other pods in a pending state, maybe, and I think we delete pending first, that was...
D: Yeah, okay, like, if you do not consider the annotation, can you always make a better decision? I mean, like, if you use the annotations, can it go wrong, or, like, will you get a worse solution?
A: So for StatefulSet it's not going to matter anyway in most cases, right? Like, only in the burstable case, and then that doesn't really affect scale-down; we always do, like... yeah, so I don't think it matters for StatefulSet. For Deployment, the... I'm sorry, for DaemonSet, the topology considerations are probably quite a bit different, right? Because you're not downscaling replicas, right? You're associating a pod with an individual node as the node is created within the cluster.
A: So for those two controllers, I'm not sure that the applicability is very... I don't think we care as much; like, it's just not really going to affect the outcome. For the batch controllers...
A: You're dealing with run-to-completion workloads anyway, right? Like, these aren't things that you scale up and down to run forever. You may increase the parallelism, but there's probably not a lot of value to considering an additional annotation if you're playing around with the parallelism of a job that's running, if we even allow that; that's not a common use case that I've worked with.
A: To be honest, so really we're talking about Deployments, and when we talk about Deployments, the ReplicaSet controller and the replication controller are the things that are actually scaling them up or down. So those would be the things that we care about. And, like, there is a valid case that it would be desirable for the end user that we could optimize for scheduling predicates.
A: That's been a request that came back a while ago, but we really didn't want to (and this was in discussion with Node and Scheduler) tie the logic for scheduling predicates tightly into the controllers, right? Because it breaks encapsulation and separation of concerns, effectively; it's, like, not an awesome design principle for the system as we've architected it. So this seems very lightweight.
A: It doesn't increase the barrier to entry for new people writing a controller, because you can just ignore it and implement whatever behavior you have. And really, thinking about it (and you know this could be wrong, because it's just an initial assessment), it's really a ReplicaSet and replication controller thing, and probably not the replication controller, because effectively, the way the ReplicaSet and replication controller are implemented now, you'd actually implement it once in the code base and get it in both places.
A: But, you know, for those two, that's what we would care about, and if we can give better scheduling based on it, that's not bad for people who run small ReplicaSets. I think the thing is that I don't know if the value is super high. It's like, the idea behind ReplicaSet is you're going to run a large number of fungible replicas, and if you're really concerned about disruption, you increase the replication, right?
A: So if you want to probabilistically ensure that no zone or rack or whatever doesn't have sufficient capacity during an update, you tweak the maxSurge and maxUnavailable parameters, and then you tweak the replication to ensure that you have sufficient capacity to tolerate the planned disruption of a rollout.
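
A concrete illustration of those knobs, using the standard apps/v1 types; the numbers are arbitrary examples, not recommendations from the meeting:

    package example

    import (
        appsv1 "k8s.io/api/apps/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    var (
        replicas       int32 = 6 // e.g. two per zone across three zones, with headroom
        maxSurge             = intstr.FromInt(1)
        maxUnavailable       = intstr.FromInt(1)

        // Surge one pod above the desired count before removing any, and never
        // dip more than one below it, so a rollout disrupts one replica at a time.
        strategy = appsv1.DeploymentStrategy{
            Type: appsv1.RollingUpdateDeploymentStrategyType,
            RollingUpdate: &appsv1.RollingUpdateDeployment{
                MaxSurge:       &maxSurge,
                MaxUnavailable: &maxUnavailable,
            },
        }
    )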
A: For scaling, how do you preserve it? You know, like, if I have a spread... like, let's say I have three availability zones or three racks, and I spread five pods across it, and then I scale down to four. It's not clear to me what predicate I'm trying to preserve at that point; like, you wouldn't have even spread, right? If I scale down from, like, let's say, five to four, I'm always going to have some imbalance.
D: You can scale down by 10, and with bad luck you can just delete all the pods in one zone and be left with all the pods in just two zones.
D: Yeah, like... yeah, I don't have any, like, specific use cases or, like, customer issues dealing with that, so...
A: I mean, like, the overarching thing is, like: what we have now is at least hypothetically good from a standpoint of probability. If you have three zones, 10 nodes per zone, one pod per node, so 30 nodes, and you scale down 10, it's very unlikely that you get that non-uniform distribution of capacity across the cluster. And then the descheduler exists to actually fix things like that, in the event that you do cause it.
A: For sure, and that's what I'm saying: like, the initial point was that what this buys you is a better decision about termination, about selecting a victim, so that you don't have to pay double disruptions on a scale-down; you don't have to pay for the reschedule. I'm just trying to, like, figure out how much of a problem it actually is for end users, though. I mean, like, let's say it does take an hour for the descheduler to kick in, and take the cost aside.
A: If it takes an hour for the descheduler to kick in, there are basically two scenarios if you're spread across multiple zones. Either, when you scaled down, you had an imbalanced cluster for an hour, but it wasn't really problematic because you didn't lose any additional capacity, right? The problem becomes...
A: Then you have to scale back up and rebalance across the two original zones, but you would have to do that anyway. So, like, from an availability perspective, I'm not sure it actually buys you much; I guess it's just primarily removing the cost associated with having the descheduler put your pods back on the cluster.
D: So, like, I suggest... so I suggest the annotation for alpha, and once it gets, like, promoted to beta, then I suggest that we use either a field in the pod status or a CRD.
A: If I remember correctly, that was the decision; we could take it back up there. But, I mean, I guess on another point: did you present this to SIG Scheduling?
A: So I would say this: I would definitely not block doing this. Like, the advice I would give is, yeah, don't go with the alpha annotation. But, you know, we don't have to: if the scheduling SIG approves of this generally and wants to move forward on this path, the controllers don't have to consume the annotation until it's basically stable, right? Like, you can have the alpha annotation in there, or whatever version of the annotation in there, and get the work done in the scheduler.
A: And ideally you would want it done in the scheduler first, so that the controllers can consume something that's relatively stable in terms of API, and then, you know, we could consume it once it's kind of in there. If I were pushing this KEP forward, I would get Scheduling's input first and make sure that all the implications behind scheduling are understood there, and, like, that they're really sold on the idea and they want to move forward with it.
C: It could probably help, for example: when we create pods, the pods are scheduled by the scheduler. So if there's something that can help the controller to decide which one to scale down...
C: Using the same logic, for example: instead of deleting a pod, it goes through something, through the scheduler, to decide which pod to scale down. But I feel it's something that could live outside of the ReplicaSet controller.
A: Makes sense; I get what you're basically saying. So you're saying that, rather than have the controller... rather than invert control of scheduling decisions in the case of scaling and have the controllers have to deal with it, you have some other API than delete. Then the controllers use that to say, like, for instance: for this ReplicaSet, and all pods selected by this label, I need the scheduler to select one for termination, and then the scheduler actually goes through and deletes it.
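
No such API exists in Kubernetes. Purely to make the shape of the idea concrete, here is a hypothetical CRD a controller could create instead of deleting pods itself; every name here is invented:

    package v1alpha1

    import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

    // ScaleDownRequest asks a topology-aware component (scheduler, descheduler,
    // or something new) to choose and delete N pods matching a selector.
    type ScaleDownRequest struct {
        metav1.TypeMeta   `json:",inline"`
        metav1.ObjectMeta `json:"metadata,omitempty"`

        Spec   ScaleDownRequestSpec   `json:"spec"`
        Status ScaleDownRequestStatus `json:"status,omitempty"`
    }

    type ScaleDownRequestSpec struct {
        // Selector scopes the candidate pods, e.g. a ReplicaSet's pod labels.
        Selector metav1.LabelSelector `json:"selector"`
        // Replicas is how many victims to select and delete.
        Replicas int32 `json:"replicas"`
    }

    type ScaleDownRequestStatus struct {
        // DeletedPods reports the victims so the owning controller can
        // reconcile asynchronously, as raised in the discussion below.
        DeletedPods []string `json:"deletedPods,omitempty"`
    }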
D: Asynchronously. Who are you asking here, Tomas?
B: So he proposed that we will have some kind of API that we will use to say: hey, delete three pods that belong to this pod selector, right? Basically, we will either note it somewhere in the API, or we will have an endpoint that basically will do it live during execution of that call, right? And I was asking which of those she would prefer, or was thinking of.
A: Okay, sure, I don't care. I don't think she... she just said we should have a mechanism, and described that, and I said, oh yeah. I brought up the API concern, but neither of us, I think, has thought through completely, like, a mechanism that would do it right; like, there are different ways. But yeah, I think it's valid to say that, like, maybe the annotation isn't the right way to achieve the desired state, and we could...
A: But what I'm not hearing is a lot of pushback. I think there are questions, like: how valuable is it? How many users will it help? And, you know, how much impact will it have on the community if we do it? So from a prioritization perspective there are some questions, but I'm not hearing anybody say outright, like, this is crazy and we shouldn't do it, either.
D: Okay, like, also... so Brian Grant mentioned, a while back, that the scheduler knows what's best for the pods right now, but it doesn't know what's best for the pods, like, in the future. So, like, it knows what's right right away, but, like, after days you can have completely different demands on how the pods should be distributed.
D: So it's something that, like, cannot be decided by the scheduler alone; hence using another component that will include the scheduler decisions.
D: The component can also include additional constraints which take the future into account, like metrics, for example, and based on those, decide which pods should be scaled down. So it's not just about the kube-scheduler. And also, SIG Scheduling, like, made very clear that the descheduler component is not meant to decide which pod gets scaled down, so it's, like, completely independent of that. So that's why I'm deciding, like, to build a new component, yeah, and so on.
A: Okay, so, to summarize the feedback we've given so far: no one in SIG Apps is saying that we're opposed to this idea in general; I think it's kind of clear what the objective is. There are some questions with respect to how many users are impacted by this and how valuable it would be for the community, but that doesn't mean we shouldn't do it.
A: So it's okay to take a risk and do some things that we think will be valuable without complete evidence that it is. But from a prioritization perspective, supporting this over, like, CronJob GA or PDB GA isn't something we're going to do. And then, like, I think the feedback we can give... so, in terms of whether it's the scheduler or another component that actually implements selecting the victim and terminating it...
A: The mechanism that's required on the controller side, I think, is where we're collecting the most feedback, and a couple of members of the SIG have requested some time to kind of think through the idea of annotations, or whether there's a better mechanism in the API machinery that we could use to implement the interface between whatever component would take care of the descheduling and victim selection and the core controllers. I think the other thing is that we don't think this is probably applicable for StatefulSet.
A: It's not clear what we would do here for DaemonSet, or if it makes sense. There are other... there are definitely topology constraints for DaemonSet that are very interesting, especially in terms of the update strategy for DaemonSet, which is something we've talked about for a while and something Clayton has kind of an open KEP about, but not for this particular use case.
B: In the end, there can be more than just one controller deciding the priority, right? So maybe something to consider for the next time is if those two could be merged together, and how you could deal with multiple items, because... yeah, I posted a link to the enhancement into the chat as well.
A: That makes sense. And then also, like, I didn't try to push and dig deep into the scheduling considerations, but in theory Kubernetes does allow you to run multiple simultaneous schedulers, and, like Tomas just indicated, if this was going to be something where you were trying that, you'd probably have to think around: how do I run multiple simultaneous versions of this? Yeah, there are other scheduling considerations that we really didn't touch on here, but I think that would be a conversation that you're going to have to have with SIG Scheduling too.
D: Yeah, definitely, I will talk to them as soon as I can, and, yeah, like, anyway, thanks for your time and for, like, thinking about it, and I will try to address your comments as soon as I can.
A: Thank you. We'll give you the feedback on the KEP and try to open up a discussion about it, and thank you for your contribution.
E: Yeah, so, first, yeah, I like it, I think. Okay, so, a proposal, if someone has questions about it... but I wanted to talk with the community about the future of the CRD and get your point of view about it. So, is everyone aware of the Application CRD, right?
E: So, yeah, as you know, it's, like, a top-level resource that came out of the app definition working group, to, like, collect all resources that belong to one application entity. And it started as a CRD, and I faced, like, some challenges for adoption because of that. So, for example, like, a few weeks ago I wanted to integrate it with the Kubernetes dashboard, and the answer was, like: if it's a CRD, we don't integrate with it. And speaking with a lot of users...
E: It was, like, also the same issue, saying: oh, I can't install a CRD in my cluster, OpenShift cluster, or whatever. Basically, it's even in Kustomize, or toolings around packaging and managing applications.
E: That's a blocker; like, we can't be involved in something that is optional. Like, we could base our tools on something that maybe the user doesn't have, so now it adds extra logic to say: does the resource exist or not, etc. So people just keep, like, working around that and reinventing; I see in Kustomize some things that are close to it, etc. Everything makes me think, like: what do we want to do with this resource? Do we want to continue on that, or just, like, archive the project?
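
The existence check Antoine mentions is typically a discovery query. A small sketch, assuming the Application CRD's published group/version (app.k8s.io/v1beta1); the function name is illustrative:

    package main

    import (
        "k8s.io/apimachinery/pkg/api/errors"
        "k8s.io/client-go/discovery"
    )

    // hasApplicationCRD reports whether the cluster serves the Application API,
    // so a tool can degrade gracefully when the CRD is not installed.
    func hasApplicationCRD(dc discovery.DiscoveryInterface) (bool, error) {
        resources, err := dc.ServerResourcesForGroupVersion("app.k8s.io/v1beta1")
        if err != nil {
            if errors.IsNotFound(err) {
                return false, nil // CRD not installed; fall back to own behavior
            }
            return false, err
        }
        for _, r := range resources.APIResources {
            if r.Kind == "Application" {
                return true, nil
            }
        }
        return false, nil
    }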
E: Is there a possibility to not be a CRD? Yeah, that's the main question I had for today. And, just to finish, going back a bit to the context where I started: the logic was, like, we start as a CRD, and if there is adoption, we discuss if we want to have it in core. But clearly the fact that it is a CRD, and outside, like, not built into the installation... I don't care if it's core or not, but at least deployed with the distribution.
E: A built-in type... like, the fact that it's not built in, like, from the feedback, it blocks adoption for the tools; they prefer just to go their own way and build their own things.
F: So, Antoine, thanks for bringing this up, and, by the way, I'm later than you, so you weren't the latest one to show up, I'm sure. I don't think it's going to be able to land as a built-in type anytime soon, though, because I think one of the criteria was proving use and success first, and so we don't have a lot of use yet, which is probably going to end up being a blocker.
A: The opposite is true: supporting CRDs that live outside of built-in types as officially supported parts of the ecosystem is more the direction that the project is going. So if the blocker is being a built-in type, there might not be a lot we can do to address it.
A: If the blocker is a lack of prevalence, that's something we can think about and see what we can do to get better adoption. Morten had reached out to the Helm community to see if it could be incorporated as part of Helm 3; from their perspective it's, like: okay, we'll do our own thing, you can use the Application CRD if you want to, and there should be support for it. One of the problems that we found in experimenting with doing...
A: ...that, though, is that it's very clunky to kind of generate and use the Application CRD there for Kustomize. So Kustomize has some issues with CRD support in general, in terms of being able to, like... generating them might be something you can do, and we could look at, like, doing plugins for Kustomize to do Application CRD generation, which was something that people were working on. For being able to mutate it based on a patch file...
A: ...that's something that's an open issue that Anand actually opened against Kustomize recently; that's problematic to begin with. So that could be one of the reasons: compatibility with Kustomize might be a thing, because it's a CRD.
A: So, I mean, one of the major distributions of Kubernetes, for both public cloud and for cross-cloud and for on-prem, is adopting it, so, I mean, potentially... and they did give some publicity to it as well, but potentially leveraging that might be a good thing. The other suggestion I have, that I didn't follow up on because it was other people who were primarily kind of leading the effort...
A: I just tried to get some people in the room to talk, but there are some overlaps between the metadata associated with OperatorHub and the Application CRD, and one of the things we wanted to see... Antoine, I believe you were one of the people who was kind of involved in that conversation.
A: Are there things we can do there where we can kind of either converge toward one thing, or see how these can complement each other for custom controllers in OperatorHub? But I think it sounds like your issue might be more of, like, globally UI and CLI than kind of the back-end side; like, I want the Kubernetes dashboard to support it, and yeah.
E: No, I completely understand; that is, like, the approach, at least, I've tried. And I say, like, I talk about the blocker because, like, I can make the case, for example, in the discussion for the Kubernetes dashboard. And that's the same thing when we talk even internally with OpenShift and OLM: it's, like, it's not built in, so we prefer to go our own, like, own way.
E: Basically, it's too much trouble to rely on something that the user might have or might not have, instead of owning those things themselves. So, one...
F: Yeah, hey, Antoine, I like where you're going with this. Might I suggest (I'm willing to help work on it) we come up with a plan to improve adoption and try to lower the barrier for others to pick it up? I'm willing to sit down and work on it; if you want to get together, I'm happy to do that. But I think we might want to start trying to come up with an overall plan to increase adoption and make it easier, or identify bottlenecks.
E: Yeah, I'm completely down for it, and, okay, let's... maybe I'll say, basically, I need some, yeah, support to at least reach out to communities and try to see how we can include that, and if it still makes sense to include this resource.
A: The other people... so, GKE and Anthos have kind of bought into it. I think EKS and Amazon are, like, very different; in talking to them, they're, like: what UI, right? Like, their version of... they don't have the same type of functionality that Azure and GKE kind of implement from a UI perspective. But we haven't ever talked to the people from Azure to see if maybe they would be interested, or IBM, or Oracle, right?
A: So, like, we have one major cloud provider that's adopted it; having more than one might be a very compelling kind of use case for standardization, I would say. Okay, but the thing I don't know is: is dashboard integration worth it at this point?
A: ...the open-source dashboard, because if you run most security scanning tools on a Kubernetes cluster and you're running the dashboard internally, it'll flag it as a security vulnerability and tell you to uninstall it. So my question is, like: is Kubernetes open-source dashboard utilization growing, or has it come down to the point where everyone's using something like Octant or another dashboard for their Kubernetes clusters?
A: I actually don't know. So, I mean... and this is just my impression, and I don't want to badmouth anybody who works on dashboard, or badmouth the open-source dashboard in general, but in talking to users in the community, it does seem like people are moving away from the open-source dashboard because of the known security vulnerabilities associated with it. I think even under the gov-spec Kubernetes security standard, the dashboard is something you shouldn't run. So, I don't...
A: I don't know if, like... that was one of the reasons why I'm just, like: I don't know if we should push there for a tighter integration, just because it doesn't seem like it's something that's being invested in heavily.
F: Dashboards in general, I mean, if I'm being completely honest, are really hard, because many of the distros are trying to provide a unique dashboard in order to have their business differentiator there, which means the Kubernetes dashboard gets underinvested in, which ultimately ends up driving people to other dashboards. And it's hard to get movement on a truly useful one, or even know what useful is, and to whom, and create one. And so it's a complicated space.
A: It's complicated. I mean, like, state of the art, as far as I can tell... at least my personal favorite, I should say, is Octant, from Bryan Liles at VMware. It's nice because it does impersonation, and it restricts your access to the cluster based on the credentials that you actually present. It doesn't have to... you can run it locally, as opposed to running it in cluster, or you can run it in cluster, and it doesn't have the same level of security vulnerability.
A: So it's something that I can internally distribute in my organization, and people can actually use it without the security and trust teams being, like: whoa, hold on, no. So, you know... but for OpenShift, they have, like you said, a very nice dashboard. GKE also has a very nice and well-invested-in dashboard.
A: It's not clear that we're going to get the community investment unless we try to go out and build it. But the meta point was that, you know, I don't know if that route for the Application CRD is what we would want to push on, so much as trying to get adoption by major organizations, or people who are building tools for them. Like, it would be nice, maybe, to talk to Bryan and see if, like, we could get some adoption in Octant, even though it does have some pretty good CRD support to begin with.
F: Let's come up with a plan and reasons for it; I can work with you offline on it.
A: Okay, well, we didn't get to a full bug scrub; we only have a few minutes left. So I would like to know... I didn't see anything that was a super high priority, that's burning right now, that we need to jump on top of and put some resources on. Is anyone aware of any super-hot bug right now that we need to throw somebody at because it's burning? Bugs that we should bring up, like, in this...
A: Okay, I will take that as a negative. Next time, because we didn't get a bug scrub in this time, we should definitely do one again.
A: All right, so, definite bug scrub for next time. Thanks, everybody, for attending, and thanks, everybody, for the contributions that they brought up or offered; there were some great discussions, and I'll see you in two weeks. If anything urgent comes up, you know you can always hit us on Slack. Have a good one, thanks again.