From YouTube: SIG Cluster Lifecycle - Cluster API Office Hours - Special session for Addon Management - 20220428
Description
No description was provided for this meeting.
B
So we are gathered here together today to talk about add-ons and Cluster API, specifically a proposal that you, Fabrizio, initiated a few months back about leveraging Helm, or something like Helm, basically leveraging a pre-existing packaging platform that has lots of community backing to do some of the work on behalf of CAPI add-on reconciliation.

B
So essentially we have a proposal that outlines what that might look like, and that proposal is still in progress. So, welcome. If I have that browser tab open like everyone else (I'm maintaining many browser tabs), we'll provide a link in the chat and also in Slack to encourage folks to participate in that proposal. More concretely, Jonathan Tong and I have been working in parallel on a prototype of what this might look like.

B
Jonathan's been doing that work in his own GitHub repo. Thank you, Fabrizio, for the link. The main purpose of this meeting is to get folks who are interested in add-ons and Cluster API together and make sure that everybody understands that there are some folks working on a prototype, maybe to restart some momentum. I know this has been something that's been stop-started, stop-started for many years now, because it's a fairly complex surface area.

B
So, anyway, I'm glad to see lots of folks here. Are both Jonathan and myself hosts?

B
Looks like that's true. Okay, cool. So maybe to open it up: does anybody want to ask any questions, or should we just go through what we've got so far, and hopefully that will surface questions? Anyone have anything they want to bring up around Cluster API add-ons in general?

B
Okay, Jonathan, you want to...?
A
One point is that one of the goals of Cluster API is to leverage the ecosystem, and this is what we're trying to do here: to use what already exists in the ecosystem for orchestrating add-ons. So we are not reinventing add-on management; we are just orchestrating something that does add-on management already, in coordination with the cluster lifecycle managed by CAPI. This is why the proposal is called Cluster API add-on orchestration, and Helm is the tool that we are using.

A
We provide a clean way to tie the lifecycle of the add-on, so when I do install, when I do upgrade, et cetera, et cetera, to the lifecycle of the cluster. But yeah, we are starting from the foundation, and that's only a few months in.
B
Clarifying, Mike?

D
That way, and I wonder if you could talk to some of the challenges that maybe using Helm charts solves there, because, you know, everyone's used to manifests. Helm is, well, many people are used to Helm, but I don't think Helm is completely ubiquitous, you know.
A
We know that this space is super opinionated and people like their flavor of add-on management. Okay, so what capabilities would we like to reuse? First of all, tools like Helm have already solved a lot of problems that are related to add-on management, and let me give you an example: packaging. How do you distribute your add-on?

A
Okay, another one: templating. How do you allow some variables, some information about the cluster, to flow into your add-on once your add-on is in the cluster?

A
So what we expect is that people that opt in to Helm, or Argo CD, or different tooling, want their add-ons and their applications managed in a consistent way, because they basically leverage a single toolchain. So that's the advantage: we reuse some capabilities, packaging, templating, eventually reconciliation of the installed state, etc., etc., and we basically meet the customer where they are instead of proposing something different. This is the main advantage in looking to reuse what the community has instead of reinventing the wheel.
D
You know, just raw manifests, and of course that doesn't solve the problem of injecting variables that might exist in the cluster into something you might want. But I just wanted to expose some of those details, because I think, you know, there's the simplest layer that everyone understands, and then everything else is something you have to learn to get into these things. So thank you for expanding on that.
B
Although that's probably going to be out of scope, and Fabrizio maybe doesn't want me to say that so we don't get folks' hopes up, but it would actually be really easy, with reduced functionality, to just use YAML and have the ability to define affinity between CAPI clusters and this new CRD, and then, you know, kind of like ClusterResourceSet, Kevin.
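For reference, Cluster API's existing ClusterResourceSet feature already works roughly that way for plain YAML: a label selector binds ConfigMaps or Secrets full of manifests to matching workload clusters. A minimal sketch (the names and label are illustrative):

```yaml
apiVersion: addons.cluster.x-k8s.io/v1beta1
kind: ClusterResourceSet
metadata:
  name: calico-crs                # illustrative name
  namespace: default
spec:
  strategy: ApplyOnce             # resources are applied once, with no ongoing reconciliation
  clusterSelector:
    matchLabels:
      cni: calico                 # workload clusters opt in via labels
  resources:
    - name: calico-manifests      # ConfigMap holding the raw YAML to apply
      kind: ConfigMap
```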
E
...opinionated space. What I'm wondering is, can it be more narrowly focused in the problem space, so it doesn't step on other things so much? Like, for example, starting up a kubeadm cluster.

E
But those are kind of add-ons also, right? And maybe the networking layer is kind of an add-on layer too, and the more things like OLM expect CoreDNS to be there and working, and kube-proxy and an SDN. Argo, or, you know, all the other ways of doing package management or managing things, kind of expect that base-layer cluster. So we've got the really low-level stuff.

E
We've got that little bit of add-on in between that's much more variable than that really base cluster, and then we've got everything else built on top of it. And I'm wondering if we can scope the add-ons for discussion here in such a way that it's really just that middle piece, which is maybe less religious, and there might be more common ground that can be found there.
B
So I would actually advocate, or this is my idea, that one of the things we agree to is that we don't define what an add-on is as part of this work stream. We simply define a set of functionality that we fulfill, and if you can use that set of functionality to do something on your cluster in an add-on-like way, then that thing that you're doing qualifies as an add-on. But we don't worry about fulfilling first-principle definitions of what an add-on is. So is CoreDNS an add-on? Is Calico an add-on? Is the out-of-tree cloud provider an add-on? I agree with you, Kevin, that there is no easy way to find where the frontier begins and ends.
A
Yeah, that's the first point in the document we try to provide, and we already iterated two or three times on what the definition of a cluster add-on is. So, TL;DR, it really only makes sense to use what we are building today for add-ons that are really related to the cluster lifecycle: add-ons that you need to install when you create the cluster, add-ons that you need to update when you update the cluster, like CNI, CPI, CSI. In my opinion, other add-ons do not really make sense here.

A
Let me say this is the scope we are trying to limit to: add-ons are Kubernetes applications which you need for your cluster to work and whose lifecycle is linked to the cluster. But really, if you have better ideas and you can help us shape the definition, then, I agree with you, this is an important part of this proposal.
E
Yeah, I think you run the danger of, like, well, why do I need the OLM then? I can just make my operator deploy with the add-on tool, and then some of the best practices around being able to scope an operator to a namespace aren't easily possible with that tool. But people don't know that until they dig through the details enough, and there's just a lot of confusion that can happen when things aren't scoped well up front, I think.
B
Yeah, I agree with that. I do think that a lot of this can be clarified by strongly stating that this proposal is not being put forward in order to better solve problems that have existing solutions. So you lay out things like CoreDNS: there are solutions for that. Cluster API could not really exist at all, it couldn't exist in its current way, if kubeadm wasn't handling those things. So those are solved problems; Cluster API doesn't have the problem of...

B
Right, and I see that Mr. kubeadm has his hand raised. Go ahead, Lumiere.
F
Very nice qualification he gave me there, but, you know, I wanted to point out that I think we want to cut CoreDNS out not because it's an add-on or not an add-on; it's actually a problem for us to maintain it, because of the migration library that it has to import.

F
So it's like you have to have an operator for CoreDNS, but if both Cluster API and kubeadm need this add-on, maybe it should be a separate entity, a separate application, that both of them can deploy. Maybe kubeadm can skip it, and Cluster API users can deploy it and customize it in the way that they want.

F
But we cannot do that today, and actually the Cluster API users of kubeadm have to skip the built-in add-on, and maybe they can use Helm or something else to deploy it on their own.

F
But the space is really, really entangled. And going back to the topic of whether an add-on is an application: I think that if you design this solution so it is workable, at some point people will start using it, and maybe they can actually use the hooks to treat their cluster applications as add-ons. So basically they might want to upgrade their applications with the same hooks that you use for the add-ons in this solution you are proposing.

F
So I'm generally in favor of this. I still, of course, have concerns about the overall ecosystem: that we are designing a solution for something high in the stack, like Cluster API, while lower in the stack we are not going to have an add-on manager that is workable for kubeadm.
B
This is really a Cluster API solution, and one thing we haven't mentioned thus far is one possible problem area that this is going to solve: common componentry that exists across a fleet of clusters. I think we've been focused on how core a thing is in terms of its add-on-ness, but it doesn't have to be a core part of the cluster functionality.

B
It could just be a very domain-specific set of complementary componentry that, managing my hundreds of clusters, I install in common across all those clusters. So having a Cluster API-aware solution where I can define that in one place, and then Cluster API will take care of rendering that across all the clusters.
E
That may push the packaging requirements back to the developers, who now need multiple packaging mechanisms to concern themselves with, right? Like, some folks' preferred way of packaging things might be, say, the OLM, but they might need to package it also as an add-on in order to reach some other subset of folks.

E
It's kind of unavoidable, I think, for that bootstrap, low-level component place; there it's maybe unavoidable. But at the higher level, the ability to take an app from Kubernetes cluster to Kubernetes cluster and deploy it commonly, without changes, across all the clusters, that's one of Kubernetes' really strong points. We have to be careful not to step on that, I think. Arun?
G
I just mentioned it in the chat: we are close to the halfway mark, and I think add-ons is a good topic and, as Kevin has mentioned, it is going to be a religious topic with a lot of discussion. So should we schedule separate time for that and go on with the actual Helm demo? I think many of us are looking forward to how it would work. And of course this is a fundamental question.

G
But I do not want this to become a bike-shedding sort of discussion, with everybody adding their idea about what an add-on should be like. And, I mean, I understand that; we should have a separate discussion for that. Yeah, not to say that, but I just want to see what is available now, and come to that discussion separately and add my opinions as well.
B
I'm totally fine with that, unless there is significant opposition. So, Jonathan, I think you are co-host; do you want to...? So, this is Jonathan Tong; we work on CAPZ together, and he has been prototyping what this might look like.
I
Give me a sec to share. Cool. So I guess we've been prototyping a controller using kubebuilder as a way to build a proof of concept, and also to experiment with some features that could inform how we decide to finish up the proposal. So, to test out our controller, we made a sample of the CRD, called HelmChartProxy, and in there we have a label selector that selects...
I
Did it stop? Oh, okay, I'll just switch the window if we need to, but yeah. So this is an example of the CRD we've defined. We have a selector that lets you select the workload clusters we want to install it on, so here we try to match labels with cloudProviderAzure: true. And for the Helm chart, we specify the repo URL, the chart name, and the release name we want to install, as well as the version.
I
So we can set the cloud controller manager's container resource limit for CPU to six. And another thing we've been experimenting with is, say we want the cluster name for each workload cluster to be resolved at runtime; so we've been experimenting with adding this sort of syntax, where anything in these double braces can specify the Cluster resource and access the fields in there.
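Pulling that description together, a rough sketch of what such a HelmChartProxy object might look like is below. The API group, field names, label key, and templating syntax are illustrative reconstructions from the discussion, not copied from the prototype repo:

```yaml
apiVersion: addons.cluster.x-k8s.io/v1alpha1   # hypothetical group/version for the prototype CRD
kind: HelmChartProxy
metadata:
  name: cloud-provider-azure
spec:
  clusterSelector:
    matchLabels:
      cloudProviderAzure: "true"              # only workload clusters carrying this label are targeted
  repoURL: https://example.com/helm-charts    # placeholder chart repository URL
  chartName: cloud-provider-azure
  releaseName: cloud-provider-azure-oot
  version: 1.0.0                              # placeholder chart version
  values: |
    cloudControllerManager:
      clusterName: "{{ .Cluster.metadata.name }}"   # resolved per workload cluster at runtime
      resources:
        limits:
          cpu: 6
```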
I
Yeah, I guess just to add a bit more: this is sort of the first iteration of the proof of concept slash prototype, and here our goal was to use Helm as a library and be able to install the Helm chart, upgrade it when we see a change, and delete it. So far we've imported Helm as a library, and even though there isn't an SDK, it seems pretty straightforward so far to implement. With that, we can go ahead and switch to the console.
I
One second, there's the Zoom... do you know how to get rid of the Zoom thing? All right, I got it. So we can...
B
Yeah, there it is, cool. So yeah, this is an example of an implementation of the CRD, HelmChartProxy, that we would be installing on our management cluster, and then this would target all clusters running on the management cluster that had the cloudProviderAzure: true label, I think in the Cluster spec itself, on the Cluster resource, right, Jonathan? Yeah, yeah, on the Cluster itself.
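In other words, a workload cluster opts in just by carrying the matching label on its Cluster object, along these lines (the label key is again illustrative):

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: workload-cluster-1
  labels:
    cloudProviderAzure: "true"   # matched by the HelmChartProxy clusterSelector sketched above
spec:
  # infrastructureRef, controlPlaneRef, etc. as usual for a CAPI cluster
```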
B
So yeah, Kevin, you've asked a question in chat. Great question, and it's really important to understand: at least in terms of the scope we're working under, which is a defined scope, we're solving the problem of installing add-on componentry on workload clusters, not on the management clusters themselves.
A
Is there a relation... what is this add-on, CPI, CNI?

I
This is cloud-provider-azure. This is...
B
That is such a good question. That is going to be dependent on each individual chart. For this chart I can speak to it, because I actually maintain this chart in the cloud-provider-azure repo: this chart does that automatically. Helm provides a primitive called, it's like, capabilities.KubeVersion or something or other, so for charts that deliver things that are sensitive to the version of Kubernetes that's actually running, you can templatize that in the chart itself.

The way that it works for cloud provider is that the out-of-tree cloud provider for Azure maintains separate releases that map to specific Kubernetes releases. So if you're on 1.23, you need to use version whatever of the cloud-provider-azure runtime, and so in that chart itself there is logic that will reference the correct cloud-controller-manager and cloud-node-manager image depending on the version of Kubernetes that's detected at runtime when you install the chart.

There are other ways that this could also be expressed depending on the chart, but because Helm provides that primitive capability, it would make the most sense, as far as this CRD goes, assuming that we're using Helm, to leverage that, as opposed to adding additional introspection functionality into the controller itself running on the Cluster API management cluster.
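A minimal sketch of the kind of chart-side logic being described, using Helm's built-in .Capabilities object; the image names and version cut-offs are placeholders, not the actual contents of the cloud-provider-azure chart:

```yaml
# templates/cloud-controller-manager.yaml (excerpt)
containers:
  - name: cloud-controller-manager
    {{- if semverCompare ">=1.23-0" .Capabilities.KubeVersion.Version }}
    image: registry.example.com/cloud-controller-manager:v1.23.0   # placeholder tag for 1.23+ clusters
    {{- else }}
    image: registry.example.com/cloud-controller-manager:v1.1.0    # placeholder tag for older clusters
    {{- end }}
```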
A
Yeah, this makes sense. It is just a thing that is important to point out in the document, because it is kind of elegant, but yeah, it is a requirement for the Helm chart authors. So...
I
That's not something we're planning on getting to soon, actually. But going back to what you were saying before: I know you mentioned in the document we're talking about adding a compatibility matrix to match the chart version to the Kubernetes version. So, Jack, are you saying that what Helm does solves that problem for us, or do we have to implement that when a chart doesn't provide it on its own, if I'm understanding this correctly?
B
So Helm provides a solution, but it's up to the chart author to actually leverage that, and, as folks have suggested in this call, you know, there are a million different ways that people are using package managers, including Helm, so we can't take it for granted that that'll be the case. That being said, I think it's appropriate for an MVP-type thing we're working on here to call that out of scope, and, based on maybe further discovery in the ecosystem, we can discover how practical it would be to ship something like this and essentially say: if you need to deliver your add-on in a way that's specific to certain versions of Kubernetes, then you must...

B
Sorry, but yeah, does that answer your question about upgrades, Fabrizio? Essentially, we haven't done it yet, but the way that I think it would work is that we would just do a subsequent helm install, and Helm is smart enough, assuming that you have sensitivity to the Kubernetes version defined in the Helm chart, that it would re-render the actual complete set of Kubernetes specs, and there would be a delta, a diff, there compared to the existing ones, because following the upgrade you would have a new version of Kubernetes. Or, if there wasn't, let's say the only difference is between 1.21 and 1.22, but 1.22, 1.23, and 1.24 all get the same thing, then the helm install would essentially do nothing. But that's actually work that Jonathan and I had to do ourselves, because Helm doesn't do that; it's not that smart, actually.
A
That's fine, I know this is a prototype. I think what is important, the idea, is that with this prototype, and this is where orchestration kicks in and where we are, let me say, becoming really different than ClusterResourceSet, is that with this PoC we are not only installing but also orchestrating, and orchestrating is the key.
G
So I was just curious: you could do a helm list of things which were installed by the CRD, so how did Helm understand that? Actually, I have not used Helm that much. At one point Helm used an operator, but nowadays we don't use those operators and Helm works directly, I suppose. So how did Helm understand that, on this cluster, the operator actually installed it?
B
I think we probably need a Helm expert on here, but I think what Helm does at runtime is it takes the set of results from the various Kubernetes resources that it installs, and it bubbles that up as, like, a terminal state. So it would fail if any one of those resources fails to install, and then it keeps all that data somewhere, you know, in etcd, whether it's a ConfigMap or something; I'm not exactly sure how it works under the hood.
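For what it's worth, Helm 3's default storage driver records each release as a Secret in the namespace the release was installed into, which is how helm list can see installs made through a library-based controller. Roughly (the release name here just reuses the illustrative one from the sketch above):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: sh.helm.release.v1.cloud-provider-azure-oot.v1   # release name plus revision number
  namespace: kube-system                                  # wherever the release was installed
type: helm.sh/release.v1
data:
  release: <encoded release record>   # chart, values, rendered manifests, and status
```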
G
So the other thing was: since Helm sees that, then somebody can do a helm uninstall of cloud-provider-azure, right? And would the operator kick in and reconcile and actually install it again?
B
Jonathan, I think that's unimplemented, but I think that would be the idea. That would certainly follow the Cluster API patterns, and it would be very... okay, yeah, non-idiomatic not to. That's not something we've gotten to yet, but it's definitely something we want to do. Yeah, I would imagine that's just how Cluster API works, so we would want to do that.
G
All right, and one last question; this is regarding Fabrizio's point. For things like, for example, kube-proxy and so on, there is a sort of complex orchestration wherein, when you have a rolling upgrade, you have a newer version coming in and the older version's deployment still needs to be there. So there is a sort of a handoff, wherein the whole thing comes up and then it's killed.
B
Helm is going to rely upon the leader-elect winner of the control plane to report the version of Kubernetes running on the cluster, which is not always going to be the same, especially during upgrade situations, as every single node running in that cluster. So, as far as I understand it, Helm will take what the control plane returns back; that's the version of the cluster.
B
If, for example, let's say Helm does an upgrade based on a cluster transition, and one of the underlying pods gets scheduled on a node that's been upgraded an hour later, assuming that we're not dealing with a static pod here, that pod would obviously get cordoned and drained in a graceful upgrade situation when it gets scheduled onto another node.
A
Yeah
is
it
let
me
so
sorry,
scott.
I
just
answered
to
these
questions,
so
I
think
that
we
don't
have
a
solution,
but
we
are.
We
are
laying
out
the
foundation.
So
if
you
think
that
today,
what
we
have
today
is
the
cougar
proxy
manifest,
which
is
are
coded
in
cover
me
and
with
that
you
can
not
do
really
much
okay.
A
Now, what we are giving you is that you can write your own manifest, and we are giving you this add-on orchestration that is looking into the cluster events, and so these provide the foundation for orchestrating: create the first add-on, then the first DaemonSet, then the second one, etc., etc. So we are building.
J
Yeah, I was just wondering if the idea had come up: I know you're using matchLabels right now for the clusters, but if matchFields or matchExpressions were used here, even though they're less friendly syntaxes, they could then be used for version, just by adding the matchFields in there.
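As a hedged sketch of what that suggestion could look like on the selector, assuming clusters exposed a version-ish label (the field support and the label key are hypothetical; the prototype only has matchLabels today):

```yaml
spec:
  clusterSelector:
    matchExpressions:
      - key: kubernetes-minor-version        # hypothetical label carrying the cluster's minor version
        operator: In
        values: ["1.22", "1.23"]
```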
B
I like this, I like this exercise where we're imagining this is a real project and we're setting out our next release milestone. Good callout. Guillermo?
K
Yeah, I have a question. You mentioned that Helm already has the capability to map, internally, different versions of the charts, of different releases, to different versions of Kubernetes, which, in my mind, was one of the first goals of this kind of project: to have the ability to do those things, whether it's with Helm or any other tool. And that made me think, well, it's kind of like a first solution just to say, hey, Helm already supports this; we don't need to implement it at the proxy level, it can already do that, right? And then, as you said, we can talk in the future: if most of the charts don't support this functionality, then we should probably implement it at the proxy level. But if we are leaving it up to the add-on orchestrator, or add-on provider, however we call it, to solve those kinds of problems...

K
...what is the minimum set of requirements, or the minimum set of patterns, that this framework is adding? Because, as of now, the only thing that I'm seeing is that we are going to have a controller that either uses the hooks or watches objects in the cluster and, based on that, performs some kind of operation based on some CRD that is unique to that provider, right? I haven't seen yet, you know, a common pattern that all the add-on orchestrators will follow. I don't know if I'm making sense there or not.
B
Something like ClusterResourceSets with orchestration: basically, a controller that sits behind a CRD that has an affinity with clusters running, you know, in CAPI, but with primitive orchestration, CRUD, as a first-class feature, which is different from CRS. In terms of what would be different between Helm and some other implementation, I think those differences would probably be reflected pretty clearly in the specific CRD specs that those specific implementations would use, if that makes sense.

But I want to go back to the remark you made that the chart version correlates with Kubernetes versions. That's actually not exactly right. The version of a chart is simply, like any software release, just an immutable reference to a chart at a point in time. The part that handles "if I'm on Kubernetes 1.22 do this, else if I'm on 1.23 do that" is baked into the chart itself; that's literally part of the templating of the chart, and it's irrespective of the version of the chart. A chart author might, you know, choose to maintain separate versions of charts that deal with separate versions of Kubernetes, but the reason that this capabilities thing in Helm exists as a primitive is to prevent that, because it's a lot of work.
B
So, Fabrizio, you have something?
A
Please comment on that. Then, depending on the add-on manager that you choose, you can get those capabilities for free, like we are doing for Helm, or you have to implement them in your proxy and orchestrator. So we go back to the idea that we leverage whatever the community offers as much as possible: whenever we can avoid reinventing the wheel, let's do it, and at the extreme this controller will be really simple, because Helm is a fantastic, is a great, tool.

A
Maybe if we use something else, there are some capabilities missing and our controller becomes more complicated, but I think that in terms of the general capabilities and problems that we have to solve, we have a good list; please chime in. And yeah, that's an interesting part, because if I look at this as a CAPI maintainer, I agree with a few of the points: this is a prototype, we are using Helm, but I really would like this to become a pattern, or a...
E
You know, the one big difference I see here is that you're managing workloads on the remote cluster from the management cluster, right? That's kind of different from all the other tools that I've seen, and maybe if it's couched that way, you know, as a way of maintaining remote clusters, then it's not...
B
No, I don't think so; this really is a Cluster API-specific thing. I mean, I would be happy if there are any outcomes that could positively affect the broader ecosystem, but what we have been talking about really is scoped to Cluster API; we're focused on Cluster API customers.
A
Mike?

D
And, you know, then how does this interact with the notion of a joined cluster? You know, is this workflow still applicable in that kind of case? Or, in a joined cluster, would you create the workload first, let the add-ons run, then move the Cluster API pieces into that cluster?
B
It, you know, using Cluster API primitives, knows how to get a kubeconfig for the workload cluster. For that cluster it may be the same cluster, but it just confuses things to worry about it at that level. It's better to think in abstractions: the Cluster API Cluster is authoritative for how to contact the workload cluster.
A
Maybe, maybe not. I think the distinctive value here is that we are going to integrate with Cluster API, and this makes your add-ons basically pop up in a cluster according to the lifecycle of the cluster managed by Cluster API. So yeah, at the end we are working with many clusters, but the orchestration tool is really specific.
I
Yeah, yeah. So, where we were before that: we can get the values out of the Helm release we installed, and we can see that here we resolved the name of the workload cluster using that field.
B
So this doesn't really show that much value, because you could do this with Helm now with a single cluster. But imagine that you were running a fleet of a thousand clusters; then this is super cool. If anybody's thinking, what's the big deal about this, the big deal is if you had a thousand clusters, all with that matching label.
B
I mean, at least for this phase of things. If we're able to create value that has general application, then certainly in a follow-up phase we can discuss how to broaden the scope.
I
Yeah, so I'll also add on another part where this can have value in Cluster API: we can leverage this so we can reference a lot of the specific abstractions and resources in Cluster API as well. But, to wrap up the demo, if we delete this resource, we should see that it's deleted off the workload cluster.
B
Thank you, Jonathan. I think, in order, I will call on Scott. I think, Lumiere, you may have had your hand raised earlier; maybe you changed your mind. Go ahead, Scott.
J
All of the other tools that I've seen that are multi-cluster from a single management cluster all accept a kubeconfig secret, which we already have in CAPI. So you would not need to have the Helm reconciliation, and it would make it broader, supporting Flux, Argo, anything that would be wanted, by just adding basically that shim layer of giving where the kubeconfig path is, what the version is, whatever someone would want to take out of the cluster spec, back into those resources, and letting the other tools handle the actual reconciliation into clusters.
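For context, the kubeconfig secret being referred to is the one Cluster API already generates per workload cluster, which looks roughly like this (names follow the usual CAPI convention; the namespace is just the Cluster object's namespace):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: workload-cluster-1-kubeconfig   # CAPI convention: <cluster-name>-kubeconfig
  namespace: default
type: cluster.x-k8s.io/secret
data:
  value: <base64-encoded kubeconfig>    # consumed by tools that can talk to remote clusters
```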
B
This all sounds awesome. Based on my experience in this world, I would just say the best way to marshal that inertia toward general utility would be to build this thing for Cluster API and just make it awesome, and then invite folks to make the next iteration that's more general.

You know, especially now: this is literally something Jonathan is just doing in his GitHub repo, so there's no danger of us building something that's going to set some IEEE standard for the next 30 years without enough, you know, broad generalization. But I mean, I'm super happy to hear you be able to connect those dots so easily, Scott. That's awesome.
G
Yeah, thanks for the nice demo, John. I had a question which is more generic, on the add-on side. We know that some clusters are very powerful and strong and some clusters are very rudimentary. So is there a thought of having some sort of configurability on a per-cluster basis? Say I have a workload cluster which is a big data cluster, which needs a lot and requires some more resources or some different kinds of charts, as compared to something which is our daily workflow and not...
B
...doing much. Yeah, that's interesting. I mean, one thing I could imagine would be, you know, in Jonathan's CRD example there's a CPU limits, or resources, configuration there; you could maybe imagine computing that at runtime from something that you get from the Cluster API.
G
So the reason I'm asking is: if Helm solves it, it'll solve it in a Helm way, but I want to know if there's a CAPI or generic method you want to think about for all add-ons in general, and solve it that way, as compared to the Helm solution alone. Fabrizio, do you want to take a shot at that?
B
All right, this was, I think, really lively; way more people than I expected. So I really appreciate Vince reaching out yesterday during the CAPI office hours to encourage us to more broadly advertise this meeting, and I'm glad we recorded; I mean, this is great. I agree. Let's meet back up here in a couple of weeks; if I see the same folks, that's a really good sign, and Jonathan and I will keep going. I think it's time to really lean into the proposal a little bit more.
B
So
please,
folks,
if
you
really
are
interested
in
this,
let's
hang
out
on
that
google
doc
a
little
bit
and
get
that
into
we
kind
of
in
the
race
between
prototype
and
and
spec.
The
prototype
is
kind
of
out
run
the
the
spec
a
little
bit.
So
let's
get
the
spec
caught
up
to
reflect
some
of
the
learnings
from
the
prototype
and
from
these
discussions.
I
Yeah, and to add on, we're also trying to figure out what the CRD is going to look like. So far, what we have is mostly just to test our controller and, you know, see what's possible, so in terms of that, the UX is still in development. So if you guys have any suggestions on how to do that, especially for the configuring-cluster-variables stuff, that would be really helpful.