From YouTube: Cluster Addons meeting: 2020-03-31
Meeting notes: https://docs.google.com/document/d/10_tl_SXcFGb-2109QpcFVrdrfnVEuQ05MBrXtasB0vk/edit#heading=h.yugdr2ba2zfv
A: You know, in the bootstrapping process it needs a lifecycle that matches the cluster at some level. But when I think about the kube-proxy replacement feature, that's one of the biggest use cases: empathizing with the community and with third-party vendors and projects that are producing networking solutions that replace kube-proxy, and then having kubeadm be this very well supported upstream tool that is, in practice, endorsed by Cluster API. And the thing is, there's really no way to disable the installation of kube-proxy right now, right?
E: Add-ons would need a way to express these kinds of dependencies, like "conflicts with kube-proxy" or "replaces kube-proxy", or something like that, in addition to other things like "requires Kubernetes to be configured with a pod CIDR that matches this", or some way to expose this information so that we can consume it at bootstrapping time from Cluster API.
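To make the kind of dependency metadata being described here concrete, a minimal sketch follows. The ClusterAddonMetadata kind, the addons.x-k8s.io group, and every field name are invented for illustration; this is not an existing API.

```yaml
# Hypothetical add-on metadata of the kind discussed above. The group, kind,
# and field names are invented for illustration; no such upstream API exists.
apiVersion: addons.x-k8s.io/v1alpha1
kind: ClusterAddonMetadata
metadata:
  name: cilium
spec:
  provides:
    - PodNetworking
    - ServiceLoadBalancing
  replaces:
    - kube-proxy              # this add-on supersedes the kube-proxy add-on
  conflictsWith:
    - kube-proxy              # ...so the two should not be installed together
  requires:
    podCIDR: 192.168.0.0/16   # must match the cluster's configured pod CIDR
```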
C: So there are kind of two levels of answer. I think one is that we very much wanted to be able to strip the hard-coded stuff out of places like kubeadm, and that gives you a way to solve the "replaces kube-proxy" question, because you just have nothing, or whatever the absolute minimum is, and then everything is an add-on, including kube-proxy.
C: I'm pretty familiar with installing CNI and thinking about this. I believe I've had the issue open for about three and a half years that we need to be able to get back the cluster pod network, either from Kubernetes itself, or, in the context of installing and bringing up a cluster, I think it can be a top-down thing. The person requesting a cluster can say: here's the CIDR that should be used, or probably a set of them, for IPv6 and so forth.
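For reference, the top-down flow described here roughly matches how Cluster API already lets the cluster requester state the CIDRs once in the Cluster object. The field names below are from the v1alpha3 Cluster type, quoted from memory, so treat them as approximate.

```yaml
# The requester declares pod/service CIDRs up front; bootstrap tooling and
# add-ons would consume these values instead of re-deriving them.
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Cluster
metadata:
  name: example
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
        - 192.168.0.0/16
    services:
      cidrBlocks:
        - 10.96.0.0/12
```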
A: As a mechanism that does make sense, but there's certainly the UX consideration when thinking about Cluster API, kubeadm's component config, the kops configuration, the cluster API from eksctl, and then combining that with some of what we've proposed, like the add-on installation API, which doesn't need to run from inside the cluster, so that from a bootstrapping context you could use the same API to get the add-ons into the cluster.
A: "When there's no CNI" is a thing; that was the proposed integration with kubeadm. But composing things like the eksctl cluster API, or Cluster API itself, with the kubeadm component config gives you the bare minimum of validated objects that would allow you to specify the same thing in multiple places. That's certainly been my intention or expectation of how the add-on installer would compose with tools like kubeadm.
A: So that you could say, from the start: I'm not using kube-proxy, I'm using Cilium with these flags enabled; I have machines that are of this Kubernetes version, this operating system, and this kernel; I'm using kubeadm to bootstrap the API server and scheduler with these arguments; my nodes are annotated in these ways, so there are route reflectors, or whatever. And there's a lot of the same value being used in multiple places, or values that are related to each other or calculated from each other.
A: But then these objects have to be composed, in a similar way that Deployments and Ingresses, and, say, Service annotations with external-dns or something, would need to be composed together. And oftentimes you're also passing, like in that example, a fully qualified DNS entry into your application so it can do routing or something. So, as somebody who's working with all of these API objects and composing them together...
B: I'm not totally sure why we're talking about templating and such things in this context. The thing it seems like we should be discussing on this topic is whether there should be a high-level capability, like an idea of a high-level capability in the cluster. Say, CoreDNS provides the DNS capability; a CNI add-on provides a general networking capability, and some CNI add-ons provide a service load-balancing capability; and kube-proxy is just a service load-balancing capability provider, or something like that. I don't know what the best language is, but something that would manifest itself in a fairly high-level, hopefully simple enough API, where we could look at a cluster and say: okay, this cluster has a service load-balancing capability implemented by kube-proxy, and are we going to go and get rid of kube-proxy and install Cilium instead? Whatever the networking capability provided would be, we could swap it out or work with it.
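One way to picture the high-level capability API being sketched here; the naming below is entirely hypothetical, just to make the idea concrete.

```yaml
# Hypothetical "capability" records: a high-level statement of which capability
# the cluster has and which add-on currently provides it. Names are invented.
apiVersion: addons.x-k8s.io/v1alpha1
kind: ClusterCapability
metadata:
  name: service-load-balancing
spec:
  providedBy: kube-proxy   # could later be swapped for a replacement such as Cilium
---
apiVersion: addons.x-k8s.io/v1alpha1
kind: ClusterCapability
metadata:
  name: dns
spec:
  providedBy: coredns
```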
C: We also need to be able to, I think, replicate what we have today. What we thought should be possible is to take the hard-coded config for CoreDNS and replace it with a soft-coded add-on config for CoreDNS. We thought that should be possible; look, we thought that should be the simplest possible thing, and no, we suddenly and immediately fell into this trap: you need to do an IP address computation.
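The IP address computation in question is presumably the usual cluster-DNS one: kubeadm derives the CoreDNS Service IP from the service CIDR (conventionally the tenth address), and the kubelet has to be told the same value. A rough sketch, assuming kubeadm's defaults:

```yaml
# Input: the user-facing service subnet in the kubeadm configuration...
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  serviceSubnet: 10.96.0.0/12
---
# ...output: the CoreDNS Service must be assigned an IP computed from that
# subnet (the 10th address by convention), and the kubelet's clusterDNS must
# match it. This coupling is the computation that the add-on config drags in.
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
spec:
  clusterIP: 10.96.0.10
  selector:
    k8s-app: kube-dns
  ports:
    - name: dns
      port: 53
      protocol: UDP
```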
E: So one of the other challenges that we see from the Cluster API side is the infrastructure requirements that may be needed, especially in the CNI space. For example, if you use one CNI you need to ensure that VXLAN traffic is enabled between nodes; if you use another one, you need to have this port enabled, with this protocol, to ensure that things work, and you're trying to fit the right infrastructure requirements to the right CNI provider in particular. But we can also look at other things as well: if we consider things like CSI providers or cloud controller managers in scope, those could also introduce IAM requirements between the infrastructure provider and the plug-in installed. And how do we express those requirements in a way that ensures those infrastructure requirements are met?
B: Yeah, and, well, Cluster API is in a much better position than kubeadm, right; Cluster API could actually do something about it. And on the point regarding the ports, I would imagine it should be possible to actually read the ports that a DaemonSet uses. Obviously that's a little bit indirect, but you could probably look at the ports that are actually being declared in the DaemonSet, or whatever the pods are, yeah.
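As an illustration of the indirect route mentioned here: the ports a CNI DaemonSet needs are often already declared on its pod template, so tooling could in principle read them from there. A generic sketch, not modeled on any particular CNI; the image and port numbers are placeholders.

```yaml
# Generic CNI-agent DaemonSet: the declared hostPorts/protocols are one
# (indirect) source of the "open this port between nodes" requirement.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-cni-node
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: example-cni-node
  template:
    metadata:
      labels:
        k8s-app: example-cni-node
    spec:
      hostNetwork: true
      containers:
        - name: agent
          image: example.com/cni-agent:v1.0.0   # placeholder image
          ports:
            - name: vxlan
              containerPort: 8472
              hostPort: 8472
              protocol: UDP
            - name: health
              containerPort: 9099
              hostPort: 9099
              protocol: TCP
```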
E: But that also requires us to introspect the CNI provider, and that sort of thing. That's where I was trying to get at the concept of exposing metadata that we could consume instead, which could be in a more versioned and supported format that we can more easily rely on, because otherwise...
C: If I think about it, it's kind of a similar situation to the CNI world, where we said we're actually not going to put all these things in the spec; we're going to put them in a separate place, which we call conventions. And basically it's a lot more flexible: people come along and, so, back in your case, someone can say, well, here's, for example, a kind of CRD that's going to tell me about IAM requirements, and people on the consuming side can implement that, and people on the plug-in side can add it.
C: There's going to be, what's a good word, kind of a jostling in that space. There are going to be multiple people coming up with comparable requirements, and they're not quite exactly the same. But yeah, I guess I feel it could shake out; there are probably not thousands of these things, there are probably a couple of dozen.
E: They know these things better than somebody working more generically in the space, so it's easier for them to be authoritative over them. But also, just having a way that we can, through contracts, consume that information in core Cluster API means we're not hard-coding as much on the core side. And I'd like to see something similar here, so that it's not on the add-ons project to kind of curate all of this.
A: This idea of exposing metadata for the add-on, and relating that metadata to the add-on, is interesting to me. Operator Framework has done some of that in relation to an operator. There's this challenge that we hit whenever we talk about grouping together the actual things that make up the add-on, and we keep running into this missing idea of packaging. Tools have done this to varying degrees, with git repos, with charts, and we have the OCI packaging concept that uses Docker images as transport.
A: Figuring out, at an API level, what the thing is that this applies to is interesting enough; the add-on installer configuration also does this. So there's probably some minimum amount of introspection that's necessary from something like Cluster API, to dig into the add-on and at least find the metadata custom resource, or something that we create. Is that something you were thinking of? Is that how you would think of it, Jason, or anybody else?
A: What we've proposed as the central component is a library and an API that are just responsible for doing a top-down approach and storing a minimal amount of state about what is applied. That allows you, whenever you invoke the add-on installation library, to take the packaging format that's specified and the reference to that package, and then make sure that it's either applied to or removed from the cluster.
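To make the shape of that concrete, something along these lines; the kind, group, and fields below are placeholders sketched for the proposal being discussed, not a published API.

```yaml
# Hypothetical input to the proposed add-on installation library: the packaging
# format, a reference to the package, and the desired state. The library would
# apply or remove it top-down and record only minimal state about what it did.
apiVersion: addons.x-k8s.io/v1alpha1
kind: AddonInstallation
metadata:
  name: coredns
spec:
  packageFormat: manifest                                   # e.g. raw manifests, a chart, an OCI artifact
  packageRef:
    url: https://example.com/addons/coredns/manifest.yaml   # placeholder location
  desiredState: Applied                                     # or Removed
```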
E: Okay, I'm just thinking about things, particularly ones more like kube-proxy, that are tied to the particular version of Kubernetes that you're running in the cluster. Obviously the ones that are more dependent are going to be more of a challenge on our end, to ensure the lifecycle is kept up to date on those, than, say, something that's a little bit more disconnected, like CoreDNS, for example, yeah.
E: I'm thinking more along the lines of: the version of CoreDNS is somewhat disconnected from the version of Kubernetes, so you have a little bit more play there in version drift and in worrying about updating the lifecycle of those components, versus something like kube-proxy, where you kind of have to make sure that you're at least at n-minus-one for the related Kubernetes cluster.
C: There may be things that are just too complicated, yeah. I guess, I don't know, I'm always trying to get the simplest thing that could possibly work out into contact with the real world, where we'll learn much more than we will by just trying to think about it. So...
A: Yeah, I mean, upgrading kube-proxy from a kubeadm standpoint has not been the most complicated thing; when the cluster goes through the upgrade, it's supporting a new application of the kube-proxy DaemonSet. And then, if you want to do something more specific, like targeting nodes of a particular Kubernetes version, that's certainly within the realm of a declarative apply.
E: Yeah, it gets kind of complicated, though. We recently tackled this when we started dealing with upgrades in Cluster API, and the way that we're handling it, we're not doing the kubeadm upgrade operation, because we're trying to keep individual machine instances more independent; we're not trying to mutate them after we create them.
E: So we basically had to reimplement what kubeadm is doing to upgrade kube-proxy at the end of that control plane management. But as we did that, we realized that if we update that DaemonSet after we do the control plane, we're now updating that DaemonSet for all the workers that aren't updated yet. So, updating the DaemonSet at the end of the control plane, you get into the situation where you're running a newer version of kube-proxy than the kubelet that is running on those worker nodes.
A: I don't think this is so different from a lot of updating workloads in Kubernetes. There is, I think, a missing link here. What I hear in this upgrade story is that a node-local daemon that's supposed to be versioned in relation to the version of the Kubernetes components running on the node, mainly the kubelet, is not being scheduled in that manner, and we have tools to constrain this: mainly I'm thinking of node selectors and their equivalents.
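A sketch of what this constraint could look like: one kube-proxy DaemonSet per Kubernetes version, scheduled only onto matching nodes. Kubernetes does not label nodes with the kubelet version out of the box, so the label used below is an assumption that the cluster management tool would have to maintain.

```yaml
# kube-proxy pinned to nodes of a matching version via a (hypothetical) label.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-proxy-v1-17-4
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: kube-proxy
      proxy-version: v1.17.4
  template:
    metadata:
      labels:
        k8s-app: kube-proxy
        proxy-version: v1.17.4
    spec:
      nodeSelector:
        example.com/kubelet-version: v1.17.4   # hypothetical label, not a standard one
      containers:
        - name: kube-proxy
          image: k8s.gcr.io/kube-proxy:v1.17.4
```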
A: It's solving the issue of imperative state required during the upgrade, I think. If you can solve that problem by using the scheduler, then you are back into declarative land; and then providing custom images up front, or cleaning up a namespace when no machines of a particular version exist anymore, these are things that can happen in the declarative reconciliation loop inside of Cluster API, or kubeadm, or any cluster management tool.
A: Yeah, I'm not sure if I'm expressing this clearly; hopefully you can see a little bit of where I'm coming from here. But, just to echo what Brian said about reaching for the simplest thing, it would not be the first time. In fact, we almost have prior art: a pattern of upgrade strategies, and bugs, and misapplied logic inside of things like kubeadm for getting the components to upgrade across a Kubernetes version, being kind of just enough to make it work. And we certainly could, as future versions of the tool evolve, pick a different upgrade strategy; I think it's fine to question that.
E: I'm not too worried necessarily about kube-proxy, but when we start talking about the lifecycle management of various CNI providers and other kinds of add-ons, I want to make sure that whatever that upgrade process is, it's something where a consumer such as Cluster API doesn't have to know all of the details about all of the various add-ons that we wish to support.
E: Otherwise we would have to codify logic to manage the lifecycle of all these things, each in a different way. Anything we can do to formalize that lifecycle management helps the consumption of the various add-ons that the community may produce, yeah.
A: Certainly the idea of removing something when all of the things that would require it to exist are gone is interesting. But yeah, I think as much as we can encourage a declarative lifecycle, we should attempt to, and, of course, encourage the community to produce add-ons that provide solutions that are declarative-like.
E: Right now in Cluster API we actually have that step where we're vendoring the library from CoreDNS to inspect the version that we're going from and to, converting the config if we need to, and all the mechanics of swapping the ConfigMap that feeds into CoreDNS.
E: All of that, as part of the upgrade, is hard-coded logic that we have wrapped around the CoreDNS migration library. Ideally, that's something we would like to not even have to think about, other than just saying: we want to update CoreDNS from this version to this version, as requested by the user.
F: Mm-hmm, I think that's a really good point, actually. We've been talking about packaging add-ons and operators, and we've been looking at KUDO, or the kubebuilder declarative pattern, which takes a set of YAML files and applies them; but that's often not sufficient to do a declarative update between versions. I think the point that you're making is a really important one, and maybe we should talk about that some more, yeah.
A: The other bit here that's interesting is the leakiness of API machinery when trying to manage things that do not use API machinery. So, CoreDNS being configured by a Corefile, and not having readily available tools that are integrated into Kubernetes to deal with the Corefile and its associated API, it ends up being that, if you want to work with an upgrade, you write a bunch of code and import libraries that you don't feel comfortable importing and owning into CAPI, and you run into the same problems.
A: It's really interesting to see this in the Helm space, because they have YAML and the values-merging semantics. The solution in Helm for this, the one that kind of generically applies, is to template the config file and then convert the config into the values YAML so that you can do merging and overrides and all that, and then somebody decides to maintain that, hopefully. So it's much better if we can have...
F: You run into problems where there's an existing configuration and you're updating to the new version, and now someone has to make the decision of which parts of that configuration the user owns and which parts the system owns and can automate, and whether it should automatically update or migrate fields or sections, or whether it should give up and say: you've edited some stuff that I don't know how to upgrade, and you need to resolve this manually. That kind of thing, yeah.
A: This makes me think of when you do an apt update: dpkg marks the package's old config, right? Or even if you uninstall it, if the config was not managed by the package, or the config checksums don't match, then it says: oh, there's residual config left on the disk. And yeah, it's really, really tricky. I think that there is a lack of...
A: ...a practice of composition. So our Corefile right now, the one we ship by default in kubeadm, just has the single Corefile field. But I've been working with some multi-cluster networking examples, kind of just on my own, and what I found is, if you want multiple systems collaborating on the Corefile, to, say, configure multi-cluster DNS in a mesh or something, then it's best to break up the Corefile into different pieces and then import them, and have environment variable overrides and things.
A: Hopefully you can still see the screen, but it's basically, this is kustomize here, I'm using a GitOps approach to override the Corefile that is produced by kubeadm in order to add an additional import statement, and then the extra-zones environment variable is added after cluster.local inside of the kubernetes plugin config. And you can break this up even more with smaller imports: the errors, reload, and health configuration, for example, could be imported from a different Corefile.
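Roughly what that Corefile decoration looks like: a kubeadm-style default, extended with CoreDNS's environment-variable expansion and an import of optional extra server blocks. The EXTRA_ZONES variable name and the custom path are illustrative, not an upstream convention.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        # {$EXTRA_ZONES} is expanded from the pod's environment, so another
        # system can append zones after cluster.local without owning the file.
        kubernetes cluster.local {$EXTRA_ZONES} in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        forward . /etc/resolv.conf
        cache 30
        reload
    }
    # Optional extra server blocks contributed by other tools (the directory may be empty).
    import /etc/coredns/custom/*.server
```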
A: Same thing for the metrics; that could be imported from a different Corefile. So this is a very general default configuration that does work for most people, and then I've kind of decorated it with these additional points of extension, so that you can provide a different config, and that config is optional.
A: So here's a patch for the Deployment config, which allows you to specify an optional CoreDNS config inside of a ConfigMap. This ConfigMap doesn't have to exist in order for CoreDNS to run, but if it does, it's picked up when the CoreDNS pod is created, or actually this is managed by the Deployment, yes, so that would always work, yeah.
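The Deployment patch being described would look something like this: mount an optional ConfigMap so the extra config can exist or not. The ConfigMap name and mount path are illustrative.

```yaml
# Strategic-merge patch for the CoreDNS Deployment: the custom ConfigMap is
# marked optional, so CoreDNS still starts when it does not exist.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
spec:
  template:
    spec:
      containers:
        - name: coredns
          volumeMounts:
            - name: custom-config
              mountPath: /etc/coredns/custom
              readOnly: true
      volumes:
        - name: custom-config
          configMap:
            name: coredns-custom    # illustrative name
            optional: true          # pod starts even if this ConfigMap is absent
```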
F: Right, I think the pattern is a really good example. I think when we start talking about how we can preserve this property for any add-on, that's where it gets tricky, because you have this problem even beyond this case: we're talking about a Corefile here, but you have this problem with Kubernetes resources as well.
F: How do I know if a field of a particular object is owned by some system component that will overwrite things or reset defaults, versus what the user intent is indicating? There's some distinction based on whether a field is in spec versus status; typically status is system-owned and spec is user intent. But that's not always true, especially when you start talking about multiple operators interacting with each other, trying to share components, share APIs. You almost need an ownership model for subsections or fields of a Kubernetes API. So...
A: But yes...
E: I think that brings to mind a particular pain point that we've seen with implementing stuff with CRDs and trying to persist across what we're calling a pivot, or really any time you have to be able to survive a backup and restore of a CRD-based resource, with the status of the resource. Sometimes your controller needs to be able to store information that still has to be persisted across that backup and restore, and the only place it can really do that...
E: But I noticed we're almost out of time, and I wanted to bring up Sedef's Cluster API proposal before we completely run out of time, if that's okay. Yeah, totally. So, I know there is some potential overlap between the cluster addons work and the proposal as it stands today, but we do also see some value, independent of cluster add-ons, in still being able to just take arbitrary YAML and one-time apply it.
A: Yeah, I'm certainly not interested in blocking anybody from making progress on seeing how Cluster API can create usable clusters. What's the most appropriate way, or the best way, to communicate about how to put these things together? Do you want to see an enhancement proposal, do you want just comments on the post-apply stuff, or a PR?
E: ...some time together between somebody who's familiar with the Cluster API project and somebody who's more familiar with the addons work, and see what we can do to hash out that proposal, or we can just go ahead and go to a POC phase with it. I think it depends on whoever we get involved, both from the Cluster API side and the add-on side; I'm fine with either approach, and I think, you know...
A: Cool, yeah, I've been wanting to play with the post-apply config. Actually, it's the next step on this multi-cluster networking thing that I was working on: I wanted to see what it would look like to use the post-apply config, like a new one, to just try and upgrade a component, because my mental model of how that converges when there's a cluster suggests there are some creative ways to use it. So...