From YouTube: Cluster Addons meeting: 2020-06-23
Description
Meeting notes: https://docs.google.com/document/d/10_tl_SXcFGb-2109QpcFVrdrfnVEuQ05MBrXtasB0vk/edit#heading=h.jhnwwzui8o8g
C
But I didn't end up with anything concrete in terms of ideas, and I also don't see some of the people here, but I just wanted to get feedback from this group on how, once we have this in place, the question at hand gets handled. The controller is running: how are we going to upgrade the version of the controller, the CRD, and also the instances of the custom resource? To my understanding we don't have a pattern defined for that yet in any of the tools, and I wanted to get feedback from this group.
E
So I think, sorry, I saw Justin had his hand raised, I didn't mean to jump in there. Go ahead, Justin. No? Come on. Alright, so my question was: I think the issue comes more around how we manage the lifecycle of the operator itself. You know, we don't want to put that logic directly into kubeadm, because then we have the same problem that we have today with updating CoreDNS itself.
F
Yeah, and I think what you said is right, in that the operators are supposed to be easier to update; those should always be a kubectl apply, so there is an advantage there. I think there's another advantage, which is that, at least in theory, a particular version of the operator should apply to more versions of Kubernetes, so for, say, a patch release of Kubernetes, you wouldn't expect that you would have to update the operator.
F
We haven't defined a way for the operator to say "I don't support that version yet", nor a way for the controlling tool, kubeadm or whatever it may be, to say "oh, I need to update the operator, because you've asked to go to a Kubernetes version that is beyond the range", I think.
F
The operator should know that the version is not supported and should stop, should refuse to upgrade to that version, the expectation being that either you'll notice that the CoreDNS operator needs to be updated, or that the updated CoreDNS operator is actually coming anyway, because kubeadm was actually doing an upgrade of both and they just happened to land in the wrong order, type of thing.
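As a rough sketch of the kind of check being described here (the types, version strings, and supported range are all hypothetical, not any tool's actual API), an operator could compare the requested Kubernetes version against the range it supports and refuse the upgrade when the version is out of range:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor version from a "1.18.3"-style string.
func minor(v string) (int, error) {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	if len(parts) < 2 {
		return 0, fmt.Errorf("malformed version %q", v)
	}
	return strconv.Atoi(parts[1])
}

// SupportsKubernetes reports whether a hypothetical operator build that
// supports minor versions [minMinor, maxMinor] can manage the requested
// Kubernetes version.
func SupportsKubernetes(requested string, minMinor, maxMinor int) (bool, error) {
	m, err := minor(requested)
	if err != nil {
		return false, err
	}
	return m >= minMinor && m <= maxMinor, nil
}

func main() {
	for _, v := range []string{"1.18.3", "1.19.0"} {
		ok, err := SupportsKubernetes(v, 16, 18)
		if err != nil {
			panic(err)
		}
		if ok {
			fmt.Printf("%s: proceed with upgrade\n", v)
		} else {
			// Refuse rather than proceeding with an untested pairing.
			fmt.Printf("%s: refuse; beyond supported range\n", v)
		}
	}
}
```

The point of the sketch is only the shape of the contract: the tool asks, the operator answers, and an out-of-range answer blocks the upgrade instead of silently proceeding.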
E
So I think there's another challenge here that we're glossing over a little bit too, and that is that, at the end of the day, we want CoreDNS running with some type of configuration. Right now we're modifying that directly through the Corefile that we're creating and injecting in as a ConfigMap, I believe. With the operator model, we're trading that out for, you know, an instantiation of the CRD that the operator is going to define for its configuration, and we're gonna have to manage that.
E
You know, how do we interact with that from kubeadm as well, depending on the version of the operator that's installed? Which is a similar challenge to what we're dealing with a little bit today with, you know, migrating the Corefile. So we need to make sure we're not trading, you know, Corefile management for complexity around how we manage and interact with the CRD that's driving the operator.
A
That's a really interesting point. I'm not fully understanding the problem. Am I understanding you, Jason, that basically kubeadm has things that it wants to put in the Corefile, it wants to help make things easier for users, and then you want to be able to maintain that decision across upgrades of CoreDNS when kubeadm is used to upgrade the cluster?

E
Correct. You know, the API machinery tooling around CRD conversion webhooks does some of this, so that we don't necessarily have to worry about providing an older version and having that work with a newer version of the operator, as long as whatever custom resource we're putting in is supported through the conversion webhooks. But we still have potential issues with, you know, additive API changes or new functionality that's being exposed as part of the operator, and tying into those configurations and knowing, potentially, when they're available versus when they're not.
A
Basically, you can create, like, a Corefile.d directory and have other Corefiles included in your primary one. That's one technique for allowing the user to have the power of extending the configuration without actually having to mutate the thing, so that the state you're trying to preserve becomes less important. And I think what we have done in the CRD is provide a Corefile template, so that the CRD is essentially wrapping every bit of the Corefile.
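As a purely illustrative sketch of that idea (the group, version, and field names here are assumptions for illustration, not the operator's actual schema), a custom resource that wraps the Corefile as a template might look like:

```yaml
# Illustrative only: not the real CoreDNS operator schema.
apiVersion: example.io/v1alpha1
kind: CoreDNS
metadata:
  name: default
spec:
  # The whole Corefile travels as a template; values such as the
  # cluster domain could be interpolated by the operator.
  corefileTemplate: |
    .:53 {
        errors
        health
        kubernetes {{ .ClusterDomain }} in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        forward . /etc/resolv.conf
        cache 30
    }
```

Because the CRD carries the Corefile as an opaque template rather than modeling each directive as a typed field, users keep the full expressive power of the Corefile, which is the "no regressions" point made below.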
A
So I don't see that there are any regressions from moving to the CRD. And then, when you talk about maintaining the custom resource instance as, you know, the CRD potentially changes, there are the conversion webhooks, I suppose. If you were to move to something that was radically different from providing a Corefile template, or a list of them, then that could potentially be a different kind of upgrade.
E
I'm just, I'm still wondering, like, looking at the consumption side. It seems like we're gonna have to, you know, know quite a bit about the various operators, and maybe, like, the compatibility matrix between each operator and the version of Kubernetes that we're wanting to install, in order to manage it. I haven't heard anything yet that would indicate that the add-ons project is trying to kind of solve that part of the problem.
E
I'm also thinking a little bit beyond just, like, kubeadm here as well, because if you look at, like, how we handle the lifecycle management of a cluster in Cluster API, we rely heavily on kubeadm for the initial bootstrapping and the joining workflow. But when we do the upgrade, we don't actually trigger, like, the kubeadm upgrade workflow. So what if part of an upgrade would mean that kubeadm has to, you know, update the version of, say, the CoreDNS operator and the kube-proxy operator, or whatever add-on operators it's working with?
A
So that was what we had proposed the mechanism for kubeadm would be, right? It's, like, this library that lives inside of Kubernetes that just has a simple API and does a couple of applies, and that library would be easily and cheaply vendorable into other projects. As an alternative, you could just exec the kubeadm phase without actually using the upgrade portion of the workflow. So.
E
One potential issue that I see there is that it seems like, with the library approach at least, you're in lockstep, having to keep up and release updates in order to update that library. For example, right now we have decoupling between kubeadm and Cluster API where, as long as kubeadm accepts the version of the config that we're generating, you know, there's no additional kind of maintenance work to support the newer versions, and we've been trying to avoid, kind of, you know, having to encode some of that information, if at all possible. Yeah.
C
There was a particular problem, at least on my side; I think others can add more comments on this particular implementation, but for me, particularly, one problem was that there was no way to pipe the image requirements from the library to the consumer of the library. So if you are installing CoreDNS, there's no way to pass the version and the image repository, or set a custom image and custom image repository, for the installation, and I thought that this is a major requirement for a lot of people. It's...
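A minimal sketch of what "piping the image requirements through" could look like (the option struct, field names, and default values are hypothetical, not the actual library API being discussed): the consumer overrides what it needs and the library falls back to its defaults for the rest.

```go
package main

import "fmt"

// InstallOptions is a hypothetical options struct for an addon-install
// library; it lets the consumer override the image repository, name,
// and tag instead of having them hard-coded inside the library.
type InstallOptions struct {
	ImageRepository string // e.g. a private registry mirror
	ImageName       string
	ImageTag        string
}

// imageRef resolves the final image reference, falling back to the
// library's defaults when the consumer did not override a field.
func imageRef(opts InstallOptions) string {
	repo, name, tag := "registry.k8s.io", "coredns", "1.6.7"
	if opts.ImageRepository != "" {
		repo = opts.ImageRepository
	}
	if opts.ImageName != "" {
		name = opts.ImageName
	}
	if opts.ImageTag != "" {
		tag = opts.ImageTag
	}
	return fmt.Sprintf("%s/%s:%s", repo, name, tag)
}

func main() {
	// A consumer pointing at an air-gapped mirror:
	fmt.Println(imageRef(InstallOptions{ImageRepository: "registry.example.com"}))
}
```

Without a pass-through like this at the library boundary, the consumer (kubeadm, Cluster API, or anything else) has no way to honor a user's custom registry, which is the gap being described.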
A
Because it's no longer, like, part of the code base that those manifests are generated from; they come from a place that's versionable, so somebody could put that into their own private, you know, GitHub Enterprise instance, somebody could put it on an HTTP server, they could use a local, you know, file for a customized patch, or provide an inline patch. I think that is the imagined user experience there. And, I mean, I guess if you were kubeadm and you wanted to build something that had, like, an image override collection, that sounds like a weird feature.
A
It is a capability. We, as the add-ons working group, you know, are not trying to tell somebody how they should install their add-ons. We have some good recommendations and have talked a lot about it. There are definitely benefits, but they also, you know, come with overhead, if just using a CRD requires another, you know, special set of knowledge and maintenance. But yeah, I think using a CoreDNS operator does give you the benefit of not having to manage a CoreDNS config in your code.
F
So we did indeed send the PR and we got some good feedback; it's around slightly different topics, I'd say. kops is going to, well, by default, kops will manage the lifecycle of the operators, as it does today, so tied to the Kubernetes version: the Kubernetes version will imply a set of operator versions and also a set of CRDs, the versions of the CRDs.
F
I say "by default" because we'll also have some way to override that: we're basically adding the ability to add other objects to your cluster, and we'll pick up on the idea that some of those objects represent an operator. So if we recognize, via a sort of well-known label (because we can define a well-known label or something like that, since it's a limited subset of things we are building into kops), that you have one, we will turn off our default on the operator side.
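The detection logic being described is simple enough to sketch (the label key below is hypothetical; the real key, if any, would be defined by kops, not by this sketch): if a user-supplied object carries the well-known label, the tool steps back from its default management of that addon.

```go
package main

import "fmt"

// operatorLabel is a hypothetical well-known label key used to mark a
// user-supplied operator; illustrative only.
const operatorLabel = "addons.example.io/operator"

// userSuppliedOperator reports whether an object's labels mark it as a
// user-managed operator, in which case the tool would disable its own
// default management for that addon.
func userSuppliedOperator(labels map[string]string) bool {
	_, ok := labels[operatorLabel]
	return ok
}

func main() {
	objLabels := map[string]string{operatorLabel: "coredns"}
	if userSuppliedOperator(objLabels) {
		fmt.Println("user supplied an operator: disabling default management")
	}
}
```

The design choice here is that the override is declared on the objects themselves rather than in tool configuration, so the tool only needs to scan labels to know when to stand down.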
F
So that's how you can override them. And similarly, the same thing on the instance of the CRD, on the CR: we will auto-generate one for you if you don't override it, and if you override it, we will accept that. If you start overriding, it's pretty much on you, right? It's the same story we have today with, like, we have the ability to set any of the mapped flags we have, so you can set a kube-apiserver flag, but the rules are:
F
If you set a flag, we don't make compatibility guarantees, whereas if you use the, like, higher-level abstractions, we are able to, like, guarantee compatibility, because we will map that to the correct flags, even if those flags shift like the grains of sand through the hourglass that is the Kubernetes versions. So there may be other problems, but we have other problems that we've identified, which include what it sounded like you were saying around images.
F
There are people that, primarily for security reasons more than for the overhead, want to keep the current behavior where, effectively, there's no dynamic replacement of, like, bringing in a different version of an image. In other words, they wouldn't want to change the version in the CoreDNS CR, the instance of the CRD, and have it update; basically, they would prefer not to have the CoreDNS operator able to hold the large RBAC permissions that it currently entails.
F
We'd have to run that operator in some weird mode, like with some flag, like a one-shot mode, and we have to figure out whether that's okay or not. And we'd almost have to pull all of that through to, like, do a sort of client-side expansion mode, but at root we have to decide whether that's even plausible as something we want to support. So that's the update from kops land, I guess. I don't know how that compares to what you saw on kubeadm or on your side.
C
Yeah, I was pushing for adding this as soon as possible, but we raised too many questions in the meeting last time. I guess, if some release ships v1alpha2 of the CRD for the operator, for one reason or another, how are you planning to manage this in kops? We plan to introduce a conversion webhook; like, how is the conversion going to happen? Yeah.
F
I mean, if there is a v1alpha2 and you were on v1alpha1... well, let's suppose there's a v1 and a v2, because alphas are trickier, right? Let's suppose it's v1 to v2. Then yes, we would have to install the conversion webhook, and we would probably, as of a particular Kubernetes version, start installing the newer version of the CoreDNS operator that supported both v1 and v2, and we would install the webhook.
F
The way we actually do releases is that we tie them to the Kubernetes version rather than the kops version; it's just our strategy. So, as of a particular Kubernetes version, you'll be able to start using those features. Unless, of course, you replace the manifest and install the next-version CoreDNS manifest yourself, in which case you can probably use them whenever you want, but then it's on you, and you sort of become responsible for the "oh, it turns out..." cases.
H
Sorry for interrupting here. I only wanted to mention that there are a couple of caveats around that whole process, none of them insurmountable. It's just, I'm not sure the docs spell it out in the best way right now, but conversion webhooks don't convert the underlying stored object in etcd. So you need something that comes around and reads and rewrites the object back at the right version, and you can't query Kubernetes to find out what the stored version of a given object is.
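For context on the stored-version mechanics: in `apiextensions.k8s.io/v1`, exactly one entry in a CRD's `spec.versions` carries `storage: true`, and that controls the version new writes are persisted at; existing objects stay at whatever version they were last written at until something rewrites them, which is the caveat above. A schematic CRD fragment (the CRD name and group are illustrative; schemas are collapsed for brevity) might look like:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: corednses.example.io   # illustrative name/group
spec:
  group: example.io
  names: {kind: CoreDNS, plural: corednses, singular: coredns}
  scope: Namespaced
  versions:
    - name: v1alpha1
      served: true
      storage: false   # still served, but no longer written to etcd
      schema: {openAPIV3Schema: {type: object, x-kubernetes-preserve-unknown-fields: true}}
    - name: v1alpha2
      served: true
      storage: true    # new and rewritten objects are persisted at this version
      schema: {openAPIV3Schema: {type: object, x-kubernetes-preserve-unknown-fields: true}}
```

The CRD's `status.storedVersions` lists which versions have ever been stored, but not which version any individual object is stored at, matching the point that you can't query that per object.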
C
I shared a couple of links, and hopefully these are up to date, but I think they mention the storage version caveats. But overall, this is, like, a complication that it feels like Cluster API has to deal with, kubeadm and kops have to deal with, and this is where we are getting into the space of multiple tools trying to solve a very complicated problem.
F
Yeah, I would hope that we can identify a pattern, although I don't think we have to agree on a pattern, right? We're not gonna force one if there's no great pattern. But I hope we can identify a pattern, in that, like, if kops shows that the approach of tying a particular Kubernetes version to a particular CoreDNS operator version works...
C
Yeah, and again, with this problem, we either implement the migration logic of CoreDNS in all the tools separately, by importing the migration library, or what we potentially have as the future is that everybody has to include custom, quote unquote, "conversion logic" in all the tools, until or unless we have a sort of shared library that does that. Which is, you know, we solved one problem and introduced another problem. Today I spoke with Fabrizio and others, and basically we are solving one management problem by introducing management complexity, so yeah.
A
So, because, like, yeah, if you want to install a webhook, if you want to install a new service account or update the RBAC roles to be something different, you can create objects with new names and migrate the app to those. We could even add a key that says whether or not an object should be pruned before or after, and if you have that in your installer, it's pretty much sufficient, because most of these things should be very declarative. I think we've done a lot there.
A
I think it's pretty fair to expect that a user can just decide to use the new API, so implementing conversion logic for those things directly in the tool is, at least, not necessary for every operator. If you'd like to add some UX, you know, and take on that maintenance inside of your tool, you can build a subsection of it that helps them migrate their config.
F
On that exact point, like, we've talked about the idea that we might want some extension to kubebuilder, or whatever it is, that would do client-side expansion of an addon. One could imagine a similar expansion of kubebuilder that would, like, expose the webhooks in a way that can be used client-side. If that's useful, I don't know; I'm still not entirely clear whether you need it. I guess you do, because you want to add a... well, I guess you do, and then, so we could.
F
We could maybe do that. I don't yet know of a way that doesn't involve running Docker, so I don't know if, in the general case, that would be acceptable. Like, suppose we added that to kubebuilder, or to kubebuilder-generated controllers, or whatever we're gonna call them: would that be acceptable? Would that be a good thing for you? Would kubeadm start using that for their version conversion? Yeah.
C
For our projects in general, not only for kubeadm: only if there is a way to convert this stuff harmlessly, without using Docker.
F
I see. So they want a new field; they have a v1alpha1, and the new flow doesn't exist until v1alpha2. Essentially, what we currently say, what we said, is that, like, every migration so far is do-it-yourself: you have to upgrade from extensions/v1beta1 to apps/v1 Deployment, or whatever it is, and that's human-operated, client-side conversion logic. I see what you're saying; that makes sense to me.
A
Yeah, and when somebody wants to do that, you know, say, across 100 clusters, which is realistic for some deployments, then in that case, you know, you're gonna want to do a tool-based migration of some sort, with some human validation. And yeah, I agree with you, Lubomir, that the problem of managing config migrations across API changes is an underserved use case inside of Kubernetes. This is the kind of problem that keeps me up at night, because the machinery that we have built is very married to Go and comes with a lot of Go's limitations.
A
There always needs to be an escape hatch for a user to be able to either mutate or provide their own config, because a tool will not do it properly for every use case, and people might just decide: hey, I don't want to actually use that old field that I was using, even though there was an equivalent; I want to change the strategy, you know, or I want to provide this value, or I don't like the new default. You know, those kinds of things you can't take away from the user. But yeah.
C
There's also something that we can take, that we can borrow, as an idea from Git workflows. So Git operations, locally in a terminal, are fairly low level: people have to, for instance, rearrange their patches, so Git gives them an option to have an interactive overview of the list of patches, you know, with git rebase -i. The same way, when you have a folder with hundreds of manifests, we can have tooling that can give you an interactive diff of what changes in each file.
C
So
this
is
the
same
like
wall
of
operation
that
people
have
to
perform
any
like
every
day
anyhow.
So
this
is
people.
This
is
going
to
reach
people.
You
know
like
of
conforting
level
problem
is
that
if,
of
course,
we
don't
have
agreement
that
we
we
want
these
tools
to
exist
in
the
first
place
that
commercial
tools
that
class
site
conversions,
it's
the
problem
starts
there.
F
It doesn't yet tackle many of the hard problems, like version upgrades, for example. Actually, it's getting there: we can probably use our existing mechanisms for upgrading the operator, because we do specify the version of that, but it doesn't tackle, like, client-side expansion or anything like that.
F
In kops, yeah, channels are things that we fetch over the Internet that allow for dynamic updating. So it tells you, like, "the newest recommended version of Kubernetes is this version", for example. You can always override it, but if you don't specify it, it's like, you know, "this is the current stable version", for example. And in this directory we are adding the ability to put manifests, so that, instead of baking them into the binary...
F
We
can
go
and
fetch
a
particular
version
version
of
that
manifest,
and
currently
that
version
is
hard-coded
in
cops.
We
could
imagine
putting
that
version
into
the
channels
file
which
is
itself
a
gamma
file,
so
that
we
could
say
that
if
you
don't
specify
a
version,
the
stable
version
of
the
coordinates
operator
for
this
version
of
kubernetes
is
this
version
like
I,
think
I
called
it
zero
one
zero
or
something
or
zero
zero
one,
and
so
we
like
I,
think
this
goes
back
to
the
previous
point
about
rubidium,
which
is
you
know
we
don't
necessarily.
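A hypothetical channels-file entry along these lines (the field names and URL are illustrative assumptions, not kops's actual channel schema) could pin an operator version per Kubernetes version range:

```yaml
# Illustrative only: not the real kops channel schema.
kubernetesVersions:
  - range: ">=1.18.0"
    recommendedVersion: 1.18.4
    addons:
      coredns-operator:
        version: 0.1.0
        manifest: https://example.com/addons/coredns-operator/0.1.0.yaml
```

The appeal of moving the pin out of the binary and into a fetched file like this is that the default operator version can be revised without cutting a new kops release.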
F
We should decide whether to build a more precise mechanism than RBAC, or whether the people that worry about that would actually probably be better served by client-side expansion anyway. Like, you know, if you care about that RBAC permission, maybe you actually care about seeing, from the start, exactly what's gonna be applied in your cluster.
A
Yeah, and this is the point that we've touched on from the very beginning, which is potentially, like, building one-shot operators as Jobs, you know, and having tools install temporary RBAC, kind of like a sudo. But even if you were to install an operator, right, and then just revoke its RBAC, you'd end up with an erroring operator. So there are ways to kind of achieve that sort of temporary privilege escalation for that kind of user.
H
Since we just have a couple of minutes left, maybe some clarification on that problem; I'm not sure I followed it from the start. To the degree that everyone was discussing it as an example, the issue is that the Corefile is part of the config, and so, when you update the operator, you would need to encode some translation between Corefiles. Is that it?
A
The approach in the operator right now is to provide a Corefile template, where some of the static fields that are provided in the CRD are available for interpolation, so things like the cluster IP and stuff like that. But yes, the real, more generic problem is just that when CRDs get updated, or when a Kubernetes API gets updated (it doesn't even have to be a CRD)...
A
The API server knows how to migrate these objects and produce machine-readable equivalents, but that doesn't necessarily mean that it's the user-intended equivalent, and it also doesn't produce a commented, you know, minimally defined user equivalent of those objects. So this gets... We had the same problem with the component config APIs that are used in kubeadm, and in third-party CRDs, where people have to do manual upgrades, or do some weird grep-and-sed-replace things on their own, and elsewhere...
A
You end up with suboptimal migrated configs, and this is largely because of some limitations in our traditional API machinery. There is some new API machinery available that was used for the server-side apply implementation, and that API machinery could be useful in building some better tool-assisted migration workflows for users.
A
It could probably produce an equivalent config in many cases, from a machine-readable perspective, but what happens on the next conversion, right? The things that the user had not specified get the defaults for API number two. Now, when you try to go to API number three, you end up with the defaults of the old version, and the user never specified those things, you see.
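A toy sketch of the effect being described (hypothetical config versions and field names, not any real Kubernetes API): converting an object that omitted a field materializes the old version's default, which then survives into later versions as if the user had asked for it.

```go
package main

import "fmt"

// Two toy config versions; v1 uses a pointer so that "unset" is
// distinguishable from an explicit value, while v2 cannot express
// "unset" at all.
type ConfigV1 struct{ DNSCacheSeconds *int }
type ConfigV2 struct{ DNSCacheSeconds int }

// convertV1ToV2 applies v1's default (30) when the user left the
// field unset; this is the step that bakes the old default in.
func convertV1ToV2(in ConfigV1) ConfigV2 {
	if in.DNSCacheSeconds == nil {
		return ConfigV2{DNSCacheSeconds: 30} // v1 default, now explicit
	}
	return ConfigV2{DNSCacheSeconds: *in.DNSCacheSeconds}
}

func main() {
	// The user never specified DNSCacheSeconds...
	v2 := convertV1ToV2(ConfigV1{})
	// ...but after conversion it looks like an explicit choice, so a
	// hypothetical v3 (whose default might differ) would keep the old 30
	// rather than picking up the new default.
	fmt.Println(v2.DNSCacheSeconds)
}
```

This is why a machine-equivalent migration is not the same as a user-intent-preserving one: the information "this field was unset" is lost at the first conversion.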
H
There is an issue with user intent, but, like, from the API perspective, there's always an object that is conforming to whatever the particular level of the API is, and that input always results in consistent output of the config for whatever, in this case, CoreDNS you're using. So I think, if you buy into the model and you buy into the machinery, I don't see a huge problem, as long as there's a way for a conversion webhook to say "I don't know how to convert this".
A
Yeah, and a lot of this as well, like, has to happen in the cluster, after you're applying objects, and tools want some of this to be client-side, for dry-run features and things like that, because they want to be able to use a new field, you know, when that field is not available in that version. Yeah, it's particularly problematic when you're trying to upgrade a cluster.