From YouTube: SIG Cluster Lifecycle - Cluster API - CAPI & Managed Services (extra meeting)- 20220301
A
Cool, well, thanks everyone for turning up. This is an extra meeting of Cluster API to talk about managed Kubernetes within CAPI, specifically things like EKS, AKS, and GKE. This has come about via various discussions and issues that have been raised during the implementation of ClusterClass.
A
So I put a couple of items on the agenda initially, just to give some background, and hopefully that will drive conversations from there. So, Whitney, I put you as the first agenda item, because your issue was the one that kicked off this discussion.
B
Yeah, okay. I actually presented this issue during CAPI office hours, I think two weeks ago, but let me open that issue again.
B
Yeah, so the reason we are actually having this conversation is that I was working on adding ClusterClass support for EKS and encountered a problem, which is documented in this issue. For EKS in CAPA we don't have a distinction between the control plane and infra kinds; we have only one kind, called AWSManagedControlPlane.
B
So when creating the ClusterClass, I needed to use the AWSManagedControlPlane for both the control plane reference and the infrastructure reference, and I think that's not what CAPI is expecting, because CAPI actually takes these two fields and clones them, creating the actual objects from the templates.
B
But since I'm using it twice here, it actually generated two control plane objects from the AWSManagedControlPlane template, and that creates all sorts of issues in the controller. And of course we cannot have multiple control planes for EKS.
B
So that's why I brought up this issue and we are having this discussion: what should managed services look like in CAPI? So that's the introduction.
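For reference, the setup described above can be sketched roughly as follows. This is a hedged illustration, not the actual manifest from the issue: the names are made up, and the exact template kind and apiVersion may differ between CAPA releases.

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: ClusterClass
metadata:
  name: eks-example            # illustrative name
spec:
  controlPlane:
    ref:
      apiVersion: controlplane.cluster.x-k8s.io/v1beta1
      kind: AWSManagedControlPlaneTemplate
      name: eks-control-plane
  infrastructure:
    ref:
      # The same kind referenced again, because CAPA has no separate
      # infra kind for EKS. CAPI clones each ref independently, so two
      # AWSManagedControlPlane objects end up created for one cluster.
      apiVersion: controlplane.cluster.x-k8s.io/v1beta1
      kind: AWSManagedControlPlaneTemplate
      name: eks-control-plane
```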
A
Cool, thanks, Whitney. Go for it.
C
So I've been having a few discussions with folks around this, and it seems like the meta issue we have here is that for managed clusters we're trying to satisfy two contracts with one CRD, which is something that wasn't, I guess, expected by CAPI. We've never enforced it through webhooks, checking whether the two refs are different.
C
So it seems similar to one of those cases, like the API server port, where we have providers using the fields in a way CAPI didn't expect.
C
Yeah, I can see the benefit for managed clusters, which is the simplification of the number of objects to create, but now with ClusterClass...
C
That seems to be less of a concern, especially since, if we wanted to make this something officially supported, it would be a huge breaking change, in the sense that basically all of the clients would be broken.
C
They would need to adapt to the infra ref maybe being present or not. Things we were discussing with Richard, like clusterctl describe, for example, would also be broken.
A
Cool, yeah. I don't know if it would be helpful, based on that, just to go through the thought process we went through in CAPA, and the reasons why we went that route, to frame it based on what you just said. So when we originally implemented EKS in CAPA, we did have that distinction. On the infra side we had something called AWSManagedControlPlane...
A
Oh sorry, AWSManagedCluster, and that satisfied the infra provider contract: it has the ready field and it set the control plane endpoint. And then on the control plane side we had the AWSManagedControlPlane, which basically did most of the work, and the reason it did most of the work was that there is this sort of blurring of the lines.
A
I guess when it comes to EKS, which is what I know, for instance, and I put an example in here: traditionally in CAPI, load balancers for the API server are created in the infra provider, whereas for managed services like EKS they're actually created as part of the control plane. So when you create the EKS control plane, it creates a load balancer for you.
A
So we were in this weird situation where the control plane provider was creating the EKS control plane and getting the load balancer, the API server endpoint, and it was having to communicate that back to the AWSManagedCluster to satisfy the control plane endpoint contract back to CAPI. So we had this weird watch going on between the control plane and the infra provider.
A
So then we thought: well, actually, if the managed cluster is only there to satisfy the contract, and it's not actually doing any reconciliation itself, then one kind could satisfy both contracts, and that's when we decided to go down that route. So that's how we got there; not saying that's the right thing, but that's how we got there. Go for it, Cecile.
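The single-kind approach described above looks roughly like this at the Cluster level. Again a hedged sketch: names and apiVersions are illustrative, not taken from the CAPA codebase.

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: my-eks-cluster         # illustrative name
spec:
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: AWSManagedControlPlane
    name: my-eks-control-plane
  infrastructureRef:
    # The same object satisfies the infra contract (status.ready plus
    # spec.controlPlaneEndpoint), avoiding the watch between the control
    # plane provider and a separate infra cluster object.
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: AWSManagedControlPlane
    name: my-eks-control-plane
```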
D
Sorry, mute issue. I guess now is a good time to point out that we came to the exact same conclusion in CAPZ for AzureManagedCluster, separately, on our own, except we're a bit behind in the sense that we haven't actually made the change yet. I've pasted below the PR that's been open for a long time, but because it's such a breaking change, just removing a CRD...
D
We have been careful about that, and it just wasn't the right time to do it, as we were just going to v1. And for us, AKS is still experimental in CAPZ, so we're still at a place where we could do that, but we haven't done it yet.
C
Yeah, I think for me, for managed clusters, infrastructureReady might be a bit confusing, because you're only supposed to set that when you have your load balancer ready, but in the case of managed clusters that gets created by the control plane provider.
C
We would need to reflect that within the semantics of infrastructureReady, because even if you don't set the control plane endpoint immediately when provisioning, and set it later once the EKS control plane exists, it's still going to get copied back to the Cluster object, and it will still, I think, unblock workers.
C
So if that's an issue, then we can probably amend those semantics.
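The contract fields being discussed here can be sketched as below, assuming the v1beta1 field names; the kind and endpoint value are purely illustrative. CAPI copies spec.controlPlaneEndpoint up to the Cluster object and treats status.ready as "infrastructure provisioned".

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSManagedCluster        # illustrative infra kind
metadata:
  name: my-eks-cluster
spec:
  controlPlaneEndpoint:        # set late, once the managed control plane exists
    host: example-endpoint.eks.amazonaws.com   # illustrative value
    port: 443
status:
  ready: true                  # normally means "load balancer ready"
```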
A
Cool, okay. I should be taking notes here as well, to make sure we're capturing all this. So I guess...
F
I'm trying to wrap my mind around this problem, and I think there are two or three layers to it. The first is the user point of view. If I think of the problem from a user point of view, of the value of CAPI itself: CAPI provides a declarative API that somehow abstracts the users away from the infrastructure, and this is getting more and more true with ClusterClass and with the recent efforts.
F
The infrastructure that you're using is, let me say, the first layer. The second layer is implementation, and I understand Richard's concern that this is not ideal, and we maybe have to work together to see if there is something we can do to make this easier for the providers, to achieve this user consistency. And the third layer, the third problem, is, yeah...
F
...that providers are not left alone to find a solution, and we can kind of embed this in the contract, so we ensure the consistency, let me say, before implementing stuff instead of after, when we find inconsistencies across providers. So these are the three layers: the user point of view, where I think we should be consistent; the implementation, where we can probably have a discussion and see if we can improve things; and, yeah, having a better feedback loop.
A
Yeah, that's a good description of the three problems, I think. I completely agree with you that we shouldn't be putting the onus on the user to understand the differences. Say you're using EKS via CAPA: you shouldn't have to understand the differences between EKS and an unmanaged cluster, or know that with EKS you have to use the same reference type for both control plane and infra. So yeah, I get that consistency point.
C
I think it might be worth discussing reusing the AWSCluster versus just introducing some managed cluster type. Having two types of clusters might also be confusing for users, versus just ensuring that the AWSCluster controller, or the infra cluster controller in general for providers, is able to tolerate a cluster with just a control plane endpoint and infrastructureReady in the status.
D
Yeah, for CAPZ we have a different cluster: we have an AzureManagedCluster for the managed cluster infrastructure, and I think that's just because there are so many options in AzureCluster that are not applicable to managed clusters, things in the spec that you can set as a user that only make sense for self-managed clusters. I don't know if it would really work well to conflate the two together.
A
Yeah, we had exactly the same scenario, where it didn't match very well. Go for it, Stefan.
G
I think even if they match well today, how can we ensure that the concepts behind those resources will keep matching in the future? If you now try to only have one AWSCluster, and the features in our infrastructure and the features in the managed service diverge over time, it could become a pain to keep them compatible.
C
If we're able to make the fields within the infra cluster optional, technically that should be feasible without introducing another object. But in general, whether we go with one option or the other, I think we probably want to align between providers so that we have a consistent experience across each of them.
A
Yeah. Like I said, that's another issue as well, isn't it: providing the consistency in the infra providers. And then we've also got the issue of having to populate information from the control plane via the infra provider cluster to satisfy that contract. So we have this weird...
A
You know, from within that cluster we're going to have to watch the control plane as well, to make sure we can satisfy that contract. So yeah, I don't know how we do that well. When we tried to do that, it felt messy from an implementation point of view. Go for it, Jack.
H
Hey, I just wanted to comment on the optional part. If we use optionality as a way of overloading different cluster contexts, as a sort of implementation detail on the data type, we're actually removing the semantic value of "optional" within any one particular context.
H
So in a non-managed cluster context, certain properties must be optional, and in a managed cluster context we can't suddenly change that: we need to enforce optionality or required-property enforcement for the particular context we're in. So we'd then end up with some sort of meta-optional or meta-required implementation detail that lets us continue that enforcement when we're in a managed context or an unmanaged context.
I
Yeah, I wanted to share some context on how we solved these things for our managed offering. We have more use cases than just the one where we want to manage the lifecycle of the infrastructure. For that we use the externally managed annotation on the infra resource, and then we have a separate hosted control plane component that is basically running the control plane as pods.
I
So that's how we achieve the decoupling between the different parts, and then we can still reuse all the automation from, say, the AWS provider for automated machine management, because the infra resource still exists when it's externally managed. That's how we do this today.
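The externally managed pattern mentioned here hinges on the cluster.x-k8s.io/managed-by annotation on the infra resource, which tells the infra provider not to reconcile the object while the resource keeps satisfying the contract. A hedged sketch follows; the resource name and annotation value are illustrative.

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSCluster
metadata:
  name: externally-managed-example     # illustrative name
  annotations:
    # Presence of this annotation marks the infra as externally managed;
    # the value itself is informational.
    cluster.x-k8s.io/managed-by: "external-operator"
spec: {}   # populated by the external manager, not the AWS provider
```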
A
So I guess, from this discussion: do we want to solve all of these problems? We have the consistency in, I guess, the representation of the infrastructure across managed and unmanaged clusters within the same provider.
A
Then we have Fabrizio's second point, which was the actual technical solution of how we implement a managed service and get the required data back to the infra provider from the control plane. And then I think Fabrizio's third point was a place where providers can come along and ask for guidance when they come to do this.
A
So out of those, I'd say, four problems, how many do we want to tackle?
C
I think that having a dedicated sync where providers can bring up issues regarding the APIs is definitely worthwhile, be it by carving out a part of the office hours or spinning up a new meeting. I think this is definitely something we need, given the issues and inconsistencies that we've seen with reusing the contract.
C
So I think it's definitely going to benefit us providers and also CAPI in general. The other one, I think, almost feels like a P0, which is: we basically need AKS and EKS to work with ClusterClass, because otherwise that might be blocking users from spinning up managed clusters easily through Cluster API.
A
I guess, on your last point there, about making AKS and EKS work with ClusterClass...
C
Yeah. So far AKS works, because I guess it won't have the issue that CAPA sees: as Cecile pointed out, there's already a managed cluster type in place. So it's really mainly about ensuring that we have CAPA and EKS working with ClusterClass. I think that's worthwhile.
C
I guess Cecile, or anyone with reservations, can chime in, but...
C
I think that in general we don't want to break clients, and requiring such a change, or allowing the possibility of the infra cluster being optional, might break external clients, or even clients that we ship, such as clusterctl. So I think the most conservative approach would be reintroducing an infra cluster, in my opinion.
B
Yeah, so we can discuss the details more on the CAPA side, but Richard, is there any problem reintroducing this? Is it seamless for the existing users, or will it break something?
J
Yeah, I think we need to be backward compatible for the existing ones, since we've GA'd EKS. So it is a little bit problematic, but we can have a workaround, like auto-creating the managed infrastructure cluster for the clusters that don't have one, like we do in multi-tenancy, where we also auto-create identities. That should solve it for the existing users, but we should definitely document that we are not supporting ClusterClass for clusters that don't upgrade to the latest CAPA version where we introduce this.
C
Yeah, especially since today we don't support moving from normal clusters to clusters referenced in a ClusterClass. So technically, if users upgrade, existing clusters would still behave the same, and the issue would arise only if they spin up new clusters. But if they spin up new clusters, then we would have the right types and the right templates.
A
Cool, yeah. I guess as a short-term solution this would get ClusterClass working with CAPA. In my head it still doesn't feel like a natural fit as a long-term solution, because we are only introducing the infra provider type just to satisfy the contract, whereas all the work will still be done in the control plane provider.
A
Because of that, because the control plane provider creates infrastructure, you can't necessarily separate the two. As a short-term thing to get things working, that's fine, but in my head it still doesn't feel like a natural fit with the current separation of responsibilities. But I guess that's a long-term discussion. I don't know who raised their hand first.
D
Thanks. I was just going to say, another point of inconsistency to maybe consider is, I believe, that the way CAPA and CAPZ are doing providers for managed clusters is different right now. CAPZ reuses the Azure provider infrastructure for managed clusters; there's no separate provider for AKS. Whereas CAPA has a provider for EKS. So that's also slightly different: from a user perspective, it's inconsistent in how that works.
F
Yeah, I was thinking about your point that it feels not natural. I kind of agree, but I also think that, let me say, it comes with a cost but is also an opportunity.
F
So what if, tomorrow, a user asks, for instance, to have a bastion host in front of the AKS cluster, or whatever bit of additional infrastructure that is not provided by default by the AKS abstraction? Having an abstraction that splits up the two components could be useful long term. Second, and let me say this is an implementation detail, we can also take this offline...
F
What
I'm
thinking
is
that,
currently,
if
I
go
to
write
a
history,
you
are
assigned
the
responsibility
to
create
the
control
plane
with
everything
so
load
balancer
to
the
control
plane
manager
so
to
the
control
plane
controller
instead,
maybe
that
a
nicer
solution
is,
is
to
assign
the
responsibility
to
create
the
control
plane
or
to
delay
the
control
plane
to
the
infra
provider
and-
and
you
have
a
simpler
control
brain
provider
that
only
take
care
of
the
life
cycle.
Maybe
implementation
detail.
We
can
take
this
offline.
A
That's
actually
not
a
bad
idea.
I
think
when
we
first
implemented
it,
we
were
fixated
well.
I
was
excited.
Obviously
we
I.
I
was
fixated
on
the
fact
that
the
control
plane
provider
should
be
creating
the
eks
control
plane,
but
yeah.
It
could
have
the
separation
of
responsibilities
that
the
infra
creates
it
and
then
the
yeah.
That's
that's
a
good
point.
A
Yeah
keep
an
eye
out
of
that
yeah
on
on
cecil
on
your
point.
We
did
originally
have
a
separate
provider
for
eks,
but
we
we've
since
merged
most
of
those
actually
into
the
main
infra
provider
or
but
yeah
pretty
much.
We
have
now
because
it
was
quite
painful
when
we
we
needed
the
cluster
cut,
to
win
it's
to
say:
aws,
aws
eks
as
well,
so
yeah
we
merged
them,
go
for
it.
Stefan.
G
Yeah, just want to say, regarding short-term, mid-term, and long-term solutions: the longer you wait, the harder it gets to get everything working together. It would already be absolutely non-trivial today to get ClusterClass to work with what we currently have in the AWS provider.
D
Yeah, also just to pile onto what Stefan just said: I realize that CAPA is already past the point of no return, with AKS... sorry, EKS, no longer experimental, but we're still in that sweet zone for CAPZ, and we don't plan on staying there for too long. So if we do want to make a major change to the managed cluster contract for consistency across providers, it would be really great to do it soon.
A
Yes, good point. So then maybe we have to get together and decide on the changes that we're going to make between both providers, especially before CAPG implements GKE support as well.
C
Yeah, I think this is likely going to warrant a proposal to outline what managed services should look like. We can include the main ones today and go from there, I guess.
A
Cool, all right, brilliant. Does anyone else have any more points for discussion?
F
I think we should go on with the suggestion to carve out some time in the office hours for provider feedback on contract improvements. We ask providers for feedback, see if there are blockers, and kick off the discussion; then we'll see, going forward, whether this deserves a separate meeting or something like that. But let's try to put this on the agenda. I'll take an action item for this. And with regard to the document about what managed providers should look like, feel free to loop me in.
A
Brilliant. Anything more from anyone?
A
Cool, so I'll just recap before we all disappear. We're going to make time in the office hours for providers to be able to come along and discuss any issues that they are having with the current API.
I
One quick question; that all sounds great to me. To proceed on that discussion, to be able to define what managed Kubernetes means: are you planning to put up a Google doc, or something else?
A
I think this is probably a good place to stop, unless anyone has anything else.
A
Cool
okay
once
again,
twice
three
times
cool.
Well,
thank
you
for
coming.
That
was
really
helpful
thanks
richard
thank.