Description
A Kubernetes community meeting about the Azure provider for Cluster API. Cluster API brings familiar, declarative APIs to Kubernetes cluster creation, configuration, and management.
A
Okay, welcome everyone.
B
It is another Cluster API Azure provider office hours meeting. We are part of SIG Cluster Lifecycle under the Kubernetes umbrella project, and we therefore comply with its standards of conduct, which you can read about in our document. It basically boils down to: everybody be nice to each other, and try to use the raise-hand feature so that we don't talk over each other.
B
If you have a second and you want to, please put your name on the attendee list here. Let me share my screen.
C
I'll just take 30 seconds. I'm Mike Giuseron, I'm with Adobe. I've met a few of you over PRs or over Zoom before, and I'm hoping to contribute more, not just to CAPZ but to CAPI in general. We're hoping to adopt it more thoroughly at Adobe. Right now we're just in kind of the PoC stages, but we're hoping to move very rapidly from PoC to production usage, hopefully within the next month or two.
C
Yeah, I see Cecile put in the chat about the subnet deletion bug. If we want to chat about that, I've got about 10 minutes left before I have to drop. I hate to interrupt any other introductions, but...
C
So, an update on that: I did a bunch of refactoring yesterday, and I'm in the process of testing that this morning. I switched the approach that I'm using on it, because I wasn't able to properly mock out the service to do the unit test.
C
So what I did was I switched the subnet spec to work much like the existing IsVnetManaged handling, where the property is set at the point where the cluster creates it. The managed cluster, our managed control plane, already sets that property on the subnets it creates, so I'm basically following that same approach now. I'll probably be pushing up a PR.
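(A minimal sketch of the approach being described, with all field and method names invented for illustration; the real CAPZ types differ.)

```go
// Hypothetical sketch: record at creation time whether CAPZ manages the
// subnet, mirroring the existing IsVnetManaged idea, so the delete path
// can consult the spec instead of querying Azure. This also makes the
// subnets service easy to mock in unit tests.
type SubnetSpec struct {
	Name    string
	Managed bool // set by whichever controller creates the subnet
}

func (s SubnetSpec) ShouldDelete() bool {
	// Pre-existing (bring-your-own) subnets are left alone on cluster delete.
	return s.Managed
}
```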
C
I've got a kid thing, so probably within the next four hours for review, if that works for you all. If not, we can always go with the other approach that Jack had done, about just allowing the VNet to do the deletion instead.
C
I'm really fine with either way, just because I've learned a lot taking this on, you know, trying to get this PR through. So I really defer to you all as to which approach you'd rather take on it.
D
Cool, I think they both work. Credit to Cecile for observing that subnets shouldn't be under our deletion enforcement at all, so that's really her idea. I think that's the more elegant idea, but I think we should merge the one that works first and passes tests and everything.
D
It sounds like yours is closer to that, so let's get that in, and then we can discuss ways to get it into your staging environment so you can actually validate it at scale. It would then be super easy to follow up with the deletion solution after the fact, which should be functionally equivalent.
D
Is this time good for folks? Because I would love to see good turnout week after week. We've been straddling Pacific time and European time, and it's really hard to find a good time. Mike, how likely are you to come week after week, if this is the time we meet?
C
About every other week is good for me, just because I also work with European folks, and so, you know, this is kind of the golden hour. The 7 a.m. to 9 a.m. Pacific window always seems like a difficult time for everyone, but yeah, I have every-other-week meetings, so it's really hit or miss for me. But I do want to make this a priority, so I'm going to try to make it as often as I can.
E
I think let's just... I started a thread in Slack, and, you know, maybe a Doodle. Would you be able to take that? Yeah, I don't know. Let's not change it without, you know, asking people, but I mean, if this time works for everyone, let's just keep it.
B
All right, the next thing is import aliases. I just want to mention that we merged a PR that standardizes the import aliases for CAPI and CAPZ packages. They were already doing this in CAPI and CAPA and actually other projects, and we hadn't gotten that far. We have the linter in place, the importas linter, but we hadn't declared the rules. So Matej looked around the code base and noticed that we import some pretty important packages under six or seven different aliases each, and now we don't.
B
So if we eventually merge that, we'll have very standardized imports. TL;DR: you may have to rebase a lot of stuff; that's the take-home from all this.
D
Yeah, just in case there are folks here who don't understand what an import alias is: can you explain what an import alias is and what this all really means in practice? Go ahead.
B
I mean, if you've written Go code, you've very likely used import aliases, even if you didn't know that was the term for them. It's when you have an import line at the top. By default, if you just import a package, the very last element of the package path is how you're going to refer to that stuff for the rest of the file, right?
B
So if it was, you know, blah-blah/cluster-api/exp/blah-blah/api/v1beta1, let's say, with v1beta1 as the last part, then throughout the file, by default, you're going to refer to that as v1beta1.Function, or a constant, or whatever. A lot of times that's verbose, or it collides with something else.
B
You might have two v1beta1s, or there's a convention in the code base that we always call a package something else, and so before your import path you put an alias. You've probably all seen these, but if you haven't, it's pretty simple. The alias can be whatever you want, and it can be different in every file. So there's a linter that says: hey, if you're going to import CAPZ experimental, you should alias it the same way everywhere.
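(A minimal Go illustration of the collision-and-alias mechanics; the import paths are the real CAPI and CAPZ API packages, everything else is just for illustration.)

```go
package example

import (
	// Without aliases, both of these packages would be referred to as
	// "v1beta1" (the last path element), which would collide in this file.
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	infrav1 "sigs.k8s.io/cluster-api-provider-azure/api/v1beta1"
)

// The aliases keep references unambiguous within the file.
var (
	_ clusterv1.Cluster    // CAPI's Cluster type
	_ infrav1.AzureCluster // CAPZ's AzureCluster type
)
```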
B
You know, v1alpha4 being called just v1 in a particular file, because that sort of made logical sense from the point of view of that file. Anyway, it's all standardized now. You might get some linter complaints when you write new code, but I think that's all for the best, and like Jack was saying, this just feels like something Go should enforce on its own, because otherwise the code base kind of spirals a bit out of control in terms of imports.
D
You alias because you have a collision, which is, by definition, very specific to the file you're using to organize your code in a particular library. A particular package is going to collide in a very specific way in a specific file, which is going to inform how you want to alias it, because you want to semantically distinguish it from the thing it's colliding with, so you can read the code in that file and not be confused about which reference goes to what. And then the interesting thing is:
D
if you have a large enough project, that same library in a different file or a different module may collide in a slightly different way, which would sensibly motivate the original author of that code to use an entirely different alias, because in that file there might be a better way to distinguish "no, this is really this package." And those choices may actually make sense individually, I think.
D
The reasons these alias inconsistencies exist are usually defensible in my experience. But in the context of an entire code base that's supposed to work together, when you start becoming an expert in that code base, going from module to module, and you're used to referring to a particular external package one way, it can be very difficult to have to remember in your head: okay, I'm in this library, which refers to this standard package that way.
B
Yeah, really, thanks to Matej for noticing that it was a problem, because fixing it was totally mechanical; it took like an hour. But maybe the take-home is the thing I was about to type here: now, whenever you see clusterv1.Something in CAPI or CAPZ, it's always referring to cluster-api/api/v1beta1. Same thing with expv1.
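(Concretely, the standardized aliases look something like this; a sketch, since the authoritative list lives in the linter configuration.)

```go
package example

import (
	// The same alias everywhere, enforced by the importas linter.
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	expv1 "sigs.k8s.io/cluster-api/exp/api/v1beta1"
)

var (
	_ clusterv1.Cluster // e.g. clusterv1.Cluster, clusterv1.Machine
	_ expv1.MachinePool // e.g. expv1.MachinePool
)
```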
D
Yeah, I don't know what happened to my remaining text there. In that item I was going to say this is actually Ashutosh's work, but in case he or no one else put it down as an agenda item, I wanted to talk about it. So, Ashutosh, thank you so much for that PR. Do you want to quickly describe that work? Because you did the work and you should get all the credit.
F
Yeah, sure. So this is mostly, you know, a CSI upgrade test, because the in-tree CSI drivers have been migrated away, and we need some confidence that when we go to a new version where the in-tree CSI drivers are not there, the upgrade is successful for the existing clusters that were created via CAPI and CAPZ.
F
So when we do upgrade to the next version, for example 1.23, we need to set a flag, otherwise the upgrade will fail. That's why I thought, you know, it's better to have a CSI upgrade test to make sure the Azure CSI drivers are working fine when we do upgrade. And this will be a short-lived test; once we are through this version, we can possibly remove it.
D
Fully lgtm'd and approved. It is just a large chunk of work trying to tackle something non-trivial. I mean, it's really great what you're doing; it's not easy to do that. And part of me is like, I don't know the exact right way to do this, so I would actually love permission from the community to ignore that instinct. Because you've done such a good job at triaging all this net-new stuff from the existing tests, I feel like there's a good case...
D
...to be made that we can merge this and keep the test-infra coverage at presubmit, which today is on demand per PR, as we refine and iterate and figure out the best way to organize and maintain these types of tests in CAPZ. Then, when we feel better about that, we can sort of quote-unquote promote the test to be a periodic test, get folks to look at it regularly and understand how it works, and maybe document the scenarios a little bit more.
D
Goku, I'm not so convinced that the 1.2-to-1.3 upgrade story isn't actually functionally equivalent to, say, 1.2 to 1.6, you know, seven months from now, so we may want to keep these tests in for longer. I could also imagine a world in which we invest a little time to generalize this, so that we can author standard upgrade tests and give them some sort of criteria.
D
That says: if you're upgrading from 1.2 to 1.3, use these tests, and we kind of maintain those, and then have a separate abstraction where we're able to basically say "run the 1.2-to-1.3 upgrade tests." That way, this PR that you're submitting provides a model for a whole host of back-compat validation that we would probably benefit from having in the project. Go ahead, Cecile.
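(A hypothetical sketch of that separate abstraction, with every name invented for illustration: a registry keyed by version hop that selects which upgrade scenarios apply.)

```go
package main

import "fmt"

// versionHop identifies a source/target Kubernetes version pair.
type versionHop struct{ from, to string }

// upgradeScenarios maps a hop to the scenarios it needs beyond the default
// upgrade test. The CSI migration scenario, for example, only matters for
// the 1.22 -> 1.23 hop, where the in-tree Azure Disk driver goes away.
var upgradeScenarios = map[versionHop][]string{
	{from: "1.22", to: "1.23"}: {"default-upgrade", "csi-migration"},
}

func scenariosFor(hop versionHop) []string {
	if s, ok := upgradeScenarios[hop]; ok {
		return s
	}
	return []string{"default-upgrade"}
}

func main() {
	fmt.Println(scenariosFor(versionHop{from: "1.22", to: "1.23"})) // [default-upgrade csi-migration]
	fmt.Println(scenariosFor(versionHop{from: "1.23", to: "1.24"})) // [default-upgrade]
}
```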
E
So we do have some upgrade tests right now, like Kubernetes version upgrades. Would it be valuable to add the CSI migration to those, as Jack is saying, as a more long-term thing of being able to test arbitrary back-compat scenarios, instead of having a separate job that does another upgrade just for CSI migration?
D
Ashutosh, remind us: what is the significant marker between CAPZ 1.2 and CAPZ 1.3 that is interesting from a CSI driver's perspective? It's like a downstream thing; it's not CAPZ specifically, right?
F
I mean, if we go from, let's say, 1.22 to 1.23, then suddenly in 1.23 the Azure Disk CSI support, you know, is disabled in-tree, and that means the existing clusters that got upgraded to 1.23 need to install the new Azure Disk CSI drivers.
F
But there is a case where we use both cloud-provider-azure and the Azure Disk CSI driver externally; there we need to set a flag, and that is required only when we do the upgrade from 1.22 to 1.23. The reason is that the old kubelet, you know, needs to talk to cloud-provider-azure.
F
And so, you know, we just need to set the flag to tell the old kubelet to keep talking to the newer controller manager; there is some communication that needs to be established. I think I've put that somewhere in the PR description; if not, I'll put it there. So that's why it is crucial for the 1.22-to-1.23 update. Does that make sense?
D
Perfectly. I'm sorry, I conflated that with CAPZ, so that does make perfect sense, and it also makes Cecile's point very salient. I'm not as familiar with the current upgrade tests as I could be.
D
The interesting thing you've incorporated, Ashutosh, with these tests is very particular template scenarios that address the very particular upgrade scenarios you're trying to validate. And Cecile, how do we deal with that particular surface area in our current upgrade tests? How do we choose which cluster configurations to run these upgrade tests against? That might be the part that's the most non-trivial in terms of integration with the existing stuff.
E
Yeah, we don't right now. It's just a default template, just a default flavor. It doesn't specify; it basically delegates to CAPI to decide the template.
D
Yeah. Ideally we would be able to set, say, the Kubernetes version as the foundational significant bit when we're doing these. Like, say we make a test that starts from 1.22 and validates everything as it relates to upgrading to 1.23. But in the real world...
D
...that's actually probably not right, because there is no such thing as "a 1.22 Kubernetes cluster." There are a million different types, and different types of clusters have different upgrade gotchas, and all the potential gotchas from, say, 1.22 to 1.23, or between any other two versions, can't always be tested at the same time. You can't just create one big set of all the 1.22 original configurations and all of the deltas; they're not always going to work together.
E
So if it helps, the way it's set up right now, there are multiple scenarios. Actually, I'm just remembering this: there are multiple scenarios for upgrades. Like, there's scale-in upgrade, where you delete the control planes before adding new ones.
D
We'd have to change... because we'd have to change the original cluster configuration to be the one that is... Matt, your browser is still sharing, by the way. Yeah, we have to change the template, because I think, Ashutosh, you've intentionally created a 1.22 source template that doesn't have the stuff that 1.23 needs, in order to prove that there's a path forward, right?
E
That's like a one-off, in that we have to test this migration, right? You'll probably want to do this again for the out-of-tree cloud provider once that graduates, and there will be, I'm sure, other Kubernetes things that come along that we want to deal with in upgrades.
D
Yeah, I would say set an appropriately aggressive time box, because you've already spent a bunch of time on this. And I mean, the good news is that the time you spent on this has functionally proven that there's a path forward, and it's literally documented in that PR. So for any other folks in the community who are having this problem, I feel like we have a really good, concrete story right now that exists in public that folks can refer to.
D
So that's great. So yeah, please don't spend a week trying to integrate into the existing upgrade tests, but maybe do a quick half-hour, 45-minute pass through the existing upgrade surface area. Because you're already familiar with how you did this upgrade, you're better than any of us at being able to assess: how viable would it be to integrate into the existing upgrade solutions, or is this going to require an entire refactor of the existing upgrade surface area?
D
In which case I would advocate, and we could see if the rest of the community agrees, that we just merge your PR in its current state, and we can tackle the fact that we have debt in our upgrade tests over time, and not block the CSI story on that existing tech debt. Does that make sense?
D
Okay, cool. Thanks again. You know, whatever happens, you've already set a great pattern for a whole host of... I mean, my mind's spinning with all the possibilities of getting more test coverage and all these things. This is really the value proposition of Cluster API and CAPZ, so it's super exciting to think of how we can automate our way towards, you know, demonstrating that value proposition with every PR. It'll be super great. Sure.
E
Yeah, I was just going to say Zayn joined. Maybe we should talk about the machine pool, yeah.
G
Actually, yeah, I think that investigation is probably going well, so until we have more data... I just wanted to gather some thoughts from here on a slightly bigger topic before I take it to Cluster API, which is around how Cluster API deals with managed Kubernetes clusters, right? So we have a lot of thoughts going on around Cluster API, which was built, actually, in my opinion...
G
So the AKS team is providing a really good solution in that sense, let's say around certain operations on the health of the node: removing the node when it's not healthy, draining it, and these other points. And Cluster API is actually doing the same, right?
G
So we had this incident, for example, where it's trying to delete the nodes. But I was just imagining: what happens if Cluster API doesn't delete the nodes, right? And nothing happens, because AKS will come and it will delete the nodes, and the node references will be updated in any case because the nodes are deleted.
G
So, just from a maturity perspective, I was asking if it would be sane to go and talk with Cluster API about introducing certain flags, you know, feature flags or not, where certain features are totally disabled because we say this is handled by the managed solution, right? So if the provider type is AKS, or an Azure managed cluster in CAPZ, just skip all this smartness or intelligence that we have added to Cluster API, because we might run into some issues, right?
D
So, I expect Cecile's thinking something similar to me: there is an existing proposal out to standardize this across cloud providers. Are you aware of that, Zayn? The Google doc? Okay, great, Cecile beat me to it, awesome. So in chat she pasted a PR thread which links to a proposal.
D
I mean, it's a starting point; it's not going to solve everything, but it's certainly on my to-do list to get a little bit more engaged there. And it's slightly challenging because, as you've mentioned, you are in no small part responsible for the fact that we kind of hit the ground running in CAPZ, to prove this whole story out with AKS before CAPI had standardized this in their API spec.
D
So now we have to kind of continue to move that forward for customers who are literally running businesses on top of the solution, while at the same time taking an abstract step back, defining how this is going to look, and then figuring out how we're going to do it in CAPZ. And does CAPA have an existing thing like we have? Is there an EKS equivalent? Yes.
D
So, super glad you brought this up. I think that we should all engage and, to the best that we can, make the spec what we want it to be, because it is the right thing to do in the community to eventually adopt that spec, and so we're probably going to do it no matter what. So it should incorporate some of the opinions that we've learned from the last nine months of actually using it. So, your hand is raised.
E
Yeah, I think... so this doc, this proposal, is mostly around basically rethinking things so that EKS can support ClusterClass. The way that AKS is set up in CAPZ right now, it technically doesn't need this.
E
We could support ClusterClass today, because we've set it up in a way where it already fulfills the contract. But because of how CAPA set up their solution, it doesn't work with ClusterClass; there's like a major issue. So they're trying to rethink it, and in rethinking it they kind of came up with a third option, which is neither what CAPZ nor CAPA are doing, as in: this is how we should do managed clusters in Cluster API providers.
E
So, like Jack was saying, that would be a great change down the line if we decide to adopt this. We don't have to adopt it, but I could see it bringing problems later if we're doing things kind of differently from all the other providers. But yeah, regarding feature flags and turning off features of Cluster API:
E
if we say, oh, Cluster API is not going to delete nodes, that might work fine for AKS, but that will potentially be a problem for some other provider that doesn't delete the nodes. So it's not something that we can just turn off for everyone. But I think your idea is interesting; it's worth maybe starting an issue about this, or a discussion.
G
Yep, I forgot to research that. No, I wanted to... yeah, there are several approaches, you know, that we could take. For example, I agree that Cluster API talks with the Kubernetes objects, like the nodes and everything like that, in Kubernetes clusters, and it works at that level, and maybe closing off certain options will be very hard; in that sense, getting agreement on that will be a bigger challenge. But I was also thinking:
G
CAPZ is the actual one in control, right, of what Cluster API can or cannot do in this sense, because we are the ones who are generating the kubeconfig for that, right? So if we just take that away, saying, okay, Cluster API, you are not supposed to delete nodes, or not supposed to do this, because we agree that AKS will handle all the possible scenarios for those. That would be one case, right?
G
I agree that would not be the most elegant one, because we would see a lot of errors and everything happening on the Cluster API side. But still, if someone comes and says, "I cannot delete that," then we could say, yeah, this is because you have enabled that feature, and it's opt-in; it's not the default.
E
Yeah, I don't think that's ideal, like you said, also because whenever a controller runs into an error, it retries right away. So you're going to have a controller running, like the CAPI controller, that's trying to delete the nodes and failing every time, and it's going to eventually enter exponential backoff, and then it's not going to reconcile for real, and it might get stuck halfway through and not do something that comes afterwards that's actually critical or important for your machine pool.
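(For context, a minimal sketch of the backoff behavior being described, using client-go's per-item exponential failure rate limiter; controller-runtime's default controller rate limiter wraps one like this, with a 5ms base doubling up to a 1000s cap.)

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/workqueue"
)

func main() {
	// Each failed reconcile doubles the requeue delay for that item,
	// up to the cap; this is the "stuck in backoff" effect above.
	rl := workqueue.NewItemExponentialFailureRateLimiter(5*time.Millisecond, 1000*time.Second)

	item := "azuremachinepool/my-pool"
	for i := 1; i <= 18; i++ {
		delay := rl.When(item)
		if i%6 == 0 {
			fmt.Printf("failure %2d: requeue after %v\n", i, delay)
		}
	}
	// After ~18 consecutive failures the controller waits roughly 11 minutes
	// between attempts on this item.
}
```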
D
Do you mind if we go a little bit into the weeds? I'm actually more curious about the inverted situation we're describing, because I think the delete situation is actually the easy one. What's more tricky to me is when AKS, as an example, has a good reason to, say, recycle five nodes in between reconciliation loops, and so now, on the CAPI side, you've got, say, five machine pool machines that don't exist anymore.
D
That's fine; I think that association is fairly easy to walk through and garbage collect. But now you've got five net-new nodes. Is CAPI, as currently designed, able to say, okay, cool, I can make novel associations and add five new machine pool machines based on the existence of these five new nodes? Is that something you get for free in CAPI right now?
D
Yeah, great, great. These are all things that we should be testing, but yes, great, that's the one I was going to... So, Zayn and I have been working on Slack to repro scenarios that you're seeing in the real world, and it's mainly the delete scenarios I'm trying to focus on. But eventually, now that I'm in this and have momentum going on testing, I want to be doing these kinds of things, because these are definitely, you know...
D
Some of the folks on this call know a little bit about what AKS is working on and experimenting with: certainly node health, automatic node recycling, all these things. These are sort of first-class principles of Cluster API that folks who use Cluster API take advantage of, and no doubt EKS and AKS and GKE observe these and want to implement these first-class conveniences for their customers, independent of Cluster API. But how do you...?
G
Yeah, so I was just trying to figure out some ways to move forward in a more holistic way, because I like Cluster API and Cluster API Provider Azure, but there are always going to be some issues that go beyond Azure and CAPZ, that were introduced by Cluster API, right? And those will affect us somehow.
G
I will take a closer look into ClusterClass, definitely, and see if there are possibilities to design something, you know, like a Cluster API where managed clusters are first-class citizens in that sense. Because otherwise, I've been asked the question whether Cluster API is really the right choice for managed clusters, or whether we could just use some other platform, because AKS is giving us everything for free.
G
Why not just use, let's say, Crossplane or something like that? But I don't see it as the same thing at all, honestly. It's just small details like that which could be adjusted, and then we could have something really great, like a fleet solution, the equivalent of a fleet solution, because I don't see any other product being able to do that, right?
G
So it should support bare metal, or self-managed and managed, or any Kubernetes cluster that exists out in the world, and join them under the Cluster API umbrella. Because later, we are trying to build some things that hang off the lifecycle of the cluster, right? So when a cluster is created, I want to do certain things or not, and when it's deleted or not; and taking it from the Kubernetes operator patterns, it becomes really out-of-the-box; we get it for free, right?
G
So this is something that others definitely don't have right now. I'm just trying to find what the official story, or stance, on that would be: how we will make sure that we don't overrule AKS, and that we take advantage of all of the features of AKS.
G
But we also want to use Cluster API, because it is the common ground, call it multi-cloud or hybrid, whatever situation someone is dealing with, right? Because no one else will come and provide this cluster interface that Cluster API is providing today. But yeah, I will take a look, definitely, and come back next time.
B
Yeah, any other...
D
What's intrinsic in the platforms themselves two or three years from now? That's possible, in which case Cluster API may become just additional complexity that doesn't justify itself, so to speak. But certainly that's not the case now, and so, yeah, we're super happy to be working with you, Zayn, and other customers who want to do this. We really have the opportunity to solve these problems for the first time and hopefully get them right, because now is the time to get them right.
A
Hi, I'm William, or Willie. I just joined Microsoft this week, and I'm really looking forward to working with Cluster API and CAPZ. I'm still kind of feeling my way around and going through the onboarding process, but I do want to sit in on these meetings and get a better feel for CAPZ in general. So yeah, nice to meet everyone.
F
Oh
okay,
sure
I
just
wanted
to
do
jackie.
Just
sorry
like
I
was
not
able
to
unmute
myself
so
jack.
I
I
just
updated
the
pr
a
bit
for
the
part
that
I
was
speaking
about.
What
extra
needs
to
be
done
in
terms
of
flag
and
pr
description?
Sorry,
I
I
didn't
had
put
that
and
also
I
linked
a
similar
pr
that
was
done
in
kappa
on
that
vr
and
then
I'll.
Look
with
sicily
situation
how
to
integrate
with
existing
ones,
and
then
also
you
know.